verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
govg has joined #mlpack
vpal has joined #mlpack
vivekp has quit [Ping timeout: 268 seconds]
vpal is now known as vivekp
robertohueso has joined #mlpack
robertohueso has quit [Quit: leaving]
nikhilweee has quit [Quit: Ping timeout (120 seconds)]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 268 seconds]
sumedhghaisas2 has quit [Ping timeout: 260 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 240 seconds]
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
Atharva has joined #mlpack
< Atharva> sumedhghaisas: Hi Sumedh, how do you think I should proceed in order to get ready for the VAE project proposal?
viraj has joined #mlpack
< Atharva> I have already solved some issues and have started to get comfortable with the codebase.
viraj has quit [Client Quit]
viraj has joined #mlpack
< Atharva> Do you think it will help if I implement a convnet using mlpack or should I experiment with generative models?
Atharva has quit [Client Quit]
Atharva has joined #mlpack
< sumedhghaisas> Atharva: Hi Atharva.
< sumedhghaisas> I think the project should focus first on a working implementation of a VAE
< sumedhghaisas> for this, using a feedforward network is enough as far as MNIST is concerned
< sumedhghaisas> training VAEs is not difficult, but the architecture must support sampling
< Atharva> I understand, so for now I guess I should get as familiar with the ANN codebase as possible.
< sumedhghaisas> the best thing I would suggest is coming up with a tentative API for the module
< sumedhghaisas> the API should support the basic necessities, like sampling from the latent posterior or prior
< Atharva> Okay, I will do all the research required and come up with a tentative API.
< Atharva> Also, will it help if I implement a neural network with mlpack and include the link in the proposal, or should I just focus on researching VAEs and working on the API?
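The sampling requirement discussed above (drawing z from the latent posterior or from the prior) is usually handled with the reparameterization trick; a minimal plain-Python sketch, not mlpack code, with illustrative names:

```python
import math
import random

def sample_latent(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, 1).

    Sampling from the latent posterior q(z|x) this way keeps the
    sampling step differentiable with respect to mu and log_var.
    """
    return [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def sample_prior(dim):
    """Sampling from the prior p(z) = N(0, I), used for generation."""
    return [random.gauss(0.0, 1.0) for _ in range(dim)]
```

A VAE API would expose both paths: encode-then-`sample_latent` for reconstruction and training, `sample_prior`-then-decode for generating new samples.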
Atharva has quit [Quit: Page closed]
Atharva has joined #mlpack
< sumedhghaisas> Atharva: working ANN code will show that you are familiar with the ANN module and how to use it. That would be helpful. But designing a robust API will also involve knowledge of how the current architecture works and how to use it efficiently...
< sumedhghaisas> a robust API will show your familiarity with the whole of mlpack
< Atharva> Thanks Sumedh! I will get back with updates about my progress.
< Atharva> Can you tell me when you are generally active on IRC?
< Atharva> Can you share your email, or should I just contact you on IRC if I need to?
viraj has quit [Quit: Page closed]
< sumedhghaisas> Atharva: Sorry for the late response. Stuck in a meeting.
< sumedhghaisas> my email is sumedhghaisas@gmail.com
< sumedhghaisas> please feel free to msg me on IRC also... I will try to respond as quickly as I can
< Atharva> Thanks for the help, I will keep all this in mind. :)
Atharva has quit [Quit: Page closed]
rajeshdm9 has joined #mlpack
< zoq> Atharva: We like to keep project discussions public, that way more people can jump in and provide feedback.
aman____ has joined #mlpack
aman_ has quit [Ping timeout: 276 seconds]
robertohueso has joined #mlpack
< robertohueso> Is it defined anywhere what methods, etc., a "RuleType" has to have?
Atharva has joined #mlpack
< rcurtin> robertohueso: I thought that it was documented in one of the tutorials, but it should basically be BaseCase() and Score()
< rcurtin> you can take a look at one of the other existing RuleTypes to get an idea, or you can see the DualTreeTraverser classes and SingleTreeTraverser classes in src/mlpack/core/tree/ to see what methods they are expecting the RuleTypes to have
< Atharva> zoq: Okay, I understand. I will try to keep all the discussions public.
rajeshdm9 has quit [Ping timeout: 260 seconds]
< robertohueso> rcurtin: Thanks :)
< rcurtin> sure, hope it helps---and if you find the documentation lacking, please feel free to make a note of it or improve it
< rcurtin> it's hard to predict what information is useful or needed for a new user or contributor sometimes :)
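The two methods rcurtin mentions can be illustrated with a toy nearest-neighbor rule. The class and method names below are illustrative Python, not mlpack's actual C++ RuleType interface (where Score() takes a query index and a reference node):

```python
import math

class NearestNeighborRule:
    """Toy RuleType-style class: base_case() evaluates a point-to-point
    comparison, score() decides whether a node can be pruned."""

    def __init__(self, query, references):
        self.query = query              # one query point (list of floats)
        self.references = references    # list of reference points
        self.best_dist = math.inf
        self.best_index = -1

    def base_case(self, ref_index):
        # Compare the query against a single reference point.
        d = math.dist(self.query, self.references[ref_index])
        if d < self.best_dist:
            self.best_dist, self.best_index = d, ref_index
        return d

    def score(self, node_min_dist):
        # Return infinity to prune: the node cannot contain a closer point.
        return math.inf if node_min_dist > self.best_dist else node_min_dist
```

A traverser repeatedly calls score() on nodes (pruning when it returns infinity) and base_case() on the points inside nodes that survive.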
< dk97[m]> zoq: I have made a loss_functions folder inside the ann folder
< dk97[m]> It contains KL Divergence loss as of now.
< dk97[m]> I saw that the MSE and cross entropy loss are already implemented in the ann_layers folder...
< dk97[m]> Should I shift them to loss_functions folder?
< dk97[m]> Also, I think there are mistakes in the MSE function implemented in mlpack.
< zoq> dk97[m] Yes, let's move them, will take a look at the PR.
< zoq> dk97[m]: About the issue, are you talking about the backward function?
< dk97[m]> yeah...
< dk97[m]> it should be the mean
< zoq> dk97[m]: I see, will have a look and fix it.
< dk97[m]> okay!
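The point about the backward pass can be made concrete: if the forward pass averages the squared errors, the gradient must carry the same 1/n factor. A minimal plain-Python sketch, illustrative rather than mlpack's implementation:

```python
def mse_forward(prediction, target):
    """Mean squared error: the average of the squared differences."""
    n = len(prediction)
    return sum((p - t) ** 2 for p, t in zip(prediction, target)) / n

def mse_backward(prediction, target):
    """Gradient of the mean squared error w.r.t. the prediction:

        d/dp_i [ (1/n) * sum_j (p_j - t_j)^2 ] = 2 * (p_i - t_i) / n

    Omitting the division by n (the bug discussed above) scales the
    gradient by the batch/output size and silently changes the
    effective learning rate.
    """
    n = len(prediction)
    return [2.0 * (p - t) / n for p, t in zip(prediction, target)]
```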
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas2 has joined #mlpack
Atharva has quit [Quit: Page closed]
ImQ009 has joined #mlpack
satyam_2401 has joined #mlpack
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 256 seconds]
vivekp has quit [Ping timeout: 256 seconds]
moksh has joined #mlpack
< moksh> @zoq sorry, I did not see that SARSA has been implemented in the documentation and might've missed it in the code. Really sorry about that. Could you suggest a RL algorithm that would be a good addition to the library?
vivekp has joined #mlpack
< zoq> moksh: One idea is to take a look at the Simple Nearest Neighbor Policy; perhaps that is an option?
< moksh> @zoq Thanks a lot, I'll take a look and get back to you :)
< dk97[m]> zoq: did you get a chance to read about the quantized fully connected layers?
< dk97[m]> If it is okay, I can implement the layer.
< dk97[m]> Here is the paper - https://arxiv.org/pdf/1512.06473.pdf
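The linked paper uses product quantization with error correction; as a much simpler illustration of the same memory/precision trade-off in a fully connected layer's weights, here is uniform quantization in plain Python (an illustrative sketch, not the paper's method):

```python
def quantize_weights(weights, bits=8):
    """Uniformly quantize a weight vector to 2**bits levels.

    Each float weight is stored as a small integer code plus a shared
    (scale, offset) pair, cutting memory from e.g. 32 bits to 8 bits
    per weight at the cost of a bounded rounding error.
    """
    lo, hi = min(weights), max(weights)
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((w - lo) / scale) for w in weights]
    return codes, scale, lo

def dequantize_weights(codes, scale, lo):
    """Recover approximate float weights from the integer codes."""
    return [c * scale + lo for c in codes]
```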
< moksh> @zoq are you talking about this https://openreview.net/forum?id=ByL48G-AW ?
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
moksh has quit [Ping timeout: 260 seconds]
< zoq> dk97[m]: Sounds like an interesting idea, so if you'd like to give this a shot, please feel free.
< zoq> moksh: Yes, it's not as straightforward as e.g. policy gradients.
< zoq> moksh: So don't feel obligated, I think another great way to get familiar with the codebase is to do something similar to what Eugene has done here: https://github.com/mlpack/models/pull/5
moksh has joined #mlpack
< moksh> @zoq: Okay. So you suggest I make a model using mlpack?
< moksh> Also, I recently came across this paper https://openreview.net/forum?id=SJJySbbAZ, which describes optimistic mirror descent for the Adam optimizer. Having tried it, I can say it really improves GAN training quite a lot. Would it be a good idea to implement this?
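In its simplest unconstrained form, the optimistic update replaces plain gradient descent with a step that extrapolates using the previous gradient; a minimal sketch (the linked paper combines this with Adam's moment estimates, which is omitted here):

```python
def optimistic_gd_step(w, grad, prev_grad, lr):
    """One step of optimistic gradient descent:

        w_{t+1} = w_t - 2*lr*grad_t + lr*grad_{t-1}

    The look-ahead term (using the previous gradient) damps the cycling
    that plain simultaneous gradient descent exhibits in GAN-style
    min-max games.
    """
    return [wi - 2.0 * lr * g + lr * pg
            for wi, g, pg in zip(w, grad, prev_grad)]
```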
satyam_2401 has quit [Quit: Connection closed for inactivity]
< dk97[m]> zoq: okay, thanks!
ShikharJ has joined #mlpack
< ShikharJ> zoq: Can I please have a review of the Bilinear Interpolation PR? I'd like to get that merged as soon as possible.
ShikharJ has quit [Quit: Page closed]
K4k has joined #mlpack
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 240 seconds]
sumedhghaisas2 has quit [Ping timeout: 240 seconds]
sumedhghaisas has joined #mlpack
moksh has quit [Ping timeout: 260 seconds]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
< zoq> moksh: Regarding the model yes, regarding mirror descent, agree the paper is really interesting, so if you like to work on that instead, please feel free.
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Read error: Connection reset by peer]
sumedhghaisas3 has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
sumedhghaisas3 has quit [Read error: Connection reset by peer]
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Read error: Connection reset by peer]
robertohueso has quit [Quit: leaving]
govg has quit [Ping timeout: 265 seconds]
govg has joined #mlpack
govg has quit [Ping timeout: 276 seconds]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#92 (ResizeLayer - 2ad123d : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
sumedhghaisas2 has quit [Ping timeout: 256 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 260 seconds]
sumedhghaisas2 has quit [Ping timeout: 268 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
ImQ009 has quit [Quit: Leaving]
ShikharJ has joined #mlpack
< ShikharJ> zoq: Thanks for the review. I wanted to talk a bit more about my GAN proposal. Do you have some time?
< zoq> ShikharJ: Can we do that tomorrow? Or if you like, post the questions here and we will respond as soon as possible.
< ShikharJ> Cool. In my proposal I was planning on implementing StackGAN (provided enough time was left). But I am not sure which version of the paper would be of interest. The original publication just mentions two GANs stacked upon one another, whereas the updated paper talks of setting up a tree-like network of GANs for high-resolution image generation. Do you think one should go for the latest technique, or otherwise?
< zoq> ShikharJ: StackGAN-v1 and StackGAN-v2 share the same main building blocks, however for me, it looks like StackGAN-v1 is somewhat easier to implement, and it might be helpful to start with StackGAN-v1; in case there is time left we could build StackGAN-v2 based on that experience. On the other hand, I guess nobody is going to use StackGAN-v1 if StackGAN-v2 is implemented, but in my opinion, that's not an issue.
< ShikharJ> Hmm, thanks for your thoughts.
< zoq> In the end it's your decision; if you like v2, please feel free to go for it straightaway.
< zoq> I guess since it's a side project ("if there is time left") it's not super important if we miss the goal here.
< ShikharJ> Actually, it is one of those goals that I thought could be extended beyond the GSoC period, and that's why I think StackGAN-v2 should be something we invest time in altogether.
< zoq> I think we shouldn't include something that is too ambitious, let's focus on the main goals; in the end, we have to pass/fail someone based on the work done during the summer.
< zoq> If you like the StackGAN-v2 idea, shift the focus on that part.
< ShikharJ> Sure, my main goals include implementing DCGAN and WGAN, but I'm also keeping a considerable amount of time as a buffer because I'm not sure how long the tuning and testing would take. Hence the additional goals, should the need for buffer time not arise in the first place.
vivekp has quit [Ping timeout: 255 seconds]
ShikharJ has quit [Quit: Page closed]