verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
nikhilgoel1997 has joined #mlpack
csoni has quit [Read error: Connection reset by peer]
wenhao has joined #mlpack
csoni has joined #mlpack
ricklly_ has quit [Ping timeout: 240 seconds]
csoni has quit [Ping timeout: 260 seconds]
nikhilgoel1997 has quit [Quit: Connection closed for inactivity]
wenhao has left #mlpack []
wenhao_ has joined #mlpack
wenhao_ has quit [Client Quit]
wenhao has joined #mlpack
< wenhao> zoq: Hi Marcus, I noticed that you put my proposal into a google doc. Do you want me to change the content from markdown to google doc format to make it easier to read?
manthan has joined #mlpack
sujith has joined #mlpack
ImQ009 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 240 seconds]
ImQ009 has quit [Ping timeout: 246 seconds]
sujith has quit [Ping timeout: 260 seconds]
sourabhvarshney1 has joined #mlpack
ImQ009 has joined #mlpack
soham has joined #mlpack
sumedhghaisas has joined #mlpack
< soham> I am interested in the reinforcement learning project. Could you guide me?
soham has quit [Quit: Page closed]
< Atharva> sumedhghaisas:
< Atharva> zoq: I have uploaded the draft, please review it when you get the time.
< Atharva> rcurtin: I totally understand that you don’t have the time to review proposals now, but if you do get some time, it will be really helpful if you comment on it.
< sumedhghaisas> @Atharva: Hi Atharva. I will try to review the proposal sometime soon and give you some comments.
sourabhvarshney1 has quit [Quit: Page closed]
< sumedhghaisas> @Atharva: Hi Atharva
< sumedhghaisas> I looked at your draft. It looks good, very detailed, and I think you have tried to consider every aspect of the project.
< sumedhghaisas> Although I would like to give some comments on the implementation details
< sumedhghaisas> It's better to pass the encoder and decoder as LayerTypes objects, rather than as FFN or RNN objects.
< sumedhghaisas> This will remove the template parameter Model
< Atharva> But how do we give the entire encoder/decoder as LayerType?
< sumedhghaisas> We make FFN and RNN LayerTypes compatible
< sumedhghaisas> implement the functionality needed for them to be treated as a layer
< sumedhghaisas> I have done it in my NTM PR, although due to some other aspects it hasn't been merged yet
< Atharva> Oh, okay, so we implement functionality to consider multiple layers as one layer object?
< sumedhghaisas> precisely
< sumedhghaisas> since we are not using any functionality of FFN and RNN other than Forward and Backward
< sumedhghaisas> it's just like a layer, right?
< Atharva> Okay, yes
< Atharva> Yeah, all we need is the layer data and we will construct our own network using it
< sumedhghaisas> Another major thing is, the new class should be more supportive of the generative nature of VAEs
< sumedhghaisas> output should be defined as a distribution rather than a static value
< sumedhghaisas> So we can sample from it and generate new images
sourabhvarshney1 has joined #mlpack
< sumedhghaisas> What do you think?
< Atharva> Okay, yeah, I read about it in the paper but forgot to add that functionality
< Atharva> I will make the changes
< sumedhghaisas> There are distributions defined in mlpack/core
qwebirc48653 has joined #mlpack
qwebirc48653 has quit [Client Quit]
< sumedhghaisas> Take a look at them, I think we can make use of them
< Atharva> Okay, I will. Thanks. Any other comments? What did you think about the timeline?
sourabhvarshney2 has joined #mlpack
< Atharva> Is it too slow?
< sumedhghaisas> The overall structure is good, though.
< sumedhghaisas> Yes, about the timeline: I think it's too much, to be honest :)
sourabhvarshney1 has quit [Ping timeout: 260 seconds]
< sumedhghaisas> I think implementing the VAE would take more time, given the attention the API design and rigorous testing will need.
< Atharva> Oh, okay, I have kept the last 8 days as a buffer; I think I can easily give it one more week.
< Atharva> I think some other things I have given a week for won't really need it, so it can be adjusted easily. What do you think?
< sumedhghaisas> Yes. I agree. This is a tentative timeline. Although I would suggest being clear on your goals.
< sumedhghaisas> In my opinion the first goal should be the pure VAE with FFN
< sumedhghaisas> test with reconstruction
sourabhvarshney2 has quit [Ping timeout: 260 seconds]
< sumedhghaisas> We should properly unit test it, document it, and write a model to try to reproduce the results, before jumping to conditional and beta-VAE
< Atharva> Oh, okay, and after that VAEs with RNN?
< sumedhghaisas> If time permits, yes.
< sumedhghaisas> but I don't think we will have time for that
< sumedhghaisas> Prepare your timeline such that you have goals to target. For example, you dedicate almost 2 to 3 weeks to the implementation of the VAE
< sumedhghaisas> but what is the end result?
< sumedhghaisas> This would help you while you implement stuff.
< Atharva> Hmm, I will add more detail to the timeline. I think it was at the end, so I rushed it a little.
< sumedhghaisas> I think the details are good. :)
< sumedhghaisas> Just needs little restructuring
sourabhvarshney1 has joined #mlpack
< sumedhghaisas> I would like to first achieve a working and well-tested model of VAE with MNIST and then take things forward
< sumedhghaisas> This way we build step by step
< Atharva> Yeah, that seems like a better approach.
csoni has joined #mlpack
csoni has quit [Read error: Connection reset by peer]
csoni has joined #mlpack
csoni has quit [Read error: Connection reset by peer]
sourabhvarshney1 has quit [Ping timeout: 260 seconds]
ckeshavabs has joined #mlpack
< ckeshavabs> zoq: Hi, regarding the implementation of reward clipping: the API can be similar to that of the GradientClipping class, right? The user specifies the actual reward and the min and max bounds on the reward?
< ckeshavabs> Also, Gradient Clipping has been placed here - mlpack/src/mlpack/core/optimizers/sgd/update_policies/gradient_clipping.hpp.
< ckeshavabs> But one clarification: is reward clipping used anywhere else in ML apart from RL? If so, we could place that class in mlpack/src/mlpack/core/util/?
sumedhghaisas has quit [Ping timeout: 256 seconds]
sumedhghaisas has joined #mlpack
rajiv_ has joined #mlpack
< rajiv_> @zoq: I have submitted the proposal with pseudo code and more details in the timeline. If possible, please review it and give your feedback
rajiv_ has quit [Quit: Page closed]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 265 seconds]
sumedhghaisas2 has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
ckeshavabs has quit [Ping timeout: 260 seconds]
ckeshavabs has joined #mlpack
ckeshavabs has quit [Quit: Page closed]
Atharva has quit [Quit: Connection closed for inactivity]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sujith has joined #mlpack
ckeshavabs has joined #mlpack
Prabhat-IIT has joined #mlpack
< Prabhat-IIT> zoq: I've updated my draft considering all your comments on my previous rough draft, which only had rough implementation details. Now this is my final proposal. Please review and comment on it so that I can improve it.
< Prabhat-IIT> rcurtin: I humbly request you to also review my proposal on PSO. Even one comment from you will be very valuable to me. Whenever you get some time, please consider reviewing :)
< Prabhat-IIT> sumedhghaisas2: Your review is also highly appreciated. Please consider reviewing it whenever you get some time. I'll be grateful to you
sujith has quit [Ping timeout: 260 seconds]
Prabhat-IIT has quit [Ping timeout: 260 seconds]
ckeshavabs has quit [Quit: Page closed]
sourabhvarshney1 has joined #mlpack
< sourabhvarshney1> zoq: I have put a tentative interface in my GSoC proposal. Review if possible.
swetha_ has quit [Quit: Connection closed for inactivity]
manthan has quit [Ping timeout: 260 seconds]
hassanmahmood has joined #mlpack
sourabhvarshney1 has quit [Quit: Page closed]
< hassanmahmood> Hello everyone!
< hassanmahmood> I have a query.
< hassanmahmood> For GSOC, I am aiming to do Stacked GANs and Bidirectional RNNs in "Essential Deep Learning Modules" project.
< hassanmahmood> Besides these two, there is "Improved Techniques for Training GANs" part. So, I was thinking of doing just these two: SGANs and Improved Techniques.
< hassanmahmood> This will be better for me, because I will be working on GANs all the time. Moreover, I will be able to focus more on the efficiency of the implemented model rather than focusing on implementing more Deep Neural Networks.
< hassanmahmood> Can I do that for GSOC?
< hassanmahmood> Any advice or suggestions are appreciated.
< hassanmahmood> Also, for testing the GAN, will GPU be provided for training?
ImQ009 has quit [Quit: Leaving]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 264 seconds]
hassanmahmood has quit [Ping timeout: 240 seconds]
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
zlatkos has joined #mlpack
zlatkos has quit [Quit: Page closed]