verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Ping timeout: 240 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Ping timeout: 240 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Ping timeout: 248 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 245 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 276 seconds]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 276 seconds]
< zoq> ShikharJ: We can install the missing python packages, just let us know
< zoq> ShikharJ: I think numpy is installed for python3.4
< ShikharJ> zoq: I don't think there is a need now. The dataset is prepared, and the model is currently training. I'll clear up the DCGAN code and get some results on that as well.
< zoq> ShikharJ: Okay, if you need the packages, at some point just let us know.
< ShikharJ> zoq: Sure. BTW, did you happen to review the GAN::Gradients code as you were saying?
< zoq> ShikharJ: Yes, it looks good, as I said it could be improved if we 'outsource' the optimization process.
< zoq> I think once we get the results, we can merge the code
< ShikharJ> zoq: Sure, I'll let you know when I get the results.
sumedhghaisas2 has quit [Ping timeout: 265 seconds]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 276 seconds]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 276 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 276 seconds]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 276 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 260 seconds]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 260 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 260 seconds]
sumedhghaisas3 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 276 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas3 has quit [Ping timeout: 276 seconds]
sumedhghaisas has quit [Ping timeout: 245 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Ping timeout: 256 seconds]
sumedhghaisas has joined #mlpack
< sumedhghaisas> Atharva: Hi Atharva
< sumedhghaisas> how are you?
< Atharva> sumedhghaisas: I am good, what about you? How long will you be in India?
< sumedhghaisas> doing well, still got a week in India :)
< Atharva> Oh nice
< sumedhghaisas> Atharva: is the code compiling on your machine? I mean, the new PR?
< sumedhghaisas> also have you looked at the failed checks on the PR?
< sumedhghaisas> Atharva: Also, I am a little bit confused about your Backward and Gradient.
< sumedhghaisas> Does your 'Backward' take into consideration the error signal coming from KL loss?
< Atharva> I did compile it once before pushing, I think the builds are failing because of some of the later commits.
< sumedhghaisas> if it does, then shouldn't it be calling the klBackward?
< Atharva> The memory check failures are probably because of the incomplete test I wrote
< sumedhghaisas> if it does not, then it's a linear equation, so how can the 'input' parameter be used in 'Backward'?
< Atharva> sumedhghaisas: I didn't get you; are you talking about the klBackward function?
< sumedhghaisas> there is a namespace-less join_cols in the middle ... that's why I asked :) I think it should be arma::
< sumedhghaisas> *arma::join_cols
< sumedhghaisas> Atharva: maybe the question is a little bit confusing...
< Atharva> Oh, but that compiled on my system, I will change it anyway
< sumedhghaisas> could you just go over the math you used in Backward?
< sumedhghaisas> that's strange... hmmm
< Atharva> Yeah, both of them?
< Atharva> I mean Backward and klBackward?
< sumedhghaisas> I wonder where it is getting the symbol join_cols from
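For context on why the unqualified call compiled at all: since its arguments are Armadillo types, argument-dependent lookup can find arma::join_cols even without the namespace qualifier. A minimal standalone sketch (variable names here are illustrative, not from the PR):

```cpp
#include <armadillo>

int main()
{
  arma::mat mean(4, 2, arma::fill::randu);
  arma::mat preStdDev(4, 2, arma::fill::randu);

  // Unqualified: still compiles, because argument-dependent lookup finds
  // arma::join_cols through the arma::mat arguments.
  arma::mat a = join_cols(mean, preStdDev);

  // Explicitly qualified, as suggested in the review.
  arma::mat b = arma::join_cols(mean, preStdDev);

  return (a.n_rows == b.n_rows) ? 0 : 1;
}
```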
< sumedhghaisas> let's start with Backward
< Atharva> Yeah, will you be free in like an hour? I will take a shower and have lunch and get back to this.
< sumedhghaisas> I have to go out for lunch :( and attend a meeting. Is 5 IST good for you?
< Atharva> Also, as Ryan mentioned yesterday, we can use the GaussianDistribution class for univariate as well. The covariance will just be 1 x 1.
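On the GaussianDistribution point above, a rough sketch of the univariate usage, assuming the mlpack::distribution::GaussianDistribution interface from mlpack/core/dists (the exact usage in the VAE code is not shown here):

```cpp
#include <mlpack/core.hpp>
#include <mlpack/core/dists/gaussian_distribution.hpp>

int main()
{
  // Univariate case expressed with the multivariate class:
  // a 1-dimensional mean and a 1 x 1 covariance.
  arma::vec mean = { 0.0 };
  arma::mat covariance = { { 1.0 } };

  mlpack::distribution::GaussianDistribution dist(mean, covariance);

  // Draw a sample and evaluate the density at that point.
  arma::vec sample = dist.Random();
  const double density = dist.Probability(sample);

  return (density > 0.0) ? 0 : 1;
}
```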
< Atharva> sumedhghaisas: Sure! Till then I will check the math and also the failing builds
< sumedhghaisas> Atharva: Sounds good!
< sumedhghaisas> Atharva: My plans got changed a bit so let me know when you are free and we can have a chat
sulan_ has joined #mlpack
sumedhghaisas has quit [Ping timeout: 240 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Ping timeout: 260 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
< jenkins-mlpack> Project docker mlpack nightly build build #340: SUCCESS in 2 hr 36 min: http://masterblaster.mlpack.org/job/docker%20mlpack%20nightly%20build/340/
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
sulan_ has quit [Quit: Leaving]
< Atharva> sumedhghaisas: You there?
< sumedhghaisas> Atharva: yup :)
< sumedhghaisas> you wanna go over the math one time?
< Atharva> I pushed a commit just now, it should fix the build errors. They were mostly due to the fact that I was still using the old name "sampling" in a lot of other places.
< sumedhghaisas> yes. I also observed that in the PR :)
< Atharva> Yeah, let's go through the Backward function first
< sumedhghaisas> From next time, make sure you do a clean build before sending the PR; that should fix all these problems
< sumedhghaisas> I think some make configuration is causing this problem
< sumedhghaisas> better to do a clean build
< sumedhghaisas> Sure. Backward has 2 gradients, right? Mean and stddev
< Atharva> Yeah, I always do that, I forgot somehow last night, maybe I was sleepy :p
< Atharva> Yeah
< sumedhghaisas> I think the backward for mean in the code is correct, which is gy
< sumedhghaisas> I am a little confused about the equation: (input - mean) / gaussianSample
< Atharva> There is a good chance I got that wrong, maybe you are correct.
< Atharva> What do you think it should be?
< sumedhghaisas> so out = mean + sample * stddev
< sumedhghaisas> so g_stddev = gy / sample
< sumedhghaisas> now you take this error backwards through the Softplus layer and that should give you the error of the second vector
< sumedhghaisas> does that make sense?
< Atharva> Okay
< Atharva> I will correct it.
< sumedhghaisas> also I think you don't have to store mean and stddev... as gradient gets input as a parameter
< sumedhghaisas> Another thing is, the current computation only passes the error of the decoder to the encoder
< Atharva> Doesn't the Backward function's input parameter come from the layer after that?
< sumedhghaisas> the KL error has to be added to it
< Atharva> Yes, I will do it
< sumedhghaisas> Atharva: ahh I see. I always get confused by that name. So you mean to say the 'input' parameter to 'Backward' is actually the layer output?
< Atharva> I think yes, because that's how the BackwardVisitor in FFN works
< Atharva> So I don't think it will get the mean and stddev unless we store them
< sumedhghaisas> okay. Could you make sure that's the case? I vaguely recall this being the case, but just want to make sure
< sumedhghaisas> in that case, we need to store mean and stddev
< Atharva> Yeah I am pretty sure, also it makes sense because the backward function needs the output of the layer and not the input
< Atharva> I will still check that logic
< sumedhghaisas> I think you are right
< Atharva> Yeah, in the Backward function of ffn_impl.hpp, it can be verified
< sumedhghaisas> Let me know when you make these changes, I will go over the PR again
< Atharva> What about the klForward and klBackward, is that correct?
< sumedhghaisas> ahh... I haven't gone over that. Let me check those functions as well.
< sumedhghaisas> I would also recommend checking the gradients beforehand by writing a numerical gradient check test
< sumedhghaisas> you can check how it's done for other layers
< sumedhghaisas> we need to replicate the same
< sumedhghaisas> this way you can figure out which gradients are wrong
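The numerical gradient check mentioned here is just a central-difference comparison; a self-contained sketch of the idea (the actual mlpack tests wrap the same check in a helper shared by the other ann layer tests, with different names):

```cpp
#include <armadillo>
#include <algorithm>
#include <functional>

// Compare an analytic gradient against a central-difference estimate for a
// scalar function f(params).  A small relative error (e.g. < 1e-4) suggests
// the analytic gradient is correct.
double GradientCheck(const std::function<double(const arma::vec&)>& f,
                     const std::function<arma::vec(const arma::vec&)>& gradient,
                     const arma::vec& params,
                     const double eps = 1e-5)
{
  const arma::vec analytic = gradient(params);
  arma::vec numerical(params.n_elem);

  for (size_t i = 0; i < params.n_elem; ++i)
  {
    arma::vec plus = params, minus = params;
    plus(i) += eps;
    minus(i) -= eps;
    numerical(i) = (f(plus) - f(minus)) / (2.0 * eps);
  }

  // Relative error between the two gradients.
  return arma::norm(analytic - numerical) /
      std::max(arma::norm(analytic) + arma::norm(numerical), 1e-12);
}
```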
< Atharva> Yeah, that is one test; another is just a simple test to see if the output is different every time.
< Atharva> What else can we check?
< sumedhghaisas> Atharva: Also, have you checked how the Deriv function of SoftPlus works? I think it expects the input rather than the error
< Atharva> Yeah, so won't (input - mean) / gaussian sample be the input for a backward softplus?
< sumedhghaisas> Atharva: Sorry got caught up in some work
< sumedhghaisas> I think you have to do a submit to get the actual input to the Softplus
< sumedhghaisas> *submat :)
< sumedhghaisas> ahh wait... but you are storing the stddev
< Atharva> But the input to the Backward function will actually be a latentSize-sized matrix
< sumedhghaisas> hmm... let me see. Let's do a size analysis.
< sumedhghaisas> so the error coming from above will have a size of B*L
< sumedhghaisas> where B is batch and L is latent size
< Atharva> is it L*B?
< Atharva> I am not sure
< sumedhghaisas> ahh yes... you are right. Sorry for that.
< sumedhghaisas> it's column major.
< sumedhghaisas> so L*B
< Atharva> Yes
< sumedhghaisas> error of mean will be L*B
< sumedhghaisas> error of stddev will be gy / sample... that will be L*B
< sumedhghaisas> Softplus Deriv will be L*B if we pass the submat...
< sumedhghaisas> so the multiplication will give the error of input which will be L*B
< sumedhghaisas> this is only the error of second vector of input
< sumedhghaisas> the first vector error will come directly from the error of mean
< sumedhghaisas> so the concat will give 2L * B
< sumedhghaisas> does that make sense?
< sumedhghaisas> ahh by multiplication I mean multiplying the stddev error by Softplus deriv
< sumedhghaisas> that should give the error at the Softplus's input
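Putting the size analysis together, a rough sketch of the Backward computation being described; it assumes the reparameterization out = mean + gaussianSample % softplus(preStdDev) with the noise and pre-activation stored during Forward(), and uses the standard elementwise form gy % gaussianSample for the stddev error (all names are illustrative, not necessarily those in the PR):

```cpp
#include <armadillo>

// Error of the 2L x B layer input [mean; preStdDev], given the L x B error
// gy coming from the layer above.
arma::mat SamplingBackward(const arma::mat& gy,             // L x B
                           const arma::mat& gaussianSample, // L x B noise from Forward()
                           const arma::mat& preStdDev)      // L x B pre-SoftPlus values
{
  // Error of the mean half is just the incoming error.
  const arma::mat meanError = gy;                                  // L x B

  // Error of stddev: elementwise product with the stored noise.
  const arma::mat stdDevError = gy % gaussianSample;               // L x B

  // Back through SoftPlus; its derivative is the logistic sigmoid.
  const arma::mat preStdDevError =
      stdDevError % (1.0 / (1.0 + arma::exp(-preStdDev)));         // L x B

  // Stack the two halves: the result is 2L x B.
  return arma::join_cols(meanError, preStdDevError);
}
```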
sulan_ has joined #mlpack
< Atharva> So, are you saying that the input param of the Backward function will be 2L * B?
< sumedhghaisas> umm... no no. The input params won't be used here I think, I am saying g will be 2L*B
< sumedhghaisas> maybe we are not on the same page :P
< Atharva> Yeah, g will be 2L * B
< sumedhghaisas> okay let me suggest changes on the PR
< sumedhghaisas> maybe that will make things clear
< Atharva> sorry, I had something to do
< Atharva> Okay, yeah we can discuss it there
< sumedhghaisas> Atharva: Just added the changes I think should solve the problem. Let me know what you think.
< Atharva> Yes, thank you, I will leave comments there
vivekp has quit [Ping timeout: 268 seconds]
vivekp has joined #mlpack
< sumedhghaisas> Atharva: Ahh wait. Input is L*B, it's the output of the layer. we really need to change the name. That was causing the confusion in my head. I am so sorry.
< Atharva> Yes, I used to get confused with that too
< sumedhghaisas> Although, I would still prefer storing stddev than recomputing it again
< Atharva> Yes, that's better, I was in two minds about whether to store it or not and hence those mistakes.
< sumedhghaisas> I don't think there were mistakes though. That was my stupidity around 'input'. Sorry for that.
< sumedhghaisas> Although we do need to add error from KL to the error computed in Backward
< Atharva> Yes
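For reference, the closed-form KL term between N(mean, stddev^2) and N(0, I) that klForward/klBackward compute in a standard VAE, together with its gradients; this is a sketch of the math only, and the actual signatures in the PR may differ:

```cpp
#include <armadillo>

// KL( N(mean, diag(stddev^2)) || N(0, I) ), summed over the latent
// dimensions and the batch.
double KlForward(const arma::mat& mean, const arma::mat& stddev)
{
  return -0.5 * arma::accu(1.0 + 2.0 * arma::log(stddev)
      - arma::square(mean) - arma::square(stddev));
}

// Gradients of the KL term; these are the errors that need to be added to
// the error computed in Backward().
void KlBackward(const arma::mat& mean, const arma::mat& stddev,
                arma::mat& meanError, arma::mat& stdDevError)
{
  meanError = mean;                     // dKL/dmean
  stdDevError = stddev - 1.0 / stddev;  // dKL/dstddev
}
```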
sumedhghaisas has quit [Ping timeout: 245 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 240 seconds]
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
sulan_ has quit [Remote host closed the connection]
vivekp has quit [Ping timeout: 260 seconds]
sulan_ has joined #mlpack
vivekp has joined #mlpack
sumedhghaisas has joined #mlpack
vivekp has quit [Ping timeout: 245 seconds]
sumedhghaisas2 has quit [Ping timeout: 276 seconds]
vivekp has joined #mlpack
sumedhghaisas2 has joined #mlpack
vivekp has quit [Ping timeout: 276 seconds]
vivekp has joined #mlpack
sumedhghaisas has quit [Ping timeout: 276 seconds]
sumedhghaisas3 has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 260 seconds]
vivekp has quit [Ping timeout: 256 seconds]
sumedhghaisas3 has quit [Ping timeout: 240 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 256 seconds]
sumedhghaisas2 has quit [Ping timeout: 264 seconds]
vivekp has joined #mlpack
sumedhghaisas has joined #mlpack
vivekp has quit [Ping timeout: 260 seconds]
sumedhghaisas has quit [Ping timeout: 265 seconds]
sumedhghaisas2 has joined #mlpack
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 265 seconds]
sumedhghaisas2 has quit [Ping timeout: 265 seconds]
sumedhghaisas has joined #mlpack
vivekp has joined #mlpack
sumedhghaisas has quit [Ping timeout: 256 seconds]
sumedhghaisas has joined #mlpack
vivekp has quit [Ping timeout: 256 seconds]
sumedhghaisas has quit [Ping timeout: 276 seconds]
sumedhghaisas has joined #mlpack
< rcurtin> after I merge the documentation fixes on Friday, I'd be happy to release mlpack 3.0.2 because it changes a lot of the documentation (especially the auto-generated Python binding documentation)
< rcurtin> does anyone have anything they'd like me to wait for to include in the release? (or anything I should explicitly not include that was merged to master?)
< rcurtin> I'd be happy to wait a little bit to do the release also, I am in no particular hurry
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 245 seconds]
sumedhghaisas2 has joined #mlpack
vivekp has joined #mlpack
sumedhghaisas has quit [Ping timeout: 265 seconds]
< zoq> rcurtin: If we merge the CF modifications, I would exclude that part for now, we could also delay the merge; whatever is easier for you.
vivekp has quit [Ping timeout: 264 seconds]
< Atharva> rcurtin: I think the ann output/input PR would be a good inclusion in the next release. I might just need this weekend. Is that okay? Or should we include it in the next release?
vivekp has joined #mlpack
< zoq> We can make another release after GSoC, that includes all changes.
< zoq> Unless you like to include it now?
ImQ009 has joined #mlpack
< rcurtin> Atharva: zoq: I think it's fine to wait. Also, I think that would be a significant enough change that we would have to call it mlpack 3.1.0
vivekp has quit [Ping timeout: 264 seconds]
< rcurtin> if the CF modifications get merged, I'll leave those out of the release, that's no problem at all
< rcurtin> I'll see if I have time to do the release on Friday, but I may not be able to---I have a paper submission deadline on Friday so I don't know how frantic I will be :)
< rcurtin> the paper is basically ready but I am still waiting for some comments from friends that I might have to work in. hopefully nobody will suggest too major of a change :)
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 264 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 264 seconds]
travis-ci has joined #mlpack
< travis-ci> manish7294/mlpack#15 (lmnn - 4ac77dc : Manish): The build was broken.
travis-ci has left #mlpack []
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 268 seconds]
vivekp has joined #mlpack
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 265 seconds]
vivekp has quit [Ping timeout: 268 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 256 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 256 seconds]
< Atharva> zoq: That PR is independent of my gsoc project, I will try to complete it before Friday so the release does not get delayed. I will let you know by Thursday if I feel it will take more time; in that case it's better to include it in the next release.
vivekp has joined #mlpack
< zoq> Atharva: okay, sounds good
< rcurtin> I'd prefer to release a 3.0.2 version without that change, if only so that the JOSS paper about 'mlpack 3' is still closer to accurate about the version number being new
< rcurtin> but I think it would be no problem to release 3.1.0 shortly after that with the input/output size change
< rcurtin> let me know if you disagree, my opinion is not particularly strong here
< ShikharJ> rcurtin: Since the support for batch sizes would probably not be there before Friday, I think the work for GAN should also be excluded if it gets merged before the weekend.
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 240 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 240 seconds]
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
< rcurtin> ShikharJ: that sounds reasonable to me
sumedhghaisas has quit [Ping timeout: 265 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 260 seconds]
< Atharva> rcurtin: Yeah I think we should go with that. We can include it in 3.1.0
sulan_ has quit [Quit: Leaving]
ImQ009_ has joined #mlpack
ImQ009 has quit [Ping timeout: 260 seconds]
ImQ009_ has quit [Read error: Connection reset by peer]
witness_ has joined #mlpack
vivekp has quit [Ping timeout: 276 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 260 seconds]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 276 seconds]
sumedhghaisas2 has quit [Ping timeout: 264 seconds]