ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
xiaohong has quit [Read error: Connection timed out]
xiaohong has joined #mlpack
Suryo has joined #mlpack
< Suryo> rcurtin, zoq (and everyone else): There was a project on making mlpack smaller in size so that it can run on the Raspberry Pi, right? No one has taken up that project yet. But has anyone actually tried mlpack and ensmallen on a Raspberry Pi?
< Suryo> I'm just curious.
Suryo has quit [Remote host closed the connection]
< jenkins-mlpack2> Project docker mlpack nightly build build #387: STILL UNSTABLE in 3 hr 43 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/387/
sakshamB has joined #mlpack
ShikharJ has quit [Ping timeout: 248 seconds]
ShikharJ has joined #mlpack
< zoq> Suryo: I built mlpack on the RPI 2 B some time ago; as long as you have enough SWAP configured the build should be fine.
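For reference, on Raspbian the swap zoq mentions is usually raised through dphys-swapfile; a hedged sketch of the procedure (the 2048 MB figure is an illustrative choice, not a tested requirement for building mlpack):

```shell
# Increase swap before building mlpack on a Raspberry Pi (Raspbian).
# Set the configured swap size in MB (2048 is an illustrative guess):
sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=2048/' /etc/dphys-swapfile

# Recreate and enable the swap file with the new size.
sudo dphys-swapfile setup
sudo dphys-swapfile swapon

# Confirm the new swap is active.
free -h
```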
KimSangYeon-DGU has joined #mlpack
sumedhghaisas has joined #mlpack
< sumedhghaisas> KimSangYeon-DGU: Hey Kim :)
< KimSangYeon-DGU> Hi Sumedh!
< KimSangYeon-DGU> I just tested QGMM setting lambda to a trainable variable
< KimSangYeon-DGU> I'm running it
< sumedhghaisas> ummm okay. But I think lambda should be a constant as far as the Lagrangian is concerned
< KimSangYeon-DGU> Yeah
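A minimal sketch of the point being made: in a penalty/Lagrangian-style objective, lambda multiplies the constraint term and is held fixed during each optimization run rather than trained (all names here are illustrative, not the actual QGMM code):

```python
def penalized_nll(nll, constraint_violation, lam):
    """Penalty-method objective: NLL plus a fixed-lambda constraint term.

    lam is a constant hyperparameter for the run; making it trainable
    would let the optimizer shrink it and effectively ignore the constraint.
    """
    return nll + lam * constraint_violation ** 2

# A larger (fixed) lambda puts more force on satisfying the constraint.
low = penalized_nll(10.0, 0.5, lam=1.0)     # 10.25
high = penalized_nll(10.0, 0.5, lam=100.0)  # 35.0
```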
< sumedhghaisas> We can check again
< sumedhghaisas> I am just downloading the results folder you sent
< sumedhghaisas> actually shall we create a drive folder to upload all results?
< KimSangYeon-DGU> Yes
< KimSangYeon-DGU> I'll make it
< sumedhghaisas> Cool.:)
< sumedhghaisas> okay opening
< sumedhghaisas> Niiice
< KimSangYeon-DGU> Thanks
< sumedhghaisas> I opened NLL with constraint T1
< sumedhghaisas> those are some good results
< KimSangYeon-DGU> Yeah, it is a good result
< sumedhghaisas> Just a few little questions
< KimSangYeon-DGU> Yeah
< sumedhghaisas> what are t1 and t2 exactly?
< KimSangYeon-DGU> Their means are a bit different
< sumedhghaisas> initial means?
< KimSangYeon-DGU> yeah
< sumedhghaisas> great.
< sumedhghaisas> thats a good experiment actually
< sumedhghaisas> did you notice the results in t3?
< KimSangYeon-DGU> Yeah, it is not good
< sumedhghaisas> when the clusters are merging... we can see that the constraint is increasing very high
< KimSangYeon-DGU> Right
< sumedhghaisas> no actually I would say that is a good result
< sumedhghaisas> lets make a document on this experimentation
< KimSangYeon-DGU> Yes
< sumedhghaisas> what did we find when we ran different initial settings?
< KimSangYeon-DGU> Yeah
< sumedhghaisas> so bad results are associated with unconstrained optimization correct?
< KimSangYeon-DGU> Yeah
< sumedhghaisas> very interesting
< sumedhghaisas> did you see T5?
< sumedhghaisas> there one of the alphas has gone down to zero
< sumedhghaisas> fascinating
< sumedhghaisas> technically that will decrease the NLL
< sumedhghaisas> haha
< sumedhghaisas> model is very clever
< KimSangYeon-DGU> Haha
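The effect being pointed out can be reproduced with a toy Gaussian mixture: when one component sits far from all the data, pushing its weight (alpha) to zero strictly lowers the NLL, because the useless component's weight only discounts the useful one (a hypothetical illustration, not the QGMM code):

```python
import numpy as np

def normal_pdf(x, mu, sigma=1.0):
    # Density of a 1-D Gaussian evaluated elementwise on x.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def mixture_nll(x, alphas, mus):
    # Negative log-likelihood of data x under a 1-D Gaussian mixture.
    density = sum(a * normal_pdf(x, mu) for a, mu in zip(alphas, mus))
    return -np.log(density).sum()

x = np.array([0.0, 0.1, -0.1])   # all data near the first component
mus = [0.0, 50.0]                # second component is far from the data

nll_balanced = mixture_nll(x, [0.5, 0.5], mus)
nll_collapsed = mixture_nll(x, [1.0, 0.0], mus)
# The far-away component contributes ~0 density, so setting its alpha
# to zero removes the 0.5 discount on the useful component and the
# NLL goes down -- the "clever" behaviour of the model.
assert nll_collapsed < nll_balanced
```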
< sumedhghaisas> very good results nice work :)
< KimSangYeon-DGU> The initial normalized Gaussian of the blue cluster is almost zero
< KimSangYeon-DGU> Thanks :)
< sumedhghaisas> in which experiment? t5?
< KimSangYeon-DGU> So I guess the likelihood of it is very low
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> So it seems to be vanishing
< sumedhghaisas> yeah ... so when the initial cluster likelihood is very low we have problems correct?
< KimSangYeon-DGU> Yeah
< sumedhghaisas> interesting
< KimSangYeon-DGU> Definitely
< sumedhghaisas> very good observations
< sumedhghaisas> I suggest we document all these observations very carefully with results that can be reproduced
< KimSangYeon-DGU> Ah okay
< KimSangYeon-DGU> I'll do that
< sumedhghaisas> I mean what were the initial means and what happened
< sumedhghaisas> that will be amazing :)
< KimSangYeon-DGU> Would it be a good idea to write all the research in one document?
< KimSangYeon-DGU> or to divide it into several documents?
< sumedhghaisas> with these amazing graphs added to the document and maybe link to the video
< KimSangYeon-DGU> Okay
< sumedhghaisas> I would recommend creating 1 document for 1 experiment
< KimSangYeon-DGU> Ah yes
< sumedhghaisas> I mean that's the methodology I follow
< sumedhghaisas> you are free to choose whatever works for you :)
< KimSangYeon-DGU> I think your idea is good
< sumedhghaisas> So the idea is that each experimentation should have an aim
< KimSangYeon-DGU> Yeah
< sumedhghaisas> for example in this the aim was to understand instability
< sumedhghaisas> and then reproducible results and then finally conclusions
< sumedhghaisas> this way it's easier to turn this into a paper later
< KimSangYeon-DGU> Yes
< sumedhghaisas> okay... so far what we have is this
< KimSangYeon-DGU> Okay
< KimSangYeon-DGU> For the next research, I plan to make a dataset with mlpack's GMM and observe the QGMM accuracy. Is that reasonable?
< sumedhghaisas> 1) instability is caused when the constraint is not satisfied
< sumedhghaisas> 2) and when one of the alphas goes to zero
< sumedhghaisas> Sure that sounds like a good plan.
< KimSangYeon-DGU> Thanks, I'll do that
< sumedhghaisas> Although I would be careful in measuring the accuracy directly
< KimSangYeon-DGU> Right
< sumedhghaisas> cause GMM and QGMM have their own problems
< sumedhghaisas> we need to carefully address them for fair comparison
< KimSangYeon-DGU> Agreed
< sumedhghaisas> I would recommend doing this same experimentation for GMM and compare
< KimSangYeon-DGU> Ah, great
< sumedhghaisas> I mean do the same initialization and see what happens
< KimSangYeon-DGU> That sounds good
< sumedhghaisas> And add it to the same document
< KimSangYeon-DGU> Good!
< KimSangYeon-DGU> We can also do research on the impact of lambda
< sumedhghaisas> precisely...
< KimSangYeon-DGU> *can do
< sumedhghaisas> I was just going to say that
< KimSangYeon-DGU> Yeah
< sumedhghaisas> so in t3 the constraint is not satisfied
< sumedhghaisas> one solution for that is to put enough force on the optimizer
< sumedhghaisas> that is solved by increasing the value of lambda
< KimSangYeon-DGU> Right
< sumedhghaisas> so take the case of t3 and setup another experimentation
< KimSangYeon-DGU> Okay
< sumedhghaisas> same theory ... aim, results, and conclusions ...
< KimSangYeon-DGU> Thanks for reminding me
< sumedhghaisas> check with the same initialization used in t3 and change lambda value to see the effect
< KimSangYeon-DGU> Yeah
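The proposed lambda experiment can be previewed on a toy problem: minimizing (x - 2)^2 subject to x = 0 via a penalty term lambda * x^2 has the closed-form minimizer x* = 2 / (1 + lambda), so the constraint violation shrinks monotonically as lambda grows (a toy sketch, not the t3 setup itself):

```python
def penalty_minimizer(lam):
    # argmin over x of (x - 2)**2 + lam * x**2, with constraint x == 0.
    # Setting the derivative 2*(x - 2) + 2*lam*x to zero gives:
    return 2.0 / (1.0 + lam)

# Sweep lambda and record how far the solution sits from the constraint.
violations = [abs(penalty_minimizer(lam)) for lam in (0.1, 1.0, 10.0, 100.0)]
# Each step up in lambda forces the solution closer to satisfying x == 0.
assert violations == sorted(violations, reverse=True)
```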
< sumedhghaisas> amazing... we already have 2 experimentations
< sumedhghaisas> :)
< KimSangYeon-DGU> Thanks to you, I learned a lot :)
< KimSangYeon-DGU> I'll be careful in doing the research
< sumedhghaisas> if you wanna go over the top... I usually add steps ahead in each of my experimentations when I present the results to my collaborators
< KimSangYeon-DGU> Thanks for the tip
< sumedhghaisas> steps ahead basically reflects on the conclusions and provides possible further experimentation in that direction... maybe even a link to the document that provides the further experimentation. This way it becomes a chain
< sumedhghaisas> easier to track
< sakshamB> ShikharJ: hi I am here
< KimSangYeon-DGU> Oh really great
< KimSangYeon-DGU> So, our next step is to check the QGMM accuracy and change lambda in t3, right?
< sumedhghaisas> sakshamB: Seems to me that we always have our meetings together :) what a coincidence
< sumedhghaisas> Yes. Let's document everything before we go ahead. We are making a lot of progress; let's make sure we don't forget any information that we might need in the future
< KimSangYeon-DGU> I'll keep that in mind
< sumedhghaisas> Great. Also there is another folder... 'NLL'. Which one is that?
< KimSangYeon-DGU> it is just NLL
< KimSangYeon-DGU> without approximation constraint
< sumedhghaisas> ahh ... t2 in that is also interesting
< sumedhghaisas> alphas are increasing constantly
< KimSangYeon-DGU> Right
< sumedhghaisas> seems to me that there should be some penalty on the alphas
< sumedhghaisas> but not able to figure out what
< KimSangYeon-DGU> Yeah
< sumedhghaisas> something else to keep in mind for future experimentation
< ShikharJ> sakshamB: Hey, I was hoping to discuss on the padding PR, is it still in progress?
< sakshamB> ShikharJ: yes just opened
< KimSangYeon-DGU> In t3, if we have a proper lambda, I think the NLL with constraint would get better.
< ShikharJ> sakshamB: Ah, I see you opened one, great! We'll discuss the issues over there.
< ShikharJ> sakshamB: Also, I was thinking how far along is the regularizer PR?
< ShikharJ> Is it complete?
< sumedhghaisas> KimSangYeon-DGU: I agree
< KimSangYeon-DGU> sumedhghaisas: :)
< sakshamB> ShikharJ: the implementation of regularizers is complete. I have only added the regularizers to linear layer so far
< ShikharJ> sakshamB: Hmm, where else do you see a use for them?
< ShikharJ> Might be worthwhile to explore that as well?
< sakshamB> ShikharJ: basically it is complete but I need to add them to LinearNoBias and convolutional layers, as in the Keras API
< ShikharJ> sakshamB: I see. Is there any place where you're stuck regarding that?
< ShikharJ> sakshamB: After MiniBatch (which is complete), I think I can review regularizer this week.
< sakshamB> ShikharJ: maybe a review on the current regularizers PR would be helpful. It wouldn't take me much time to add it to all the other layers.
< sakshamB> ShikharJ: also virtual batch norm is complete from my side
< ShikharJ> sakshamB: You got it, anything else I can help you with?
< sakshamB> no nothing else right now :)
< ShikharJ> sakshamB: Ah, VirtualBatch was also mostly complete, I just wanted to be sure of Backward and Gradient routines :)
< ShikharJ> sakshamB: Okay, let's wrap up here then, we both have tasks planned out for the week. Have a good one :)
< sakshamB> ShikharJ: alright thanks :)
sumedhghaisas has quit [Ping timeout: 260 seconds]
< sreenik[m]> I see that many activation functions have a Deriv function that calculates its derivative but there is none for log_softmax, is there something I am missing?
< zoq> sreenik[m]: In this case, it's directly integrated into the gradient step (note the log_softmax is meant to be used in combination with the NLL loss).
< zoq> sreenik[m]: If you like you can implement the step as part of the Backward function as well.
< sreenik[m]> Oh I get it. Then for the softmax layer what is preferable, writing it in the Backward function or creating a gradient or Deriv function?
< zoq> sreenik[m]: Both are fine.
< sreenik[m]> Ok. Thanks :)
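zoq's point can be checked numerically: paired with the NLL loss, the gradient of the combined loss with respect to the logits collapses to softmax(z) minus the one-hot target, so no separate Deriv for log_softmax is needed (a NumPy sketch, independent of the mlpack implementation):

```python
import numpy as np

def log_softmax(z):
    z = z - z.max()  # shift for numerical stability; result is unchanged
    return z - np.log(np.exp(z).sum())

def nll_log_softmax_grad(z, y):
    # Gradient of the combined loss -log_softmax(z)[y] w.r.t. the logits z:
    # softmax(z) - one_hot(y).
    g = np.exp(log_softmax(z))
    g[y] -= 1.0
    return g

# Finite-difference check of the combined gradient.
z, y, eps = np.array([0.2, -1.0, 0.5]), 2, 1e-6
g = nll_log_softmax_grad(z, y)
for i in range(z.size):
    zp, zm = z.copy(), z.copy()
    zp[i] += eps
    zm[i] -= eps
    numeric = (-log_softmax(zp)[y] + log_softmax(zm)[y]) / (2 * eps)
    assert abs(numeric - g[i]) < 1e-5
```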