ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
akfluffy has joined #mlpack
< akfluffy> can mlpack run on small devices such as the raspberry pi?
< akfluffy> did someone take up the GSoC idea to demonstrate mlpack running on constrained devices?
akfluffy has quit [Ping timeout: 245 seconds]
< rcurtin> akfluffy: unfortunately no, nobody is working on that project this summer
< rcurtin> however, it should be possible to make mlpack run on those devices... there may have to be some build tweaking though to get the programs small enough
< jenkins-mlpack2> Project docker mlpack weekly build build #53: STILL UNSTABLE in 6 hr 11 min: http://ci.mlpack.org/job/docker%20mlpack%20weekly%20build/53/
vpal has joined #mlpack
vivekp has quit [Ping timeout: 268 seconds]
vpal is now known as vivekp
KimSangYeon-DGU has joined #mlpack
< jenkins-mlpack2> Project docker mlpack nightly build build #363: STILL UNSTABLE in 3 hr 26 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/363/
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
KimSangYeon-DGU has quit [Ping timeout: 256 seconds]
KimSangYeon-DGU has joined #mlpack
< saksham189> Toshal: were you able to produce any results from the GAN?
favre49 has joined #mlpack
< favre49> zoq: I found some more issues with NEAT, but still haven't gotten much of an increase in performance. There was an issue with the updating of the depths of the nodes. In the case of deletion, though, I'm still thinking of a way to do it without completely rebuilding the depth info.
< favre49> Is there any way I could check whether speciation works right?
< Toshal> saksham189: Which GAN are you talking about?
< Toshal> Are you talking about testing Dual Optimizer PR or something else?
< saksham189> Toshal: no, not the Dual Optimizer, but the fix that you made to the Gradient. Have you been able to produce any quality images from the GAN since that change?
< zoq> favre49: Don't think there is an easy way; what I do is step through the code and check if the result is reasonable.
< Toshal> saksham189: Frankly speaking, I stopped testing it quite a long time ago, as I thought everything would get replaced in the Dual Optimizer PR. But yes, when I ran it for two days I saw better results after making the change than before.
< Toshal> As I remember there is one more change, so I will push it soon. Thanks for reminding me of that.
< Toshal> But why are you asking?
< Toshal> Are you testing it?
Jeffin143 has joined #mlpack
< Jeffin143> zoq: can you take a look at PR #1895? I totally missed it.
< Jeffin143> also lozhnikov: any further comments on 1927 or 1814? Also, please let me know when you are free.
sumedhghaisas has joined #mlpack
< Jeffin143> I am not sure when to take input for tokenization from the binding
< sumedhghaisas> KimSangYeon-DGU: Ready when you are :)
< KimSangYeon-DGU> I'm ready :)
< KimSangYeon-DGU> At first, I worked on implementing QGMM's EM algorithm
< KimSangYeon-DGU> However, I got stuck with the verification.
< KimSangYeon-DGU> So, I decided to implement GMM classes first
< sumedhghaisas> okay
< KimSangYeon-DGU> for comparison
< KimSangYeon-DGU> Yeah.
< sumedhghaisas> what exactly did you get stuck with?
< KimSangYeon-DGU> The EM algorithm requires a positive definite matrix for the Cholesky decomposition
< KimSangYeon-DGU> and a clustering algorithm for the initialization
< KimSangYeon-DGU> So, I need to implement those algorithms first
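As a rough illustration of the positive-definiteness requirement mentioned above, here is a minimal NumPy sketch (not the actual QGMM/mlpack code; the jitter trick and all names are illustrative):

    import numpy as np

    def safe_cholesky(cov, jitter=1e-6, max_tries=5):
        # Try the Cholesky factorization; if the matrix is not positive
        # definite, add a small diagonal jitter and retry.
        for _ in range(max_tries):
            try:
                return np.linalg.cholesky(cov)
            except np.linalg.LinAlgError:
                cov = cov + jitter * np.eye(cov.shape[0])
                jitter *= 10
        raise np.linalg.LinAlgError("matrix could not be made positive definite")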
< KimSangYeon-DGU> I wrote unit test code, and I checked that the code works correctly
< KimSangYeon-DGU> When that is finished, I'll implement QGMM.
< sumedhghaisas> I am not sure what you...
< sumedhghaisas> but actually I found the author's code
< sumedhghaisas> it doesn't work but
< sumedhghaisas> maybe something can be salvaged from it
< KimSangYeon-DGU> Oh, really? Can you let me know?
< sumedhghaisas> Sure. Let me find the link
< KimSangYeon-DGU> Oh thanks,
< KimSangYeon-DGU> Thanks!!
< KimSangYeon-DGU> Sumedh, I have a question
< sumedhghaisas> Sure go ahead
< KimSangYeon-DGU> I'm not sure about equation (15) in the paper, o_{i}
< KimSangYeon-DGU> Is the denominator just a normalizing constant?
< sumedhghaisas> It is but I have another problem with it
< KimSangYeon-DGU> What is the problem?
< sumedhghaisas> it should be integrated over the whole space and summed over i
< KimSangYeon-DGU> Yeah
< sumedhghaisas> but I think I have an idea which we can try rather than doing what they did
< sumedhghaisas> Could you try that?
< KimSangYeon-DGU> Yeah, go ahead please I'll try
< sumedhghaisas> So are you familiar with EM?
< KimSangYeon-DGU> Yeah
< sumedhghaisas> So look at Equation 16 in the paper
< KimSangYeon-DGU> Yeah
< sumedhghaisas> rather than differentiating it we can optimize it using non-convex optimization
< KimSangYeon-DGU> Okay
< sumedhghaisas> but we have a constraint
< sumedhghaisas> the constraint cannot be evaluated directly
< sumedhghaisas> but maybe we can approximate it
< sumedhghaisas> if we can what we can do is
Jeffin143 has quit [Ping timeout: 256 seconds]
< sumedhghaisas> minimize NLL + lambda * approx_constraint
< sumedhghaisas> am I being clear?
< KimSangYeon-DGU> Hmm, I should take some time to understand.
< sumedhghaisas> this follows from the Lagrangian
< sumedhghaisas> okay maybe ... let me go slow
< sumedhghaisas> so equation 16 gives you the objective to maximize
< sumedhghaisas> right?
< KimSangYeon-DGU> Right
< sumedhghaisas> so we wanna minimize -equation 16
< sumedhghaisas> correct?
< KimSangYeon-DGU> Yeah
< sumedhghaisas> let's just call that NLL because it's easier to compare with other models that way
< sumedhghaisas> NLL stands for negative log likelihood
< sumedhghaisas> which -equation16 is
< KimSangYeon-DGU> Ah, Yeah
< sumedhghaisas> now while optimizing we have the constraint that the area under the distribution should be 1
< sumedhghaisas> now if we had a closed form for this we could do
< sumedhghaisas> NLL + lambda * constraint
< sumedhghaisas> lambda is basically a free parameter
< sumedhghaisas> you assign it a value and optimize NLL + lambda * constraint
< sumedhghaisas> and that guarantees that the constraint is bounded
< sumedhghaisas> this is the statement of Lagrangian optimization
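Written out, the objective being described is roughly the following (a sketch only; \theta denotes the mixture parameters, the sum runs over the data points, and the second term penalizes deviation of the total probability mass from 1):

    \mathcal{L}(\theta) = \underbrace{-\sum_{n} \log p(x_n \mid \theta)}_{\text{NLL}}
        \;+\; \lambda \left( \int p(x \mid \theta)\, dx - 1 \right)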
< sumedhghaisas> little more clear?
< KimSangYeon-DGU> Yeah, Thanks for clarification
< sumedhghaisas> welcome always
< sumedhghaisas> but the trouble is we don't have a closed-form solution for it
< sumedhghaisas> which we checked last week... correct?
< KimSangYeon-DGU> Yeah, but I checked the probability is 1
< KimSangYeon-DGU> You can check my blog
< sumedhghaisas> ummm... how?
< sumedhghaisas> ahh, but that's an approximate normalization, right?
< KimSangYeon-DGU> Wait a moment
< KimSangYeon-DGU> I used the integral function in python
< KimSangYeon-DGU> From -inf to +inf, the integral of probability is 1
< KimSangYeon-DGU> I emailed it to you.
< sumedhghaisas> ahh yes yes
< sumedhghaisas> but it's an approximate one
< sumedhghaisas> so the way these tools work is by doing a summation over a lot of values
< sumedhghaisas> what we need is some functional form of the normalization
< KimSangYeon-DGU> Ah, agreed
< sumedhghaisas> which we can then multiply with the Lagrangian
< KimSangYeon-DGU> I'll look into it
< sumedhghaisas> that's a real problem
< sumedhghaisas> now one idea is to use the current points available to us to find the normalization
< KimSangYeon-DGU> yeah
< sumedhghaisas> for example if we had 1000 points
< sumedhghaisas> we find the NLL over those thousand points based on the current values of the parameters
< KimSangYeon-DGU> yeah
< sumedhghaisas> and then find an approximate normalization using those 100 points
< KimSangYeon-DGU> Ahh, 100 points?
< sumedhghaisas> sorry 1000
< KimSangYeon-DGU> Yeah,
< sumedhghaisas> and then do gradient descent on those 1000 points
< sumedhghaisas> this may work
< KimSangYeon-DGU> I'll try
< sumedhghaisas> but here we have to understand that the approximate normalization we are using is the lower bound
< KimSangYeon-DGU> Agreed
< sumedhghaisas> so we are bounding the lower bound, in which case I am not exactly sure what the result is, but we can try
< sumedhghaisas> I will also look more into it
< sumedhghaisas> but for a POC of this
< sumedhghaisas> I would suggest implementing equation 16 using tensorflow
< sumedhghaisas> or pytorch
< sumedhghaisas> whatever you like that gives us derivatives
< sumedhghaisas> and add loss function as NLL + lambda * constraint
< KimSangYeon-DGU> I'll try it.
< sumedhghaisas> or else you will have to write derivatives of the loss by hand which may introduce more bugs
< sumedhghaisas> how's that sound?
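A minimal PyTorch sketch of the pattern being suggested, purely to show the NLL + lambda * constraint loss with autograd; unnorm_density is a stand-in for the real equation 16 term, the grid-based sum is the approximate normalization discussed above, and all names and values are illustrative:

    import torch

    # Stand-in for the unnormalized density of equation 16; the real QGMM
    # expression (with the interference term) would replace this.
    def unnorm_density(x, mu, log_sigma):
        return torch.exp(-0.5 * ((x - mu) / torch.exp(log_sigma)) ** 2)

    def loss_fn(mu, log_sigma, data, grid, lam=10.0):
        # NLL term over the observed points.
        nll = -torch.log(unnorm_density(data, mu, log_sigma) + 1e-12).mean()
        # Approximate normalization: Riemann sum over a grid of points.
        dx = grid[1] - grid[0]
        mass = (unnorm_density(grid, mu, log_sigma) * dx).sum()
        # Penalize deviation of the total probability mass from 1.
        return nll + lam * (mass - 1.0) ** 2

    mu = torch.tensor(0.0, requires_grad=True)
    log_sigma = torch.tensor(0.0, requires_grad=True)
    optimizer = torch.optim.Adam([mu, log_sigma], lr=0.05)

    data = torch.randn(1000)                    # fake data for the POC
    grid = torch.linspace(-10.0, 10.0, 2001)    # points used to approximate the integral

    for step in range(500):
        optimizer.zero_grad()
        loss = loss_fn(mu, log_sigma, data, grid)
        loss.backward()
        optimizer.step()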
< KimSangYeon-DGU> Can you give me some links for reference later? I think I should study it more...
< sumedhghaisas> have you used tensorflow or pytorch before?
< KimSangYeon-DGU> I think tensorflow or pytorch would be fine
< sumedhghaisas> ahh for study
< KimSangYeon-DGU> I used them in my internship.
< sumedhghaisas> I would recommend a quick YouTube study of EM and the Lagrangian; that should clear up the concepts I used here
< sumedhghaisas> and feel free to ask me anything
< KimSangYeon-DGU> Ahh, thanks
< KimSangYeon-DGU> Yeah,
< Toshal> ShikharJ: I am here just in case you want to know.
< KimSangYeon-DGU> Ahh, thanks, you are really helpful
< sumedhghaisas> if this works then great. Have you created fake data for training yet?
< sumedhghaisas> I suggest sticking with 2 clusters for simplicity
< KimSangYeon-DGU> Agreed
< sumedhghaisas> although if it works we don't need to be restricted to 2 clusters only
< KimSangYeon-DGU> I created the fake data using a Gaussian distribution random function
< sumedhghaisas> that's the best part about this idea
< sumedhghaisas> but bounding the lower bound is not very elegant, I presume; maybe we can come up with a better functional form of the constraints
< KimSangYeon-DGU> I think it is a somewhat tricky part...
< KimSangYeon-DGU> But I should try it
< KimSangYeon-DGU> for better implementation
< sumedhghaisas> indeed. If it wasn't tricky people would have done it already :P
< sumedhghaisas> that's how I always think about this
< KimSangYeon-DGU> :)
< KimSangYeon-DGU> I'll try it
< KimSangYeon-DGU> If I have a question, I'll email it
< sumedhghaisas> are you familiar with variational approximation?
< KimSangYeon-DGU> Ohh.. sorry.. I'm not familiar with...
< sumedhghaisas> maybe we can use it here but I am not sure
< ShikharJ> Toshal: Yeah, sorry for the late beginning.
< saksham189> ShikharJ: yes I am also waiting
< ShikharJ> saksham189: Sorry about that, we can begin now.
< sumedhghaisas> in variational approximation the normalization is bounded with Jensen's inequality
< saksham189> ShikharJ: no problem I was just informing
< KimSangYeon-DGU> Yeah
< sumedhghaisas> but we can worry about it later if you are not familiar.
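For reference, the Jensen's-inequality bound mentioned above: for any density q,

    \log \int p(x)\, dx \;=\; \log \int q(x)\, \frac{p(x)}{q(x)}\, dx \;\ge\; \int q(x) \log \frac{p(x)}{q(x)}\, dx,

so the log of the normalizer is lower-bounded by an expectation under q; this is the starting point of variational approximations.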
< KimSangYeon-DGU> I'll look into it.
< sumedhghaisas> let's focus on bounding the functional form with gradient descent for this week :)
< ShikharJ> saksham189: I wanted to talk regarding the latest issue that you opened.
< KimSangYeon-DGU> Yeah, I got it, you mean the NLL + lambda * constraint?
< sumedhghaisas> sure, if you get time... just learn about variational approximation ... it's super useful
< sumedhghaisas> yes
< KimSangYeon-DGU> Yeah, I'll do my best
< saksham189> ShikharJ: yes
< ShikharJ> Are you facing a failure in spite of the increased tolerance?
< KimSangYeon-DGU> Thanks for the meeting
< saksham189> ShikharJ: yes I ran the same test without modifying anything
< KimSangYeon-DGU> While doing it, if I have a question, I'll ping you.
< KimSangYeon-DGU> Is it okay?
< sumedhghaisas> :) nice to catch up, SangYeon. Maybe we can set up 2 meetings per week if you want?
< sumedhghaisas> if that is helpful
< sumedhghaisas> ping is also fine :)
< KimSangYeon-DGU> 2 meetings per week would be really nice
< sumedhghaisas> surely
< sumedhghaisas> lets do Tuesday and Friday?
< KimSangYeon-DGU> Tuesday and Friday
< KimSangYeon-DGU> Ohh
< KimSangYeon-DGU> Great :)
< sumedhghaisas> perfect :) great minds think alike
< sumedhghaisas> :P
< KimSangYeon-DGU> :)
< KimSangYeon-DGU> Thanks Sumedh.
< sumedhghaisas> See you on Tuesday
< KimSangYeon-DGU> Yeah :)
< ShikharJ> saksham189: Ryan opened an issue for that in https://github.com/mlpack/mlpack/issues/1661, since we introduced a linear layer in the Atrous Convolution Gradient Test, I suspect it might also be due to that.
Aryan_ has joined #mlpack
< saksham189> ShikharJ: hmm.. even without the linear layer the test keeps failing from time to time (I tried locally)
< saksham189> ShikharJ: but not as much with the linear layer
< ShikharJ> saksham189: Wait, so it fails less with the linear layer?
< saksham189> it fails more with the linear layer
< ShikharJ> saksham189: Okay, that's what I'd expect. Let me run a couple of variations on that and I'll see what I can find. You can also look at https://github.com/mlpack/mlpack/pull/1493 by Atharva.
< ShikharJ> It is supposed to improve our implementation of Transposed Convolutions. I'm hopeful that it would also reduce the failures with Transposed Convolutions.
< ShikharJ> In the meantime, I'd suggest you move the issue details to Ryan's issue, just so that we don't duplicate. If you want to take the week to investigate the Atrous Convolution, feel free to do so. That investigation is pending on me, but I suspect I'd hardly get any time in the next couple of months.
< saksham189> yes I'll take a look. Also, I think when the test fails with the linear layer, the numerical gradient comes out to all zeros, whereas the original gradient has values like 1e-10
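As a rough illustration of why tiny analytic gradients can show up as an exactly-zero numerical gradient, here is a generic central-difference check in Python (not mlpack's actual C++ gradient test; all names are illustrative):

    import numpy as np

    def numerical_gradient(f, x, eps=1e-6):
        # Central-difference estimate of df/dx for a 1-D parameter vector x.
        # If the true gradients are on the order of 1e-10, the forward and
        # backward evaluations can be identical in floating point, so the
        # estimate rounds to exactly zero.
        x = x.astype(float).copy()
        grad = np.zeros_like(x)
        for i in range(x.size):
            x[i] += eps
            f_plus = f(x)
            x[i] -= 2 * eps
            f_minus = f(x)
            x[i] += eps
            grad[i] = (f_plus - f_minus) / (2 * eps)
        return grad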
jeffin143 has joined #mlpack
KimSangYeon-DGU has quit [Quit: Page closed]
< ShikharJ> saksham189: Okay, let's get to that for now then.
< ShikharJ> saksham189: I'll converse with Toshal now, feel free to log off.
< saksham189> ShikharJ: alright sure, I will try to investigate the problem and keep you informed. Have a great week 8)
< ShikharJ> saksham189: I'm sorry I couldn't review most of your work as I had hoped to this week. I was swamped with a huge deadline, and well, I have to impress the guys here in order to obtain a strong recommendation (typical grad school worries), so I'm not able to devote much time to other stuff.
< ShikharJ> Toshal: Same for your work as well, since you've pushed up a substantial amount of it.
< jeffin143> zoq: Sorry I didn't see your comments; just had a glance now. Thanks for reviewing :)
< Toshal> ShikharJ: No worries
< Toshal> Meanwhile I was working on the failing radical test
< zoq> jeffin143: Just commented on the PR a couple of minutes ago :)
< ShikharJ> Toshal: Yeah, are you stuck somewhere? I think I should be free this weekend to help.
< Toshal> Yes I want to ask you about FID
< ShikharJ> Toshal: Okay, let me know the day and time, and I'll be available for that.
< Toshal> ShikharJ: What about now?
< ShikharJ> Toshal: BTW, why does your website's twitter url head out to https://twitter.com/daattali ?
< ShikharJ> Toshal: I don't remember the low-level details at the moment, but if your query is design related, feel free to shoot.
< jeffin143> zoq: for some reason, it is showing 7 hours ago :)
< Toshal> Okay, I have gone through FID. I saw that FID uses the Frechet Distance (FD). FD is a metric used to compare two Gaussians.
< zoq> jeffin143: Ahh, I thought you were talking about 1895 :)
< Toshal> So I was thinking of implementing FD first and then using it for FID
< Toshal> Let me know your thoughts regarding the same.
< Toshal> zoq: Please read my last four lines and let me know your thoughts on them as well.
< ShikharJ> Toshal: I'm not sure about FD being used only for Gaussians; I think it should work for any distribution in the domain [0, 1].
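For reference, the closed form that FID builds on: the Frechet distance between two Gaussians N(\mu_1, \Sigma_1) and N(\mu_2, \Sigma_2) is

    d^2 \;=\; \lVert \mu_1 - \mu_2 \rVert_2^2 \;+\; \operatorname{Tr}\!\left( \Sigma_1 + \Sigma_2 - 2\,(\Sigma_1 \Sigma_2)^{1/2} \right),

and FID applies this to the mean and covariance of Inception-network features of real and generated samples.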
< jeffin143> zoq: I was talking about 1895 : https://ibb.co/t8VLSGj <- Check this.
< ShikharJ> But yeah, other than that, I don't have an issue with you moving forward in the way you'd prefer :)
< Toshal> ShikharJ: Okay. I will make a PR for it soon.
< zoq> Agreed, personally I think it makes sense.
< zoq> jeffin143: Right, I commented on that one yesterday.
< ShikharJ> jeffin143: That's because your system's clock is not set properly, it is showing 21 minutes to me.
< ShikharJ> I had that issue on my first ever open source PR; it caused me to push commits back in time, and I later had to close it and reset the clock :)
< zoq> interesting :)
< ShikharJ> Toshal: I wish I could talk more, but I have to prepare lunch and breakfast and later catch the bus. Please leave any further queries here, and I'll get back to them.
< Toshal> ShikharJ: Actually, regarding my Twitter handle: that website is outdated
< Toshal> ShikharJ: Sure
jeffin143 has quit [Ping timeout: 256 seconds]
< ShikharJ> Toshal: Oh okay. Have a fun weekend though :)
< Toshal> ShikharJ zoq: Should the Frechet Distance follow the metric policy?
jeffin143 has joined #mlpack
< jeffin143> ShikharJ: Wow, that was impressive. Probably it is because I restarted my system after a month, and my battery isn't good, so I have to reset my clock every time I restart
< jeffin143> Thanks :)
< jeffin143> I had totally forgotten about it.
favre49 has quit [Quit: Page closed]
Aryan_ has quit [Ping timeout: 256 seconds]
jeffin143 has quit [Ping timeout: 256 seconds]
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
ImQ009 has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> robertohueso/mlpack#21 (mc_kde_error_bounds - a771967 : Roberto Hueso Gomez): The build is still failing.
travis-ci has left #mlpack []
jeffin143 has joined #mlpack
< jeffin143> zoq: In PR 1895, did you mean that instead of returning arma::mat<t> output, we should return std::vector<arma::rowvec<t>> output?
< zoq> jeffin143: Do you have a link to the comment?
< zoq> Let me comment on the PR.
jeffin143 has quit [Ping timeout: 256 seconds]
< rcurtin> hey everyone, just FYI, I am leaving for a vacation today for the next two weeks. I will still check my email in the morning to see what is going on, but if you are waiting on a comment from me now you know why it may take a while :)
< rcurtin> gmanlan: unfortunately this means I won't be able to debug the Python Windows build at runtime, although I think I can fix the configuration/build before the day is out
< zoq> Have fun, weather looks good so far.
Yashwants19 has joined #mlpack
< Yashwants19> Hi rcurtin: Have fun :) Enjoy your vacation.
Yashwants19 has quit [Client Quit]
< rcurtin> Yashwants19: thanks :)
< sreenik[m]> My Travis build is failing in the build mode where DPYTHON_EXECUTABLE=/usr/bin/python but passing for the other ones. What could be the error? I remember rcurtin once said something about searching for something in the job log
< rcurtin> yeah, looking through the log is probably the easiest way to find the problem; it can be really long though
< sreenik[m]> Oh. Then it seems like it's going to take some time. Anyway, have a wonderful vacation! :)
jenkins-mlpack2 has quit [Ping timeout: 258 seconds]
jenkins-mlpack2 has joined #mlpack
vivekp has quit [Ping timeout: 246 seconds]
ImQ009 has quit [Quit: Leaving]
rob has joined #mlpack
rob is now known as Guest34828
< Guest34828> hey y'all, if I can find my microsd reader I will see if I can get mlpack to run on a Raspberry Pi (2 I think) and what the speeds are
< Guest34828> Is it possible to have mlpack run on bare metal? I'll try with some lightweight linux distro first
< zoq> Hello, what do you mean with "run on bare metal"?
< Guest34828> As in, compile and run it without an OS
< zoq> Not at this point, no, unless you have BLAS/LAPACK.
robbb has joined #mlpack
Guest34828 has quit [Ping timeout: 256 seconds]
robbb has quit [Quit: Page closed]