ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
rick_ has joined #mlpack
rick_ has quit [Remote host closed the connection]
sreenik has joined #mlpack
< sreenik> Hi, I am trying to use mlpack with NVBLAS. Simply linking with -lnvblas does not show any CUDA activity when profiling. (How) should I change Armadillo's config.hpp? Should I add a #define ARMA_USE_NVBLAS?
rick_ has joined #mlpack
< sreenik> Sorry that was a useless idea. Tried it to no avail.
rick_ has quit [Remote host closed the connection]
rick_ has joined #mlpack
rick_ has quit [Client Quit]
< sreenik> This issue https://github.com/mlpack/mlpack/issues/1610 probably discusses some approaches, but I am not sure where I am going wrong.
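For context: NVBLAS intercepts standard BLAS Level-3 calls (e.g. dgemm) at the library level rather than being enabled through an Armadillo define, and it needs an nvblas.conf pointing at a CPU fallback BLAS; only large GEMM-style calls are offloaded to the GPU. A rough sketch of the usual setup (file names and library paths here are assumptions):

    # nvblas.conf -- point NVBLAS_CONFIG_FILE at this file
    NVBLAS_CPU_BLAS_LIB /usr/lib/libopenblas.so   # CPU BLAS used as fallback
    NVBLAS_GPU_LIST ALL

    # Either link libnvblas ahead of the CPU BLAS so its symbols take precedence...
    g++ prog.cpp -o prog -larmadillo -lnvblas -lopenblas -llapack
    # ...or preload it without relinking:
    NVBLAS_CONFIG_FILE=./nvblas.conf LD_PRELOAD=libnvblas.so ./prog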
sreenik has quit [Quit: Page closed]
bhavya01 has joined #mlpack
bhavya01 has quit [Client Quit]
witness has quit [Quit: Connection closed for inactivity]
Xain has joined #mlpack
Xain has quit [Client Quit]
Xain has joined #mlpack
< Xain> hey, my name is Zain from Manipal and I'm interested in the GSoC project on NEAT. I've read up on NEAT and its variants, but I haven't seen anything on using it as an optimizer. Could you show me where it has been used that way?
Xain has quit [Client Quit]
xain has joined #mlpack
< xain> zoq: I think you might have missed my message, so I'm reposting here.
kanishq24 has joined #mlpack
xain has quit [Ping timeout: 256 seconds]
kanishq has joined #mlpack
kanishq24 has quit [Ping timeout: 240 seconds]
kanishq has quit [Remote host closed the connection]
kanishq24 has joined #mlpack
abhishekgoyal1 has joined #mlpack
niteya has joined #mlpack
< niteya> Hello, for GSoC I was thinking of implementing automatic differentiation for differentiable and differentiable separable functions using jets, so please let me know if this is a good idea. It seems my previous comment got mixed in between a conversation.
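For readers unfamiliar with the term: a jet is essentially a dual number that carries a value together with a derivative part, so evaluating a function on jets yields derivatives by forward-mode AD. A minimal standalone sketch of the idea (an illustration only, not mlpack or ensmallen code):

    // Minimal jet (dual number) for forward-mode automatic differentiation of a
    // single scalar input.
    #include <cmath>
    #include <iostream>

    struct Jet
    {
      double a;  // function value
      double v;  // derivative part
    };

    Jet operator+(Jet x, Jet y) { return { x.a + y.a, x.v + y.v }; }
    Jet operator*(Jet x, Jet y) { return { x.a * y.a, x.a * y.v + x.v * y.a }; }
    Jet sin(Jet x) { return { std::sin(x.a), std::cos(x.a) * x.v }; }

    int main()
    {
      // f(x) = x * sin(x); seeding the derivative part with 1 gives df/dx at x = 2.
      Jet x{ 2.0, 1.0 };
      Jet f = x * sin(x);
      std::cout << "f(2) = " << f.a << ", f'(2) = " << f.v << std::endl;
    }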
niteya has quit [Client Quit]
abhishekgoyal1 has quit [Ping timeout: 256 seconds]
favre49 has joined #mlpack
favre49 has quit [Quit: Page closed]
vivekp has quit [Ping timeout: 250 seconds]
vivekp has joined #mlpack
sreenik has joined #mlpack
xyz_ has joined #mlpack
xyz_ has quit [Client Quit]
kanishq24 has quit [Ping timeout: 252 seconds]
kanishq24 has joined #mlpack
Gji has joined #mlpack
Gji has quit [Ping timeout: 256 seconds]
kanishq has joined #mlpack
kanishq24 has quit [Ping timeout: 240 seconds]
< rcurtin> niteya: AD is very popular and hot right now, but if you take a look at the ensmallen paper, it often underperforms compared to a hand-implemented gradient method
< rcurtin> so I think that for a library like ensmallen which is focused on the speed of the implementation, it's reasonable to expect people to implement their own objective and gradient
< rcurtin> and, if they don't want to, I think they could use one of the existing C++ toolkits for AD to do that
< rcurtin> I do think it might be nice to add a little tutorial or documentation somewhere about using an existing C++ AD library with ensmallen... but that's probably not enough for a GSoC project :(
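For reference, the kind of hand-implemented objective and gradient being discussed looks roughly like this with ensmallen's differentiable-function interface (the quadratic objective is just an illustrative choice):

    #include <ensmallen.hpp>

    // f(x) = ||x||^2 with its gradient written by hand, in the form that
    // ensmallen's differentiable optimizers expect.
    class SquaredNormFunction
    {
     public:
      double Evaluate(const arma::mat& x) { return arma::dot(x, x); }

      void Gradient(const arma::mat& x, arma::mat& gradient) { gradient = 2 * x; }
    };

    int main()
    {
      SquaredNormFunction f;
      arma::mat coordinates = arma::randu<arma::mat>(10, 1);

      ens::L_BFGS optimizer;
      optimizer.Optimize(f, coordinates);  // coordinates now hold the minimum.
    }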
favre49 has joined #mlpack
rick_ has joined #mlpack
rick_ has quit [Client Quit]
rcurtin_ has joined #mlpack
kanishq has quit [Quit: Leaving]
Suryo has joined #mlpack
< Suryo> zoq: I have an important update for you
< Suryo> I've implemented a method in PSO that can handle constraints
< Suryo> Following rcurtin's recommendation to preserve the current API style for constrained optimization problems, which is documented here: https://ensmallen.org/docs.html#constrained-functions
< Suryo> I got the idea that one way of handling constraints in PSO would be to incorporate the constraints into the objective
< Suryo> I mean, of course I'm aware of how constrained optimization is done analytically, but I kept imagining that we should define function classes for each constraint and repeatedly check whether the particles satisfy them.
< Suryo> One way of handling constraints in PSO is to preserve the feasibility of the particles over iterations. That's the approach that I had investigated in the past.
< Suryo> In this approach, the initial set of particles needs to be feasible.
< Suryo> So what I've done is basically write another initialization class, and while initializing particles, I evaluate each of them to make sure they're feasible.
< Suryo> If a particle is not feasible, it is evaluated as infinity. That's a standard approach.
< Suryo> So when particle positions are updated, the method will basically not update a particle if its position is infeasible
< Suryo> Now, unlike what I had discussed earlier, I didn't template the feasibility checks separately. In a way, feasible updating is naturally taken care of if infeasible positions have a value of infinity
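A rough sketch of the "infeasible position evaluates to infinity" idea described above, written against the constrained-function interface (NumConstraints()/EvaluateConstraint()) from the ensmallen docs; this is an illustration with a hypothetical helper name, not the code in the PR:

    #include <armadillo>
    #include <cstddef>
    #include <limits>

    // Evaluate a particle's position, treating any violated constraint as an
    // infinite objective value so the particle can never become a personal or
    // global best.  PenalizedEvaluate is a hypothetical helper name.
    template<typename ConstrainedFunctionType>
    double PenalizedEvaluate(ConstrainedFunctionType& f, const arma::mat& position)
    {
      for (size_t i = 0; i < f.NumConstraints(); ++i)
      {
        // EvaluateConstraint() returns 0 when constraint i is satisfied and a
        // positive violation measure otherwise.
        if (f.EvaluateConstraint(i, position) > 0)
          return std::numeric_limits<double>::infinity();
      }

      return f.Evaluate(position);
    }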
< Suryo> I've also written two tests and updated my branch, and you can see in PR #86 that the tests are passing.
< Suryo> At this point, I'd request you to review my PR. Also, let me know what you think about this
< Suryo> Having implemented this, I'm left with many more questions than I began with. I will discuss them with you soon.
< Suryo> I'm going to be unavailable for most of today, but I will intermittently check my messages.
< Suryo> Last thing - I ran into problems with the SPSA tests on my computer again.
< Suryo> Sometimes the SPSA tests pass, and sometimes they don't.
< Suryo> Thanks.
Suryo has quit [Quit: AndroIRC - Android IRC Client ( http://www.androirc.com )]
< favre49> I ran the SPSA logistic regression test with random seeds and found that the test fails pretty often, with accuracies below 60%
favre49 has quit [Ping timeout: 256 seconds]
favre49 has joined #mlpack
< favre49> Also, I've read a lot more about topics related to the NEAT project, but I still cannot find anything that could be used as an optimizer. Could you please point me in the direction of some papers that do so? Thanks in advance
favre49 has quit [Quit: Page closed]
favre49 has joined #mlpack
< rcurtin_> Suryo: favre49: really, the SPSA test is still fragile? I thought we'd fixed that... let me try to reproduce shortly
< favre49> I read your conversation on the commit, and saw you used arma::arma_rng::set_seed(std::time(NULL)). I used arma::arma_rng::set_seed_random().
< favre49> I don't see how that would make a difference unless you ran it multiple times in the same second while testing, but I thought I should tell you anyway
favre49 has quit [Quit: Page closed]
favre49 has joined #mlpack
< rcurtin_> yeah, should be the same thing
favre49 has quit [Quit: Page closed]
bhavya01 has joined #mlpack
bhavya01 has left #mlpack []
ayesdie has quit [Quit: Connection closed for inactivity]
Suryo has joined #mlpack
favre49 has joined #mlpack
< Suryo> rcurtin: yeah, it kind of is fragile. The PSO tests passed on my computer, but the SPSA one failed. I updated my PSO dev branch and just had my fingers crossed hoping that Travis would pass all the tests haha
< rcurtin_> we can merge anyway even if the SPSA test fails, since it's clearly not any of your code causing the problem :)
< rcurtin_> it'll be this afternoon until I can try to reproduce and fix it though
< rcurtin_> today is meetings for me basically straight from 9am to 3pm... I'm not sure how my life got so full of meetings but I can say I'm not a huge fan...
< Suryo> Yes of course. But the green ticks are satisfying to look at :)
< Suryo> Just kidding
< rcurtin_> agreed :)
Suryo has quit [Read error: Connection reset by peer]
Suryo has joined #mlpack
dtrix has joined #mlpack
dtrix has quit [Client Quit]
dak_jay has joined #mlpack
dtrix has joined #mlpack
< dtrix> hi
Suryo has quit [Remote host closed the connection]
dak_jay_ has joined #mlpack
< dak_jay_> hi
dak_jay has quit [Ping timeout: 256 seconds]
Anirudh has joined #mlpack
dak_jay_ has quit [Ping timeout: 256 seconds]
dtrix has quit [Quit: Page closed]
favre49 has quit [Quit: Page closed]
paarmita has joined #mlpack
sumedhghaisas has quit [Quit: Page closed]
Anirudh has quit [Ping timeout: 256 seconds]
paarmita has quit [Ping timeout: 256 seconds]
yogesh has joined #mlpack
< yogesh> hey guys!
yogesh has quit [Ping timeout: 256 seconds]
< sreenik> rcurtin: I was playing around with DigitRecognizer.cpp under the models repo. Applying those changes and replacing the sigmoid activations with ReLU, I reached an accuracy of ~97-98% on the validation set. However, the model just wouldn't converge if I normalised the training set. On analysing, I realized that most of the values of the matrix returned by arma::normalise were set to 0. Any idea why this is happening?
< rcurtin> is it being divided by an integer somewhere maybe?
< rcurtin> yogesh: hi there!
< rcurtin> Suryo: favre49: yeah, I see the failures for the SPSA test now... actually I think when I ran it, I grepped the output for 'fail' but it should have been "FAIL", so I never saw the failures :)
< sreenik> rcurtin: Maybe so, I am using arma::normalise
< sreenik> Another thing, whenever I run the CNN version of it, the CPU usage is limited to just one core, whereas all 4 cores are used in the case of the simpler DigitRecognizer.cpp
< sreenik> I think I should test it with some more models. That should give a better picture
< rcurtin> sreenik: hmm, are you using OpenBLAS? that should use multiple cores
< rcurtin> but it depends on where exactly the bottleneck is
< rcurtin> I think when I did the normalization I just divided the matrix by 255.0, since all values are between 0 and 255
< rcurtin> not sure what arma::normalise() does exactly
< sreenik> I have done that too, divide by 255.0. Same result
< sreenik> Yes, linked with nvblas and openblas
< rcurtin> oh, wait, sorry, I should have thought a little harder
< rcurtin> the MNIST images are mostly black, so they would mostly have values of 0
< rcurtin> for convergence, you might try playing with the step size of the optimizer
< sreenik> Well, that's right. I might try that one
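For reference, the difference between the two normalizations being discussed, as a rough Armadillo sketch (variable and file names are assumptions):

    #include <armadillo>

    int main()
    {
      arma::mat trainData;  // one flattened MNIST image per column
      trainData.load("train.csv", arma::csv_ascii);

      // Scale every pixel from [0, 255] into [0, 1].  Most entries remain 0,
      // since the MNIST background is black.
      trainData /= 255.0;

      // By contrast, arma::normalise() rescales each column to unit norm, which
      // is a different transformation than the per-pixel scaling above.
      arma::mat unitNorm = arma::normalise(trainData);
    }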
< zoq> xain: Sorry for the slow response. The idea loosely follows the idea behind the "Learning to learn by gradient descent by gradient descent" paper, "Is the meta-EA a viable optimization method?" and "Comparing Evolutionary Algorithms for Deep Neural Networks" is another interesting one. Rahul also pointed me to the "NeuroEvolutionary Meta Optimization" paper.
sreenik has quit [Quit: Page closed]
SinghKislay has joined #mlpack
< SinghKislay> Hi