verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< zoq> ironstark: What I meant was to revert the changes regarding the config file and not to delete the file from the repo.
< ironstark> ya that was a mistake
< ironstark> I am just correcting it
< zoq> ironstark: Okay, perfect.
< vivekp> zoq: The Optimize function in the SGD class currently prints "SGD: ... " for every update policy; see line 78 of sgd_impl.hpp, for example
< vivekp> I think since there are now other optimizers implemented as update policies, we probably need to add a Name function to every update class
< vivekp> then we can do something like updatePolicy.Name() to print the correct name with respect to different update policies rather than just printing "SGD" for every update policy.
< zoq> vivekp: hm, I'm not sure that is necessary since AdaGrad is basically an extension of SGD, but I agree that this could be confusing and misleading. Another option would be to rewrite the message to something more general.
< vivekp> Yeah, that could also be done but for optimizers like Adam, AdaMax, SMORMS3 etc. what do you think?
< zoq> Adam and RMSProp are all extensions anyway; I think I would go with another wording.
< zoq> What we would do is basically swap "SGD" with Adam, RMSProp, etc., right?
< vivekp> Yes, that's my intention. Adding a Name function to the existing update policy classes seemed like a plausible solution to me, but changing the wording to something general sounds about right :)
< zoq> At least in this situation, I guess changing the wording is just fine; maybe that will change in the future if printing more optimizer-specific messages becomes necessary.
< ironstark> zoq: In methods/sklearn/svm.py the default value for gamma was 0.0; that needed to be updated to 'auto', as the new version of sklearn does not support 0.0
< ironstark> A value of 0 for gamma will cause a build error
< vivekp> zoq: Yeah, I see your point.
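For reference, the Name() idea vivekp floats above would look roughly like the following. This is a hypothetical sketch (the class and the message format are illustrative, and the change that was actually agreed on was to generalize the wording instead):

    #include <string>

    // Hypothetical: each update policy exposes a static Name() that the
    // optimizer can use when printing progress messages.
    class AdamUpdate
    {
     public:
      static std::string Name() { return "Adam"; }
      // ... Update() and the rest of the update policy interface ...
    };

    // Inside SGD<UpdatePolicyType>::Optimize(), the message could then be:
    //   Log::Info << UpdatePolicyType::Name() << ": iteration " << i
    //       << ", objective " << overallObjective << "." << std::endl;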
< zoq> ironstark: Thanks again; also, do you mind using more descriptive commit messages in the future?
< ironstark> I'll keep that in mind
< ironstark> Thanks for reviewing my PR :)
< zoq> ironstark: Btw. you are really fast; you already opened another PR, for the mrpt lib, like 30 minutes later.
< zoq> ironstark: I'll take a look at the PR and install mrpt on the benchmark systems tomorrow.
< ironstark> zoq: Thanks for the compliment :). Also, the mrpt implementation only returns the runtime metric right now
< ironstark> I will try to add other metrics like accuracy as well
< zoq> ironstark: It would be great if we could also measure the time to build the tree.
< zoq> and to compute the neighbors
< ironstark> I have placed the index.build() function within the time block
< ironstark> Should I keep them separate, to calculate the time taken by each separately?
< zoq> I think that would be interesting; that way we could also compare the tree build time with mlpack and hlearn, but overall runtime is just fine for now.
< ironstark> okay I'll try to do that.
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
mikeling has joined #mlpack
chenzhe has quit [Ping timeout: 246 seconds]
naxalpha has joined #mlpack
vinayakvivek has joined #mlpack
< naxalpha> hi zoq: shouldn't it be closed now? https://github.com/mlpack/mlpack/issues/919
ironstark has quit [Ping timeout: 246 seconds]
paws_ has joined #mlpack
< paws_> is it too late to try to submit an application for gsoc?
ironstark has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> sagarbhathwar/mlpack#3 (master - bd7f327 : Ryan Curtin): The build passed.
travis-ci has left #mlpack []
PAW456 has joined #mlpack
paws_ has quit [Quit: Page closed]
< PAW456> is it too late to try to submit an application for gsoc?
PAW456 has quit [Ping timeout: 260 seconds]
vivekp has quit [Ping timeout: 264 seconds]
ironstark has quit [Ping timeout: 240 seconds]
ironstark has joined #mlpack
vivekp has joined #mlpack
pawan_sasanka has joined #mlpack
< pawan_sasanka> I was thinking of applying for GSoC; are you still considering applicants?
ironstark has quit [Ping timeout: 260 seconds]
ironstark has joined #mlpack
naxalpha has quit [Ping timeout: 260 seconds]
ironstark has quit [Ping timeout: 240 seconds]
chenzhe has joined #mlpack
ironstark has joined #mlpack
ironstark has quit [Ping timeout: 260 seconds]
ironstark has joined #mlpack
chenzhe has quit [Ping timeout: 260 seconds]
vivekp has quit [Ping timeout: 258 seconds]
vivekp has joined #mlpack
pawan_sasanka has quit [Ping timeout: 240 seconds]
ironstark has quit [Quit: Leaving]
mikeling has quit [Quit: Connection closed for inactivity]
ironstark has joined #mlpack
< vivekp> zoq: For the Adam update policy implementation, we need to pass the current value of "i" along with the other parameters in the update step.
< vivekp> We could add a new parameter for that to the Update function, but that would result in warnings about the unused parameter "i" in multiple places.
< vivekp> Would it then make sense to just do something like i = i in the function definition to avoid those warnings?
< zoq> vivekp: You can comment out the name of the unused parameter to suppress the warning: void Update(arma::mat& iterate, const double stepSize, const arma::mat& gradient, const size_t /* iteration */)
< vivekp> Oh, nice! Thanks
< vivekp> I'm almost done implementing the Adam update policy. Should be able to open a PR in about an hour.
< zoq> vivekp: Sounds good.
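Filled out, the idiom zoq shows above looks like this; a minimal sketch with illustrative names, not mlpack's exact update policy API:

    #include <armadillo>

    class VanillaUpdate
    {
     public:
      // The iteration count is part of the shared Update() signature so that
      // policies like Adam can use it, but plain SGD does not need it; the
      // parameter is left unnamed, with its name kept in a comment, which
      // suppresses the unused-parameter warning.
      void Update(arma::mat& iterate,
                  const double stepSize,
                  const arma::mat& gradient,
                  const size_t /* iteration */)
      {
        iterate -= stepSize * gradient;
      }
    };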
meet30 has joined #mlpack
meet30 has quit [Quit: Page closed]
pawan_sasanka has joined #mlpack
< zoq> pawan_sasanka: Hello there, you can still apply.
ironstark has quit [Ping timeout: 240 seconds]
vss has joined #mlpack
mikeling has joined #mlpack
sumedhghaisas has quit [Ping timeout: 240 seconds]
Guest77286 has joined #mlpack
vss has quit [Ping timeout: 260 seconds]
Guest77286 has quit [Ping timeout: 260 seconds]
shikhar has joined #mlpack
Trion has joined #mlpack
< zoq> The last commit should bring us closer to a green matrix build.
< rcurtin> nice
< rcurtin> I was looking at the VanillaNetworkTest fix, and I was thinking, do the labels need to be in [0, num_classes) or [1, num_classes]?
< rcurtin> I would expect the first, but when I try that I break the test; haven't managed to dig in completely to see what is going on yet
< zoq> Depending on the output layer, [1, num_classes).
< zoq> Do you mean the PR?
< rcurtin> yeah
< rcurtin> ah, I see the documentation now for NegativeLogLikelihood
< rcurtin> do you think we should change it to [0, num_classes) to match the other mlpack methods? (and also NormalizeLabels())
< zoq> I think that is a good idea.
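For context, the [0, num_classes) convention rcurtin mentions amounts to something like the following; a sketch of the idea behind NormalizeLabels(), not mlpack's actual implementation:

    #include <armadillo>

    // Map arbitrary label values onto 0, 1, ..., numClasses - 1, keeping the
    // sorted distinct labels so predictions can be mapped back afterwards.
    void NormalizeLabelsSketch(const arma::Row<size_t>& labelsIn,
                               arma::Row<size_t>& labelsOut,
                               arma::Row<size_t>& mapping)
    {
      mapping = arma::unique(labelsIn);  // sorted distinct labels
      labelsOut.set_size(labelsIn.n_elem);
      for (size_t i = 0; i < labelsIn.n_elem; ++i)
      {
        // The normalized label is the index within the sorted mapping.
        labelsOut[i] = arma::as_scalar(arma::find(mapping == labelsIn[i], 1));
      }
    }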
< zoq> I'll have to take a closer look at the PR; I thought I "fixed" the test.
< zoq> Maybe Abhinav's solution is better.
ironstark has joined #mlpack
< zoq> http://masterblaster.mlpack.org/job/mlpack%20-%20nightly%20matrix%20build/ it looks like none of the builds errored because of the cnn test.
< zoq> just looked at the Linux machines
< rcurtin> I saw on the sparc machines VanillaNetworkTest takes a full 10 minutes :)
< rcurtin> I'll do a quick timing comparison between the current code and the PR
< rcurtin> it only takes like 5 seconds on my desktop
< rcurtin> yeah, it looks like they are not being added correctly
< rcurtin> but when you go down a level they seem correct
< rcurtin> I don't see any difference between the current master branch and abhinav's PR, in terms of running time
< zoq> The current master version runs another time (max 5 times) if the test fails, so maybe you were lucky with the weights.
sumedhghaisas has joined #mlpack
< zoq> I guess if abhinav's solution is stable, we should go with it.
< rcurtin> yeah; I am running now to get some idea of the failure distribution
ironstark has quit [Ping timeout: 260 seconds]
< rcurtin> it will take a while but I'll let you know what I find
< zoq> great, thanks
ironstark has joined #mlpack
< rcurtin> probably because the current master version runs multiple times, I will not actually get it to fail even once
< rcurtin> I'll give up after 1000 tries... which will be like 90 minutes I guess
nish21 has joined #mlpack
< nish21> came across this interesting visualization of the evolution of mlpack :) https://www.youtube.com/watch?v=yQtp3gf5wtY
< zoq> "this is cool real work in action more grace to you!! mlpack" from the comment
< nish21> zoq: i must say i agree with that comment
< zoq> nish21: Your name should pop up somewhere if someone uploads an updated version.
nish21 has quit [Ping timeout: 260 seconds]
< rcurtin> I remember playing with gource; it is easy to run on the git repo and fun to watch
< Trion> rcurtin: you are moving faster than the speed of light in the visualization :P
< rcurtin> hah
< rcurtin> there was a _lot_ of refactoring to be done in 2010/2011...
< Trion> Explored this today https://blog.openai.com/evolution-strategies/ Finally some algorithm that will make agents that work fast on my old laptop :P
nish21 has joined #mlpack
< nish21> zoq: yeah, but i would probably have to search for it with a microscope :)
sumedhghaisas has quit [Ping timeout: 240 seconds]
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#2229 (master - a836378 : Marcus Edel): The build was broken.
travis-ci has left #mlpack []
nish21 has quit [Ping timeout: 260 seconds]
benchmark has joined #mlpack
benchmark has quit [Client Quit]
Jatin has joined #mlpack
nish21 has joined #mlpack
nish21 has left #mlpack []
Jatin has quit [Ping timeout: 260 seconds]
Jatin has joined #mlpack
vss has joined #mlpack
Jatin has quit [Ping timeout: 260 seconds]
bharath has joined #mlpack
shikhar has quit [Quit: Page closed]
Trion has quit [Quit: Have to go, see ya!]
ironstark has quit [Ping timeout: 268 seconds]
ironstark has joined #mlpack
mikeling has quit [Quit: Connection closed for inactivity]
ironstark has quit [Ping timeout: 258 seconds]
chenzhe has joined #mlpack
ironstark has joined #mlpack
< pawan_sasanka> zoq: I'm looking at the Essential Deep Learning Modules project
< pawan_sasanka> can someone help me through that?
bharath has quit [Ping timeout: 246 seconds]
bharath has joined #mlpack
chenzhe has quit [Ping timeout: 246 seconds]
ironstark has quit [Ping timeout: 240 seconds]
bharath has quit [Ping timeout: 260 seconds]
< zoq> pawan_sasanka: Hello, the Essential Deep Learning Modules project has been discussed on the mailing list before: http://mlpack.org/pipermail/mlpack/
bharath has joined #mlpack
ironstark has joined #mlpack
bharath_ has joined #mlpack
mentekid has joined #mlpack
mentekid has quit [Read error: Connection reset by peer]
mentekid has joined #mlpack
mentekid has quit [Read error: Connection reset by peer]
bharath has quit [Ping timeout: 264 seconds]
yannis has joined #mlpack
yannis is now known as Guest50661
< vss> I have shared my draft regarding the fast k-centers implementation; please have a look at it and let me know what you think. :)
Guest50661 has left #mlpack []
Guest50661 has joined #mlpack
Guest50661 has left #mlpack []
Guest50661 has joined #mlpack
Guest50661 is now known as mentekid
vss has left #mlpack []
mentekid has quit [Client Quit]
< cult-> what's the best use case for AdaBoost, and what's best for HMM?
bharath_ has quit [Remote host closed the connection]
bharath has joined #mlpack
bharath has quit [Ping timeout: 260 seconds]
chenzhe has joined #mlpack
ironstark has quit [Ping timeout: 246 seconds]
chenzhe has quit [Ping timeout: 246 seconds]
bharath has joined #mlpack
kris has joined #mlpack
chenzhe has joined #mlpack
chenzhe has quit [Ping timeout: 246 seconds]
< kris> Hi
< kris> Hi, I updated both the Xavier init and NAG. If someone could review, I could push the changes tonight. Thanks
shikhar has joined #mlpack
ironstark has joined #mlpack
chenzhe has joined #mlpack
chenzhe has quit [Client Quit]
chenzhe has joined #mlpack
ironstark has quit [Ping timeout: 246 seconds]
chenzhe has quit [Ping timeout: 256 seconds]
chenzhe has joined #mlpack
< kris> I have a question about using SGD with the FNN. When the Gradient function is called by SGD, it returns just the gradient for the last layer, so basically only the parameters of the last layer would be changed. How do we change the parameters of the whole network?
< kris> So basically I am not able to understand how the whole backprop works in this case. My intuition is that this is done per layer, so the iterate for SGD would be a set of functions.
< kris> and these are all evaluated at the same point
yannis has joined #mlpack
yannis is now known as mentekid
< mentekid> New IRC client! Can anyone read me?
< rcurtin> yep, the messages are going through :)
< mentekid> yay!
sahilc has joined #mlpack
pawan_sasanka has quit [Ping timeout: 240 seconds]
ironstark has joined #mlpack
bharath has quit [Read error: Connection reset by peer]
bharath has joined #mlpack
mentekid has quit [Quit: mentekid]
yannis has joined #mlpack
yannis is now known as mentekid
chenzhe has quit [Ping timeout: 256 seconds]
chenzhe has joined #mlpack
chenzhe has quit [Ping timeout: 246 seconds]
< rcurtin> it looks like I am going to Aachen in June... I guess it is time for me to refresh my German and remember all that Vokabeln (vocabulary) :)
sahilc has quit [Ping timeout: 260 seconds]
chenzhe has joined #mlpack
bharath has quit [Remote host closed the connection]
bharath has joined #mlpack
bharath has quit [Ping timeout: 246 seconds]
shikhar has quit [Quit: Page closed]
richukuttan has joined #mlpack
< richukuttan> zoq: I have made a few changes to the proposal based on your comments. Please read through it when possible. Also, please check if the class interface is understandable, and if it makes sense. Thanks.
richukuttan has quit [Client Quit]
vinayakvivek has quit [Quit: Connection closed for inactivity]
Alvis__ has joined #mlpack
< Alvis__> Hi
< Alvis__> If I have experience with CUDA but not neural networks, would the Parallel Stochastic Optimization Methods project be a suitable task for me?
< zoq> Alvis__: Hello there, https://github.com/mlpack/mlpack/issues/173 might be helpful, especially the last comment.
< zoq> kris: When the Gradient function is called, we go through the complete network and calculate the gradient for each layer, not just the last layer. Here is the function that calculates the gradients for the FFN class: https://github.com/mlpack/mlpack/blob/master/src/mlpack/methods/ann/ffn_impl.hpp#L338. There is one "trick": the gradient parameter stores all of the gradients, so each layer doesn't hold its own
< zoq> gradient parameter matrix; instead, it references the gradient matrix that is passed in from the optimizer.
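The aliasing trick zoq describes can be pictured like this; a rough sketch with made-up sizes, not the actual code from ffn_impl.hpp:

    #include <armadillo>

    int main()
    {
      // The optimizer owns one flat gradient matrix covering every layer.
      const size_t totalParams = 10;
      arma::mat gradient(totalParams, 1, arma::fill::zeros);

      // A layer whose weight gradient is 2x3 and starts at offset 4 gets an
      // arma::mat that aliases that slice of the optimizer's memory
      // (copy_aux_mem = false), so writing the layer's gradient writes
      // directly into the matrix the optimizer sees.
      arma::mat layerGradient(gradient.memptr() + 4, 2, 3,
          /* copy_aux_mem */ false, /* strict */ false);

      layerGradient.fill(1.0);  // now visible in 'gradient' as well
      return 0;
    }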
< zoq> richukuttan: I'll take a look once I get a chance, probably tomorrow.
< zoq> rcurtin: Oh, nice, ich glaube, ich war noch nie in Aachen, sicherlich schön besonders zu dieser Jahreszeit :) (I don't think I've ever been to Aachen; surely beautiful, especially at this time of year)
< rcurtin> ja, ich hoffe, dass ich kann ein bisschen Spass in Aachen machen (yes, I hope that I can have a bit of fun in Aachen)
< rcurtin> um, I think that means what I meant it to
< zoq> yeah, I got you
< rcurtin> :)
richukuttan has joined #mlpack
< richukuttan> zoq: This is the first time I have had to describe the class template before beginning a project; until now, the template just took shape as the project went on. So, if you find any discrepancies or ways to improve it, please share.
richukuttan has quit [Client Quit]
Alvis__ has quit [Ping timeout: 258 seconds]