ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
gaurav_ has joined #mlpack
< gaurav_> how can I join this organisation?
gaurav_ has quit [Ping timeout: 256 seconds]
ayesdie has joined #mlpack
< ShikharJ> shadowfox: I think basic implementation skills and familiarity with high school math are the required pre-requisites; anything further is a plus.
< ShikharJ> gaurav_: Please be specific regarding what you mean by "join this organisation"?
sreenik has joined #mlpack
manish7294 has joined #mlpack
< manish7294> zoq: You there?
< sreenik> Where are the optimizers located? Aren't they supposed to be in mlpack/src/mlpack/core? I've made some basic searches but I can't really find them in the source
< manish7294> sreenik: There is now a separate library for optimizers, named ensmallen. You can find them at (https://github.com/mlpack/ensmallen/)
< sreenik> manish7294: Oh. Thanks for the prompt reply. Checking right away
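For reference, a minimal sketch (not from the discussion) of what using an ensmallen optimizer looks like, assuming ensmallen and Armadillo are installed; the toy `SquaredNormFunction` is only an illustration of the Evaluate()/Gradient() interface that the differentiable optimizers expect:

```cpp
#include <ensmallen.hpp>

// Toy differentiable objective, f(x) = ||x||^2, exposing the
// Evaluate()/Gradient() interface used by ensmallen's optimizers.
class SquaredNormFunction
{
 public:
  double Evaluate(const arma::mat& x) { return arma::accu(x % x); }
  void Gradient(const arma::mat& x, arma::mat& g) { g = 2 * x; }
};

int main()
{
  SquaredNormFunction f;
  arma::mat coordinates = arma::randu<arma::mat>(10, 1);

  ens::L_BFGS optimizer;
  optimizer.Optimize(f, coordinates);  // coordinates is overwritten in place.

  coordinates.print("minimizer (should be close to zero)");
}
```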
sreenik has quit [Quit: Page closed]
< manish7294> zoq: There are several issues regarding the big batch SGD implementation. I am listing them here; maybe you can have a look once you get a chance.
< manish7294> 1. In the stepsize decay calculation of the adaptive_stepsize update, according to the paper stepSizeDecay should be calculated as (some_term)/v, where v is calculated by equation 9, whereas in the current implementation we are using v as batchSize at one place (https://github.com/mlpack/ensmallen/blob/5268e7128636e5ae5e2d8b12a50ca289ed06fbc3/include/ensmallen_bits/bigbatch_sgd/adaptive_stepsize.hpp#L98) and function.NumFunctions() at other (li
< manish7294> 2. (overallObjectiveUpdate > (overallObjective + searchParameter * stepSize * gradientNorm)) should be (overallObjectiveUpdate > (overallObjective - searchParameter * stepSize * gradientNorm)) - line 10, Algorithm 3: Big batch SGD with BB step sizes. (A small sketch of the corrected condition follows after point 5.)
< manish7294> 3. Though this is not much of an issue, it could matter in some cases: the authors mention reducing stepSize by half at each backtracking step (line 11, Algo 3), whereas we use a backtrackStepSize of 0.1 by default, which in comparison is a big step.
< manish7294> 4. The current implementation is missing the final iterate update (line 23, Algo 3).
< manish7294> Here is a direct link to the algorithm: https://arxiv.org/pdf/1610.05792.pdf#subsection.5.2
< manish7294> 5. And shouldn't (1 / ((double) batchSize - 1) * sampleVariance) (line 98 of adaptive_stepsize.hpp) just be sampleVariance, since sampleVariance already has the 1 / ((double) batchSize - 1) term in its formulation (see Equation 6 of the paper)?
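As a rough illustration of points 2 and 3 (not the actual ensmallen code), here is a self-contained sketch of the backtracking step from Algorithm 3, lines 10-11, using the variable names from the discussion; `evaluateWithStep`, the iteration guard, and the toy 1-D objective in `main` are stand-ins:

```cpp
#include <cmath>
#include <functional>
#include <iostream>

// Sketch of the backtracking line search: `evaluateWithStep` stands in for
// recomputing the batch objective at the candidate iterate produced by a
// given step size.
double Backtrack(const double overallObjective,
                 double stepSize,
                 const double searchParameter,
                 const double gradientNorm,
                 const std::function<double(double)>& evaluateWithStep)
{
  double overallObjectiveUpdate = evaluateWithStep(stepSize);

  // Point 2: accept the step only once the objective has *decreased* by at
  // least searchParameter * stepSize * gradientNorm, i.e. the comparison uses
  // "overallObjective - ...", not "overallObjective + ...".
  size_t trials = 0;
  while (overallObjectiveUpdate >
         overallObjective - searchParameter * stepSize * gradientNorm &&
         trials++ < 50)  // guard added here so the toy sketch always stops
  {
    stepSize *= 0.5;  // Point 3: the paper halves the step at each backtrack.
    overallObjectiveUpdate = evaluateWithStep(stepSize);
  }

  return stepSize;
}

int main()
{
  // Toy 1-D example: f(x) = x^2 at x = 1, gradient 2, stepping along -grad.
  const double x = 1.0, grad = 2.0;
  auto evaluateWithStep = [&](double step) { return std::pow(x - step * grad, 2); };

  const double step = Backtrack(/* overallObjective */ x * x,
                                /* initial stepSize  */ 1.0,
                                /* searchParameter   */ 0.1,
                                /* gradientNorm      */ std::abs(grad),
                                evaluateWithStep);
  std::cout << "accepted step size: " << step << std::endl;
}
```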
manish7294 has quit [Quit: Page closed]
ayesdie has quit [Quit: Connection closed for inactivity]
osama has joined #mlpack
favre49 has joined #mlpack
< favre49> rcurtin: You're right, I suppose there is no guarantee. I'll research some more and hopefully find something more concrete.
favre49 has quit [Quit: Page closed]
osama has quit [Ping timeout: 256 seconds]
VISG25 has joined #mlpack
VISG25 has quit [Quit: Page closed]
< zoq> manish7294: 1. Looks like the message got cut off
< zoq> manish7294: 2/3/5. You are right, we should fix that.
< zoq> manish7294: 4. Not sure I see what you mean, line 177 should cover that, maybe I missed something?
sreenik has joined #mlpack
manish7294 has joined #mlpack
< manish7294> zoq: You are right regarding the 4th case. Sorry I missed that line while going through the code.
< manish7294> I am reposting 1
< manish7294> 1. In the stepsize decay calculation of the adaptive_stepsize update, according to the paper stepSizeDecay should be calculated as (some_term)/v, where v is calculated by equation 9, whereas in the current implementation we are using v as batchSize at one place (https://github.com/mlpack/ensmallen/blob/5268e7128636e5ae5e2d8b12a50ca289ed06fbc3/include/ensmallen_bits/bigbatch_sgd/adaptive_stepsize.hpp#L98)
< manish7294> and function.NumFunctions() at another (see line 102 of adaptive_stepsize.hpp)
< manish7294> see https://pastebin.com/fwndcbMd for a better understanding of 1
govg has joined #mlpack
< zoq> manish7294: Ahh, I see, you are right once again.
< zoq> manish7294: Would you like to open a PR?
manish7294 has quit [Ping timeout: 256 seconds]
manish7294 has joined #mlpack
< manish7294> zoq: Sure, I can do that. I just want to cross-check one parameter with you.
< manish7294> zoq: As per my understanding, "\nabla l_{B_t}(x_t)" denotes the stochastic gradient of the current batch at the current iterate value and "\nabla l_{B_t}(x_{t-1})" refers to the stochastic gradient of the current batch at the previous iterate value. Right?
< manish7294> i.e., we will have to save the previous iterate value as well at each iteration.
< manish7294> And if that's right, then what will the iterate_previous value be for iteration 1?
< zoq> zero, should work
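As a rough sketch of the bookkeeping discussed above (a toy loop, not the ensmallen code): keep the previous iterate around so the current batch's gradient can be evaluated at both x_t and x_{t-1}, with the previous iterate initialized to zeros on the first iteration; the gradients and the update step below are placeholders:

```cpp
#include <armadillo>
#include <iostream>

int main()
{
  arma::mat iterate = arma::randu<arma::mat>(5, 1);
  arma::mat previousIterate(iterate.n_rows, iterate.n_cols, arma::fill::zeros);

  for (size_t t = 0; t < 10; ++t)
  {
    // Placeholder gradients; in the real update these would both be computed
    // on the *current* batch, once at `iterate` and once at `previousIterate`.
    arma::mat gradientAtIterate = 2 * iterate;
    arma::mat gradientAtPrevious = 2 * previousIterate;

    // The difference of the two gradients is the kind of quantity that feeds
    // the BB-style step-size computation in the paper; printed only for
    // illustration here.
    std::cout << "t=" << t << "  ||grad diff|| = "
              << arma::norm(gradientAtIterate - gradientAtPrevious, 2) << "\n";

    previousIterate = iterate;            // save x_{t-1} before taking a step
    iterate -= 0.01 * gradientAtIterate;  // placeholder update step
  }
}
```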
< manish7294> zoq: all right, then I will open a PR shortly once I find some time.
< zoq> manish7294: great, thanks!
Sonal has joined #mlpack
< Sonal> help
Sonal has quit [Client Quit]
< manish7294> wow! that was something out of the blue :)
kucuk has joined #mlpack
sreenik has quit [Quit: Page closed]
manish7294 has quit [Quit: Page closed]
< kucuk> Hello everyone, I am coming from mlpack's GSoC ideas page on GitHub. I want to participate and contribute to this organisation. I have already read some basic instructions; do you have any tips/recommendations? I'm an applied maths student and I am very interested in data science and machine learning, but my C++ knowledge might be lacking and I'm not familiar with mlpack yet, so anything would be useful :) thank you
< kucuk> What can I do to prepare myself until the application date comes, or even until the coding phase, so that I would be ready if I got selected by any luck?
kucuk has quit [Remote host closed the connection]
kucuk has joined #mlpack
kucuk has quit [Client Quit]
addy0309 has joined #mlpack
addy0309 has quit [Ping timeout: 256 seconds]
< rcurtin> kucuk: hi there! most of the tips and preparation advice we have is collected here: http://www.mlpack.org/gsoc.html and http://www.mlpack.org/involved.html
ayesdie has joined #mlpack
favre49 has joined #mlpack
Soonmok has quit [Quit: Connection closed for inactivity]
ayesdie has quit [Ping timeout: 256 seconds]
ayesdie has joined #mlpack
< ayesdie> rcurtin: I ran the code that you wrote (the one using arma::sp_mat and using .cols() on it), and that worked more or less fine. However, it did not work on the `groundTruth` matrix and gave `0`s for every value.
< ayesdie> I extended your program and posted its output in a comment on the PR.
< ayesdie> Is there an issue with `armadillo` that is triggering this? I'm confused :s
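For context, a small self-contained check of the operation under discussion (a toy example, not the code from the PR): extracting a block of columns from an arma::sp_mat with .cols(); the `groundTruth` contents here are made up:

```cpp
#include <armadillo>

int main()
{
  // Build a small sparse matrix with a few nonzero entries.
  arma::sp_mat groundTruth(4, 6);
  groundTruth(0, 1) = 1.0;
  groundTruth(2, 3) = 2.0;
  groundTruth(3, 5) = 3.0;

  // Extract columns 1 through 3 (inclusive); the nonzeros in that range
  // should survive the extraction rather than coming out as all zeros.
  arma::sp_mat block = groundTruth.cols(1, 3);
  block.print("block");
}
```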
riaash04 has joined #mlpack
< riaash04> In the test/main tests, BOOST_FIXTURE_TEST_SUITE is used. The fixture structs used with this call RestoreSettings() from cli.hpp, which requires that the test name has some stored settings (set via StoreSettings()). I couldn't find any StoreSettings() calls. Where are they done, or are they done automatically (and if so, what is the test name string they create)?
riaash04 has quit [Ping timeout: 256 seconds]
ayesdie has quit [Quit: Page closed]
favre49 has quit [Quit: Page closed]
aman_p has joined #mlpack
< rcurtin> ayesdie: I'm not sure what is happening, but I very much doubt that there is a bug in Armadillo here; it's pretty well tested
< rcurtin> it's not necessarily bulletproof, so it's possible there is an Armadillo bug, but I was not able to reproduce the problem you showed
< rcurtin> riaash04: the StoreSettings() calls are done in the relevant Option class... so see src/mlpack/bindings/tests/test_option.hpp
SinghKislay has joined #mlpack
< SinghKislay> Hello
< zoq> SinghKislay: Hello, there!
< SinghKislay> Hi, I'm trying to implement residual layers. I'm using the forward method to do that, but I am unable to understand how to backprop (Adam).
< SinghKislay> Also, I cannot find the optimizer files in mlpack/core.
< SinghKislay> Could you point me in the right direction?
< zoq> we moved the optimizers into their own repo: https://github.com/mlpack/ensmallen
< zoq> not sure what you mean by "unable to understand how to backprop (Adam)", can you clarify that part?
< SinghKislay> yeah sure, in theory we first forward propagate to get the error, and from there we calculate the gradients. In all the examples it is model.Add() and then model.Train(); since I'm implementing the Forward() method myself, how do I train?
< SinghKislay> I mean, after I have run a method, say model.Gradient(), what do I do to take an Adam step?
< zoq> Are you using the FFN class?
< SinghKislay> yes
< zoq> this will wrap all the necessary functions for you, so it will call the forward, backward, and gradient methods
< SinghKislay> No, sorry convolution class
< zoq> using the specified optimizer, e.g. Adam
< SinghKislay> yes
< zoq> do you think you could post your initial model, e.g. as a GitHub gist?
< zoq> the convolution module doesn't have an Add method
< zoq> perhaps you'd like to manually call the functions?
< SinghKislay> I was not clear in the beginning. What I want to do is get the output after each forward op. After going through the docs I figured the FFN class won't work for that, so I checked out the convolution module; it has a Forward() method. My question is: say I got to the error via the Forward() method, how do I update my weights?
< zoq> AdamUpdate adamUpdate(...); adamUpdate.Initialize(weights.n_rows, weights.n_cols); adamUpdate.Update(weights, stepSize, error);
< SinghKislay> Thanks
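Fleshing out zoq's snippet above as a rough, self-contained sketch (assuming the ensmallen 1.x update-policy interface and a placeholder gradient produced by your own forward/backward pass):

```cpp
#include <ensmallen.hpp>

int main()
{
  arma::mat weights = arma::randu<arma::mat>(10, 5);
  arma::mat gradient = arma::randu<arma::mat>(10, 5);  // placeholder gradient
  const double stepSize = 0.001;

  ens::AdamUpdate adamUpdate;  // default epsilon, beta1, beta2
  adamUpdate.Initialize(weights.n_rows, weights.n_cols);

  // One Adam step; `weights` is modified in place.
  adamUpdate.Update(weights, stepSize, gradient);

  weights.print("weights after one Adam step");
}
```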
SinghKislay has quit [Ping timeout: 256 seconds]
aman_p has quit [Ping timeout: 245 seconds]
travis-ci has joined #mlpack
< travis-ci> mlpack/ensmallen#191 (ensmallen-1.14.1 - 83e54b3 : Ryan Curtin): The build passed.
< travis-ci> Change view : https://github.com/mlpack/ensmallen/compare/700c7317d7da^...83e54b35529b
travis-ci has left #mlpack []