ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
gaurav_ has joined #mlpack
< gaurav_>
How can I join this organisation?
gaurav_ has quit [Ping timeout: 256 seconds]
ayesdie has joined #mlpack
< ShikharJ>
shadowfox: I think basic implementation skills and familiarity with high school math are the prerequisites; anything further is a plus.
< ShikharJ>
gaurav_: Could you be more specific about what you mean by "join this organisation"?
sreenik has joined #mlpack
manish7294 has joined #mlpack
< manish7294>
zoq: You there?
< sreenik>
Where are the optimizers located? Aren't they supposed to be in mlpack/src/mlpack/core? I've done some basic searches but I can't really find them in the source.
< sreenik>
manish7294: Oh. Thanks for the prompt reply. Checking right away
sreenik has quit [Quit: Page closed]
< manish7294>
zoq: There are several issues with the big batch SGD implementation. I am listing them here; maybe you can have a look once you get a chance.
< manish7294>
2. (overallObjectiveUpdate > (overallObjective + searchParameter * stepSize * gradientNorm)) should be (overallObjectiveUpdate > (overallObjective - searchParameter * stepSize * gradientNorm)) - line 10 of Algorithm 3, "Big batch SGD: with BB stepsizes".
< manish7294>
3. This is not much of an issue, but again it could matter in some cases: the authors mention reducing stepSize by half at each backtracking step (line 11, Algorithm 3), whereas we use a backtrackStepSize of 0.1 by default, which is a big step in comparison.
< manish7294>
4. The current implementation is missing a final iterate update (line 23, Algorithm 3).
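(A minimal sketch of the backtracking test with the sign corrected as in point 2, the stepsize halved at each backtracking step as in point 3, and a final iterate update as in point 4; a toy quadratic stands in for the batch objective, and the names below are illustrative only, not mlpack's actual BigBatchSGD code.)

#include <armadillo>

int main()
{
  // Toy batch objective f(x) = 0.5 * ||x||^2, so its gradient is simply x.
  auto f = [](const arma::mat& x) { return 0.5 * arma::dot(x, x); };

  arma::mat iterate(5, 1, arma::fill::randu);
  const arma::mat gradient = iterate;                         // Gradient of the toy f.
  const double gradientNorm = arma::dot(gradient, gradient);  // ||g||^2 as used below.
  const double searchParameter = 0.1;                         // Armijo-style constant.
  double stepSize = 1.0;

  const double overallObjective = f(iterate);
  double overallObjectiveUpdate = f(iterate - stepSize * gradient);

  // Corrected line 10 of Algorithm 3: keep backtracking while the sufficient
  // decrease condition f(x - a*g) <= f(x) - c * a * ||g||^2 is violated.
  while (overallObjectiveUpdate >
      overallObjective - searchParameter * stepSize * gradientNorm)
  {
    stepSize /= 2.0;                                          // Line 11: halve the stepsize.
    overallObjectiveUpdate = f(iterate - stepSize * gradient);
  }

  iterate -= stepSize * gradient;                             // Line 23: final iterate update.
  iterate.print("iterate after one backtracking step:");
  return 0;
}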
< zoq>
manish7294: Ahh, I see, you are right once again.
< zoq>
manish7294: Would you like to open a PR?
manish7294 has quit [Ping timeout: 256 seconds]
manish7294 has joined #mlpack
< manish7294>
zoq: Sure, I can do that. I just want to cross-check one parameter with you.
< manish7294>
zoq: As per my understanding, "\nabla l_{B_t}(x_t)" denotes the stochastic gradient of the current batch at the current iterate value, and "\nabla l_{B_t}(x_{t-1})" refers to the stochastic gradient of the current batch at the previous iterate value. Right?
< manish7294>
i.e. we will have to save the previous iterate value as well at each iteration.
< manish7294>
And if that's right, then what will be the iterate_previous value for iteration 1?
< zoq>
Zero should work.
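(A minimal sketch of the bookkeeping just discussed: the previous iterate is saved every iteration and taken as zero for iteration 1, and the same batch gradient is evaluated at both iterates to form a standard Barzilai-Borwein stepsize; the toy gradient and the exact mapping onto Algorithm 3 are assumptions, not the actual implementation.)

#include <armadillo>

int main()
{
  // Toy objective f(x) = 0.5 * ||x||^2, so the (batch) gradient is simply x.
  auto gradientAt = [](const arma::mat& x) { return x; };

  arma::mat iterate(3, 1, arma::fill::randu);
  arma::mat iteratePrev(3, 1, arma::fill::zeros);   // Previous iterate starts at zero.

  for (size_t t = 0; t < 10; ++t)
  {
    // \nabla l_{B_t}(x_t) and \nabla l_{B_t}(x_{t-1}): same batch, two iterates.
    const arma::mat g = gradientAt(iterate);
    const arma::mat gPrev = gradientAt(iteratePrev);

    // Standard BB stepsize ||s||^2 / <s, y>, with a fallback for a zero denominator.
    const arma::mat s = iterate - iteratePrev;
    const arma::mat y = g - gPrev;
    const double denom = arma::dot(s, y);
    const double stepSize = (denom != 0.0) ? arma::dot(s, s) / denom : 0.1;

    iteratePrev = iterate;          // Save x_{t-1} for the next iteration.
    iterate -= stepSize * g;        // Update the iterate.
  }

  iterate.print("final iterate:");
  return 0;
}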
< manish7294>
zoq: All right, then I will open a PR shortly, once I find some time.
< zoq>
manish7294: great, thanks!
Sonal has joined #mlpack
< Sonal>
help
Sonal has quit [Client Quit]
< manish7294>
wow! that was something out of the blue :)
kucuk has joined #mlpack
sreenik has quit [Quit: Page closed]
manish7294 has quit [Quit: Page closed]
< kucuk>
Hello everyone, I am coming from mlpack's GSoC ideas page on GitHub. I want to participate and contribute to this organisation, and I have already read some basic instructions; do you have any tips/recommendations? I'm an applied maths student and I am very interested in data science and machine learning, but my C++ knowledge might be lacking, and I'm not familiar with mlpack yet, so anything would be useful :) Thank you
< kucuk>
What can I do to prepare myself before the application date, or even before the coding phase, so that I would be ready if I got selected by any luck?
kucuk has quit [Remote host closed the connection]
Soonmok has quit [Quit: Connection closed for inactivity]
ayesdie has quit [Ping timeout: 256 seconds]
ayesdie has joined #mlpack
< ayesdie>
rcurtin: I ran the code that you wrote (the one using arma::sp_mat and calling .cols() on it), and it mostly worked fine. However, it did not work on the `groundTruth` matrix and gave `0`s for every value.
< ayesdie>
I posted an extended version of your program and its output in a comment on the PR.
< ayesdie>
Is there an issue with `armadillo` that is triggering this? I'm confused :s
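(For reference, a small self-contained check of the behaviour in question, extracting a block of columns from an arma::sp_mat with .cols(); the matrix contents here are made up, and `groundTruth` is only reused as a name from the conversation.)

#include <armadillo>

int main()
{
  // A sparse 4x6 matrix standing in for `groundTruth`.
  arma::sp_mat groundTruth(4, 6);
  groundTruth(0, 1) = 1.0;
  groundTruth(2, 3) = 2.0;
  groundTruth(3, 5) = 3.0;

  // Extract a contiguous block of columns; the nonzeros in columns 1..3
  // should survive rather than coming out as all zeros.
  arma::sp_mat block = groundTruth.cols(1, 3);
  block.print("columns 1..3 of groundTruth:");

  return 0;
}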
riaash04 has joined #mlpack
< riaash04>
In the test/main tests, BOOST_FIXTURE_TEST_SUITE is used. The fixture structs passed to it call RestoreSettings() from cli.hpp, which requires that the test name have some stored settings (via StoreSettings()). I couldn't find any StoreSettings() calls. Where are they done, or are they done automatically (and if so, what is the test name string they create)?
riaash04 has quit [Ping timeout: 256 seconds]
ayesdie has quit [Quit: Page closed]
favre49 has quit [Quit: Page closed]
aman_p has joined #mlpack
< rcurtin>
ayesdie: I'm not sure what is happening, but I very much doubt that there is a bug in Armadillo here; it's pretty well tested
< rcurtin>
it's not necessarily bulletproof, so it's possible there is an Armadillo bug, but I was not able to reproduce the problem you showed
< rcurtin>
riaash04: the StoreSettings() calls are done in the relevant Option class... so see src/mlpack/bindings/tests/test_option.hpp
SinghKislay has joined #mlpack
< SinghKislay>
Hello
< zoq>
SinghKislay: Hello, there!
< SinghKislay>
Hi, I'm trying to implement residual layers. I'm using the Forward method to do that, but I am unable to understand how to backprop (with Adam).
< SinghKislay>
Also, I cannot find the optimizer files in mlpack/core.
< SinghKislay>
Could you point me in the right direction?
< zoq>
Not sure what you mean by "unable to understand how to backprop (Adam)"; can you clarify that part?
< SinghKislay>
Yeah, sure. In theory, first we forward propagate to the error, and from there we calculate the gradients. In all the examples it is model.Add() and then model.Train(); since I'm implementing the Forward() method, how do I train?
< SinghKislay>
I mean, after I have run a method, say model.Gradient(), what do I do to take an Adam step?
< zoq>
Are you using the FFN class?
< SinghKislay>
yes
< zoq>
this will wrap all the necessary functions for you, so it will call the Forward, Backward, and Gradient methods
< SinghKislay>
No, sorry convolution class
< zoq>
using the specified optimizer, e.g. Adam
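(A sketch of the pattern being described: the FFN class owns the layers added via Add(), and Train() drives the Forward/Backward/Gradient calls internally using the optimizer passed in, Adam here via ensmallen, assuming a recent enough mlpack. The layer sizes, dummy data, and constructor arguments below are illustrative assumptions, not a tested model.)

#include <mlpack/core.hpp>
#include <mlpack/methods/ann/ffn.hpp>
#include <mlpack/methods/ann/layer/layer.hpp>
#include <ensmallen.hpp>

using namespace mlpack::ann;

int main()
{
  // Dummy data: 100 samples of 28x28 single-channel images, 10 classes (labels 1..10).
  arma::mat trainData(28 * 28, 100, arma::fill::randu);
  arma::mat trainLabels = arma::floor(arma::randu<arma::mat>(1, 100) * 10) + 1;

  FFN<NegativeLogLikelihood<>, RandomInitialization> model;
  model.Add<Convolution<>>(1, 6, 5, 5, 1, 1, 0, 0, 28, 28);  // 28x28 -> 24x24, 6 maps.
  model.Add<ReLULayer<>>();
  model.Add<MaxPooling<>>(2, 2, 2, 2);                        // 24x24 -> 12x12.
  model.Add<Linear<>>(6 * 12 * 12, 10);
  model.Add<LogSoftMax<>>();

  // Adam performs the weight updates; no manual Backward()/Gradient() calls needed.
  ens::Adam optimizer(0.001, 32, 0.9, 0.999, 1e-8, 5 * trainData.n_cols);
  model.Train(trainData, trainLabels, optimizer);

  return 0;
}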
< SinghKislay>
yes
< zoq>
Do you think you could post your initial model, e.g. as a GitHub gist?
< zoq>
the convolution module doesn't have an Add method
< zoq>
Perhaps you would like to manually call the functions?
< SinghKislay>
I was not clear in the beginning. What I want to do is get the output after each forward op. After going through the docs I figured the FFN class won't work for that, so I checked out the Convolution module; it has a Forward() method. My question is: say I got to the error via the Forward() method, how do I update my weights?