verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
Mathnerd314 has quit [Ping timeout: 250 seconds]
nilay has joined #mlpack
mentekid has joined #mlpack
mentekid has quit [Ping timeout: 260 seconds]
mentekid has joined #mlpack
< nilay> zoq: yes, the boundary is of size (params->rowSize, params->colSize). maybe I should also put that in the min function when we calculate the variable lenLoc; then this issue won't come up.
< zoq> nilay: Good idea.
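For context, a minimal sketch of the clamping idea nilay describes: fold the boundary size into the min when computing lenLoc, so an index can never run past the (rowSize, colSize) boundary. Only rowSize and colSize come from the discussion; the function name and the requested parameter are hypothetical.

    #include <algorithm>
    #include <cstddef>

    // Hypothetical sketch: clamp the requested length against the padded
    // boundary (params->rowSize, params->colSize) so indices stay in range.
    size_t ClampedLenLoc(const size_t requested,
                         const size_t rowSize,
                         const size_t colSize)
    {
      return std::min(requested, std::min(rowSize, colSize));
    }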
< zoq> nilay: Also, can you push the current state of the inception layer, that would give us the chance to take a look and probably point out issues? Another thing, it would be great if you could add comments to the feature extraction functions.
< nilay> zoq: ok, I'll do that as soon as I get home
< nilay> may take 4-5 hrs
< zoq> okay
< nilay> also one more thing
< nilay> the other day you told me where CNN::Gradient gets called; isn't layer.Gradient different from CNN::Gradient?
< zoq> you mean the CNN::Gradient of the cnn_impl.hpp?
< nilay> yes
< zoq> The CNN::Gradient function is called by the optimizer with the input (which is unused in our case), the index of the current input sample, and the gradient of the complete network. We first Evaluate the network with the given input (forward pass). Afterwards we set the gradient and perform the backward pass. At the end we call the UpdateGradients function, which calls the Gradient function of each
< zoq> layer with the right parameters. So, CNN::Gradient calls another function, which calls layer.Gradient.
< nilay> by optimizer you mean rmsprop?
< zoq> yeah, or minibatch sgd, adam, etc.
< zoq> at the end, line 384 in cnn.hpp is called:
< zoq> layer.Gradient(layer.InputParameter(), delta, layer.Gradient());
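To make the flow above concrete, here is an illustrative skeleton of the shape of CNN::Gradient as zoq describes it. The member names are simplified and this is not the actual cnn_impl.hpp code.

    #include <mlpack/core.hpp>

    // Illustrative skeleton only; the real implementation lives in
    // cnn_impl.hpp, and the member names here are simplified.
    class NetworkSketch
    {
     public:
      void Gradient(const arma::mat& /* input, unused here */,
                    const size_t i,      // index of the current input sample
                    arma::mat& gradient) // gradient of the complete network
      {
        Evaluate(parameter, i, false); // forward pass with the i-th sample
        Backward();                    // set the gradient, backward pass
        UpdateGradients(gradient);     // calls each layer's Gradient(...),
                                       // i.e. layer.Gradient(
                                       //   layer.InputParameter(), delta,
                                       //   layer.Gradient());
      }

     private:
      void Evaluate(const arma::mat& parameter, const size_t i,
                    const bool deterministic);
      void Backward();
      void UpdateGradients(arma::mat& gradient);

      arma::mat parameter;
    };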
< nilay> function.Gradient(iterate, visitationOrder[currentFunction], gradient);
< nilay> is this it, if we take rmsprop as the optimization algorithm?
< zoq> yes, this calls the CNN::Gradient function in cnn_impl.hpp.
< zoq> The function is the same for any other optimizer.
< nilay> by function you mean the optimize function?
< zoq> yes, right
< nilay> ok, thanks.
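For the optimizer side, a rough sketch of the decomposable-optimizer loop that issues the quoted call; the optimizer-specific update of `iterate` and the shuffled visitation order are omitted for brevity.

    #include <mlpack/core.hpp>

    // Rough sketch of a decomposable optimizer's loop (RMSProp, minibatch
    // SGD, Adam, etc. all share this shape); the actual parameter update
    // and the visitation-order shuffling are left out.
    template<typename DecomposableFunctionType>
    void Optimize(DecomposableFunctionType& function,
                  arma::mat& iterate,
                  const size_t maxIterations)
    {
      arma::mat gradient;
      const size_t numFunctions = function.NumFunctions();

      for (size_t i = 0; i < maxIterations; ++i)
      {
        const size_t currentFunction = i % numFunctions;

        // This call lands in CNN::Gradient (cnn_impl.hpp), which in turn
        // reaches the per-layer layer.Gradient(...) calls.
        function.Gradient(iterate, currentFunction, gradient);

        // ... apply the optimizer-specific update to `iterate` here ...
      }
    }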
< keonkim> is there any way I can specify multiple parameters with one flag?
< keonkim> say --param 1 2 3
< keonkim> oh, there is a PARAM_VECTOR; I'll take a look at it
< zoq> keonkim: Actually, I was thinking the same; it would be neat to specify the dimensions that way.
< keonkim> zoq: yup :)
marcosirc has joined #mlpack
mentekid has quit [Ping timeout: 250 seconds]
mentekid has joined #mlpack
marcosirc has quit [Ping timeout: 250 seconds]
marcosirc has joined #mlpack
< keonkim> zoq: I tried it with this simple program, but it doesn't seem to work.
< keonkim> running $ ./test -p 1 -v gives a vector size of 0, and it says: Execution parameters: param: (Unknown data type - St6vectorIiSaIiEE)
< zoq> You should use CLI::ParseCommandLine(argc, argv); before const vector<int> param = CLI::GetParam< vector<int> >("param");
< keonkim> oh, you are right, thank you:)
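Putting zoq's hint together, a minimal test program might look like the sketch below. The PARAM_VECTOR arguments here are an assumption from memory rather than the exact macro signature, so check cli.hpp before copying.

    #include <mlpack/core.hpp>

    using namespace mlpack;
    using namespace std;

    // The exact PARAM_VECTOR arguments are an assumption; see cli.hpp.
    PARAM_VECTOR(int, "param", "A vector-valued parameter.", "p");

    int main(int argc, char** argv)
    {
      // ParseCommandLine must run before GetParam, or the value is empty.
      CLI::ParseCommandLine(argc, argv);
      const vector<int> param = CLI::GetParam<vector<int>>("param");
      Log::Info << "param holds " << param.size() << " value(s)." << endl;
      return 0;
    }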
< keonkim> It works now, but does not accept more than 1 parameter :(
< keonkim> I updated the code
< zoq> yeah, I don't think it's going to work with more than one parameter. I think we have to specialize CLI::GetParam for std::vector.
< zoq> CLI::GetParam in cli_impl.hpp, what do you think?
< keonkim> sure, and I think I can add that change
< zoq> sounds great :)
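Rather than guessing the cli_impl.hpp internals, here is a self-contained sketch of the core parsing step such a GetParam specialization would need: collecting every token given for one flag into a typed vector. The function name is hypothetical.

    #include <sstream>
    #include <string>
    #include <vector>

    // Sketch of the core parsing step: turn the raw tokens of one flag,
    // e.g. "--param 1 2 3" -> {"1", "2", "3"}, into a vector<int>.
    std::vector<int> ParseVectorParam(const std::vector<std::string>& tokens)
    {
      std::vector<int> values;
      for (const std::string& token : tokens)
      {
        std::istringstream stream(token);
        int value;
        if (stream >> value)
          values.push_back(value);
      }
      return values;
    }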
Mathnerd314 has joined #mlpack
< keonkim> But for now I am going to go with just one parameter for the imputer, and make those changes in the other pull request.
< keonkim> what do you think?
< zoq> sounds good. I think there should also be an option to use all dimensions; I don't like specifying each dimension with dim=1,2,3,4...
< zoq> hm, a range could also work: dim=1-10
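If the range syntax gets adopted, the expansion step could look roughly like this sketch; the function name and the exact spec format are hypothetical.

    #include <cstddef>
    #include <string>
    #include <vector>

    // Hypothetical sketch: expand a spec like "1-10" (or a single index
    // such as "3") into the list of dimensions to operate on.
    std::vector<size_t> ExpandDimensionRange(const std::string& spec)
    {
      std::vector<size_t> dims;
      const size_t dash = spec.find('-');
      if (dash == std::string::npos)
      {
        dims.push_back(std::stoul(spec)); // single dimension
      }
      else
      {
        const size_t begin = std::stoul(spec.substr(0, dash));
        const size_t end = std::stoul(spec.substr(dash + 1));
        for (size_t d = begin; d <= end; ++d)
          dims.push_back(d);
      }
      return dims;
    }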
< keonkim> I modified it so that when --dimension is not specified, the program will apply the changes to all dimensions with a warning.
< zoq> ah, good idea
< keonkim> it's not committed yet
< rcurtin> keonkim: I thought that PARAM_VECTOR worked, but I don't see any tests for it in CLITest
< rcurtin> I guess if it doesn't work we should either remove it or fix it; I don't think anything else in the codebase uses it
nilay has quit [Quit: Page closed]
nilay has joined #mlpack
mentekid has quit [Ping timeout: 276 seconds]
mentekid has joined #mlpack
mentekid has quit [Ping timeout: 246 seconds]
mentekid has joined #mlpack
< rcurtin> mentekid: ok, done with comments for the LSHSearch parallelization; once those are all addressed I'm fine with merging it, unless there's something else you wanted to change or add
mentekid has quit [Ping timeout: 250 seconds]
mentekid has joined #mlpack
mentekid has quit [Ping timeout: 246 seconds]
nilay has quit [Ping timeout: 250 seconds]
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#1186 (master - 448b5a8 : Ryan Curtin): The build was broken.
travis-ci has left #mlpack []
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#1187 (master - d7287ac : Marcus Edel): The build was broken.
travis-ci has left #mlpack []
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#1188 (master - e777a56 : Marcus Edel): The build is still failing.
travis-ci has left #mlpack []