ChanServ changed the topic of #mlpack to: Due to ongoing spam on freenode, we've muted unregistered users. See http://www.mlpack.org/ircspam.txt for more information, or also you could join #mlpack-temp and chat there.
cjlcarvalho has joined #mlpack
cjlcarvalho has quit [Ping timeout: 268 seconds]
vivekp has quit [Ping timeout: 250 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 244 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
< davida>
zoq: As a way to keep the speed of the cube multiplication while still being flexible enough to accept arbitrary-length sequences, rather than modifying Rho at each step, would it make sense to take in an extra masking cube? Alternatively, feeding in a rowvec of Rho values might be useful. We would need to take care of the "Shuffle" in the optimizer, though.
mrohit[m] has quit [Remote host closed the connection]
mrohit[m] has joined #mlpack
< zoq>
davida: I think we either provide another vector with the sequence lengths or we use arma::field.
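(A rough sketch of the two options above, in plain Armadillo; the sizes and variable names are made up for illustration and this is not an actual mlpack API.)

    #include <armadillo>

    int main()
    {
      const arma::uword inputSize = 4;

      // Option 1: pad every sequence to the longest length (here 25) and
      // keep a separate vector holding each sequence's true length.
      arma::cube padded(inputSize, 3 /* sequences */, 25 /* time steps */,
                        arma::fill::zeros);
      arma::urowvec lengths = { 10, 25, 17 };

      // Option 2: an arma::field with one matrix per sequence, so every
      // sequence can have a different number of time steps without padding.
      arma::field<arma::mat> sequences(3);
      sequences(0) = arma::mat(inputSize, 10, arma::fill::randu);
      sequences(1) = arma::mat(inputSize, 25, arma::fill::randu);
      sequences(2) = arma::mat(inputSize, 17, arma::fill::randu);

      return 0;
    }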
< Hemal>
I don't understand the terms 'PROGRAM_INFO', 'PARAM_MATRIX_IN', 'PARAM_MATRIX_OUT'.
< Hemal>
What are they? They don't seem to be macros, since macros use the #define directive?
< rcurtin>
Hemal: they are macros to describe the parameters that the mlpack_knn program will use
< rcurtin>
they are defined in src/mlpack/core/util/param.hpp
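(For reference, a trimmed and paraphrased sketch of how those macros appear near the top of a command-line program such as knn; the real help text and parameter list in mlpack's knn_main.cpp differ, and this snippet is not a complete binding on its own.)

    // mlpack bindings typically include mlpack/prereqs.hpp and related
    // headers first; only the parameter macros are shown here.
    #include <mlpack/core/util/param.hpp>

    // Describes the program as a whole: the name and help text shown by --help.
    PROGRAM_INFO("K-Nearest Neighbors", "Finds the k nearest neighbors of "
        "points in a query set with respect to a reference dataset.");

    // An input matrix parameter: long name, help text, single-letter alias.
    PARAM_MATRIX_IN("reference", "Matrix containing the reference dataset.",
        "r");

    // An output matrix parameter that the program fills in before exiting.
    PARAM_MATRIX_OUT("distances", "Matrix to store computed distances in.",
        "d");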
< Hemal>
ok, I will check it out.
< Hemal>
Any tips on getting started? I've set up the dev environment, read the style guidelines, run knn, and am now trying to understand the code base. After understanding the codebase, I would move on to PRs for contribution.
< rcurtin>
Hemal: sounds good, it looks like you have done most of what I would recommend
< davida>
I saw earlier in the year that there was a question about L2 regularisation, which was not yet implemented in mlpack. Is there a plan to add it?
< rcurtin>
davida: L2 regularization for neural networks? or something else?
< davida>
Yes, for NN.
< rcurtin>
ok, I see. I don't know of anyone actively implementing it, but it shouldn't be too hard to do
< Hemal>
rcurtin that sounds great :)
< davida>
rcurtin: I believe L1 and L2 regularization are both available in Keras.
< rcurtin>
yep, they are; it's a little unclear to me what the best way is to add it to mlpack's framework. Maybe modifying the FFN and RNN classes directly is an option, but it might also be possible to do it through the proposed 'FunctionWrapper' interface
< rcurtin>
where basically you might do, e.g., 'model.Train(data, responses, std::make_tuple(PrintLoss<>(), L2Regularization<>(0.01), ModelCheckpoint<>()))'
< rcurtin>
I'm not sure if I have the syntax there right, but basically this would be somewhat similar to the idea of Keras callbacks, except it would all be worked out at compile time, so there should not be any runtime overhead
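(To make the compile-time callback idea concrete, here is a standalone sketch. None of these types exist in mlpack; PrintLoss, L2Regularization, and NotifyEpochEnd are invented names used only to show how a std::tuple of callbacks can be expanded at compile time with no virtual dispatch.)

    #include <cstddef>
    #include <iostream>
    #include <tuple>
    #include <utility>

    struct PrintLoss
    {
      void OnEpochEnd(const size_t epoch, const double loss) const
      {
        std::cout << "epoch " << epoch << ": loss " << loss << std::endl;
      }
    };

    struct L2Regularization
    {
      double lambda;
      // A real implementation would add lambda * ||w||^2 to the objective;
      // this is only a placeholder hook.
      void OnEpochEnd(const size_t /* epoch */, const double /* loss */) const { }
    };

    // The callback set is a tuple, so it is resolved entirely at compile time:
    // adding or removing a callback changes the type, not the runtime cost.
    template<typename... Callbacks>
    void NotifyEpochEnd(const std::tuple<Callbacks...>& callbacks,
                        const size_t epoch,
                        const double loss)
    {
      std::apply(
          [&](const auto&... cb) { (cb.OnEpochEnd(epoch, loss), ...); },
          callbacks);
    }

    int main()
    {
      auto callbacks = std::make_tuple(PrintLoss(), L2Regularization{ 0.01 });
      NotifyEpochEnd(callbacks, 1, 0.42);
    }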
< davida>
sounds good.
Hemal has left #mlpack []
< rcurtin>
can't make any promises on how long it will take until that idea sees the light of day; I'm currently pretty underwater with the process of moving mlpack's optimization framework into its own library
< rcurtin>
we got most of that done a couple of weeks ago and the framework (ensmallen) is now available on its own, but the documentation needs some improvement, so that is how I am spending my evenings this week...
Hemal has joined #mlpack
< davida>
No problem. Can use Dropout in the meantime.
< davida>
Why did you want to move the optimizer framework out of mlpack? Are ppl asking to have access to the optimizers and not the machine learning framework?
Hemal has left #mlpack []
< rcurtin>
davida: basically, yeah---the optimizer framework is useful on its own for lots of problems
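(A small illustration of what using ensmallen on its own can look like, sketched from its documented differentiable-function interface; check the ensmallen docs for the exact current API. Minimizing a simple quadratic with L-BFGS only needs a class exposing Evaluate() and Gradient().)

    #include <ensmallen.hpp>

    // f(x) = sum_i (x_i - 3)^2, a simple differentiable objective.
    class QuadraticFunction
    {
     public:
      double Evaluate(const arma::mat& x)
      {
        return arma::accu(arma::square(x - 3.0));
      }

      void Gradient(const arma::mat& x, arma::mat& gradient)
      {
        gradient = 2.0 * (x - 3.0);
      }
    };

    int main()
    {
      QuadraticFunction f;
      arma::mat coordinates(10, 1, arma::fill::randu);

      ens::L_BFGS optimizer;
      optimizer.Optimize(f, coordinates);  // coordinates should now be near 3.

      coordinates.print("solution:");
      return 0;
    }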