verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
witness has joined #mlpack
witness has quit [Quit: Connection closed for inactivity]
< naywhayare> zoq: I just finished my defense (hooray!), so now I'm going to spend some time focusing on the mlpack 1.1.0 release
< naywhayare> I know you've been working a lot on the ann code, so I wanted to see what you thought about how ready that code is for release
< naywhayare> I still have a lot I want to do (primarily documentation and refactoring, though)
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#188 (master - 7d64cd6 : Ryan Curtin): The build passed.
< zoq> I'm refactoring the ann code to make the API simpler. The refactoring of the feedforward network structure is already finished; I guess we could include that code in the next release, but I need to make sure it builds on every slave.
< zoq> Right now I'm about to refactor the convolutional neural network code, which could be included in the next release depending on the release date.
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#190 (master - 6382806 : Ryan Curtin): The build passed.
< naywhayare> zoq: thanks! I'm hoping that a release might be possible by early September, but I'll keep you updated with what I'm doing
< naywhayare> I need to sit down and set some goals for what I'm planning to do; then I can set a better timeline for when the things I'll be working on will be ready, and we can go from there
< naywhayare> it's also feasible, of course, to release a 1.1.1 version with updated ANN functionality shortly after a 1.1.0 release :)
< zoq> naywhayare: writing down some goals is a great idea ... I guess the feature list is already packed with a lot of changes, additions, and improvements (trees, cf, ...)
< zoq> I'm not completely sure, but I should finish the refactoring of the convolutional neural network modules in the next few days.
< zoq> it would be nice to have another demo before the actual release, but let's see
gopala has joined #mlpack
< gopala> hello naywhayare.. how can I check the status of LARS? I want it to have the predict function supported
< zoq> gopala: hello, naywhayare implemented the predict function in commit 7d64cd6; you can check out the latest master or patch your current version.
< gopala> that's great.. I'm rather new at git.. how do I get that branch?
< gopala> ok, I see the latest has it.. I'll get that.. thanks guys
< naywhayare> gopala: yep, just implemented that; let me know if you have any problems
< naywhayare> there's both the Predict() function if you're writing C++, and --test_file if you're using the command-line interface
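(For reference, a minimal C++ sketch of calling the newly added LARS Predict(). The constructor arguments and exact signatures here are assumptions based on mlpack's LARS class of that era, so check lars.hpp in the current master before relying on them.)

    #include <mlpack/methods/lars/lars.hpp>

    using namespace mlpack::regression;

    int main()
    {
      arma::mat X;  // training matrix
      arma::vec y;  // responses
      // ... load X and y here ...

      LARS lars(true /* useCholesky */, 0.1 /* lambda1 */);
      arma::vec beta;
      lars.Regress(X, y, beta);  // fit the model, solution ends up in beta

      arma::mat testPoints;
      arma::vec predictions;
      lars.Predict(testPoints, predictions);  // the newly added prediction step

      return 0;
    }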
< gopala> thanks much.. will let you know
< gopala> while you guys are here :) -- is there a plan to use OpenMP wherever applicable, or do these algorithms fall back to Armadillo's parallelization where applicable?
< naywhayare> yeah; right now density estimation trees use OpenMP
< naywhayare> I'd like to add OpenMP support to more algorithms as I find time, but for linear-algebra-heavy algorithms (including LARS), I think the better choice will be to use OpenBLAS with Armadillo
< naywhayare> I should run some timing simulations to see how much speedup OpenBLAS gives...
< gopala> yeah, I'm using that right now.. but wanted to check your recommendation
< naywhayare> if you have any comparisons of Armadillo+OpenBLAS vs. Armadillo+regular LAPACK/BLAS for LARS, let me know; I'd be interested
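(A quick way to get such a comparison, as a hypothetical harness rather than anything from the mlpack sources: build the same Armadillo program against the reference BLAS and against OpenBLAS, and time a BLAS-heavy operation with arma::wall_clock.)

    #include <armadillo>
    #include <iostream>

    // Link once against the reference BLAS and once against OpenBLAS, e.g.:
    //   g++ timing.cpp -o timing_ref  -larmadillo
    //   g++ timing.cpp -o timing_open -larmadillo -lopenblas
    // (library names and link order vary by system)
    int main()
    {
      arma::arma_rng::set_seed(42);
      arma::mat A(2000, 2000, arma::fill::randu);
      arma::mat B(2000, 2000, arma::fill::randu);

      arma::wall_clock timer;
      timer.tic();
      arma::mat C = A * B;  // dgemm; runtime is dominated by the BLAS backend
      std::cout << "multiplication took " << timer.toc() << "s" << std::endl;

      return 0;
    }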
< gopala> my other C++ code (my personal NMF variant) really gains from OpenMP+Armadillo; I've not formally noted down the speedup, but it seems sublinear w.r.t. the number of cores
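(For flavor, a hypothetical sketch of the pattern gopala describes: independent per-column work parallelized with OpenMP on top of Armadillo. This is illustrative only, not gopala's code or mlpack's NMF.)

    #include <armadillo>

    // Compile with -fopenmp. A stand-in for an NMF-style inner loop: each
    // column update is independent, so the loop parallelizes cleanly.
    void UpdateColumns(arma::mat& H, const arma::mat& W, const arma::mat& V)
    {
      #pragma omp parallel for
      for (arma::uword j = 0; j < H.n_cols; ++j)
        H.col(j) = arma::solve(W, V.col(j));  // least-squares update of column j
    }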
< naywhayare> ah, I should try mlpack's NMF implementation like that
< naywhayare> maybe the other matrix decomposition algorithms would benefit too
< gopala> I'd imagine.. the latest Armadillo also supports NVBLAS.. so that's great.. I've not compared them all yet, but I'm super curious
< naywhayare> nice, I'll have to ask Conrad (the Armadillo maintainer) if he has any comparisons and benchmarks
< gopala> on LARS, it'd be good to be able to load in a model file (-m) so I don't have to train multiple times
< gopala> to test
< gopala> (but it works on my data, so thanks)
< naywhayare> yeah, I thought about that too, but I first need to refactor LARS to handle boost::serialization, so I put it off for a bit
< naywhayare> if it works on your data, I won't worry about it for now, but it is definitely on the list of things to handle before the next release, which will hopefully be in the next month or so (maybe sooner) :)
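(To make the serialization idea concrete, a minimal boost::serialization sketch on a toy stand-in class. LARS itself holds Armadillo objects, which would need a small serialization shim, so this is not the planned mlpack code; all member names are placeholders.)

    #include <fstream>
    #include <vector>
    #include <boost/archive/text_oarchive.hpp>
    #include <boost/archive/text_iarchive.hpp>
    #include <boost/serialization/vector.hpp>

    // Toy model: member names are placeholders, not mlpack's actual LARS members.
    class ToyModel
    {
     public:
      double lambda1;
      std::vector<double> beta;  // a plain vector stands in for arma::vec

      template<typename Archive>
      void serialize(Archive& ar, const unsigned int /* version */)
      {
        ar & lambda1;  // one method handles both save and load
        ar & beta;
      }
    };

    int main()
    {
      ToyModel m;
      m.lambda1 = 0.1;
      m.beta = { 0.5, -1.2, 0.0 };

      // save: this is what a hypothetical -m model file could contain
      {
        std::ofstream ofs("lars_model.txt");
        boost::archive::text_oarchive oa(ofs);
        oa << m;
      }  // archive and stream flush on scope exit

      // load it back without retraining
      ToyModel m2;
      {
        std::ifstream ifs("lars_model.txt");
        boost::archive::text_iarchive ia(ifs);
        ia >> m2;
      }

      return 0;
    }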