verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
chenzhe has quit [Ping timeout: 248 seconds]
jacknlliu has joined #mlpack
< jacknlliu>
no GMR support?
jacknlliu has quit [Quit: Page closed]
chenzhe has joined #mlpack
< rcurtin>
jacknlliu: no, but there is HMMR support, see src/mlpack/methods/hmm/hmm_regression.hpp
< rcurtin>
maybe that can be adapted to the use-case that you need
stephentu has joined #mlpack
< stephentu>
does anybody know how we might use armadillo to compute the top (and only the top) eigenvector of a dense matrix?
< stephentu>
there is eigs_sym(), but this only works for sparse matrices
< stephentu>
(we tried passing in a dense matrix, in case it worked despite being undocumented, and it does not)
< stephentu>
this is strange to me b/c arpack works just fine on dense matrices
< stephentu>
(the scipy equivalent supports it just fine, and even better supports a general linear operator class)
< stephentu>
so this is definitely an artificial restriction. the question is, is there an easy way to access the underlying API w/o modifying armadillo code
< rcurtin>
stephentu: you are right, an artificial restriction was introduced when I wrapped ARPACK
< rcurtin>
I think that it would be relatively straightforward to modify the Armadillo code to compute just the first eigenvector
< rcurtin>
in 2015, a GSoC student rewrote ARPACK inside of Armadillo, so there is a reimplementation in C++ in there that you could *probably* use if you dug a bit (but it's not "externally" documented)
< rcurtin>
alternatively---and this would be the easier way to go in my opinion---take a look at eigs_sym_bones.hpp/eigs_sym_meat.hpp and add an overload for dense objects (arma::Base<T1> not arma::SpBase<T1>)
< rcurtin>
we can backport that in src/mlpack/core/arma_extend/, and the patch can be submitted upstream to Armadillo (I see little reason why Conrad wouldn't want to merge it)
< rcurtin>
do you think that idea sounds reasonable?
< rcurtin>
also hello and I hope things are going well :)
chenzhe has quit [Ping timeout: 255 seconds]
< stephentu>
ok i'll try out the 2nd option you suggested, which sounds like the right way to do it
< stephentu>
thanks
< stephentu>
things are quite busy as usual
< rcurtin>
yeah, some things are constant in life I guess :(
< stephentu>
but i definitely want to catch up to see what other cool stuff people have implemented for gsoc
< rcurtin>
my coworkers at Symantec were actually asking me about the NTM and HAM projects today, I wasn't aware that they even knew those projects were going on
< rcurtin>
so apparently some people think those projects are quite cool :) (myself included I guess)
< sumedhghaisas_>
rcurtin: Seems like I will be able to complete NTM by this week. The memory heads are complete. And the basic framework is also ready.
< stephentu>
rcurtin: i'm looking at /usr/local/Cellar/armadillo/7.950.1/include/armadillo_bits/sp_auxlib_bones.hpp
< stephentu>
it seems like i'll need to add a new version of run_aupd
< stephentu>
but unfortunately this is in a C++ class -- can i use arma_extend to add new member functions to a C++ class?
< stephentu>
also the annoying part is, looking at the impl of sp_auxlib::run_aupd, 95% of the code will be identical
< stephentu>
in the dense vs sparse case
< stephentu>
i think the right thing to do for arma_extend is just copy and paste and change the 5%?
< stephentu>
but obviously when/if it gets merged upstream it will be refactored
< stephentu>
in fact i could probably skip modifying any of the C++ classes and make it run stand alone for now
< stephentu>
i just need to add a new overload to the fn_eigs_sym.hpp file
< rcurtin>
yeah, for arma_extend we would have to do annoying copypasta
< rcurtin>
but for the patch to submit upstream we can make the reasonable set of changes
< rcurtin>
another option would be just to make the simple patch that changes the necessary 5% of the code and not worry about the backport
< rcurtin>
then in the mlpack code do something like
< rcurtin>
#if ARMA_VERSION_MAJOR is too old (or whatever)
< rcurtin>
eig_sym() then throw away all the other eigenvectors
< rcurtin>
#else
< rcurtin>
eigs_sym() and save computations! :)
< rcurtin>
#endif
< rcurtin>
that'll be inefficient for now, but fast later as new armadillo versions are propagated; I'd suggest that if the backport work would be a huge undertaking
< stephentu>
backport should be pretty easy, just lots of copy and paste
< stephentu>
which i'm fine with
< stephentu>
thanks
< rcurtin>
sure, glad I could help :)
stephentu has quit [Ping timeout: 248 seconds]
kris1 has joined #mlpack
sumedhghaisas_ has quit [Ping timeout: 268 seconds]
chenzhe has joined #mlpack
chenzhe has quit [Ping timeout: 255 seconds]
partobs-mdp has joined #mlpack
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
kris1 has quit [Client Quit]
kris1 has joined #mlpack
< partobs-mdp>
zoq: Did you manage to implement the MeanSquaredError test? I still can't figure it out :(
chenzhe has joined #mlpack
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
kris1 has quit [Ping timeout: 248 seconds]
stephentu has joined #mlpack
stephentu has quit [Ping timeout: 255 seconds]
kris1 has joined #mlpack
govg has quit [Quit: leaving]
kris1_ has joined #mlpack
kris1 has quit [Ping timeout: 240 seconds]
kris1_ is now known as kris1
kris1 has quit [Read error: Connection reset by peer]