verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
nilay has joined #mlpack
< nilay> zoq: that is awesome, can you post the code?
Mathnerd314 has quit [Ping timeout: 244 seconds]
Mathnerd314 has joined #mlpack
mentekid has joined #mlpack
Mathnerd314 has quit [Ping timeout: 244 seconds]
mentekid has quit [Ping timeout: 246 seconds]
mentekid has joined #mlpack
< zoq> nilay: I think it would be a good idea to integrate the code into the existing PCA method. However, here is the unpolished code: https://gist.github.com/zoq/242894516b798ccabefb2460f6507d3c
< zoq> nilay: Right now, it only works if you don't transpose the data before -> p.Apply(zs, transformedData, eigVal, coeff);
< nilay> this is a very fast method
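(For context, the p.Apply(...) call above has the shape of a PCA decomposition routine that returns the transformed data, the eigenvalues, and the eigenvectors. The sketch below only illustrates that four-argument interface with standard Armadillo calls; the class name SimplePCAPolicy and all implementation details are assumptions, not the code from the gist.)

    #include <armadillo>

    // Hypothetical policy-style PCA: data is column-major (one observation
    // per column), matching the convention discussed above.
    class SimplePCAPolicy
    {
     public:
      void Apply(const arma::mat& data,
                 arma::mat& transformedData,
                 arma::vec& eigVal,
                 arma::mat& coeff)
      {
        // Center the data so the covariance is taken about the mean.
        arma::mat centered = data;
        centered.each_col() -= arma::mean(data, 1);

        // Eigendecomposition of the d x d covariance matrix.
        arma::eig_sym(eigVal, coeff, arma::cov(centered.t()));

        // eig_sym() returns ascending eigenvalues; flip to descending order.
        eigVal = arma::flipud(eigVal);
        coeff = arma::fliplr(coeff);

        // Project the centered data onto the principal directions.
        transformedData = coeff.t() * centered;
      }
    };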
zoq has quit [Read error: Connection reset by peer]
zoq has joined #mlpack
< nilay> zoq: what do you think about: https://arxiv.org/pdf/1502.03167.pdf
< nilay> i found Chainer does this
< zoq> nilay: Sounds like a good idea, but I think we should finish the inception model first. Does this sound reasonable?
< nilay> ok, if you think so. I thought you might say we should implement this new model only
< nilay> that's why i asked
< zoq> I like the idea, but I think we should implement batch normalization as a separate method, so that it can be used for all networks.
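(As a rough illustration of what such a separate batch normalization step does, following the paper linked above: each feature is normalized over the mini-batch to zero mean and unit variance, then scaled and shifted by learned parameters gamma and beta. The forward-pass sketch below uses plain Armadillo and assumed names; it is not mlpack's layer interface.)

    #include <armadillo>

    // Forward pass of batch normalization over a mini-batch whose samples
    // are stored as columns; gamma and beta are learned per-feature
    // parameters, eps avoids division by zero.
    arma::mat BatchNormForward(const arma::mat& input,
                               const arma::vec& gamma,
                               const arma::vec& beta,
                               const double eps = 1e-5)
    {
      // Per-feature mean and (biased) variance across the batch.
      const arma::vec mean = arma::mean(input, 1);
      const arma::vec variance = arma::var(input, 1, 1);

      // Normalize each feature, then apply the learned scale and shift.
      arma::mat output = input;
      output.each_col() -= mean;
      output.each_col() /= arma::sqrt(variance + eps);
      output.each_col() %= gamma;
      output.each_col() += beta;
      return output;
    }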
marcosirc has joined #mlpack
Mathnerd314 has joined #mlpack
nilay has quit [Quit: Page closed]
Mathnerd314_ has joined #mlpack
Mathnerd314_ has quit [Changing host]
Mathnerd314_ has joined #mlpack
Mathnerd314 has quit [Ping timeout: 276 seconds]
mentekid has quit [Ping timeout: 276 seconds]
nilay has joined #mlpack
Wiz_ has joined #mlpack
Wiz_ has quit [Client Quit]
mentekid has joined #mlpack
mentekid has quit [Ping timeout: 250 seconds]
mentekid has joined #mlpack
mentekid has quit [Ping timeout: 276 seconds]
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#1097 (master - 6077f12 : Ryan Curtin): The build passed.
travis-ci has left #mlpack []
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#1098 (master - cb2ea62 : Ryan Curtin): The build passed.
travis-ci has left #mlpack []
mentekid has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#1099 (master - 081a601 : Ryan Curtin): The build was broken.
travis-ci has left #mlpack []
< rcurtin> mentekid: can I get a copy of your iris_q.csv and iris_r.csv files?
< rcurtin> actually, I guess there's no need; I just want to make sure it is the same size... 150 points, 4 dimensions
nilay has quit [Ping timeout: 250 seconds]
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#1101 (master - 8e740b0 : Ryan Curtin): The build passed.
travis-ci has left #mlpack []
< mentekid> rcurtin: I've split it into 100 points for _r and 50 for _q. I think I shuffled them before splitting, but I'll send them to you just to be sure
newbie__ has joined #mlpack
< mentekid> but yes it's 150x4
< rcurtin> mentekid: I figured out what it is
< rcurtin> it's the conv_to<arma::Row<size_t>>::from(secondHashWeights.t() * arma::floor(hashMat))
< rcurtin> sometimes that can be negative, but the conv_to just forces that to 0 because the conversion target type is size_t
mentekid has quit [Ping timeout: 250 seconds]
mentekid has joined #mlpack
< zoq> I'm wondering, has anyone ever tested QUIC-SVD for PCA?
< rcurtin> zoq: I haven't; Siddharth and I tested it for CF, and kind of found it unsuitable for sparse data
< rcurtin> but I think it could work well for PCA
< zoq> okay, I think I'll go and test it
< rcurtin> mentekid: fixed in e6bc4b4
< rcurtin> I couldn't figure out a way to do it with a lambda, like inside of an imbue() call or anything
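(To make the issue concrete: the product above is floating-point and can be negative, and, as noted in the log, the conv_to to size_t forces those negatives to 0, silently lumping such points into bucket 0. Below is a sketch of handling the negative case explicitly before the conversion; the variable names are taken from the log, the types are assumed, and this is only an illustration, not the actual change in e6bc4b4.)

    #include <armadillo>

    // Illustration only: compute the second-level hash in floating point and
    // zero out negative projections explicitly, instead of relying on the
    // behaviour of the conversion to size_t.
    arma::Row<size_t> SecondLevelHash(const arma::vec& secondHashWeights,
                                      const arma::mat& hashMat)
    {
      arma::rowvec projections = secondHashWeights.t() * arma::floor(hashMat);

      // Clamp negatives to zero while the values are still signed.
      projections.elem(arma::find(projections < 0.0)).zeros();

      return arma::conv_to<arma::Row<size_t>>::from(projections);
    }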
mentekid has quit [Ping timeout: 276 seconds]
< marcosirc> lozhnikov: rcurtin: I have been reading the code of rectangle_tree
< marcosirc> I think the method Descendant(const size_t index) should be improved...
< marcosirc> It takes linear time with the current implementation.
< marcosirc> I think we could improve this by adding a member to each node: "size_t num_of_descendants"
< marcosirc> I mean, a counter, which is increased inside "InsertPoint( .. )".
< marcosirc> and decreased inside "DeletePoint"
< marcosirc> so, we can define NumDescendants() as returning that counter.
< marcosirc> this will make the random access faster.
< marcosirc> it would be logarithmic this way.
newbie__ has quit [Ping timeout: 250 seconds]
< marcosirc> like numDescendants in cover_trees
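(A minimal sketch of that cached-count idea; the class and member names below are hypothetical, not the actual RectangleTree code.)

    #include <cstddef>

    // Hypothetical node showing a descendant counter maintained on insert
    // and delete, so NumDescendants() is O(1) and index-based descendant
    // lookup can descend the tree using the per-child counts.
    class TreeNode
    {
     public:
      size_t NumDescendants() const { return numDescendants; }

      void InsertPoint(const size_t point)
      {
        ++numDescendants;   // one more point in this subtree
        // ... descend and insert the point into the appropriate child ...
      }

      void DeletePoint(const size_t point)
      {
        // ... locate and remove the point from the subtree, then:
        --numDescendants;   // one fewer point in this subtree
      }

     private:
      size_t numDescendants = 0;   // number of points in this subtree
    };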
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#1104 (master - e6bc4b4 : Ryan Curtin): The build was broken.
travis-ci has left #mlpack []
marcosirc has quit [Quit: WeeChat 1.4]