verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
ranjan123 has joined #mlpack
mentekid has joined #mlpack
Mathnerd314 has quit [Ping timeout: 265 seconds]
mentekid has quit [Ping timeout: 276 seconds]
ranjan123 has quit [Quit: Page closed]
mentekid has joined #mlpack
< mentekid> rcurtin: I implemented the changes we talked about a few days ago regarding PR #623 (LSH using less memory). I pushed the new code
< mentekid> I ran tests for 3 different cutoffs: 0.01, 0.05 and 0.1, but there doesn't seem to be any significant improvement, at least not for default parameters (I tested miniboone, pokerhand, and corel. I am not sure if phy is this one: https://snap.stanford.edu/data/cit-HepTh.html)
< mentekid> at best the new code performs on par with the upstream code, and it sometimes takes slightly longer. I will experiment with parameter values to see if better-tuned parameters improve this
< mentekid> rcurtin: I ran some tuned versions too. Parameters: a) 10 tables, 10 projections and b) 10 tables, 5 projections
< mentekid> in this case, hybrid is actually much faster than the old one. Specifically (I ran 10000 queries against the entire reference dataset):
< mentekid> miniboone: 3.5s hybrid and 7s old code
< mentekid> pokerhand: 20s hybrid and 50s old code
< mentekid> corel: 2.3s hybrid and 3.9s old code
< mentekid> (that was with parameter set a)
< mentekid> and with parameter set b:
< mentekid> miniboone: 3.6s hybrid and 7s old code
< mentekid> pokerhand: 37.9s hybrid and 57.5s old code
< mentekid> corel: 50.6s hybrid and 38.58s old code
< mentekid> so hybrid is still slower than the old code on corel for some parameters
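A minimal sketch of one of the timing runs above, using the stock LSHSearch class from the mlpack 2.x API; the hybrid code path from PR #623 is not in the released API, and the dataset filename is an assumption:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/lsh/lsh_search.hpp>
    #include <iostream>

    using namespace mlpack;

    int main()
    {
      arma::mat referenceData;
      data::Load("miniboone.csv", referenceData, true);

      // 10000 queries drawn from the reference set, as in the runs above.
      arma::mat queryData = referenceData.cols(0, 9999);

      // Parameter set a): 10 projections per table, 10 tables.
      neighbor::LSHSearch<> lsh(referenceData, 10 /* projections */,
          10 /* tables */);

      arma::Mat<size_t> neighbors;
      arma::mat distances;

      arma::wall_clock timer;
      timer.tic();
      lsh.Search(queryData, 1, neighbors, distances);
      std::cout << "search: " << timer.toc() << "s" << std::endl;

      return 0;
    }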
Nilabhra has joined #mlpack
Nilabhra has left #mlpack []
mentekid has quit [Ping timeout: 250 seconds]
mentekid has joined #mlpack
tham has joined #mlpack
< rcurtin> mentekid: those numbers look good; did you play with a higher cutoff?
< rcurtin> I wonder if maybe 0.2 or 0.25 might give better performance
< rcurtin> I think the phy dataset is from the kddcup in some year but I don't think the one you linked to is the right one
< rcurtin> should be 78x150000
< mentekid> I didn't try higher cutoffs, only these three... I think as I increased the cutoff it became slower (for the cases where it was slow already)
< mentekid> I got phy now, so I'll try that and a few different cutoffs too
< rcurtin> okay
< rcurtin> I am on the train now
< rcurtin> but when I am at my desk I'll play with it a little too
< mentekid> cool, no hurry, I'll be online for the next few hours
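A sketch of the cutoff sweep being discussed; the cutoff lives on the PR #623 branch, so the setter name used here (MaxBucketFraction) is purely hypothetical, as is the dataset filename:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/lsh/lsh_search.hpp>
    #include <iostream>

    using namespace mlpack;

    int main()
    {
      arma::mat referenceData;
      data::Load("phy.csv", referenceData, true);
      arma::mat queryData = referenceData.cols(0, 9999);

      // Sweep the cutoffs already tried, plus the higher values
      // suggested above.
      for (const double cutoff : { 0.01, 0.05, 0.1, 0.2, 0.25 })
      {
        neighbor::LSHSearch<> lsh(referenceData, 10, 10);
        // lsh.MaxBucketFraction() = cutoff;  // hypothetical PR #623 setter

        arma::Mat<size_t> neighbors;
        arma::mat distances;

        arma::wall_clock timer;
        timer.tic();
        lsh.Search(queryData, 1, neighbors, distances);
        std::cout << "cutoff " << cutoff << ": " << timer.toc() << "s\n";
      }

      return 0;
    }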
< tham> Tried out the LeNet example of cuDNN today: batch size 16, 5000 iterations, accuracy reached 97.23% in around 20 seconds (my laptop is a Y410P)
< tham> Although GPGPU code is harder to maintain, it can speed things up a lot
< tham> I am studying how to leverage the power of cuDNN; does anyone have interest in developing a GPU implementation of the CNN code?
< rcurtin> do you think maybe using armadillo+nvblas could get the same kind of speedup?
< rcurtin> or close to it
< tham> rcurtin: I don't know, but I guess it wouldn't reach the same speed, because cuDNN does a lot of work to tune CNN performance
< tham> It would take a lot of code changes to make mlpack compile under vc2013
< rcurtin> yeah, my only thought is that if the CNN expressions can be written as big linear algebra operations, nvblas might give a good speedup without having to change any code
< rcurtin> but I haven't played with it
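The appeal of nvblas is that it is a drop-in interceptor for Level-3 BLAS calls (GEMM and friends), so ordinary Armadillo code like the sketch below needs no changes; running the binary with LD_PRELOAD=libnvblas.so and an nvblas.conf (located via NVBLAS_CONFIG_FILE, naming a fallback CPU BLAS with NVBLAS_CPU_BLAS_LIB) is enough to offload the multiply to the GPU:

    #include <armadillo>
    #include <iostream>

    int main()
    {
      // A GEMM large enough that GPU offload can pay for the transfer cost.
      arma::mat A(4096, 4096, arma::fill::randu);
      arma::mat B(4096, 4096, arma::fill::randu);

      arma::wall_clock timer;
      timer.tic();
      arma::mat C = A * B;  // dispatched to dgemm, which nvblas intercepts
      std::cout << "gemm took " << timer.toc() << "s, trace = "
                << arma::trace(C) << std::endl;  // use C so it isn't elided

      return 0;
    }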
Mathnerd314 has joined #mlpack
< tham> maybe CUDA 8.0 will support vc2015
mentekid has quit [Ping timeout: 276 seconds]
mentekid has joined #mlpack
tham has quit [Ping timeout: 250 seconds]
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#808 (master - d502539 : Ryan Curtin): The build was broken.
travis-ci has left #mlpack []
Mathnerd314 has quit [Ping timeout: 260 seconds]