verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
govg has quit [Ping timeout: 240 seconds]
govg has joined #mlpack
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
alexsc has joined #mlpack
alexsc has quit [Read error: Connection reset by peer]
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
govg has quit [Quit: all]
alexscc has joined #mlpack
alexscc has quit [Quit: alexscc]
alexscc has joined #mlpack
alexscc has quit [Quit: alexscc]
jjmachan has joined #mlpack
vivekp has quit [Ping timeout: 248 seconds]
jjmachan has quit [Ping timeout: 248 seconds]
witness has joined #mlpack
< rcurtin> a plot of how much speedup we get with batch training support: http://www.ratml.org/misc_img/batch_size_sweep.png
< rcurtin> using OpenBLAS with 8 cores
< rcurtin> a batch size of 1024 gives roughly a 12x-14x speedup over a batch size of 1 (at least for that example)
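A minimal standalone Armadillo sketch (not mlpack's actual training loop) of where that speedup comes from: with a batch, the forward step becomes one matrix-matrix product that OpenBLAS can block and parallelize across cores, instead of one matrix-vector product per point. The dimensions here are arbitrary.

    #include <armadillo>
    #include <iostream>

    int main()
    {
      const size_t dim = 512, batchSize = 1024;
      arma::mat weights(dim, dim, arma::fill::randn);
      arma::mat batch(dim, batchSize, arma::fill::randn);
      arma::wall_clock timer;

      // Batch size 1: one matrix-vector product (GEMV) per point.
      timer.tic();
      arma::mat out1(dim, batchSize);
      for (size_t i = 0; i < batchSize; ++i)
        out1.col(i) = weights * batch.col(i);
      std::cout << "one point at a time: " << timer.toc() << "s\n";

      // Batch of 1024: a single matrix-matrix product (GEMM) over the
      // whole batch, which BLAS can parallelize.
      timer.tic();
      arma::mat out2 = weights * batch;
      std::cout << "whole batch at once: " << timer.toc() << "s\n";

      return 0;
    }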
< zoq> nice, would be interesting to see if e.g. logistic regression also shows a similar effect
< rcurtin> let's find out... :)
< zoq> but as you already said, the potential speedup depends on the task
< rcurtin> on the higgs 200k dataset, with 200k iterations of sgd, it takes 0.44s with a batch size of 1 and 0.082s with a batch size of 256
< rcurtin> but then for a reason I haven't looked into, a batch size of 512 takes 25 seconds...
< rcurtin> not sure if that is an artifact of this system only
< rcurtin> in any case, really nice to see that speedup, this should fix the logistic regression benchmark graphs :)
< zoq> agreed, really nice to have batch support
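A hedged sketch of how the logistic regression timings above could be reproduced with mlpack's LogisticRegression and its SGD optimizer. The synthetic data only stands in for the higgs 200k dataset mentioned above (not reproduced here), the step size and lambda are arbitrary, and the optimizer namespace and constructor vary across mlpack versions; this follows the ensmallen-style API, where SGD takes (stepSize, batchSize, maxIterations, tolerance, shuffle).

    #include <mlpack/core.hpp>
    #include <mlpack/methods/logistic_regression/logistic_regression.hpp>
    #include <ensmallen.hpp>
    #include <iostream>

    int main()
    {
      // Synthetic stand-in for the higgs 200k dataset: 10 dimensions,
      // 200k points, with labels derived from the features so the
      // problem is learnable.
      arma::mat data(10, 200000, arma::fill::randn);
      arma::Row<size_t> responses(data.n_cols);
      for (size_t i = 0; i < data.n_cols; ++i)
        responses[i] = (arma::accu(data.col(i)) > 0.0) ? 1 : 0;

      // Time 200k SGD iterations at each batch size, mirroring the
      // comparison in the discussion above.
      for (const size_t batchSize : { 1, 256, 512 })
      {
        ens::SGD<> optimizer(0.01 /* step size */, batchSize,
            200000 /* max iterations */, 1e-5 /* tolerance */,
            true /* shuffle */);

        mlpack::regression::LogisticRegression<> lr(data.n_rows,
            0.0 /* lambda */);

        arma::wall_clock timer;
        timer.tic();
        lr.Train(data, responses, optimizer);
        std::cout << "batch size " << batchSize << ": "
                  << timer.toc() << "s" << std::endl;
      }

      return 0;
    }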
< wiking> zoq, around?
< zoq> wiking: yes
< zoq> wiking: Hi