verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
marcosirc has quit [Quit: WeeChat 1.4]
marcosirc has joined #mlpack
< marcosirc>
rcurtin: Thanks for that comment about Argentina. I hope we win the game tomorrow! :)
< marcosirc>
zoq: Are benchmarks updated somewhere?
< marcosirc>
I mean, is the benchmark system regularly executed?
< marcosirc>
I can see some messages in IRC.
nilay has joined #mlpack
< rcurtin>
marcosirc: I hope to be able to watch the game tomorrow; I saw the third place match (or some of it) today. The US did better against Colombia than Argentina, but not good enough for a win, I suppose :)
< nilay>
zoq: ok, i am fixing these now.
Mathnerd314 has quit [Ping timeout: 276 seconds]
tham has joined #mlpack
tham has quit [Quit: Page closed]
mentekid has joined #mlpack
< nilay>
how do we choose whether FullConvolution or NaiveConvolution is performed?
mentekid has quit [Ping timeout: 264 seconds]
< zoq>
nilay: I think you mean, ValidConvolution and FullConvolution: NaiveConvolution<ValidConvolution>::Convolution(...) or NaiveConvolution<FullConvolution>::Convolution(...)
< nilay>
yes i don't know why it doesn't work for me
< zoq>
nilay: do you get some error?
< nilay>
i am writing this: mlpack::ann::NaiveConvolution::Convolution<FullConvolution>(InImage, k_mat, Output);
< zoq>
hm, do you need mlpack::ann::... or does ann::... work? Also does it work if you remove the two lines?
< nilay>
hm, it doesn't work after removing them too, this is weird
< nilay>
i am checking it, i should be able to figure it out.
< nilay>
thanks for your help.
< zoq>
sure, here to help
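The distinction zoq points at above (the border rule is a template parameter of the convolution, i.e. NaiveConvolution<ValidConvolution> vs. NaiveConvolution<FullConvolution>, not the other way around) can be sketched outside of mlpack. This is not mlpack's implementation, just a minimal model of the two border modes, using correlation-style indexing for simplicity: "valid" keeps only positions where the kernel fits entirely inside the image, "full" keeps every position with any overlap, zero-padding the rest.

```cpp
#include <vector>

// Minimal sketch (not mlpack's code) of valid vs. full 2-D convolution.
// Valid output is (n - kn + 1) x (m - km + 1); full is (n + kn - 1) x (m + km - 1).
std::vector<std::vector<double>> Convolve(
    const std::vector<std::vector<double>>& image,
    const std::vector<std::vector<double>>& kernel,
    bool full)
{
  const int n = (int) image.size(),  m = (int) image[0].size();
  const int kn = (int) kernel.size(), km = (int) kernel[0].size();
  const int padN = full ? kn - 1 : 0;          // full mode shifts the window
  const int padM = full ? km - 1 : 0;          // so partial overlaps count too
  const int on = full ? n + kn - 1 : n - kn + 1;
  const int om = full ? m + km - 1 : m - km + 1;

  std::vector<std::vector<double>> out(on, std::vector<double>(om, 0.0));
  for (int i = 0; i < on; ++i)
    for (int j = 0; j < om; ++j)
      for (int a = 0; a < kn; ++a)
        for (int b = 0; b < km; ++b)
        {
          const int ii = i + a - padN;
          const int jj = j + b - padM;
          if (ii >= 0 && ii < n && jj >= 0 && jj < m)  // out-of-range = zero
            out[i][j] += image[ii][jj] * kernel[a][b];
        }
  return out;
}
```

With a 3x3 image of ones and a 2x2 kernel of ones, the valid result is a 2x2 matrix of 4s, while the full result is 4x4 with 1s in the corners, 2s on the edges, and 4s in the interior.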
mentekid has joined #mlpack
marcosirc has quit [Quit: WeeChat 1.4]
Mathnerd314 has joined #mlpack
< zoq>
marcosirc: Sorry for the slow response. It takes weeks to run the complete benchmark suite, so we usually do this once there is a new release. The results you see in the channel are from the commit benchmark, which runs on a smaller subset. I guess what we could do is integrate the results into the nightly benchmark, which takes about 13 hours; that could be interesting.
< zoq>
I wasn't able to run the executable before I fixed the errors; about 10 seconds just for the PCA step ... hm
nilay has quit [Ping timeout: 250 seconds]
nilay has joined #mlpack
< nilay>
these links show all the files
< zoq>
oh, okay ... can you see the last comment, about 2 hours ago?
< nilay>
only the comments that you made
< nilay>
yes ok
< zoq>
the one about -DBL_MAX and rowvec
< nilay>
ok
mentekid has joined #mlpack
marcosirc has joined #mlpack
< marcosirc>
zoq: Thanks for your reply.
< marcosirc>
Ok, but I am not sure if it would be useful, because I can see that the nightly benchmark only considers small datasets for allknn
< marcosirc>
it includes: isolet, corel-histogram, covtype and cloud.
< marcosirc>
which are the same ones I included in my comparison.
< marcosirc>
I think it would be useful to compare with bigger datasets, and also to run flann and ann too.
< marcosirc>
I can do it on my computer, and then upload it to github.io. I think it would take some days...
< marcosirc>
What do you think is the best option?
< zoq>
marcosirc: I could just run the neighbor search benchmarks and update the results by using UPDATE=True, instead of running the complete benchmark. I think that's the best option.
< zoq>
That would save us a lot of time.
< marcosirc>
zoq: yeah. That would be great!
< marcosirc>
Is it possible to run flann and ann too ?
< marcosirc>
You would need to recompile their command line tools.
< zoq>
if your ConvTriangle is faster, I don't see a reason to use the conv method
< nilay>
do you think the same vectorized code could be written in NaiveConvolution?
mentekid has quit [Ping timeout: 244 seconds]
< nilay>
also, when we write a vectorized step, how is it implemented internally?
< zoq>
if the SepFilter2D turns out to be faster, I think it's a good idea to change the naive convolution implementation. We should run more tests with larger kernels, to see if it's always faster.
< zoq>
I guess in that case you'd have to take a look at the Armadillo code.
< nilay>
ok
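The speedup being discussed comes from separability, which is what OpenCV's sepFilter2D exploits: if a k x k kernel factors as an outer product u * v^T (a box or Gaussian kernel does), you can filter the rows with v and then the columns with u, doing 2k multiplies per pixel instead of k*k. A minimal sketch of that two-pass idea, using "valid" borders and plain nested vectors rather than mlpack or Armadillo types:

```cpp
#include <vector>

// Separable 2-D convolution sketch: horizontal pass with rowKernel (v),
// then vertical pass with colKernel (u). Equivalent to convolving with the
// k x k kernel u * v^T, but costs O(k) per pixel instead of O(k^2).
std::vector<std::vector<double>> SeparableConvolve(
    const std::vector<std::vector<double>>& image,
    const std::vector<double>& colKernel,   // u, applied down columns
    const std::vector<double>& rowKernel)   // v, applied along rows
{
  const int n = (int) image.size(), m = (int) image[0].size();
  const int ku = (int) colKernel.size(), kv = (int) rowKernel.size();

  // Horizontal ("valid") pass: m - kv + 1 columns remain.
  std::vector<std::vector<double>> tmp(n, std::vector<double>(m - kv + 1, 0.0));
  for (int i = 0; i < n; ++i)
    for (int j = 0; j + kv <= m; ++j)
      for (int b = 0; b < kv; ++b)
        tmp[i][j] += image[i][j + b] * rowKernel[b];

  // Vertical pass: n - ku + 1 rows remain.
  std::vector<std::vector<double>> out(n - ku + 1,
                                       std::vector<double>(m - kv + 1, 0.0));
  for (int i = 0; i + ku <= n; ++i)
    for (int j = 0; j < (int) tmp[0].size(); ++j)
      for (int a = 0; a < ku; ++a)
        out[i][j] += tmp[i + a][j] * colKernel[a];
  return out;
}
```

For a 3x3 box filter (u = v = {1, 1, 1}) over a 4x4 image of ones, both passes together produce a 2x2 output of 9s, the same as the full k x k sum.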
< zoq>
nilay: Can you also test FFTConvolution?
< zoq>
I'm just curious :)
< nilay>
ok yes
< zoq>
thanks :)
mentekid has joined #mlpack
< nilay>
oops, FFTConvolution takes a ridiculous amount of time
< zoq>
yeah, it should be faster when using a larger kernel
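The kernel-size crossover zoq mentions follows from the asymptotic costs. A back-of-the-envelope multiply-count model (the constant factor C for the forward/inverse transforms is an assumption here, not a measurement) shows why FFT-based convolution loses for tiny kernels and wins once the kernel grows:

```cpp
#include <cmath>

// Rough multiply counts for convolving an n x n image with a k x k kernel.
// Direct convolution: ~n^2 * k^2. FFT-based: ~C * n^2 * log2(n), independent
// of k; C (assumed ~18 here) bundles the two forward FFTs, the pointwise
// product, and the inverse FFT. A cost model only, not a benchmark.
double DirectCost(double n, double k) { return n * n * k * k; }
double FftCost(double n, double C = 18.0) { return C * n * n * std::log2(n); }
```

For a 512x512 image the model puts direct convolution ahead for a 3x3 kernel (9 vs. roughly 162 multiplies per pixel) and far behind for a 31x31 one (961 per pixel), matching the "ridiculous amount of time" seen above with a small kernel.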