ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
< ShahAnwaarKhalid> stackoverflow post as well. 😬
< kaushal[m]> <zoq "kaushal: I guess you are looking"> yeah
ImQ009 has joined #mlpack
muis[m] has quit [Quit: Idle for 30+ days]
ImQ009 has quit [Ping timeout: 260 seconds]
ImQ009 has joined #mlpack
jonpsy[m] has joined #mlpack
< zoq> ShahAnwaarKhalid: Sorry, haven't looked at the comment yet; will take a look later today. Thanks for the reminder.
< Aakash-kaushikAa> hey @freenode_zoq:matrix.org, @ryan:ratml.org, can we include a copyright.txt in the models repo too?
< zoq> Aakash-kaushikAa: Sure thing.
< Aakash-kaushikAa> Can you do it, or should I create a pull request for that?
< zoq> Either is fine with me.
< rcurtin[m]> zoq: your bandicoot Jenkins job exposes the conv_to bugs I still need to fix 😃
< zoq> I thought that one was known :)
< rcurtin[m]> known but hidden 😃
< rcurtin[m]> I've been hiding from it for months now
< rcurtin[m]> :)
< zoq> hehe
< rcurtin[m]> I got the first parts of the adapted clMAGMA implementation for eigendecomposition done yesterday; I'll open an MR for that today
< rcurtin[m]> after that I suppose I'll have to confront this conv_to issue :)
< zoq> I guess I have to update the main test next, because I think right now it's selecting the first GPU.
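(A hedged aside on the GPU-selection point above: one common way to pin a CI test to a particular GPU, rather than letting it default to the first device, is CUDA's device-masking environment variable. The device index `1` below is a hypothetical choice, not something from the chat.)

```shell
# Sketch: restrict a test run to one GPU via CUDA's device mask.
# Index "1" (the second GPU) is a hypothetical example value.
export CUDA_VISIBLE_DEVICES=1
# Within this shell, CUDA applications now see only that device,
# and it appears to them as device 0.
echo "$CUDA_VISIBLE_DEVICES"
```

The same variable can be set per-job in Jenkins so different build slaves target different devices.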
< rcurtin[m]> if you're using docker containers for the build, we can:
< rcurtin[m]> (a) push the containers to the mlpack.org:5000 registry, although I'll have to whitelist the IP of the build slaves you want to use
< rcurtin[m]> (b) also use `pretzel`, which has a pretty old but possibly still capable nvidia GPU