verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
sumedhghaisas has quit [Ping timeout: 276 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Ping timeout: 276 seconds]
< rcurtin> I found this fascinating video that someone posted on YouTube for some reason:
< rcurtin> I'm not sure why anyone posted it, but it's a visualization of the development of mlpack up until about fall 2015
< rcurtin> (too bad it didn't go further; I think things got really interesting in the last year!)
witness_ has quit [Quit: Connection closed for inactivity]
topology has joined #mlpack
sumedhghaisas has joined #mlpack
< zoq> Really interesting, we could open an issue to create an updated visualization.
sumedhghaisas has quit [Remote host closed the connection]
K4k__ has quit [Quit: WeeChat 1.5]
K4k has joined #mlpack
< rcurtin> ok, I have these sparc64 systems online (at least aunty.mlpack.org can be SSH'ed to)
< rcurtin> but they are running Solaris 10, and the package repositories (like pkgutil) are basically abandoned, so I think maybe I should go with a different OS...
< rcurtin> it seems reasonable choices would be Debian and FreeBSD... zoq, do you think FreeBSD would be a good choice for sparc64?
< rcurtin> at least on Solaris 10 I can't even get a copy of gcc5 easily
< rcurtin> or I guess it is possible to get a free copy of Solaris 11, but I don't know if I am that adventurous...
< zoq> I think FreeBSD is definitely an option. I'm not sure if there is a FreeBSD docker image, but I guess the same applies to Solaris.
< rcurtin> hm, actually, that is a good point, I should see if it is even possible to run docker (or similar) on FreeBSD
Guest83_ has joined #mlpack
Guest83_ is now known as Wanne
< Wanne> hello there :-) Is there any interest in supporting GPU acceleration or optimizations for Xeon Phi?
< rcurtin> hi Wanne, I think it would be very cool and it may happen at some point in the future
< rcurtin> for now, GPU acceleration is best done via NVBLAS, which is a drop-in replacement for BLAS
< rcurtin> one of the big problems is that programming for either of those accelerators can give very unwieldy and difficult-to-maintain code, so that is something we haven't really been able to make much progress on
< rcurtin> I like NVBLAS though: it just checks the size of the matrices involved in a BLAS operation, and if they are large enough that the expected GPU computation time plus transfer time is less than the CPU computation time, it runs the operation on the GPU
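(A minimal sketch of the kind of code that benefits, assuming an Armadillo-based program like mlpack's; the matrix sizes and library paths below are illustrative only and vary by install.)

    // nvblas_gemm_example.cpp: a large matrix product that lowers to a BLAS
    // dgemm call; with NVBLAS preloaded in front of the CPU BLAS, sufficiently
    // large products are offloaded to the GPU, smaller ones stay on the CPU.
    #include <armadillo>
    #include <iostream>

    int main()
    {
      arma::mat A(5000, 5000, arma::fill::randu);
      arma::mat B(5000, 5000, arma::fill::randu);

      // Armadillo dispatches this multiplication to dgemm; NVBLAS decides CPU
      // vs. GPU based on problem size (expected GPU compute + transfer time
      // versus CPU compute time), exactly the heuristic described above.
      arma::mat C = A * B;

      std::cout << "accu(C) = " << arma::accu(C) << std::endl;
      return 0;
    }

(Built normally against Armadillo and a CPU BLAS, the same binary can then be run with something like NVBLAS_CONFIG_FILE=nvblas.conf LD_PRELOAD=libnvblas.so ./nvblas_gemm_example; the exact library path and the nvblas.conf contents depend on the CUDA install. No source changes are needed, which is what makes it a drop-in replacement.)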
< Wanne> I might write some software for my master's thesis and I am looking for a place to contribute the results; still in the planning phase, though.
< rcurtin> sure, depending on how it was done, there might be interest here in merging it in
< rcurtin> the key is always maintainability and portability (I guess readability too), which are all hard things to achieve with Phi or GPUs...
< rcurtin> I have to get lunch, I'll be back later...
< Wanne> thanks, I will keep it in mind and might come back with some questions :-)
Wanne has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
mentekid has joined #mlpack
mentekid has quit [Ping timeout: 264 seconds]
topology has quit [Quit: Page closed]
govg has joined #mlpack
sumedhghaisas has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> akanuraj200/mlpack#6 (master - c488c34 : Anuraj Kanodia): The build was fixed.