verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
karthikabinav has quit []
jbc_ has joined #mlpack
jbc_ has quit [Quit: jbc_]
vedhu63w has quit [Ping timeout: 245 seconds]
vedhu63w has joined #mlpack
curiousguy13 has joined #mlpack
Kallor has joined #mlpack
Charly_ has quit [Quit: Page closed]
curiousguy13 has quit [Ping timeout: 264 seconds]
dhfromkorea has joined #mlpack
curiousguy13 has joined #mlpack
stephentu has quit [Ping timeout: 264 seconds]
Kallor has quit [Remote host closed the connection]
jbc_ has joined #mlpack
dhfromkorea has quit [Remote host closed the connection]
curiousguy13_ has joined #mlpack
curiousguy13 has quit [Ping timeout: 245 seconds]
curiousguy13_ has quit [Ping timeout: 246 seconds]
curiousguy13 has joined #mlpack
curiousguy13 has quit [Read error: Connection reset by peer]
< zoq> naywhayare: Oh, big is back. New switch?
< naywhayare> zoq: new switch, same problem... 143.215.128.x is still mostly inaccessible :(
vedhu63w has quit [Ping timeout: 272 seconds]
< zoq> So I guess it's not a hardware issue ...
< naywhayare> yeah, it must be a misconfiguration in the switch or something like that
< naywhayare> not much I can do... :-\
Kallor has joined #mlpack
dhfromkorea has joined #mlpack
Kallor has quit [Remote host closed the connection]
dhfromkorea has quit [Remote host closed the connection]
vedhu63w has joined #mlpack
stephentu has joined #mlpack
vedhu63w has quit [Remote host closed the connection]
< naywhayare> stephentu: something I noticed is that chol(..., "lower") isn't available until Armadillo 4.600.x
stephentu_ has joined #mlpack
< naywhayare> I'm not the biggest fan of this, but I'm thinking of #ifdef'ing for Armadillo < 4.600 and then doing "covLower = arma::chol(covariance).t()"
< naywhayare> I'm not seeing any other way to refactor that easily
< stephentu_> oh
< naywhayare> ah, I think you missed the first message
< naywhayare> that or it's sitting at your home :)
< stephentu_> probably
< naywhayare> 20:09 < naywhayare> stephentu: something I noticed is that chol(..., "lower") isn't available until Armadillo 4.600.x
< stephentu_> #ircfail
< naywhayare> :)
< stephentu_> ya ok
< stephentu_> so i think we should just use arma::chol(cov).t() always
< stephentu_> no need to ifdef
< naywhayare> hm, let me ensure that that is as fast as chol(cov, "lower")
< stephentu_> when i set up travis CI
< stephentu_> we can have a matrix of builds
< stephentu_> and build against like major versions of arma
< naywhayare> that's what I have set up in Jenkins :)
< naywhayare> I think with Travis the issue is going to be having enough horsepower to test all the builds
< naywhayare> pretty sure we have to pay them for more than 1 simultaneous build (if I understood their terms right)
< stephentu_> oh right
< stephentu_> well it'll still be nice to have at least another sanity check for pull reqs
< naywhayare> yes, definitely
< stephentu_> does mlpack have any $?
< naywhayare> honestly testing against the oldest supported version might be the best way to test reverse compatibility
< naywhayare> nope
< naywhayare> I have access to all manner of old computers and systems via Georgia Tech, but no cash
< naywhayare> GSoC pays a bit to the organization, but we've traditionally just given that to the mentors
< naywhayare> when I graduate, I'll probably have a few bucks I'm willing to invest into mlpack, although I am looking for a company that will be interested in investing money and time into mlpack too
< naywhayare> also, it looks like chol().t() doesn't get optimized at compile time into the equivalent of chol(cov, "lower"), so the transpose actually gets performed
< naywhayare> so I think I'll just do the #ifdef thing for now
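For reference, a minimal sketch of the #ifdef approach described above, assuming Armadillo's ARMA_VERSION_MAJOR/ARMA_VERSION_MINOR macros; the covariance/covLower names come from the snippet quoted earlier, and this is an illustration rather than the exact mlpack code:

    #include <armadillo>

    arma::mat LowerCholesky(const arma::mat& covariance)
    {
    #if (ARMA_VERSION_MAJOR < 4) || \
        (ARMA_VERSION_MAJOR == 4 && ARMA_VERSION_MINOR < 600)
      // Pre-4.600 chol() only returns the upper factor R, where
      // covariance = R.t() * R, so transpose to get the lower factor.
      arma::mat covLower = arma::chol(covariance).t();
    #else
      // Armadillo 4.600+ can return the lower factor directly.
      arma::mat covLower = arma::chol(covariance, "lower");
    #endif
      return covLower;
    }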
< stephentu_> cool thanks for doing the benchmarking also
< naywhayare> yeah, I noticed that it was slower on my desktop system
< naywhayare> so I dug further into it on a dedicated system for benchmarking and couldn't reproduce it
< naywhayare> so I shrugged and figured that maybe something else I was doing on the desktop was causing weird results
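A rough sketch of the kind of timing comparison described above, using Armadillo's wall_clock timer; the matrix size and repetition count here are arbitrary, and the chol(cov, "lower") form requires Armadillo 4.600 or newer:

    #include <armadillo>
    #include <iostream>

    int main()
    {
      // Random symmetric positive definite matrix (size chosen arbitrarily).
      arma::mat X = arma::randu<arma::mat>(1000, 1000);
      arma::mat cov = X * X.t() + 1000.0 * arma::eye<arma::mat>(1000, 1000);

      arma::wall_clock timer;
      double sink = 0.0;  // keep results from being optimized away

      timer.tic();
      for (int i = 0; i < 10; ++i)
      {
        arma::mat lower = arma::chol(cov).t();
        sink += lower(0, 0);
      }
      std::cout << "chol(cov).t():      " << timer.toc() << "s" << std::endl;

      timer.tic();
      for (int i = 0; i < 10; ++i)
      {
        arma::mat lower = arma::chol(cov, "lower");  // Armadillo >= 4.600
        sink += lower(0, 0);
      }
      std::cout << "chol(cov, \"lower\"): " << timer.toc() << "s" << std::endl;

      return (sink > 0.0) ? 0 : 1;
    }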
< zoq> I think mlpack's build farm (Jenkins) offers way more features, which is great. Anyway, it's not correct that you have to pay Travis to run simultaneous builds; if you create a build matrix, Travis automatically runs the different configurations on 'different' VMs.
< naywhayare> ah, okay, I looked closer and I see it's "fair use"
< naywhayare> for the free plan
< naywhayare> so I suppose that Travis just throws open-source jobs into a giant queue and then builds them when they get around to it
< naywhayare> from somewhere in the documentation: "Please take into account that Travis CI is an open source service and we rely on worker boxes provided by the community. So please only specify an as big matrix as you actually need."
< naywhayare> so maybe we should go with a minimal build matrix for travis... i.e. debug/release, two or three versions of armadillo, and i386/x86_64 (maybe i386 isn't necessary, but I've found a lot of bugs that only show up there)
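A hypothetical .travis.yml fragment for the minimal matrix suggested above; the Armadillo versions, environment variable names, and install script are placeholders, but each env line does become one job in Travis's build matrix:

    # Hypothetical sketch; ARMA_VERSION values and the install script
    # are placeholders, not an existing mlpack setup.
    language: cpp
    env:
      - ARMA_VERSION=4.300.8 BUILD_TYPE=Debug
      - ARMA_VERSION=4.300.8 BUILD_TYPE=Release
      - ARMA_VERSION=4.600.4 BUILD_TYPE=Debug
      - ARMA_VERSION=4.600.4 BUILD_TYPE=Release
    install:
      - ./scripts/install-armadillo.sh "$ARMA_VERSION"  # placeholder script
    script:
      - mkdir build && cd build
      - cmake -DCMAKE_BUILD_TYPE=$BUILD_TYPE ..
      - make
      - bin/mlpack_test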
< zoq> Under the hood they are using Docker. If I remember right, they have the capacity to start a build in less than 10 seconds. The build is limited in runtime, so maybe we can't run all tests in time. Really nice people, I've met some of the Travis team. I'm not sure if we really need a matrix build if we just check whether the pull request is okay.
< naywhayare> yeah; I don't mind just checking if the PR builds and accepting based on that
< naywhayare> then if there are Armadillo compatibility issues, I can work those out after the nightly matrix build pretty easily
< zoq> Sounds good, I just wanted to clarify that you can test different configurations simultaneously.
< naywhayare> yeah, thanks for pointing that out
< naywhayare> I must have misunderstood last time I read the website
< naywhayare> I think ideally I'd love to pay (or donate since it's open-source?) Travis CI to manage the build matrices and everything, and then maybe keep some systems set aside for benchmarking purposes
< naywhayare> (just have to find the money first I suppose...)
< zoq> Yeah I would like that: http://love.travis-ci.com/
< naywhayare> wow, only 40 physical servers; I would have expected a lot more
< zoq> I think most projects don't need much time to build and run all their tests.