ChanServ changed the topic of #mlpack to: Due to ongoing spam on freenode, we've muted unregistered users. See http://www.mlpack.org/ircspam.txt for more information, or you can also join #mlpack-temp and chat there.
vivekp has quit [Ping timeout: 245 seconds]
jenkins-mlpack2 has quit [Ping timeout: 252 seconds]
jenkins-mlpack2 has joined #mlpack
vivekp has joined #mlpack
< ShikharJ> rcurtin: That's great news :)
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
ImQ009 has joined #mlpack
< ShikharJ> rcurtin: Are you there?
< rcurtin> ShikharJ: yeah, I just got back from lunch
< rcurtin> my days seem to be full of meetings recently; I have (in UTC time) meetings from 1700-1900 and then 2000-2100, so I may fall silent for a bit...
< rcurtin> too much meeting, too little work...
< ShikharJ> rcurtin: I'm in a bit of a dilemma: I have everything planned for the paper and need to get going with the tests and benchmarking, but I'm unsure how much time the testing would take.
< rcurtin> do you mean how much computation time?
< rcurtin> often when I have situations like this, where I need to run a bunch of experiments before the deadline, I will make the "outline" of the empty table of results that I need to fill in
< rcurtin> then approach the experiments in an order from the quick ones to the slow ones
< ShikharJ> rcurtin: I have exams in the last week of September, and wouldn't be able to give time to this paper then. Should I try submitting a lower-quality paper to the MLOSS workshop (with fewer experiments), or should I instead put all my bets on the MLSYS paper and work towards that?
< rcurtin> and if the situation happens that there is not enough time to, e.g., run on the largest dataset, then I'll have to remove that result from the results table and adapt the paper (usually that is fine)
< rcurtin> hmmm, ah, ok, I see what you mean
< ShikharJ> rcurtin: I mean the time from the 22nd to the 31st, so practically no time for self-review.
< ShikharJ> That is, if we're planning on MLOSS.
< rcurtin> ha, looks like actually you could do both if you wanted! MLOSS notifies accept/reject on Oct 12th, and the MLSYS deadline is the 19th
< rcurtin> I don't know that I can really say much about what would have a higher likelihood of getting in, that's really hard to predict
< rcurtin> I would say, if you are sure that your experimental numbers will look good, maybe better to wait until MLSYS
< rcurtin> but the last thing that you would want to happen is, e.g., to skip the MLOSS deadline and then find out that some other library is way faster than mlpack for the problems you were comparing against, or something
< rcurtin> although that is a bad scenario no matter what
< rcurtin> keep in mind also: if it gets rejected from MLsys (if you go that route), it is always possible to revise and resubmit to some other workshop or conference in the future
< rcurtin> so the work will not be "lost" :)
< ShikharJ> rcurtin: Ah, I see, but I'm really not comfortable submitting something that I wish could have been better. MLSYS fits in my schedule quite well, as it gives an extra three weeks; hence I wanted your opinion on it.
< rcurtin> that sounds good to me :)
< ShikharJ> rcurtin: Plus I found a similar looking paper from 2016 NIPS that got accepted in MLSYS, so I somehow feel our work might be more suited there: https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxtbHN5c25pcHMyMDE2fGd4OjI2YjI4NTZlNzNmNjg0Zjk
< rcurtin> what's your thought on GPUs? I would expect any reviewer to immediately point out the lack of GPUs, unless you were able to try with NVBLAS or something and get good results
< rcurtin> I guess there are a couple strategies; one is actually trying with NVBLAS and hoping it works; another is trying to use Bandicoot in simple cases; a third is just saying "GPUs are future work" (or something like that), but I worry that the last of those options may not be liked by reviewers
< ShikharJ> rcurtin: That's something that I wish to try out as well; GPU support will be essential if we're hoping to make a competitive paper that survives review.
< rcurtin> agreed
< rcurtin> NVBLAS is easy to try
< rcurtin> but lots of communication costs
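Aside: NVBLAS is a drop-in interceptor for Level-3 BLAS calls (GEMM and friends), so an existing mlpack binary can be pointed at the GPU without recompilation. A minimal sketch, assuming a CUDA install that ships libnvblas.so and an OpenBLAS fallback; the paths and the gan_benchmark binary name are illustrative:

    # nvblas.conf
    # log file for NVBLAS messages
    NVBLAS_LOGFILE nvblas.log
    # CPU BLAS used as a fallback for calls that are not offloaded
    NVBLAS_CPU_BLAS_LIB /usr/lib/libopenblas.so
    # offload to every visible GPU
    NVBLAS_GPU_LIST ALL
    # pin host memory to speed up host<->device transfers
    NVBLAS_AUTOPIN_MEM_ENABLED

    # run a benchmark binary with its BLAS calls intercepted
    NVBLAS_CONFIG_FILE=./nvblas.conf LD_PRELOAD=libnvblas.so ./gan_benchmark

Only Level-3 routines are offloaded, and each offloaded call pays host-to-device transfer for its operands, which is where the communication cost mentioned above comes from.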
< ShikharJ> rcurtin: GAN, DCGAN, and WGAN-GP: all three will need to be tested on 3 (maybe 2?) datasets against Tensorflow, Torch, and Theano, so all in all 12 (maybe 8?) tests to run (including mlpack). Hence the concern about time.
< ShikharJ> rcurtin: One of these would be the evergreen MNIST dataset, which should be easy to implement and faster to run for all three (we can readily find optimized code for these), so that should be the starting point.
ImQ009 has quit [Ping timeout: 245 seconds]
ImQ009 has joined #mlpack
< ShikharJ> zoq: Are you there?
< zoq> ShikharJ: I'm here
Shikhar has joined #mlpack
< rcurtin> Shikhar: I see; maybe it is worth trying the evergreen MNIST dataset first, to see how it compares against TF, Torch, and Theano? That might give some good input about what to do next
< Shikhar> zoq: I was hoping to get your opinion on the above as well.
< rcurtin> if the timings are really bad (like mlpack is comparatively slow), then you could focus instead on the ease-of-use of the framework
govg has joined #mlpack
< Shikhar> rcurtin: That sounds reasonable as well.
< zoq> I agree with you; I don't think it's a good idea to submit a mediocre paper, so the focus should be MLSys. Perhaps we should start with at least some experiments to see what direction we can go in: fast + easy to use (CPU/GPU), or just easy to use with reasonable results on GPU.
< rcurtin> right, I think NVBLAS could do ok... not sure how much speedup though
< rcurtin> Bandicoot is something that's very necessary to compete in this arena... I am thinking that if development continues to be really slow it might be nice to use a GSoC slot for it
< rcurtin> I think neither Conrad nor I have very much time to work on it at the moment
< zoq> yeah, good idea
< zoq> as a first impression we could start with mlpack + keras, using NVBLAS
< rcurtin> do you mean mlpack vs. keras? or is there some cool way to put them together that I don't know about? :)
< zoq> right, keras vs mlpack
< rcurtin> ah, ok, I got excited that maybe there was a wrapper I didn't know about or something :)
< zoq> if it's comparable or faster we can construct a good argument
< rcurtin> agreed
< zoq> maybe there is, I haven't checked :)
< rcurtin> as far as I know Keras has no native support for GANs, so the "ease of use" argument would be straightforward
< ShikharJ> rcurtin: I don't think there is one in Tensorflow either.
Shikhar has quit [Ping timeout: 252 seconds]
< rcurtin> that definitely would help the ease of use argument :)
< rcurtin> there might be models that you can use off-the-shelf from Tensorflow, but otherwise you'd have to manually construct the entire computation graph, if I understand right
< rcurtin> some months ago I taught a two-day course on Tensorflow (kind of weird; someone asked me to, so I learned about it quickly...); I have to say, I think working with it is really ugly at times
< rcurtin> I think at the end of the two days I basically recommended that everyone just use Keras for any actual deep learning work unless they were really doing something crazy
< rcurtin> (but these were all beginners so nobody was doing anything crazy at all, just MNIST with LeNet5 or whatever)
< ShikharJ> zoq: A fundamental part of the test would also be what we define as the training time. In the case of the approximation for Tensorflow, we had directly compared the time taken by both to converge. Do we want the same to happen here, or do we define the test in a different way?
< zoq> yeah, keras is a really nice "wrapper"
< zoq> ideally we could test everything with the same settings, including initial weights; not sure that's easy to do
< zoq> that way we could train the model for n iterations and provide timings
< zoq> don't think we are going for classification performance
< zoq> something like: https://github.com/soumith/convnet-benchmarks would be nice to have
govg has quit [Ping timeout: 272 seconds]
vivekp has quit [Ping timeout: 272 seconds]
< rcurtin> right, I usually look at it in two ways: (a) total time to train a network to a certain classification performance; (b) time per point in training
< rcurtin> not sure which would be better here
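Aside: a minimal, framework-agnostic sketch of how (b) could be instrumented; TrainOneEpoch is a hypothetical per-framework hook (e.g. wrapping gan.Train(optimizer) in mlpack, or a train_on_batch loop in Keras), and only the wall-clock bookkeeping is meant literally:

    #include <chrono>
    #include <cstddef>
    #include <iostream>

    // Hypothetical hook: one full pass over the training set in whichever
    // framework is being timed.
    void TrainOneEpoch() { /* framework-specific training pass */ }

    int main()
    {
      const std::size_t epochs = 10;
      const std::size_t pointsPerEpoch = 60000; // MNIST training set size

      double total = 0.0;
      for (std::size_t e = 0; e < epochs; ++e)
      {
        const auto start = std::chrono::steady_clock::now();
        TrainOneEpoch();
        const auto stop = std::chrono::steady_clock::now();

        const double seconds = std::chrono::duration<double>(stop - start).count();
        total += seconds;
        std::cout << "epoch " << e << ": " << seconds << " s ("
                  << (seconds / pointsPerEpoch) << " s/point)" << std::endl;
      }
      std::cout << "total: " << total << " s" << std::endl;
    }

For (a), the same loop would instead run until a target score is reached and report the accumulated time.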
< ShikharJ> zoq: Alright, we should test that then.
vivekp has joined #mlpack
ImQ009 has quit [Quit: Leaving]
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack