ChanServ changed the topic of #mlpack to: "Due to ongoing spam on freenode, we've muted unregistered users. See http://www.mlpack.org/ircspam.txt for more information, or you could also join #mlpack-temp and chat there."
vivekp has quit [Ping timeout: 246 seconds]
vivekp has joined #mlpack
vivekp has quit [Read error: Connection reset by peer]
ImQ009 has joined #mlpack
< rcurtin> wow, NIPS registration sold out in 11 minutes
< rcurtin> insane
< ShikharJ> 11 minutes 38 seconds to be exact.
< rcurtin> yeah
< rcurtin> ShikharJ: the CFP is done (I think): https://2018.mloss.org/cfp/
< rcurtin> hopefully that is helpful for you
< rcurtin> I think maybe you saw, but there is also the MLsys workshop: http://learningsys.org/nips18/
< ShikharJ> rcurtin: Do you think the GAN work would be a fit here? It seems like a fine fit to me (talking about MLOSS).
< rcurtin> the deadline for the MLOSS workshop is likely to be 9/30 but may change
< rcurtin> and the MLsys one is later
mar77i_ has joined #mlpack
< rcurtin> hmm, so I think it all depends on how you cast it. Like I said, if you are able to say that your GAN work brings something new to the table (to use a colloquial phrase), then I think it is just fine
< rcurtin> the first question any reviewer is likely to ask is "how is this GAN implementation different from what I can already do in toolkits like TensorFlow, MXNet, and Caffe?"
< rcurtin> and as long as you can answer that question well, I think that any paper would have a good shot
< rcurtin> given the schedules for each of those two workshops, I think it may actually be okay to submit to both
< rcurtin> since proceedings of these will not be published (other than versions authors put on arXiv I guess), I think there is no problem here. Neither CFP says anything about multiple submissions
< rcurtin> typically if it were accepted to one you would withdraw from the other, I guess
< ShikharJ> rcurtin: I'm confused, because even MLsys seems to be a good fit for the vision I have in mind. The GAN framework is a novelty (I'm not aware of any similar framework available in major libraries), so it might be suitable for MLsys as well, because that workshop is specifically geared towards applied ML challenges (in our case, the CPU bounds on low-resource systems).
< rcurtin> advantages here are that you increase the probability of acceptance; disadvantages are you make more work for reviewers and it can be a bit confusing if the same work is being presented twice in two days
< rcurtin> ShikharJ: if the GAN framework itself is a novelty then I think this is a good selling point
< rcurtin> (I wasn't aware of whether TF or others have a GAN framework. Maybe there are some derivative libraries that do?)
< rcurtin> so if you're able to say "now making GANs is easy whereas previously it was really hard" this is a good approach, in my opinion
< rcurtin> I do agree that both MLOSS and MLsys seem to be good fits
mar77i_ has quit [Remote host closed the connection]
< ShikharJ> rcurtin: I'll need to check regarding the derivative libraries, but as far as I'm aware, there isn't a framework like ours where you pick the GAN variant you want, pass it in as a policy, and just define the convolution layers you want the discriminator and generator to use. It is also highly extensible this way.
< ShikharJ> rcurtin: It makes things easier, since the gradient update rules and loss functions only need to be defined once per variant; they can then be reused, and users can contribute new variants back to extend the "GAN library".
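For reference, a rough sketch of the policy-based GAN API being described above. It assumes the 2018-era mlpack ANN classes (FFN, GAN, GaussianInitialization, and the StandardGAN policy); the layer choices and the exact constructor argument order are approximations and may not match the real signatures exactly.

    #include <functional>
    #include <mlpack/core.hpp>
    #include <mlpack/core/optimizers/adam/adam.hpp>
    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/ann/gan/gan.hpp>
    #include <mlpack/methods/ann/init_rules/gaussian_init.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>
    #include <mlpack/methods/ann/loss_functions/sigmoid_cross_entropy_error.hpp>

    using namespace mlpack;
    using namespace mlpack::ann;

    int main()
    {
      arma::mat trainData; // One flattened image per column, loaded elsewhere.

      // The generator and discriminator are ordinary FFN networks: the user
      // only specifies the layers they want.
      FFN<SigmoidCrossEntropyError<>> generator, discriminator;
      generator.Add<Linear<>>(100, 784);
      generator.Add<SigmoidLayer<>>();
      discriminator.Add<Linear<>>(784, 1);

      // Noise distribution fed to the generator.
      std::function<double()> noise = [] { return math::Random(-1.0, 1.0); };

      // The GAN variant (StandardGAN, WGAN, ...) is selected as a policy type,
      // so its loss and gradient update rules are written once inside the
      // framework and reused by every model built this way.
      GaussianInitialization init(0, 0.1);
      GAN<FFN<SigmoidCrossEntropyError<>>, GaussianInitialization,
          std::function<double()>, StandardGAN>
        gan(trainData, generator, discriminator, init, noise,
            100 /* noise dimension */, 10 /* batch size */, 1, 0, 1.0);

      // Training uses a regular mlpack optimizer (the constructor argument
      // order above is approximate).
      optimization::Adam optimizer;
      gan.Train(optimizer);
    }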
< rcurtin> ShikharJ: these sound like compelling arguments; so if there really is no easy way to do something like this with other libraries then I think this is great
< rcurtin> forgive me if I already mentioned this paper, but Conrad and I recently worked on a paper about the Armadillo sparse matrix format where the focus was also usability: http://ratml.org/pub/pdf/2018user.pdf
< rcurtin> so it's possible that some of the presentation ideas there could be helpful for you too. Like for instance Figure 1
< rcurtin> (where we compare Armadillo sparse matrices with scipy, and it's pretty clear Armadillo is a lot nicer)
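The comparison in Figure 1 is about API readability. As a purely illustrative example of the Armadillo side (not taken from the paper), sparse matrices use the same operator-based syntax as dense ones:

    #include <armadillo>

    int main()
    {
      // A 1000 x 1000 sparse matrix; only nonzero entries are stored.
      arma::sp_mat A(1000, 1000);

      // Element access reads like dense-matrix code.
      A(3, 4) = 5.0;
      A(7, 7) = 1.2;

      // Arithmetic uses the same operator syntax as dense Armadillo matrices.
      arma::sp_mat B = 2.0 * A + A.t() * A;

      B.print("B:");
      return 0;
    }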
< ShikharJ> rcurtin: MLsys seems like a good place because, from what I can gather of their CFP, they're looking for solutions to the software-oriented problems that arise in ML systems.
< rcurtin> in any case, based on what you say, it sounds like you can make a good argument in the paper
< rcurtin> right, that is definitely true. I can't say what the MLOSS reviews will be like (I have no idea) but for the MLsys reviewers I'd expect the following types of questions:
< rcurtin> * does it work on the GPU? (I guess you can reference NVBLAS and Bandicoot)
< rcurtin> * how can you use it from Python? (I guess we have some ideas of bindings for the ANN code?)
< rcurtin> I guess I can't immediately think of other questions, but maybe you can think of some. After all, many reviewers consider it their job to figure out the best reasons to reject your paper, so in this sense it is adversarial ;)
< ShikharJ> rcurtin: Yeah, I'm guessing the reviewers would be more in a mood to reject papers than accept them. Any bets on how many papers are going to be submitted next year for the main conference? :P
< ShikharJ> rcurtin: At least the MLsys workshop is a bit later than the MLOSS one, so that gives us some breathing room.
< rcurtin> ShikharJ: ha, I bet it will be over 10k. this year it was 8k, which is already absurd
< rcurtin> the MLsys page limit is 6 pages too, which gives a bit more space to talk about it
< rcurtin> I dunno, I could see it either way. I'd expect slightly easier reviews from MLOSS (since they ask for contributions from smaller toolkits) than from MLsys (since the MLsys organizing community is well-connected in the sphere of the large Python-based toolkits)
< rcurtin> but this is all just guesswork on my part. :) There is no way to know what the outcome will be without actually doing it... :)
< ShikharJ> rcurtin: So given the typical acceptance rate of 21%, around 1600~1700 papers? Woah, that's massive.
< rcurtin> I'm sure they will not accept that many papers---it is just too many for the conference. I'd expect the acceptance rate to drop towards 10% or less
< rcurtin> I guess it's possible they may intend to increase the number of presentation tracks and make the poster session huge, but I dunno. To me it seems unlikely
wenhao has joined #mlpack
< ShikharJ> zoq: Are you there?
< ShikharJ> rcurtin: Are you there?
wenhao has quit [Ping timeout: 252 seconds]
< rcurtin> I'm here, yeah
ImQ009 has quit [Quit: Leaving]
< rcurtin> sorry I was in a meeting for a while
< rcurtin> ShikharJ: ^ (oops, should have addressed your nick)
< ShikharJ> rcurtin: Ah, never mind, I overcame the issue I was facing.
< rcurtin> :)