ChanServ changed the topic of #mlpack to: Due to ongoing spam on freenode, we've muted unregistered users. See http://www.mlpack.org/ircspam.txt for more information, or join #mlpack-temp and chat there.
vivekp has joined #mlpack
Shikhar has joined #mlpack
< Shikhar> rcurtin: Sorry, it was already past 2 at night, so I had to get some sleep.
< Shikhar> rcurtin: It was an interview for CodeNation, a software development firm branched off from the Trilogy group. They don't do much ML related work though, if I recall correctly.
< Shikhar> rcurtin: I'm interested in the workshop, and hopefully we can put together a publication for it on the GAN and/or RBM work. Could you let me know the deadlines, so that zoq (if he's interested in the same) and I can plan out the future course? It wasn't clear from the webpage, and a NIPS workshop would be a shot totally worth taking.
Shikhar has quit [Quit: Page closed]
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
< cult-> zoq: it helped to smooth things out, but it didn't solve the problem.
< cult-> thanks anyway
< zoq> cult-: Can you provide a simple example to reproduce the issue?
< zoq> Shikhar: Sure, happy to help.
< cult-> zoq: sorry, I can't, but what's obvious is that the solution works, just not in every case, since the only criterion for the signs is the local maximum, which of course can also change drastically
ImQ009 has joined #mlpack
rohit has joined #mlpack
rohit has quit [Read error: Connection reset by peer]
< rcurtin> Shikhar: right, I think the deadline is likely to be in October? I'm just guessing; I don't have any concrete information yet
< rcurtin> my thought would be that to get something published, you will likely have to show that the software (a) is faster than existing alternatives (that could be a little hard in your case, since we only know it's faster than TensorFlow on the CPU, so to make that point you'd need other favorable comparisons)
< rcurtin> (b) is more flexible than existing alternatives or provides some support which wasn't possible before
< rcurtin> basically there has to be some way to say "what we did is either novel or better", and that could be a little tricky; maybe one angle is to emphasize the flexibility of our approach, since it's template-based
vivekp has quit [Ping timeout: 244 seconds]
Shikhar has joined #mlpack
< Shikhar> rcurtin: I see. What is the typical paper length for Workshop publications? Is there a specific template for this?
< rcurtin> I'd expect 1-4 pages
< rcurtin> a couple of previous examples that have come from mlpack:
< rcurtin> we also submitted http://www.ratml.org/pub/pdf/2017generic.pdf but it was not accepted (the reviewers wanted to see performance statistics, which didn't really make sense)
< rcurtin> these were to various machine learning software workshops over the years
< Shikhar> rcurtin: Performance statistics in regard to how much better the optimization framework performs with respect to other optimizers?
Trion has joined #mlpack
Trion has quit [Client Quit]
< rcurtin> yeah, that's what they were asking for
< rcurtin> or, rather, to other frameworks
< Shikhar> rcurtin: I'll take a look at these publications and try to get a rough draft prepared in a week's time. After that we can proceed? zoq: your views on this?
< rcurtin> the request from the reviewers didn't make much sense, since the "selling point" of the paper was that the framework makes new implementations easy---so whether or not it was fast is not actually the issue at hand
< rcurtin> I think that's reasonable but I also think it's fine to take some time, wait for the call for papers to be published, and decide what the right course of action is
< Shikhar> rcurtin: I see; that request clearly is vague, and they probably didn't get the point of having the framework in the first place. NIPS would be a dream (maybe a distant dream, can't say :P).
< rcurtin> well, it's a NIPS workshop, not actual NIPS, but still :)
< Shikhar> An acceptance rate of 20-25% means even some good papers can be left out. Though can I ask: did you not try a different conference for the above paper?
< Shikhar> I'm assuming it is standard strategy to move on to another conference in order to get the paper published.
< rcurtin> no, we didn't find another place for it. it would actually not be a bad idea to resubmit it, but I haven't found a venue (and for that one, at least for me, this NIPS workshop is out since I have a conflict of interest since I'm helping organize it)
< rcurtin> maybe there is another workshop this year where it could fit---I haven't looked
< Shikhar> rcurtin: I see, I have to go now. I'll be back later for more discussion and research stories :)
< rcurtin> not sure why two workshops on MLOSS are accepted...
< zoq> cult-: Maybe we can find another dataset to reproduce the issue?
< zoq> cult-: In the end, what this does is return a deterministic result.
< zoq> Perhaps the solution isn't correct, so if we can find a matrix that returns different results (signs) for different runs, we can take a closer look at the issue.
< cult-> zoq: I'll try to work on it as soon as I have time for it, and I'll let you know if I find one
< zoq> cult-: thanks
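To make the sign discussion above concrete, here is a minimal, hypothetical C++/Armadillo sketch of the kind of check zoq suggests. It assumes the convention being discussed is "flip each eigenvector column so that its largest-magnitude entry is positive"; the actual mlpack code, matrix, and dataset involved are not shown in the log, so the decomposition, the FixSigns() helper, and the test matrix below are all assumptions.

// Hypothetical sketch, not the code under discussion: a small harness to
// look for eigenvector sign differences across repeated decompositions.
#include <armadillo>
#include <iostream>

// Flip the sign of each column so that its largest-magnitude entry is
// positive (the "local maximum" convention mentioned above); this makes the
// signs deterministic regardless of what the underlying LAPACK returns.
void FixSigns(arma::mat& vectors)
{
  for (arma::uword c = 0; c < vectors.n_cols; ++c)
  {
    const arma::uword maxIndex = arma::index_max(arma::abs(vectors.col(c)));
    if (vectors(maxIndex, c) < 0)
      vectors.col(c) *= -1;
  }
}

int main()
{
  arma::arma_rng::set_seed_random();

  // Random symmetric matrix; replace with a matrix that reproduces the issue.
  arma::mat x = arma::randu<arma::mat>(10, 10);
  arma::mat sym = 0.5 * (x + x.t());

  arma::vec eigval;
  arma::mat reference, current;
  arma::eig_sym(eigval, reference, sym);
  FixSigns(reference);

  // Decompose repeatedly; after sign fixing, every run should match the
  // reference.  Any mismatch would be a matrix worth looking at more closely.
  for (size_t run = 0; run < 100; ++run)
  {
    arma::eig_sym(eigval, current, sym);
    FixSigns(current);
    if (arma::norm(reference - current, "fro") > 1e-8)
      std::cout << "run " << run << ": signs differ after fixing" << std::endl;
  }
}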
ImQ009 has quit [Quit: Leaving]
< Shikhar> rcurtin: Do the NIPS workshops happen to have a specific template for their submissions? I can't seem to find one specific to the workshops.
< rcurtin> depends on the workshop; when the call for papers goes up, more information will be there
< Shikhar> rcurtin: I see, I'll get to finishing some pending work then, and maybe use a dummy ACL format template for writing content.
< Shikhar> rcurtin: Now that I think of the selling points of the GAN work, we could also argue for its use on low-resource systems, its flexibility of usage, and its high performance, along with the benchmarks.
< rcurtin> Shikhar: right, these are all ideas, but you'll have to at least make note of related work like TFslim (I think) and other machine learning libraries aimed at low-resource systems. That doesn't mean there isn't an advantage to using mlpack still, I'm just saying the first reviewer response to any argument will be "what about $other_software_package?" :)
< rcurtin> oops, forgot a word "these are all ideas" -> "these are all good ideas" :)
Shikhar has quit [Ping timeout: 252 seconds]
Shikhar has joined #mlpack
< Shikhar> rcurtin: I agree, every idea has to be questioned, reasoned about, and then validated. Of course, more ideas increase the amount of work that is needed overall.
< rcurtin> yep, agreed; so it may be a bit of work to put together the right arguments (or the right way to "sell" mlpack to the reviewers), but I think it can definitely be done
< rcurtin> the key will be in figuring out specifically what part of mlpack's functionality is the right one to focus on
< rcurtin> but I do think the GAN work and the related work you've done is a good fit for it
Shikhar has quit [Quit: Page closed]