ChanServ changed the topic of #mlpack to: "Due to ongoing spam on freenode, we've muted unregistered users. See for more information, or you could also join #mlpack-temp and chat there."
petris has joined #mlpack
< rcurtin> hmm, so it seems like the spam has stopped---so we could try marking the channel -r so unregistered users can talk; any thoughts?
manish7294 has joined #mlpack
< rcurtin> manish7294: hey there! I am still working on the LMNN optimization with trees. I think I have it as fast as I am hoping for, but I have to debug it some more. I only have like 10-30 minutes a day to put into it, so maybe it will be later this week/early next week when it is done though...
< manish7294> rcurtin: As you say, it definitely looks like the spam is not there anymore. Hopefully, it won't be back anytime soon. I think reverting the channel back to its original config would make life much easier :)
< rcurtin> agreed---I imagine it has been inconvenient for a lot of people
< rcurtin> let's give it a shot...
< manish7294> Sorry for taking a long break. Now I am done with interviews :)
< manish7294> rcurtin: I am glad you improved LMNN to new heights.
< manish7294> I will start working from today onwards. It really has been a long break.
<@rcurtin> I hope the interviews went well! :)
< manish7294> yeah, I got a software developer internship at Arista Networks for next summer :)
<@rcurtin> congratulations! :)
<@rcurtin> hang on, I am having trouble removing the +q flag...
<@rcurtin> forgetting the right command...
< manish7294> Thanks :)
<@rcurtin> ok... let's hope that works
test_q has joined #mlpack
< test_q> testing...
< rcurtin> sweet! it worked
< manish7294> I think I have to step out for classes now. Will be back in the evening (I think it will be morning for you) :)
< manish7294> great
< rcurtin> ok, so hopefully the spam will not come back
< rcurtin> sounds good, enjoy the classes :)
< rcurtin> I think there are still a few LMNN-related issues I need to reply to; I'll do it when I have a chance (hopefully it will not be too long until I can...)
< manish7294> Nah! they are totally boring :)
test_q has quit [Client Quit]
< rcurtin> :)
manish7294 has quit [Quit: Yaaic - Yet another Android IRC client -]
mrohit[m] has joined #mlpack
ImQ009 has joined #mlpack
< rcurtin> Shikhar: sorry, it looks like we will move the MLOSS deadline up to Oct. 1st, but in either case, ~4-5 weeks to prepare something should be enough
< rcurtin> if the MLOSS workshop is the right place, that is
< miqlas> Hi guys.
< miqlas> Sorry, I'll be off-topic: do you happen to know if an IRC channel exists for HDF5 or NetCDF?
< rcurtin> no idea, I don't know anything about those communities
< miqlas> rcurtin: thanks
< rcurtin> sure, sorry it is not much help :)
< miqlas> rcurtin: until you guys come up with a new mlpack release, I think I will leave the ML scene; it is not the best use case for Haiku without CL/CUDA, etc. But I did find a nice thing to port: let's make Haiku HPC capable!
< miqlas> Great idea, right?
< rcurtin> :)
< rcurtin> we did have a release last month and we're likely to have one again soon, like once a month or so
< miqlas> I already made a recipe for the current release, but the Python module is still missing. Will do it next time.
< miqlas> btw, do you know how much power the buildbot takes to build mlpack? The cores are almost melting :)
< rcurtin> yeah, it is not a trivial job to build
< miqlas> I think it is because of Boost; everything that uses Boost takes much longer to build. At least it feels that way to me.
< rcurtin> well, that is part of it, the other part is that mlpack has a lot of code that uses a lot of templates
< miqlas> that's true too
wenhao has quit [Ping timeout: 252 seconds]
< miqlas> currently Haiku has only a few heavy ports which take ages to build: LibreOffice, mlpack, and Blender, I think. Oh, and of course GCC (but only because of the 3-stage recompilation check)
vivekp has quit [Ping timeout: 252 seconds]
vivekp has joined #mlpack
< ShikharJ> rcurtin: Thanks for the heads up. We'll have to plan what to keep and what not to, to fit the 4 weeks that we have. I'll start on the work. Any suggestions on where I should begin regarding the paper and the benchmarking? Is it ok to draw inspiration from the previous mlpack publications?
< rcurtin> sure, that is fine, my primary suggestion would be to choose some part of the library's functionality that you're aiming to highlight, and figure out what differentiates it from other existing work
< rcurtin> if you can make a good argument that says "we made something new that allows users to do something they previously couldn't" (and that functionality is important or useful), or "we made something that's faster than all other existing things", I think these are good starting points
miqlas has left #mlpack []
< ShikharJ> rcurtin: Would the argument that "we implemented a fast, templatised, policy-based GAN framework in C++" be a valid topic in this regard? I don't think major libraries have a coherent built-in module that lets users create a model by just defining the loss and gradient update routines, one that can be extended through the policy design.
< rcurtin> I think that could be reasonable, if you can show that some functionality is significantly easier in mlpack with GANs than what it would be in, e.g., TensorFlow or Keras or CNTK (or other popular frameworks)
< rcurtin> even better if you can show there's something that you really can't do at all in other toolkits that you can with mlpack
< rcurtin> of course I don't know exactly where the bar is---I have no idea what reviewers will say in the end. but I am hoping that this is useful information at least :)
< rcurtin> likely when CFPs are available, this will tell you a lot of what reviewers are looking for
< rcurtin> so for instance, the year we submitted the benchmarking system, the CFP said specifically that they were looking for automatic benchmarking systems
< rcurtin> so it was a pretty clear fit
ImQ009 has quit [Quit: Leaving]
< ShikharJ> rcurtin: I see, thanks for the insight. Any idea when we can expect the CFP to come up?
< ShikharJ> zoq: Are you there?
< zoq> ShikharJ: yes
< ShikharJ> zoq: I was wondering what was meant by "shuffle the predictorsX and predictorsY with the same indices"? Could you elaborate on that?
< zoq> ShikharJ: The shuffle method creates a new ordering on each call, which is different for predictorsX and predictorsY since the ordering isn't shared. So I was wondering if it is desirable to use the same ordering for both.
< ShikharJ> zoq: Paired ordering would be useful only in the case where we provide paired data. But in that case, we wouldn't need a CycleGAN with cyclic loss function, as we don't need to regenerate the original image then. In this case, we are sure of having a bijective mapping G: X to Y, so the inverse (G^-1) will invariably exist, and doesn't need to be estimated.
< ShikharJ> zoq: The point of CycleGAN is to estimate a mapping from a domain to another domain, without each having corresponding paired data, and that is why cyclic loss is used.
< zoq> Agreed, there is no requirement for it in the actual computation; perhaps for debugging, but I guess in this case it's easier to disable shuffling.
< ShikharJ> zoq: I'd keep the shuffle routine, as it will help build a trained model that generalizes well.
< zoq> Sounds good, thanks for the input.
wenhao has joined #mlpack
Cyrinika has joined #mlpack
Cyrinika has quit [Quit: Leaving...]
Cyrinika has joined #mlpack