ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
soonmok has joined #mlpack
soonmok has quit [Ping timeout: 246 seconds]
soonmok has joined #mlpack
govg has joined #mlpack
soonmok has quit [Ping timeout: 268 seconds]
soonmok has joined #mlpack
picklerick has joined #mlpack
picklerick has quit [Ping timeout: 272 seconds]
picklerick has joined #mlpack
picklerick has quit [Ping timeout: 246 seconds]
KimSangYeon-DGU has joined #mlpack
soonmok has quit [Remote host closed the connection]
soonmok has joined #mlpack
picklerick has joined #mlpack
picklerick has quit [Quit: WeeChat 2.3]
robertohueso has quit [Ping timeout: 268 seconds]
KimSangYeon-DGU has quit [Quit: Page closed]
pymit has joined #mlpack
KimSangYeon-DGU has joined #mlpack
pymit has quit [Ping timeout: 240 seconds]
bugs_ has joined #mlpack
< bugs_> Hello, I am new to this and want to contribute to a few projects. Can someone please help me?
bugs_ has quit [Ping timeout: 256 seconds]
soonmok has quit [Remote host closed the connection]
soonmok has joined #mlpack
shardulparab97 has joined #mlpack
< shardulparab97> Hi, I am Shardul Parab, a final-year student of BITS Pilani (Hyderabad), India. I am highly interested in contributing to mlpack via GSoC and have tried to make some PRs recently.
< shardulparab97> A few days ago, @atulim had suggested the topic of Capsule Networks for GSoC; I would like to ask whether that topic will be pursued for this year's GSoC.
< shardulparab97> Thank you! :)
shardulparab97 has quit [Quit: Page closed]
KimSangYeon-DGU has quit [Quit: Page closed]
pymit has joined #mlpack
pymit has quit [Remote host closed the connection]
< zoq> bugs_: Hello, here to help.
< zoq> shardulparab97: Hello there, that could be an option, yes.
pymit has joined #mlpack
pymit has quit [Remote host closed the connection]
soonmok has quit [Remote host closed the connection]
soonmok has joined #mlpack
bugs_ has joined #mlpack
soonmok has quit [Remote host closed the connection]
< ayesdie> Here's a quick update: I've updated the SparseSVMFunction to work as a multiclass classifier and added tests for that. I'm a bit unsure how we might implement kernel SVM.
< rcurtin> ayesdie: yeah, I think kernel SVM is not possible, since it needs to be optimized in the dual space
< rcurtin> and our implementation works in the primal space
< rcurtin> but that's ok, we can just go with linear SVM for now
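rcurtin's primal-vs-dual point can be made concrete with the textbook formulations (a general sketch, not mlpack-specific notation; the bias term is omitted for brevity):

```latex
% Primal soft-margin SVM: the data x_i appear directly, so only a linear
% decision boundary w^T x is possible -- no kernel can be introduced here.
\min_{w,\,\xi}\; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i
\quad \text{s.t.} \quad y_i\, w^\top x_i \ge 1 - \xi_i,\; \xi_i \ge 0.

% Dual: the data enter only through inner products x_i^T x_j, which can be
% replaced by a kernel K(x_i, x_j) -- this is why kernel SVM requires
% optimizing in the dual space.
\max_{\alpha}\; \sum_{i=1}^{n} \alpha_i
  - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
\quad \text{s.t.} \quad 0 \le \alpha_i \le C,\; \sum_i \alpha_i y_i = 0.
```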
< rcurtin> I saw your updates via email too, so I need to review the PR again. I'm hoping I'll have a chance to do that later today or tomorrow
< ayesdie> Also, can we give the user the option to choose any optimizer? I've not tested this, but you mentioned we have to use a sparse, differentiable, separable optimizer (Hogwild).
< ayesdie> It's fine, I'll work on the tests of SparseSVM and its binding in another branch; you can review this any time :).
< ayesdie> I also wanted to bring to your attention that the list of GSoC ideas has "Parallel stochastic optimization methods" but no details about it down there.
< rcurtin> oh, hm, let me figure out what happened there. maybe the project was removed but the contents weren't updated
< rcurtin> and yeah, for SparseSVM, I *think* that we should be able to use more than just Hogwild, but other optimizers may not be able to take advantage of the sparsity of the gradient
< rcurtin> so long as Gradient() is templatized so that the gradient type can be either arma::mat or arma::sp_mat, it should work fine
< rcurtin> I guess we should change the name from SparseSVM to LinearSVM, by the way
< rcurtin> since it doesn't necessarily require sparse data anymore :)
< rcurtin> ok, checked on the GSoC ideas page... the 'parallel stochastic optimization methods' project was removed because Shikhar Bhardwaj did the project in 2017
< rcurtin> now of course there do exist more parallel optimization methods, so if you had a cool idea we could do it, but we don't have anything specifically written up on the ideas list :)
bugs_ has joined #mlpack
< ayesdie> I will try to test with optimizers other than Hogwild. As of now, I just ran the SVMFunction on some dataset (to check if there was anything wrong with my implementation) without using any ensmallen optimizer, and the results were good. (I've placed that in a separate repository.)
< rcurtin> sounds good :)
< ayesdie> Thanks for telling me this rn, I'll update SparseSVM to LinearSVM before I proceed any further.
< rcurtin> sure, no problem
< rcurtin> SparseSVM -> LinearSVM is an easy change anyway :)
< ayesdie> Yes, will update the folder and filenames.
< ayesdie> Or can we just typedef it?
< rcurtin> we should update the folder and filenames; there's no reverse compatibility to care about here
< ayesdie> gotcha :)
soonmok has joined #mlpack
soonmok has quit [Ping timeout: 240 seconds]
bugs_ has quit [Ping timeout: 256 seconds]
soonmok has joined #mlpack
soonmok has quit [Remote host closed the connection]