verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< MystikNinja> There are a number of compilation errors (https://pastebin.com/uwVhhwTv). It seems that there has been some change in the API since the mvu code was written; for example, the mvuSolver object is called with the wrong number of parameters in the constructor (2 instead of 4). Is there something simple I'm missing, or will the mvu code have to be significantly re-written?
< zoq> MystikNinja: Not sure if a complete re-implementation is necessary at this point, but you definitely have to update some lines here and there.
< MystikNinja> zoq: It would probably just involve re-writing the offending calls to match the current APIs. I'll try doing it and see what happens.
< zoq> MystikNinja: You are probably right, let me know if you need any help.
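An illustration of the kind of call-site update being discussed, with entirely hypothetical signatures (the log does not show the real old and new mvu solver constructors):

    // Hypothetical before/after for a constructor that grew two parameters.
    // Old code, written against the older API:
    //   MVUSolver solver(numConstraints, initialPoint);
    // Updated call site, matching a newer four-parameter constructor:
    //   MVUSolver solver(numConstraints, initialPoint,
    //                    maxIterations, tolerance);
    // The general recipe: compare each failing call against the current
    // header, and pass the extra arguments (or accept any new defaults).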
MystikNinja has quit [Quit: Page closed]
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#111 (RBM - fe8ecee : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
csoni2 has joined #mlpack
csoni has quit [Ping timeout: 264 seconds]
sooham has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#112 (GAN - 0428a92 : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
csoni2 has quit [Read error: Connection reset by peer]
csoni has joined #mlpack
csoni has quit [Read error: Connection reset by peer]
csoni has joined #mlpack
csoni has quit [Read error: Connection reset by peer]
moksh has joined #mlpack
< moksh> Hey @zoq, @rcurtin. The mlpack/models repo currently has just one model, for digit recognition, and an open PR for an LSTM model. Can you suggest a model that would be a good addition to the repository? I would like to work on that.
moksh has quit [Quit: Page closed]
sooham has quit [Ping timeout: 260 seconds]
Nisha_ has joined #mlpack
Nisha_ has quit [Client Quit]
rf_sust2018 has joined #mlpack
Nisha_ has joined #mlpack
Nisha_ has quit [Quit: Page closed]
nishagandhi has joined #mlpack
nishagandhi has quit [Client Quit]
Nisha_ has joined #mlpack
Nisha_ has quit [Client Quit]
nishagandhi has joined #mlpack
nishagandhi has quit [Client Quit]
Nisha_ has joined #mlpack
Nisha_ has quit [Client Quit]
Nisha_ has joined #mlpack
< Nisha_> Hi @zoq, @rcurtin, can SVMs be implemented in mlpack? I was thinking along the lines of optimizing the hinge loss by stochastic batch gradient descent. Also, I have experience with LSTM recurrent neural network models. I was wondering if you could point me in the right direction for working on SVMs / LSTM models. Thank you :)
Nisha_ has quit [Quit: Page closed]
csoni has joined #mlpack
ketuls has joined #mlpack
rf_sust2018 has quit [Quit: Leaving.]
mohaxxpop has joined #mlpack
mohaxxpop has quit [Client Quit]
ketuls has quit [Ping timeout: 260 seconds]
csoni has quit [Ping timeout: 240 seconds]
< zoq> moksh: Yes, please feel free :)
< zoq> Nisha_: Hello there, if you are going to write an SVM implementation, keep in mind that it should outperform libsvm, which might be pretty difficult, but if you're up for a challenge, then that might be a good one :)
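For context on the approach Nisha_ describes, here is a minimal, self-contained sketch of a linear SVM trained by mini-batch subgradient descent on the regularized hinge loss, written with Armadillo since that is mlpack's matrix library. This is illustrative only, not mlpack's API; every name in it is made up.

    #include <algorithm>
    #include <cstddef>
    #include <armadillo>

    // Minimize  lambda/2 * ||w||^2 + mean(max(0, 1 - y * (w'x)))
    // by mini-batch subgradient descent.  Labels must be in {-1, +1}.
    // A real implementation would also shuffle the data each epoch.
    arma::vec TrainLinearSVM(const arma::mat& X,    // d x n data matrix
                             const arma::rowvec& y, // 1 x n labels in {-1,+1}
                             const double lambda = 1e-4,
                             const size_t epochs = 10,
                             const size_t batchSize = 32,
                             const double stepSize = 0.01)
    {
      arma::vec w(X.n_rows, arma::fill::zeros);
      for (size_t e = 0; e < epochs; ++e)
      {
        for (size_t i = 0; i < X.n_cols; i += batchSize)
        {
          const size_t last = std::min(i + batchSize, (size_t) X.n_cols) - 1;
          const arma::mat batch = X.cols(i, last);
          const arma::rowvec yb = y.cols(i, last);

          // Points with margin < 1 contribute to the hinge subgradient.
          const arma::rowvec margins = yb % (w.t() * batch);
          arma::vec grad = lambda * w;
          for (size_t j = 0; j < batch.n_cols; ++j)
            if (margins(j) < 1.0)
              grad -= (yb(j) / batch.n_cols) * batch.col(j);

          w -= stepSize * grad;
        }
      }
      return w;
    }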
< zoq> Nisha_: Since you are interested in LSTMs, you might find Quasi-Recurrent Neural Networks interesting as well.
csoni has joined #mlpack
csoni has quit [Ping timeout: 240 seconds]
< Atharva> Should I put the entire API in the proposal or should I give a link to the file?
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#113 (GAN - ecd920b : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
< zoq> Atharva: This is up to you.
< Atharva> zoq: I will probably give a link to the cpp file and explain in the proposal how I will implement each function.
csoni has joined #mlpack
csoni has quit [Ping timeout: 240 seconds]
csoni has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#4452 (master - 1ee8268 : Ryan Curtin): The build has errored.
travis-ci has left #mlpack []
rajiv_ has joined #mlpack
< rajiv_> In the proposal timeline, how much time should I allocate for the 2nd and the final evaluations?
< Atharva> zoq: there is an FFN constructor that allows us to set the predictors and responses when creating the FFN object, but all the Train() methods defined ask for predictors and responses.
< Atharva> Shouldn't there be a definition of Train() that uses the predictors and responses stored by the constructor?
< rcurtin> rajiv_: no need to allocate time for the evaluations themselves
< rcurtin> that's the job of the mentor
rajiv_ has quit [Ping timeout: 260 seconds]
donjin_master has joined #mlpack
< donjin_master> Hello everyone, I want to draft my proposal on a reinforcement learning project for GSoC '18. Should I propose deep Q-learning with experience replay for the summer?
< donjin_master> I am a little bit confused about how many algorithms I should propose to implement over the summer.
csoni has quit [Ping timeout: 276 seconds]
< zoq> Atharva: If you think that is reasonable, sure.
< zoq> donjin_master: Hello there, the number depends on the complexity of the method you are interested in.
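Since experience replay came up: the core of it is a fixed-capacity buffer of transitions that the agent samples uniformly at train time, which breaks the correlation between consecutive experiences. A minimal C++ sketch (illustrative; not mlpack's reinforcement-learning API, whose types differ):

    #include <cstddef>
    #include <random>
    #include <vector>
    #include <armadillo>

    // One stored interaction with the environment.
    struct Transition
    {
      arma::vec state;
      size_t action;
      double reward;
      arma::vec nextState;
      bool terminal;
    };

    // Fixed-capacity ring buffer; assumes Store() has been called at
    // least once before Sample().
    class ReplayBuffer
    {
     public:
      explicit ReplayBuffer(const size_t capacity) :
          capacity(capacity), next(0) { }

      void Store(const Transition& t)
      {
        if (buffer.size() < capacity)
          buffer.push_back(t);
        else
          buffer[next] = t;             // Overwrite the oldest entry.
        next = (next + 1) % capacity;
      }

      // Uniformly sample a minibatch of stored transitions.
      std::vector<Transition> Sample(const size_t batchSize)
      {
        std::uniform_int_distribution<size_t> dist(0, buffer.size() - 1);
        std::vector<Transition> batch;
        for (size_t i = 0; i < batchSize; ++i)
          batch.push_back(buffer[dist(rng)]);
        return batch;
      }

     private:
      std::vector<Transition> buffer;
      size_t capacity, next;
      std::mt19937 rng{std::random_device{}()};
    };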
< Atharva> zoq: I think for that definition, if someone calls Train() even when they have not set the predictors and responses, we can just throw an error.
< Atharva> But otherwise, calling Train() is very intuitive when someone has already set the training data during construction of the object.
< Atharva> Should I open a PR for this?
donjin_master has quit [Ping timeout: 260 seconds]
< rcurtin> Atharva: no, I disagree on this one---if you construct the object with given predictors and responses, then Train() should be directly called by that constructor (like the other mlpack algorithms)
< rcurtin> but it doesn't really make sense to call Train() again after that
< Atharva> Okay, but in this case, the constructor is not training the network even when data is provided.
< Atharva> We could change that and Train it in the constructor as you mentioned.
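The convention rcurtin describes can be sketched with a hypothetical class (actual mlpack classes differ in detail): the data-taking constructor simply delegates to Train(), so a model built from data is usable immediately and there is no need to call Train() again.

    #include <armadillo>

    // Hypothetical learner illustrating the convention under discussion.
    class ExampleLearner
    {
     public:
      // Construct an untrained model; the user must call Train() later.
      ExampleLearner() { }

      // Construct and immediately train, like other mlpack algorithms.
      ExampleLearner(const arma::mat& predictors, const arma::mat& responses)
      {
        Train(predictors, responses);
      }

      void Train(const arma::mat& predictors, const arma::mat& responses)
      {
        // ... fit the model parameters to (predictors, responses) ...
      }
    };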
arsh has joined #mlpack
ImQ009 has joined #mlpack
rajiv_ has joined #mlpack
rajiv_ has quit [Client Quit]
< rcurtin> Atharva: that would be my suggestion; let's see what Marcus thinks
< Atharva> Yeah
arsh has quit [Quit: Page closed]
Nisha_ has joined #mlpack
< Nisha_> Hi @zoq, thank you for your suggestion. I will look into quasi-recurrent neural networks. I am currently reading papers on QRNNs (like https://arxiv.org/pdf/1611.01576.pdf) and will think about how to go about implementing this in mlpack. Am I thinking in the right direction? And could you give me details about what could be expected in an implementation of QRNNs? Thanks, Nisha Gandhi
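For anyone skimming the linked paper (Bradbury et al., 2016): the recurrent part of a QRNN is an elementwise "fo-pooling" over gates that are precomputed by convolutions, c_t = f_t ∘ c_{t-1} + (1 - f_t) ∘ z_t and h_t = o_t ∘ c_t. A sketch of just that pooling step in Armadillo, assuming gate matrices Z, F, O (hidden size x sequence length) have already been computed (tanh for Z, sigmoid for F and O):

    #include <armadillo>

    // QRNN fo-pooling over precomputed gate activations.
    arma::mat FoPool(const arma::mat& Z, const arma::mat& F, const arma::mat& O)
    {
      arma::mat H(Z.n_rows, Z.n_cols);
      arma::vec c(Z.n_rows, arma::fill::zeros);
      for (size_t t = 0; t < Z.n_cols; ++t)
      {
        // c_t = f_t % c_{t-1} + (1 - f_t) % z_t;  h_t = o_t % c_t.
        c = F.col(t) % c + (1.0 - F.col(t)) % Z.col(t);
        H.col(t) = O.col(t) % c;
      }
      return H;
    }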
sourabhvarshney1 has joined #mlpack
< zoq> Atharva: The FFN/RNN class is kinda special at this point, since currently you can't pass layer information at construction time; you have to use Add(..).
< sourabhvarshney1> @zoq I was going through the RNN code base. I found that in the first constructor, there are no predictors and responses set. Does the comment imply implicit predictors and responses?
< Atharva> zoq: oh, yeah, that didn’t cross my mind. Then, what do you think we should do? Define another Train() function?
< zoq> Atharva: I think we could remove the extra constructor.
< Atharva> zoq: yeah, that works too, but is there some reason you don't want a Train() function?
< sourabhvarshney1> @zoq maybe I put myself in the wrong direction.
csoni has joined #mlpack
sourabhvarshney1 has quit [Ping timeout: 260 seconds]
sourabhvarshney has joined #mlpack
sourabhvarshney1 has joined #mlpack
sourabhvarshney has quit [Ping timeout: 260 seconds]
< sourabhvarshney1> zoq: I just found the same thing as @Atharva did. I think that constructor either needs to be modified or removed, because every Train() method requires predictors and responses. Either way can work. Some comments also need to be modified. Should I open a PR to do that?
sourabhvarshney1 has quit [Ping timeout: 260 seconds]
sourabhvarshney1 has joined #mlpack
< zoq> sourabhvarshney1: You are right; personally, I would remove the constructor.
< zoq> I have to check the hpt module to see if it requires the constructor.
< sourabhvarshney1> zoq: Also, the comment on the above constructor says it creates the RNN object with the given predictors and responses set, but the constructor does not take these. Should I modify the comment?
csoni has quit [Read error: Connection reset by peer]
< Atharva> sourabhvarshney1: Are you going to remove the constructor from ANN as well?
yashsharan has joined #mlpack
csoni has joined #mlpack
< sourabhvarshney1> Atharva: Yes I can. But I think Marcus is doing it.
< yashsharan> @zoq: I have submitted the draft of my proposal. Kindly review it and suggest any changes that would be required. Thank you.
< zoq> Nisha_: In the case of QRNNs, we would have to write a separate class similar to the existing FFN/RNN class, which enables us to add layers and train the model.
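A rough outline of what zoq describes, mirroring the Add()/Train()/Predict() shape of the existing FFN/RNN classes. This skeleton is hypothetical; the real mlpack classes are templated on the output layer, initialization rule, and so on.

    #include <armadillo>

    // Hypothetical QRNN class following the FFN/RNN pattern: layers are
    // added one by one, then the model is trained as a whole.
    class QRNN
    {
     public:
      // Append a layer (quasi-recurrent layer, dropout, linear, ...).
      template<typename LayerType, typename... Args>
      void Add(Args... args);

      // Optimize the network parameters on the given sequence data.
      void Train(const arma::cube& predictors, const arma::cube& responses);

      // Forward pass through all layers.
      void Predict(const arma::cube& predictors, arma::cube& results);
    };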
< zoq> Atharva: sourabhvarshney1: If either one of you would like to open a PR with the changes, please feel free.
< zoq> yashsharan: Okay, I'll take a look once I have a chance.
< rcurtin> zoq: it looks like the GradientBatchNormLayerTest from #1275 is failing; I have been working with it locally and it looks like it fails roughly 50% of the time with different random seeds
< rcurtin> the CheckGradient() difference is often between 0.001 and 0.002, so to fix it I would have to adjust the tolerance
< Nisha_> Okay, thanks @zoq. I will look into it.
< rcurtin> but it seems to me like the tolerances are already very large, so much so that I wonder if anything is wrong---in the BatchNormTest where we compare with another implementation, the tolerance is 0.1%, which seems a little bit high to me
< rcurtin> I wanted to see what you thought before I dig further... does this seem reasonable to you? or do you think it's likely that there is a bug in the implementation?
< zoq> rcurtin: hm, sounds like a bug to me; I'll recheck the gradient pass later today. If you like, we can comment out the test for now.
< rcurtin> there's no hurry, let me know what you find when you take a look
< rcurtin> I'm still working on the PR for the random test fixes and this one was new so I looked into it quickly :)
< zoq> rcurtin: Sure, currently working on the memory issues Eugene pointed out.
< rcurtin> sounds good
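For reference, the kind of check being discussed: the analytic gradient of a layer is compared against a central-difference estimate, and the relative error should be tiny. A generic sketch (not mlpack's actual CheckGradient() test helper); differences in the 1e-3 range, as rcurtin reports, usually mean either a loose test or a genuine bug in the backward pass.

    #include <algorithm>
    #include <functional>
    #include <armadillo>

    // Estimate the gradient of f at x by central differences and return
    // the relative difference from the analytic gradient.
    double CheckGradientGeneric(
        const std::function<double(const arma::vec&)>& f,
        const arma::vec& x,
        const arma::vec& analyticGrad,
        const double eps = 1e-6)
    {
      arma::vec numGrad(x.n_elem);
      for (size_t i = 0; i < x.n_elem; ++i)
      {
        arma::vec xp = x, xm = x;
        xp(i) += eps;
        xm(i) -= eps;
        numGrad(i) = (f(xp) - f(xm)) / (2.0 * eps);
      }
      return arma::norm(numGrad - analyticGrad) /
          std::max(arma::norm(numGrad) + arma::norm(analyticGrad), 1e-12);
    }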
Nisha_ has quit [Ping timeout: 260 seconds]
sourabhvarshney1 has quit [Ping timeout: 260 seconds]
sourabhvarshney1 has joined #mlpack
< sourabhvarshney1> zoq: Atharva: I would like to work on the issue if you guys have no problem
haritha1313 has joined #mlpack
< haritha1313> @rcurtin: @zoq: I am working on my proposal for GSoC and I plan to focus it on neural collaborative filtering. What would you suggest for benchmarking it?
< haritha1313> There is an existing Python implementation of NCF that reports hit ratio and NDCG metrics, whereas the mlpack implementation and other Python implementations focus on RMSE.
< haritha1313> The NCF paper has its own performance comparisons with other existing methods, so I would like to know your opinion on benchmarking it myself, and if so, which metric would be preferable?
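For concreteness on the metrics haritha1313 mentions: under the NCF paper's leave-one-out protocol, each user's single held-out item is ranked against sampled negatives, and HR@k / NDCG@k are computed from its rank and averaged over users. A minimal sketch (illustrative; not part of mlpack):

    #include <cmath>
    #include <cstddef>

    // rank is the 1-based position of the held-out item among the
    // ranked candidates (1 = best).
    inline double HitRatioAtK(const size_t rank, const size_t k)
    {
      // HR@k: did the held-out item make the top k at all?
      return (rank <= k) ? 1.0 : 0.0;
    }

    inline double NDCGAtK(const size_t rank, const size_t k)
    {
      // With a single relevant item, IDCG = 1, so NDCG = 1 / log2(rank + 1).
      return (rank <= k) ? 1.0 / std::log2((double) rank + 1.0) : 0.0;
    }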
< Atharva> sourabhvarshney1: Okay, you can remove the constructor from FFN as well.
< sourabhvarshney1> Thanks
nikhilweee has joined #mlpack
yashsharan has quit [Quit: Page closed]
csoni has quit [Read error: Connection reset by peer]
__amir__ has quit [Quit: Connection closed for inactivity]
sourabhvarshney1 has quit [Quit: Page closed]
ImQ009_ has joined #mlpack
ImQ009 has quit [Ping timeout: 256 seconds]
ImQ009_ has quit [Quit: Leaving]
kgytfd has joined #mlpack
kgytfd has quit [Client Quit]
haritha1313 has quit [Ping timeout: 260 seconds]