naywhayare changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
udit_s has joined #mlpack
udit_s has quit [Ping timeout: 240 seconds]
udit_s has joined #mlpack
< marcus_zoq>
udit_s: Hello, is it possible that you forgot to verify the results in the second through fifth tests?
< udit_s>
marcus_zoq: Hi! Hold on... yeah, I think I did. I was banking on going through the CSV files to verify them; I think I planned to change them later on. I'll get that fixed.
< marcus_zoq>
udit_s: Ah, okay.
< udit_s>
Also, I was just going to get started on the perceptron.
< marcus_zoq>
udit_s: Okay, great.
< udit_s>
I wanted to talk about the learning policies - I want to properly understand at least the pocket policy. And I was wondering if you could share a few links that explain how the others - BEAM, PRISM - work.
< udit_s>
Right now, I'm just going to implement a basic multi-class perceptron. Then, I'll be extending it as we talked about, through template parameters.
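(For illustration, a minimal sketch of the multi-class perceptron update being discussed, in mlpack's Armadillo style. The function and parameter names here are hypothetical, not mlpack's actual interface.)

    #include <armadillo>

    // Illustrative multi-class perceptron: one weight column per class.
    void TrainPerceptron(const arma::mat& data,           // one column per point
                         const arma::Row<size_t>& labels, // class index per point
                         arma::mat& weights,              // one column per class
                         const size_t iterations)
    {
      for (size_t it = 0; it < iterations; ++it)
      {
        for (size_t i = 0; i < data.n_cols; ++i)
        {
          // Predict the class whose weight vector gives the highest score.
          const arma::vec scores = weights.t() * data.col(i);
          const arma::uword predicted = scores.index_max();

          if (predicted != labels[i])
          {
            // Mistake: reinforce the correct class, penalize the predicted one.
            weights.col(labels[i]) += data.col(i);
            weights.col(predicted) -= data.col(i);
          }
        }
      }
    }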
< marcus_zoq>
Sounds like a good plan; I'll see if I can find some good explanations. The pocket algorithm should be pretty straightforward. I think we agreed to implement only one method, right?
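(For reference, a simplified per-pass sketch of the pocket idea, building on the hypothetical TrainPerceptron above: run ordinary perceptron updates, but keep a copy of the best weights seen so far and return those. The helper names are illustrative only.)

    // Count how many training points the given weights misclassify.
    size_t CountErrors(const arma::mat& data, const arma::Row<size_t>& labels,
                       const arma::mat& weights)
    {
      size_t errors = 0;
      for (size_t i = 0; i < data.n_cols; ++i)
      {
        const arma::vec scores = weights.t() * data.col(i);
        if (scores.index_max() != labels[i])
          ++errors;
      }
      return errors;
    }

    void PocketTrain(const arma::mat& data, const arma::Row<size_t>& labels,
                     arma::mat& weights, const size_t iterations)
    {
      arma::mat pocket = weights;  // best weights found so far
      size_t pocketErrors = CountErrors(data, labels, pocket);

      for (size_t it = 0; it < iterations; ++it)
      {
        TrainPerceptron(data, labels, weights, 1);  // one ordinary pass

        const size_t errors = CountErrors(data, labels, weights);
        if (errors < pocketErrors)
        {
          pocket = weights;        // put these weights "in the pocket"
          pocketErrors = errors;
        }
      }

      weights = pocket;  // the pocketed weights are the final answer
    }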
< udit_s>
I still wanted to think about implementing another one.
< marcus_zoq>
udit_s: Here is a nice paper about a learning algorithm that is called Random Coordinate Descent:
< marcus_zoq>
udit_s: They compare the random coordinate descent algorithm with stochastic gradient descent, pocket, and averaged perceptron. The pseudocode is also quite instructive. Right now I'm looking for the paper that combines beam search with the pocket algorithm...
< udit_s>
Great ! Thanks.
< naywhayare>
udit_s: I am here too for the next couple of hours; I'm going to try to go through the decision stump code now and over the next few days
< naywhayare>
technology is crazy! I'm sitting in a bus going down the highway in rural Georgia and I'm connected to the internet. couldn't do this ten years ago...
govg has joined #mlpack
govg has quit [Changing host]
govg has joined #mlpack
udit_s has quit [Quit: Leaving]
Anand_ has joined #mlpack
< Anand_>
Marcus : You changed some code? Were we doing it wrong?
< marcus_zoq>
Anand_: Yeah, I've made a few changes to the matlab code. Instead of 'NaiveBayes.fit(TrainData, 'Prior');' we have to use 'posterior(classifier, TestData)'. I'm sorry for the misinformation.
< marcus_zoq>
Anand_: If you like you can test the current version on the build server, but I've tested the code and it works.
< Anand_>
Great! No problem! :)
< Anand_>
We need to take shogun next, right?
< marcus_zoq>
Anand_: If you like we can modify the code :)
< Anand_>
yeah sure, we will
< Anand_>
Pointers?
< Anand_>
And btw, we need to merge with the master. Is it the right time? Maybe tomorrow?
< marcus_zoq>
Yeah, tomorrow sounds good. Maybe it would be good to ask the shogun guys (on IRC) whether there is an option to get the probabilities.
< marcus_zoq>
Anand_: Maybe there is a hidden feature...
< Anand_>
I will look at the code and see if I can figure it out. Also, you meant on #shogun?
< marcus_zoq>
Anand_: Yeah, #shogun!
< Anand_>
Ok, I will ask someone there. Should I mention GSoC with mlpack?
< Anand_>
I will get back to you
< marcus_zoq>
Anand_: They are all cool guys. I think you can just ask if there is an option in the nbc method to get the probabilities :)
< marcus_zoq>
Anand_: This is the parameter that stores the probabilities: 'SGVector<float64_t> m_label_prob;'.
< marcus_zoq>
Anand_: We can use a 'nice' workaround to get the vector :)
Anand_ has quit [Ping timeout: 246 seconds]
andrewmw94 has joined #mlpack
Anand_ has joined #mlpack
< Anand_>
Marcus : So far no one seems to be responding on #shogun. I am waiting.
< marcus_zoq>
Anand_: Okay, if you look into the code there is a protected parameter called m_label_prob. I think we can easily add a function that returns this parameter, or use a workaround like '#define private protected', but I would prefer the first way.
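(One way to get at that protected member without patching shogun is a thin subclass that exposes it. This is only a sketch: the header path and the class name CGaussianNaiveBayes are assumptions about the shogun version in use, not verified against it.)

    // Sketch only: header path and class name are assumed, not verified.
    #include <shogun/multiclass/GaussianNaiveBayes.h>

    using namespace shogun;

    // A thin subclass makes the protected m_label_prob visible without editing
    // shogun's sources (an alternative to the '#define' trick mentioned above).
    class ExposedNaiveBayes : public CGaussianNaiveBayes
    {
     public:
      using CGaussianNaiveBayes::CGaussianNaiveBayes;  // reuse the base constructors

      SGVector<float64_t> LabelProbabilities() const
      {
        return m_label_prob;  // protected in the base class, accessible here
      }
    };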
< Anand_>
Marcus : Ok. I will look into it if we can use that