ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
ametV has quit [Quit: Leaving]
zoq_ has joined #mlpack
vivekp has quit [Ping timeout: 250 seconds]
ImQ009 has joined #mlpack
vivekp has joined #mlpack
ImQ009 has quit [Read error: Connection reset by peer]
< Toshal>
Hi, I am working on issue #1483: Adapt classifiers to return final objective value from train().
< rcurtin>
Toshal: sounds good, when you have a working PR I'm happy to review it
< Toshal>
I have gone through the decision tree. In this algorithm, training is done with the Gini impurity and we try to optimize it, so do we need to return that as our objective value from train()? This may be a silly question.
< Toshal>
I have seen that it does not have a loss function like logistic regression does
< rcurtin>
with a decision tree we're maximizing the gain (i.e., minimizing the impurity), so maybe returning the negative gain is reasonable
< rcurtin>
(negative so that smaller means better, which it usually would with a model that's minimizing an objective function)
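[A toy illustration of the idea in issue #1483: Train() computes an impurity-based objective and returns it so the caller can inspect the final value. This is only a sketch; ToyStump and its members are hypothetical names, not mlpack's actual DecisionTree API.]

// Minimal sketch: Train() returns the final objective value.
#include <cstddef>
#include <iostream>
#include <vector>

class ToyStump
{
 public:
  // Train on binary labels and return the final objective: the Gini
  // impurity of the node (the negative of an mlpack-style gain), so
  // smaller means better.
  double Train(const std::vector<int>& labels)
  {
    std::size_t counts[2] = { 0, 0 };
    for (const int label : labels)
      ++counts[label];

    const double n = static_cast<double>(labels.size());
    double impurity = 1.0;
    for (std::size_t c = 0; c < 2; ++c)
    {
      const double p = counts[c] / n;
      impurity -= p * p;  // Gini impurity: 1 - sum_c p_c^2.
    }

    finalObjective = impurity;
    return finalObjective;
  }

 private:
  double finalObjective = 0.0;
};

int main()
{
  ToyStump stump;
  const double objective = stump.Train({ 0, 0, 1, 1, 1 });
  std::cout << "final objective: " << objective << std::endl;
  return 0;
}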
Yellowflash has joined #mlpack
Toshal has left #mlpack []
Toshal has joined #mlpack
ametV has joined #mlpack
ImQ009 has joined #mlpack
Yellowflash has quit [Quit: Page closed]
< rcurtin>
rajiv_: fixed now... no idea what went wrong
< rcurtin>
I just rebuilt the pages
< rcurtin>
ShikharJ: let me know what you're thinking about #1301 (KDE)---I'd like to merge it, but I'm happy to wait for your review to be done
< Toshal>
Can I make the PR step-wise, meaning open it after updating only one or two classifiers, so that I will know about my errors quickly?
< rcurtin>
you shouldn't use Travis as something to check for errors
< rcurtin>
instead you should compile locally, and only once you have it working locally should you open a PR
< rcurtin>
I'm not sure if that's what you meant
< rcurtin>
but like I said in the PM, I'm happy to review a PR once it passes all the CI tests
mrohit[m] has quit [Ping timeout: 260 seconds]
shashank-b[m] has quit [Ping timeout: 264 seconds]
shashank-b[m] has joined #mlpack
< Toshal>
No, no, that is not what I am saying. Actually, I am quite new to these algorithms, so I may return something wrong. So I thought it would be better if I work on two or three classifiers first and then shift towards the newer ones.
mrohit[m] has joined #mlpack
Toshal has quit [Ping timeout: 256 seconds]
< rcurtin>
Toshal: ok, I see what you mean, and that sounds fine to me
< rcurtin>
zoq: I have a problem I was hoping you could help with
< rcurtin>
I'm looking for more things like Moderat to listen to, but they only made three albums
< rcurtin>
so I'm curious if you have any related recommendations... :)
< gauravcr7rm>
Hello everyone, I want to write binding tests for gmm_probability and gmm_generate, for which I have to take a sample GMM model, which I don't know how to get
< gauravcr7rm>
please suggest a way to get a sample GMM model
< zoq_>
rcurtin: That's difficult, Trentemoller is somewhat similar but at the same time different.
zoq_ is now known as zoq
< zoq>
gauravcr7rm: Hello, you can use gmm_train to get a model.
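[For context, a rough sketch of producing and querying a sample model programmatically, assuming mlpack's C++ GMM class, which is what the gmm_train binding wraps; an actual binding test would instead run gmm_train itself and pass its output model on to gmm_probability / gmm_generate.]

#include <mlpack/methods/gmm/gmm.hpp>
#include <iostream>

int main()
{
  // 500 random 3-dimensional points to fit.
  arma::mat data(3, 500, arma::fill::randn);

  // Fit a 2-component GMM to the data (this is what gmm_train does).
  mlpack::gmm::GMM gmm(2 /* gaussians */, 3 /* dimensionality */);
  gmm.Train(data, 5 /* trials */);

  // The trained model provides what gmm_probability and gmm_generate need:
  const double p = gmm.Probability(data.col(0));  // density of one point.
  const arma::vec sample = gmm.Random();          // draw a new point.

  std::cout << "probability of first point: " << p << std::endl;
  return 0;
}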
< zoq>
ShikharJ: Sounds good to me, happy to co-mentor if you need any help. Do you have any plans for what direction you'd like to go in?
< zoq>
rajiv_: Not sure I can follow; the log loads for me.
gauravcr7rm has quit [Ping timeout: 256 seconds]
< ShikharJ>
rcurtin: I'll provide the final review today itself. Thanks for waiting :)
< ShikharJ>
zoq: I'll have to give it a thought, but one part of it quite possibly would be to wrap up the existing PRs. I'll update the Ideas page as ideas come to me.
< zoq>
ShikharJ: Sounds good.
ImQ009 has quit [Read error: Connection reset by peer]