ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
gmanlan has joined #mlpack
< gmanlan>
Hi rcurtin: I noticed that the sample app is showing how to get some metrics such as precision, F1, etc. from the cross-validation, but I'm not sure that's what is actually happening
< gmanlan>
accuracy is for sure coming from CV, but precision, recall and F1? those seem to be using the previously trained RF, not the CV ones
< gmanlan>
is that accurate?
gmanlan has quit [Remote host closed the connection]
Yashwants19 has joined #mlpack
< Yashwants19>
Hi gmanlan: I think these are correct.
< Yashwants19>
If you look at the cross-validation code, when we call Evaluate it first resets the modelPtr after passing the respective parameters of the model
< Yashwants19>
Then accuracy will also use the pre-trained model (RF) only
< Yashwants19>
Same for Precision, Recall and F1 score
< Yashwants19>
I think we can also use the modelPtr to evaluate accuracy (but that requires the cross-validated trainingSet and labels)
< Yashwants19>
But these require the testSet and testLabels
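For reference, a minimal C++ sketch of scoring metrics on a single pre-trained model, which is what the discussion above describes; trainData, testData, numClasses, and numTrees are illustrative placeholders, not names from the sample app:

    #include <mlpack/core.hpp>
    #include <mlpack/core/cv/metrics/precision.hpp>
    #include <mlpack/core/cv/metrics/recall.hpp>
    #include <mlpack/core/cv/metrics/f1.hpp>
    #include <mlpack/methods/random_forest/random_forest.hpp>

    using namespace mlpack::tree;
    using namespace mlpack::cv;

    // Train a single random forest once -- no cross-validation involved.
    RandomForest<> rf(trainData, trainLabels, numClasses, numTrees);

    // These score the one pre-trained model on a held-out test set, so
    // they are hold-out metrics, not cross-validation metrics.
    const double prec = Precision<Binary>::Evaluate(rf, testData, testLabels);
    const double rec  = Recall<Binary>::Evaluate(rf, testData, testLabels);
    const double f1   = F1<Binary>::Evaluate(rf, testData, testLabels);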
Yashwants19 has quit [Remote host closed the connection]
< sumedhghaisas>
We can formalize the problem in their formulation in the final document
< KimSangYeon-DGU>
Ah yeah
< sumedhghaisas>
looking at it now
< KimSangYeon-DGU>
I'm not sure it can be applied
< KimSangYeon-DGU>
to QGMM
< sumedhghaisas>
very interesting indeed
< sumedhghaisas>
I like their formulation
< sumedhghaisas>
Did you go through the paper?
< KimSangYeon-DGU>
Actually, I just read introduction and result
< KimSangYeon-DGU>
I did focus on NLL optimization until now
< sumedhghaisas>
ahh I think it would be worth going over it a bit
< KimSangYeon-DGU>
Really? Oh, I'll look into the paper more deeply
< sumedhghaisas>
Their formulation does not look at the MLE estimate but
< sumedhghaisas>
wait... but it still won't solve the normalization constant problem
< sumedhghaisas>
hmm
< KimSangYeon-DGU>
Yeah, and the sum of alphas of QGMM isn't one.
< sumedhghaisas>
I am still not sure that should be the case
< sumedhghaisas>
the weights of a GMM should sum to one
< KimSangYeon-DGU>
Agreed
< sumedhghaisas>
but for QGMM that is not the case
< KimSangYeon-DGU>
Yes
< sumedhghaisas>
ahh you meant it that way
< sumedhghaisas>
sorry sorry
< sumedhghaisas>
I misunderstood
< KimSangYeon-DGU>
I'm okay :)
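A sketch of the distinction in LaTeX, under the QGMM formulation this project uses; the notation here is assumed for illustration, not quoted from the paper:

    % Classical GMM: the weights form a convex combination.
    p(x) = \sum_k \alpha_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k),
    \qquad \sum_k \alpha_k = 1.

    % QGMM: the density is the squared magnitude of a superposition of
    % square-root Gaussians, so pairwise interference terms appear.
    p(x) = \Bigl| \sum_k \alpha_k \sqrt{\mathcal{N}(x \mid \mu_k, \Sigma_k)}
                  \, e^{i \phi_k} \Bigr|^2.

    % Integrating to one constrains \sum_k \alpha_k^2 plus the pairwise
    % overlap integrals -- not \sum_k \alpha_k -- so the alphas need not
    % sum to one, and the normalization constant is nontrivial.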
< KimSangYeon-DGU>
Hmm... Would it be a good idea to reformulate the equations in the original paper? or should we try other approaches?
< sumedhghaisas>
hmm
< sumedhghaisas>
thinking
< sumedhghaisas>
another way is to do variational inference and maximize a lower bound on the MLE rather than the actual MLE
< sumedhghaisas>
but that would also involve problems with normalization
< sumedhghaisas>
this normalization is turning out to be a bigger problem than I anticipated
< KimSangYeon-DGU>
Agreed
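For reference, the lower bound mentioned here is presumably the standard evidence lower bound (ELBO); a sketch of why the normalization problem survives it:

    % Variational inference maximizes a lower bound on the log-likelihood:
    \log p(x; \theta) \;\ge\;
      \mathbb{E}_{q(z)}\bigl[ \log p(x, z; \theta) - \log q(z) \bigr]
      \;=\; \mathrm{ELBO}(q, \theta).

    % If the model is only known up to a normalizer Z(\theta), i.e.
    % p(x, z; \theta) = \tilde{p}(x, z; \theta) / Z(\theta), then
    \mathrm{ELBO}(q, \theta) =
      \mathbb{E}_{q(z)}\bigl[ \log \tilde{p}(x, z; \theta) - \log q(z) \bigr]
      - \log Z(\theta),
    % so maximizing the bound still requires handling \log Z(\theta).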
< sumedhghaisas>
how many weeks are remaining of our project?
< KimSangYeon-DGU>
About 6 weeks?
< KimSangYeon-DGU>
6~7 weeks left
< sumedhghaisas>
I have a couple of ideas but they involve way more sampling and unconventional methods
gmanlan has joined #mlpack
< KimSangYeon-DGU>
I'll try them
< sumedhghaisas>
and would involve a lot of reading on your side, including variational inference
< KimSangYeon-DGU>
I'm okay
< sumedhghaisas>
I am wondering if it's better if we approach a different problem and get some implementation done
< KimSangYeon-DGU>
I want to make this project complete.
< sumedhghaisas>
me too. But the timeline seems very tight.
< sumedhghaisas>
hm... thinking...
< gmanlan>
Yashwants19: thanks for your response. Although I still believe we are missing something: if you take a look at the sample code in the app, only Accuracy is using cv.Evaluate (therefore using the CV model/s) - but Precision, Recall and F1 are not using cv.Evaluate; they use "rf", which is the single-trained model (not the CV one) - so Precision, Recall and F1 are not cross-validation metrics, right?
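For what it's worth, a minimal sketch of how all four metrics could come from cross-validation in mlpack; the metric is a compile-time template parameter of KFoldCV, so each metric needs its own object (data, labels, numClasses, and numTrees are placeholders):

    #include <mlpack/core/cv/k_fold_cv.hpp>
    #include <mlpack/core/cv/metrics/accuracy.hpp>
    #include <mlpack/core/cv/metrics/precision.hpp>
    #include <mlpack/core/cv/metrics/recall.hpp>
    #include <mlpack/core/cv/metrics/f1.hpp>
    #include <mlpack/methods/random_forest/random_forest.hpp>

    using namespace mlpack::tree;
    using namespace mlpack::cv;

    // One KFoldCV object per metric (10 folds here).
    KFoldCV<RandomForest<>, Accuracy> cvAcc(10, data, labels, numClasses);
    KFoldCV<RandomForest<>, Precision<Binary>> cvPrec(10, data, labels, numClasses);
    KFoldCV<RandomForest<>, Recall<Binary>> cvRec(10, data, labels, numClasses);
    KFoldCV<RandomForest<>, F1<Binary>> cvF1(10, data, labels, numClasses);

    // Evaluate() trains a fresh model on each fold with the given
    // hyperparameters and averages the metric over the held-out folds.
    const double acc  = cvAcc.Evaluate(numTrees);
    const double prec = cvPrec.Evaluate(numTrees);
    const double rec  = cvRec.Evaluate(numTrees);
    const double f1   = cvF1.Evaluate(numTrees);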
< KimSangYeon-DGU>
Approaching a different problem and implementation is good to me
< sumedhghaisas>
Actually we can keep looking at solutions on the side. But what if we do small projects each week and get them done while we focus on research? What do you think about that?
< sumedhghaisas>
I don't want to give up on this project yet but I think it involves a little bit more reading.
< sumedhghaisas>
that should give us some time to explore some small implementations
< sumedhghaisas>
it's up to you
gmanlan has quit [Remote host closed the connection]
< KimSangYeon-DGU>
I'm good
< sumedhghaisas>
We can do all reading, or half and half
< KimSangYeon-DGU>
I'm ready to spend more time on this project :)
< sumedhghaisas>
Ahh no, I don't think you should spend more than the allotted time daily. That is counterproductive, trust me. I do this all the time at work :P
< KimSangYeon-DGU>
Yeah :)
< sumedhghaisas>
hmmm... okay let's meet Thursday to discuss this? Till then can we go over the mentioned paper?
< sumedhghaisas>
worst case, we will try implementing that in mlpack
< sumedhghaisas>
what say?
< sumedhghaisas>
I will also take some help from Ryan
< sumedhghaisas>
the paper you mentioned I mean
< KimSangYeon-DGU>
Yeah, let's meet Thursday
< sumedhghaisas>
I am sorry that I am a little stuck on this normalization constant problem. :(
< KimSangYeon-DGU>
Nope!
< KimSangYeon-DGU>
It is our problem
< KimSangYeon-DGU>
I'm also sorry
< sumedhghaisas>
haha... that's a good attitude
< sumedhghaisas>
okay let's go over the paper
< KimSangYeon-DGU>
Yeah
< sumedhghaisas>
and decide by Thursday?
< KimSangYeon-DGU>
Yeah
< sumedhghaisas>
KimSangYeon-DGU: Hey Kim
< sumedhghaisas>
I think I found a bug in the code
< sumedhghaisas>
there should be a stop gradient on the Q distribution
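Presumably this refers to the usual variational-style update, where the model parameters are optimized with the distribution held fixed, i.e. the gradient is not propagated through Q; a sketch of that update, not the project's actual code:

    % Update \theta with Q treated as a constant ("stop gradient" on Q):
    \theta \leftarrow \theta + \eta \, \nabla_\theta \,
      \mathbb{E}_{Q}\bigl[ \log p(x, z; \theta) \bigr],
    % where \nabla_\theta does not differentiate through Q itself.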
sumedhghaisas has quit [Ping timeout: 260 seconds]
ImQ009 has quit [Quit: Leaving]
ImQ009 has joined #mlpack
ImQ009 has quit [Read error: Connection reset by peer]
gmanlan has joined #mlpack
gmanlan has quit [Remote host closed the connection]
KimSangYeon-DGU has quit [Remote host closed the connection]
gmanlan has joined #mlpack
gmanlan has quit [Remote host closed the connection]