ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
favre49 has joined #mlpack
< favre49>
manish7294: okay, thank you. Also, I live in Bangalore
favre49 has quit [Ping timeout: 256 seconds]
gmanlan has quit [Ping timeout: 256 seconds]
vivekp has quit [Ping timeout: 246 seconds]
vivekp has joined #mlpack
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 256 seconds]
zoq has quit [Read error: Connection reset by peer]
jeffin has joined #mlpack
jeffin143 has quit [Ping timeout: 250 seconds]
zoq has joined #mlpack
jeffin has quit [Ping timeout: 258 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Ping timeout: 256 seconds]
< KimSangYeon-DGU>
What do you think would be a good research project?
< KimSangYeon-DGU>
I'd like to steer our project in that direction.
< sumedhghaisas>
hmm... I am not sure I understand the question correctly. Do you mean which hypothesis we will be checking?
< KimSangYeon-DGU>
Yeah :)
< KimSangYeon-DGU>
We have a schedule to plot the 3D probability space of Quantum GMM
< sumedhghaisas>
sure. I think a good hypothesis is 'does QGMM provide better representation power than GMM'
< sumedhghaisas>
what do you think?
< KimSangYeon-DGU>
Great!!
< KimSangYeon-DGU>
Yes
< ShikharJ>
sumedhghaisas: Sorry for disturbing, but are you tackling an open problem with Quantum GMMs over the summer? I thought the project was regarding the implementation of QGMMs?
< sumedhghaisas>
and regarding the 3D plot, I think it's a good stepping stone
< sumedhghaisas>
once we have an idea of how the distribution looks compared to a GMM, we will have a better idea about our path
< KimSangYeon-DGU>
Yeah :)
< sumedhghaisas>
ShikharJ: Hey Shikhar. Yes, I think QuantumGMM is an open problem. We will be implementing QGMM, but it's not clear yet whether it will work exactly as described
< sumedhghaisas>
thus we will model this project as a research question instead
< ShikharJ>
sumedhghaisas: Super interesting. I'd love to see the progress on that.
< KimSangYeon-DGU>
ShikharJ: I'll try it :)
< sumedhghaisas>
KimSangYeon-DGU: Do you have any idea how you would like to proceed with the 3D modelling?
< KimSangYeon-DGU>
with my mentor, Sumedh
< sumedhghaisas>
ShikharJ: Indeed! I have had this project at the back of my head for some time now. Never got around to it.
< KimSangYeon-DGU>
sumedhghaisas: Yes, once we implement QuantumGaussianDistribution and QuantumGMM, I'll get some results from them and plot them using Python libraries
< sumedhghaisas>
SikharJ: You are more than welcome to join our weekly meets also. We would mostly be talking about the problem. Would love to hear external views as well.
< KimSangYeon-DGU>
Is it a proper way??
< KimSangYeon-DGU>
sumedhghaisas: Definitely :)
< sumedhghaisas>
KimSangYeon-DGU: Given the nature of the problem, I would first recommend doing it on paper. I see a couple of problems in getting the distribution.
< ShikharJ>
KimSangYeon-DGU: I should probably mention, your proposal was simply amazing. All 46 pages of it; it was three times the size of my original proposal from a year back (I think that's a new record for mlpack) :P So I can't wait to see the implementation as well :)
< sumedhghaisas>
if we have a way of getting the distribution on paper, we can do quick plotting using matplotlib
< KimSangYeon-DGU>
sumedhghaisas: Ah~ yes, I'll do that. Thanks for letting me know
< ShikharJ>
sumedhghaisas: Thanks, though I can't say I'm experienced with QGMMs; I'm barely discovering the math behind GMMs myself.
< KimSangYeon-DGU>
ShikharJ: It has been a great time working with you. I'll do my best to make this project successful :) Thanks!!
< sumedhghaisas>
KimSangYeon-DGU: I think we can concentrate on Equation 9 in the paper. We need to plot the same distribution first.
< KimSangYeon-DGU>
sumedhghaisas: Yeah, I'll try it using matplotlib
< sumedhghaisas>
Great. The real trouble is finding the normalization constants, the alphas, in the equation
< KimSangYeon-DGU>
sumedhghaisas: Agreed
< KimSangYeon-DGU>
Well, before the coding period, I'll check the paper using matplotlib
< KimSangYeon-DGU>
How can I show you the result??
< sumedhghaisas>
KimSangYeon-DGU: That's a good idea. Although I have some doubts about Equation 9. I still haven't understood why they have taken a sum over all the data points
< sumedhghaisas>
I think it should be an integration over the distribution rather than a sum
< sumedhghaisas>
what do you think?
< KimSangYeon-DGU>
Yes, actually, I was worried about it
< KimSangYeon-DGU>
Oh
< KimSangYeon-DGU>
integration will be better
< KimSangYeon-DGU>
Did you intend QGMM for n classes??
< KimSangYeon-DGU>
not the 2 classes that the original paper presents
< sumedhghaisas>
Good. If both of us think that, then there is some chance that we are right :P While plotting the distribution we have no data points ..., so we have to integrate over the whole distribution
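For reference, a hedged sketch of the density under discussion, assuming the usual two-component quantum-mixture form (the paper's Equation 9 and its exact notation may differ): with Gaussian densities G_1, G_2 and a phase \phi,

    P(x) = \alpha_1 G_1(x) + \alpha_2 G_2(x)
           + 2 \sqrt{\alpha_1 \alpha_2 G_1(x) G_2(x)} \, \cos\phi

Normalizing by integrating over the whole distribution (no data points involved) then ties the alphas together:

    \int P(x) \, dx = 1
    \quad\Longrightarrow\quad
    \alpha_1 + \alpha_2 + 2 \sqrt{\alpha_1 \alpha_2} \, \cos\phi \int \sqrt{G_1(x) G_2(x)} \, dx = 1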
< sumedhghaisas>
For the plotting, we can plot for 2 classes... no worries. Any single interference pattern will do just fine
< KimSangYeon-DGU>
Okay!
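To make the plotting plan concrete, here is a minimal C++/Armadillo sketch under stated assumptions: the means, weights, and constant phase are hand-picked for illustration, and the alphas are not solved from the normalization constraint. It evaluates the two-component interference density on a 1-D grid and saves it as CSV for matplotlib; the same recipe extends to a 2-D grid for the 3D surface plot.

    // Hedged sketch: evaluate alpha1*G1 + alpha2*G2
    // + 2*sqrt(alpha1*alpha2*G1*G2)*cos(phi) on a grid and save it for
    // external plotting. All parameters are illustrative assumptions.
    #include <armadillo>
    #include <cmath>

    // Standard 1-D Gaussian density.
    double Gaussian(const double x, const double mean, const double stddev)
    {
      const double z = (x - mean) / stddev;
      return std::exp(-0.5 * z * z) /
          (stddev * std::sqrt(2.0 * std::acos(-1.0)));
    }

    int main()
    {
      const double alpha1 = 0.5, alpha2 = 0.5;  // Hand-picked weights.
      const double phi = std::acos(-1.0) / 4.0; // Constant phase (pi / 4).

      arma::vec xs = arma::linspace(-6.0, 6.0, 500);
      arma::vec ps(xs.n_elem);
      for (size_t i = 0; i < xs.n_elem; ++i)
      {
        const double g1 = Gaussian(xs(i), -1.5, 1.0);
        const double g2 = Gaussian(xs(i),  1.5, 1.0);
        // Classical mixture terms plus the interference term.
        ps(i) = alpha1 * g1 + alpha2 * g2 +
            2.0 * std::sqrt(alpha1 * alpha2 * g1 * g2) * std::cos(phi);
      }

      // Save (x, P(x)) pairs as CSV for plotting with matplotlib.
      arma::mat out = arma::join_rows(xs, ps);
      out.save("qgmm_density.csv", arma::csv_ascii);
      return 0;
    }

Loading qgmm_density.csv from matplotlib is then just a couple of lines of numpy.loadtxt and pyplot.plot.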
< sumedhghaisas>
Okay, I need to log out for some time. I will be back in 20 minutes. :)
< KimSangYeon-DGU>
Yeah
< KimSangYeon-DGU>
Thanks for the meeting!!
< KimSangYeon-DGU>
sumedhghaisas: I missed one!
jeffin143 has quit [Ping timeout: 252 seconds]
< KimSangYeon-DGU>
I think the reason the paper uses a sum is because it used log-probability and took the exp.
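If that guess is right, the sum would come from the standard log-likelihood identity (a hedged reading; the paper does not state this directly):

    \log \prod_{i=1}^{N} P(x_i) = \sum_{i=1}^{N} \log P(x_i)

which ties the sum to the data points; the bare density being plotted involves no such sum.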
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
< KimSangYeon-DGU>
I'll give it a shot! Good luck with the class you teach :)
vivekp has quit [Ping timeout: 246 seconds]
< sumedhghaisas>
KimSangYeon-DGU: I don't think I understood that right. Where is the log involved?
jeffin143 has joined #mlpack
< KimSangYeon-DGU>
sumedhghaisas: Well, it's my guess but it's not mentioned directly.
< sumedhghaisas>
Ahh, I see. But when we are not taking the dataset into consideration, the plain distribution has to be integrated to find the total area under the curve
< KimSangYeon-DGU>
Ahh, Yeah :)
KimSangYeon-DGU has quit [Quit: Page closed]
KimSangYeon-DGU has joined #mlpack
gmanlan has joined #mlpack
< lozhnikov>
jeffin143: Hi, I am still looking through #1814. I hope I'll finish in a couple of hours.
gmanlan has quit [Ping timeout: 256 seconds]
gmanlan has joined #mlpack
< jeffin143>
lozhnikov: Sure.
jeffin143 has quit [Ping timeout: 250 seconds]
sreenik has joined #mlpack
jeffin143 has joined #mlpack
< jeffin143>
lozhnikov: If I have to split a string into characters, for example "jef" into <j><e><f>, none of the standard tokenizers does it. How should I handle that case?
< zoq>
Couldn't you use operator[]? Or maybe the string you'd like to parse is more complex?
< jeffin143>
zoq: I am currently using boost::tokenizer
< jeffin143>
I want a way such that if the user passes '@', it tokenizes using that; if the user doesn't pass anything, it tokenizes into characters; and if the user passes '@-+', it tokenizes using '@-+'
< jeffin143>
Basically a template version
< zoq>
What happens if you pass nothing as the delimiter?
< zoq>
You could use boost for the case where the user provides a delimiter, and standard C++ for the other case.
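A minimal sketch of that suggestion; the Split function and its signature are made up for illustration, not an existing mlpack API. boost::tokenizer handles the delimiter case, and a plain loop handles the per-character fallback.

    #include <boost/tokenizer.hpp>
    #include <iostream>
    #include <string>
    #include <vector>

    // Split 'input' on any of the characters in 'delimiters'; if no
    // delimiters are given, split into individual characters.
    std::vector<std::string> Split(const std::string& input,
                                   const std::string& delimiters = "")
    {
      std::vector<std::string> tokens;
      if (delimiters.empty())
      {
        // No delimiter given: every character becomes its own token.
        for (const char c : input)
          tokens.emplace_back(1, c);
      }
      else
      {
        // Delimiters given: let boost::tokenizer do the splitting.
        boost::char_separator<char> sep(delimiters.c_str());
        boost::tokenizer<boost::char_separator<char>> tok(input, sep);
        tokens.assign(tok.begin(), tok.end());
      }
      return tokens;
    }

    int main()
    {
      for (const auto& t : Split("jef"))          // -> <j><e><f>
        std::cout << "<" << t << ">";
      std::cout << std::endl;
      for (const auto& t : Split("a@b-c", "@-+")) // -> <a><b><c>
        std::cout << "<" << t << ">";
      std::cout << std::endl;
      return 0;
    }

Note that boost::char_separator treats each character of "@-+" as a separate delimiter, so Split("a@b-c", "@-+") splits on '@', '-', and '+' individually rather than on the literal string "@-+".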