ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
abernauer has joined #mlpack
< abernauer> rcurtin: Yeah, I will take your advice and go back to the hand-written C approach. Rcpp's attributes feature and the compiler were having issues converting a reference to an Armadillo matrix to a SEXP, a pointer to a SEXPREC C struct (essentially a binary tree), which makes sense.
< abernauer> Dealing with memory and garbage collection might come up again, but I will worry about that later.
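(The hand-written approach abernauer describes would look roughly like the sketch below: a .Call()-style entry point that hands an R matrix to Armadillo directly, bypassing Rcpp attributes. This is a minimal illustration, not abernauer's actual code; the function name arma_scale and the scaling operation are made up for the example.)

    // Hypothetical hand-written bridge between R's C API and Armadillo.
    #include <Rinternals.h>
    #include <armadillo>

    extern "C" SEXP arma_scale(SEXP xSEXP, SEXP factorSEXP)
    {
      // Wrap the R matrix's memory in an arma::mat without copying
      // (copy_aux_mem = false, strict = true).
      arma::mat x(REAL(xSEXP), Rf_nrows(xSEXP), Rf_ncols(xSEXP),
                  false, true);

      // Allocate the result as an R object so R's garbage collector
      // owns it; PROTECT keeps it alive while we fill it.
      SEXP out = PROTECT(Rf_allocMatrix(REALSXP, x.n_rows, x.n_cols));
      arma::mat result(REAL(out), x.n_rows, x.n_cols, false, true);

      result = x * Rf_asReal(factorSEXP);

      UNPROTECT(1);
      return out;
    }

(From R, after compiling this into a shared library, it would be called as .Call("arma_scale", m, 2.0).)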
xiaohong has joined #mlpack
KimSangYeon-DGU has quit [Remote host closed the connection]
< rcurtin> abernauer: sounds good
< abernauer> rcurtin: I'm interested in working on that issue with converting methods to call by value, but first I'm going to see how this week goes before committing to that. It's a decent opportunity to improve my C++11 knowledge, though.
< rcurtin> sounds good, happy to review a PR when it's ready
abernauer has quit [Remote host closed the connection]
Neo22 has joined #mlpack
Neo22 has left #mlpack []
jeffin143 has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
< lozhnikov> jeffin143: Could you add the 10th blog post?
sumedhghaisas has joined #mlpack
xiaohong has quit [Ping timeout: 246 seconds]
KimSangYeon-DGU has joined #mlpack
< KimSangYeon-DGU> sumedhghaisas: Hi Ghaisas, I sent a message about the research documents and videos on Hangouts; could you please check it?
vivekp has quit [Ping timeout: 245 seconds]
< sumedhghaisas> KimSangYeon-DGU: Hey Kim
< KimSangYeon-DGU> Hey!
< sumedhghaisas> I am just going through the results of changing the distance.
< sumedhghaisas> So changing the distance does affect the training, right?
< KimSangYeon-DGU> Right
< sumedhghaisas> Even if the phi is 180?
< KimSangYeon-DGU> Yeah
< sumedhghaisas> So in Test case 2... the final phi after training is around 90?
< KimSangYeon-DGU> However, when phi is 180, the initial positions should be close to each center of the observations.
< KimSangYeon-DGU> Wait a moment.
< KimSangYeon-DGU> When phi is 0, 0, the final phi is 0; when phi is 45, -45, the final phi is 30.31, -30.31; and when phi is 90, -90, the final phi is 97, -97.
< KimSangYeon-DGU> There are three initial values of phi per test case.
< KimSangYeon-DGU> I wrote the values in Appendix C-3 (phi).
< sumedhghaisas> Wait, I am confused. Why are there 2 values for the 2 clusters? I thought we modeled them as a single value.
< KimSangYeon-DGU> Ahh, in equation (8) in the original paper, it's represented as a subtraction.
< KimSangYeon-DGU> So, I wrote the code like that...
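(The subtraction in question enters through the interference term of the quantum mixture. A sketch of the two-cluster form, in my own notation rather than a quote of equation (8): G_k are the Gaussian densities, alpha_k the mixing amplitudes, and phi_1, phi_2 the per-cluster phases.)

    P(x) = \alpha_1^2 G_1(x) + \alpha_2^2 G_2(x)
           + 2\,\alpha_1 \alpha_2 \sqrt{G_1(x)\,G_2(x)}\,\cos(\phi_1 - \phi_2)

(Only the difference phi_1 - phi_2 matters, which is why the code can carry one phi per cluster while the model effectively trains a single phase.)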
< sumedhghaisas> I see...
< KimSangYeon-DGU> Ahh, in the previous discussion, you mentioned the subtraction.
< KimSangYeon-DGU> So, I wrote the code like that.
< KimSangYeon-DGU> Hmm.. am I wrong??...
< sumedhghaisas> Ahh no, that is fine, no problem.
< sumedhghaisas> So I am looking at Appendix C-3 (phi).
< KimSangYeon-DGU> Yeah.
< sumedhghaisas> Could you train case d more, so that the phi settles?
< sumedhghaisas> I think the phi is still moving in the graph.
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> I'll do that.
< sumedhghaisas> Hmm... these are interesting results, but hard to understand.
< sumedhghaisas> So they settled only when the distance was too big.
< KimSangYeon-DGU> Yeah, when the initial phi is 180.
< sumedhghaisas> But then the means were too close to the clusters.
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> When phi is 0 or 90, it can find the clusters correctly.
< KimSangYeon-DGU> But...
< KimSangYeon-DGU> Phi 180 couldn't.
< KimSangYeon-DGU> From what I observed, the clusters tend to be close to each other when phi is 180.
< KimSangYeon-DGU> I guess the phi represents the cohesion of the clusters.
< sumedhghaisas> In the next research, with the crazy dataset as we call it,
< sumedhghaisas> the results are interesting:
< sumedhghaisas> when the phi is 0, the clusters are further apart.
< KimSangYeon-DGU> I'm really surprised at your idea.
< KimSangYeon-DGU> Wait a moment
< sumedhghaisas> Although the objective function is going negative.
< sumedhghaisas> We need to fix that.
< sumedhghaisas> Can you analyze what value is making it negative?
< sumedhghaisas> The constraint seems to be positive,
< sumedhghaisas> so it's the log-likelihood that is negative.
< sumedhghaisas> That is strange.
< KimSangYeon-DGU> Ahh, yes
< KimSangYeon-DGU> I suspect the constraint, because it is so jagged in Appendix A-1.
< KimSangYeon-DGU> I'll look into it.
< KimSangYeon-DGU> Ahh, I see, it is because of the unconstrained optimization.
< KimSangYeon-DGU> In Appendix C, I increased the lambda, and then the NLL isn't negative.
< KimSangYeon-DGU> The only difference between A and C is the lambda.
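(A rough sketch of the objective's shape, assuming from the discussion that it is a negative log-likelihood plus a lambda-weighted penalty on the normalization constraint; the exact penalty form is an assumption, not a quote from the code:)

    \mathcal{O}(\theta) = -\sum_{n=1}^{N} \log P(x_n \mid \theta)
                          + \lambda\, C(\theta)^2

(Here C(theta) measures how far the mixture is from a proper, normalized density. If any P(x_n) rises above 1, its log term is positive and the NLL goes negative, which matches the symptom diagnosed just below; a larger lambda pushes the mixture back toward a valid density, matching the Appendix C observation above.)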
< sumedhghaisas> I see, we didn't constrain the probabilities themselves;
< sumedhghaisas> if they are above 1, the objective function will be negative.
< KimSangYeon-DGU> I agree
< sumedhghaisas> Can you try clipping the gradients so that the values of the probabilities do not go above 1?
< sumedhghaisas> Basically, check after each update if the probabilities are more than 1;
< sumedhghaisas> if they are, clip them back to 1.
< sumedhghaisas> It's a little harder to do,
< sumedhghaisas> but it's worth a try.
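(A minimal sketch of the clipping suggestion in C++/Armadillo, assuming the probabilities live in an arma::vec and are updated by a plain gradient step; UpdateWithClipping and the gradient argument are stand-ins, not the project's actual code:)

    #include <armadillo>

    // One gradient-descent update followed by the clipping step
    // suggested above: anything outside [0, 1] is pulled back in.
    void UpdateWithClipping(arma::vec& probabilities,
                            const arma::vec& gradient,
                            const double stepSize)
    {
      // Ordinary gradient step.
      probabilities -= stepSize * gradient;

      // Check after the update whether any probability exceeds 1
      // (or dropped below 0); arma::clamp bounds every element.
      probabilities = arma::clamp(probabilities, 0.0, 1.0);
    }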
< KimSangYeon-DGU> Yeah, I'll try.
< sumedhghaisas> This is one way.
< KimSangYeon-DGU> Yeah
< sumedhghaisas> Another way is to add more constraints:
< sumedhghaisas> basically, add each probability as a Lagrangian constraint.
< KimSangYeon-DGU> Ahh, right
< sumedhghaisas> But that won't make a difference, as each of them is already in the Lagrangian.
< sumedhghaisas> Hmmm...
< sumedhghaisas> Okay, try clipping first; let's try to figure out more on this.
sumedhghaisas has quit [Quit: Ping timeout (120 seconds)]
< favre49> zoq, KimSangYeon-DGU: I'm still having difficulty comprehending the code and translating it to C++; it makes no sense to me. Can either of you help me?
< KimSangYeon-DGU> favre49: Sure
< KimSangYeon-DGU> Actually, I'm not familiar with the algorithm, but I can help you as much as I can.
< KimSangYeon-DGU> favre49: Is there any pull request about it?
favre4954 has joined #mlpack
< favre4954> KimSangYeon-DGU: No issues, it's just the code snippet I sent you earlier.
< KimSangYeon-DGU> favre4954: Yeah, is there any pull request about it in mlpack?
< favre4954> The paper doesn't explain how it finds the extreme points, so this is the only resource I have.
< favre4954> No, I haven't updated the current state of the code on that PR in a while.
< KimSangYeon-DGU> Ahh... Okay, I'll go through it again.