ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
travis-ci has joined #mlpack
< travis-ci>
robertohueso/mlpack#57 (pca_tree - 0fd7130 : Roberto Hueso Gomez): The build is still failing.
xiaohong has quit [Remote host closed the connection]
< rcurtin>
abernauer: so, what happens if you call CLIRestoreSettings() with "Principal Components Analysis" as an argument?
< rcurtin>
I'm looking at r_util.R, r_util.h, and r_util.cpp
< rcurtin>
the h/cpp files look fine to me
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
abernauer has joined #mlpack
xiaohong has quit [Remote host closed the connection]
< abernauer>
rcurtin: Passed "Principal Components Analysis" and got the following: terminate called after throwing an instance of 'std::invalid_argument'; what(): no settings stored under the name '�'; Aborted (core dumped).
xiaohong has joined #mlpack
< rcurtin>
if it printed a unicode character in the error message, then it looks like the string is not being passed back and forth correctly
< rcurtin>
you could consider adding some debugging to the C++ implementation of CLIRestoreSettings() to print the string that was received
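A minimal sketch of that kind of debug print, assuming the C++ side of the binding receives the settings name as a std::string (the helper name and signature here are hypothetical, not mlpack's actual code):

    #include <iostream>
    #include <string>

    // Hypothetical helper: dump the raw bytes the C++ side actually received.
    // A mangled pointer, length, or encoding from the R side shows up here as
    // garbage or truncation.
    void DebugPrintSettingsName(const std::string& name)
    {
      std::cerr << "received " << name.size() << " bytes:";
      for (const unsigned char c : name)
        std::cerr << ' ' << std::hex << (int) c << std::dec;
      std::cerr << " -> '" << name << "'" << std::endl;
    }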
< abernauer>
Ok, I will do that. Any chance the bit architecture of the R build could be contributing to the problem?
abernauer has quit [Remote host closed the connection]
abernauer has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
< rcurtin>
abernauer: I'm not familiar enough with R to say, but in either case, getting some better printed output could help lead us in the right direction
abernauer has quit [Ping timeout: 260 seconds]
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
< sumedhghaisas>
KimSangYeon-DGU: just give me 2 mins
< KimSangYeon-DGU>
Yeah :)
< sumedhghaisas>
Hey Kim. How's it going? Sorry for the delay
< KimSangYeon-DGU>
No worries!
< sumedhghaisas>
I looked at the document. The comparison looks amazing. Good effort there
< KimSangYeon-DGU>
Oh, thanks
< KimSangYeon-DGU>
I applied the augmented Lagrangian method that I mentioned
< KimSangYeon-DGU>
it's interesting.
< sumedhghaisas>
ahh that's in the book, right?
< sumedhghaisas>
are these experiments based on the augmented Lagrangian method?
< sumedhghaisas>
or our previous method?
< sumedhghaisas>
ahh sorry, it is the augmented Lagrangian
< sumedhghaisas>
great.
< KimSangYeon-DGU>
Thanks!
< sumedhghaisas>
just for clarity. Did you try changing the initial phi?
< KimSangYeon-DGU>
Yeah
< KimSangYeon-DGU>
0 and 90
< KimSangYeon-DGU>
however, the results were bad when the initial phi was 90.
< KimSangYeon-DGU>
for some cases
< KimSangYeon-DGU>
Almost all are good, but for one specific case, the result was bad when phi is 90.
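For context on why the initial phi matters, assuming the usual two-component quantum mixture form (this is the standard squared-magnitude identity, not code or results from the project):

    p(x) \propto \left| \sqrt{\pi_1 \mathcal{N}_1(x)} + e^{i\phi} \sqrt{\pi_2 \mathcal{N}_2(x)} \right|^2
         = \pi_1 \mathcal{N}_1(x) + \pi_2 \mathcal{N}_2(x)
         + 2 \sqrt{\pi_1 \pi_2 \mathcal{N}_1(x) \mathcal{N}_2(x)} \, \cos\phi

so phi = 90 zeroes the interference term, while phi near 180 makes it maximally negative, which pushes density away from the region where the two components overlap.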
< sumedhghaisas>
I see. When you say bad were they converging?
< sumedhghaisas>
and did you see if the constraint was bounded when the results were bad?
< KimSangYeon-DGU>
I'll check it
< KimSangYeon-DGU>
Wait a moment.
< KimSangYeon-DGU>
Can you give me 3 mins?
< KimSangYeon-DGU>
Actually, I think it got deleted...
< KimSangYeon-DGU>
I'll reproduce it now
< KimSangYeon-DGU>
Sorry..
< sumedhghaisas>
no worries. Just something to keep in mind. Did you observe that the augmented Lagrangian method is more stable than the normal method?
< KimSangYeon-DGU>
Hmm, actually, with the augmented Lagrangian method it is easy to set the initial lambda
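For reference, the textbook augmented Lagrangian for minimizing f(\theta) subject to an equality constraint g(\theta) = 0 (how QGMM's normalization constraint maps onto g is left to the project's writeup):

    \mathcal{L}_A(\theta, \lambda) = f(\theta) + \lambda \, g(\theta) + \frac{\mu}{2} \, g(\theta)^2,
    \qquad \lambda \leftarrow \lambda + \mu \, g(\theta)

with the lambda update applied after each approximate minimization over theta; the quadratic penalty is what makes the choice of the initial lambda less delicate than in the plain Lagrangian method.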
< sumedhghaisas>
I also expected that
< KimSangYeon-DGU>
however, I made some data for a comparison of them
< KimSangYeon-DGU>
ah sorry
< KimSangYeon-DGU>
I didn't
< KimSangYeon-DGU>
so, I think I should make one.
< sumedhghaisas>
later, could you also put this in a small document? just a single experiment with both the normal and augmented methods would do
< sumedhghaisas>
ahh yes... Thanks :)
< KimSangYeon-DGU>
Yeah, definitely
< sumedhghaisas>
so in this research you have documented the edge cases. It would be nice if we also just ran QGMM and GMM on a bunch of normal cases and showed which one tends to do better
< KimSangYeon-DGU>
Ah, okay
< sumedhghaisas>
Like take all the experiments from the 'Validity of objective function' document and run them for QGMM and GMM
< sumedhghaisas>
maybe add some more
< KimSangYeon-DGU>
I agree
< KimSangYeon-DGU>
Thanks for pointing that out.
< sumedhghaisas>
and just report the percentage of cases in which they converged
< KimSangYeon-DGU>
Ahh
< KimSangYeon-DGU>
Okay
< sumedhghaisas>
I mean converged close to the initial clusters
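A rough sketch of that kind of convergence count on the GMM side, using mlpack's GMM class (the data generator, true centers, and the 0.5 threshold are made up for illustration; the QGMM side would come from the project's experimental code instead):

    #include <mlpack/core.hpp>
    #include <mlpack/methods/gmm/gmm.hpp>
    #include <algorithm>
    #include <iostream>

    int main()
    {
      const size_t runs = 100;
      size_t converged = 0;

      // Hypothetical true centers for two 2-d clusters: (0, 0) and (5, 5).
      arma::mat centers = { { 0.0, 5.0 },
                            { 0.0, 5.0 } }; // one column per cluster

      for (size_t r = 0; r < runs; ++r)
      {
        // Sample 500 unit-covariance points around each center.
        arma::mat data = arma::join_rows(
            arma::randn(2, 500) + arma::repmat(centers.col(0), 1, 500),
            arma::randn(2, 500) + arma::repmat(centers.col(1), 1, 500));

        mlpack::gmm::GMM gmm(2 /* components */, 2 /* dimensions */);
        gmm.Train(data);

        // Count the run as converged if each learned mean lies within 0.5
        // of some true center (a made-up threshold for illustration).
        bool ok = true;
        for (size_t i = 0; i < 2; ++i)
        {
          const double d = std::min(
              arma::norm(gmm.Component(i).Mean() - centers.col(0)),
              arma::norm(gmm.Component(i).Mean() - centers.col(1)));
          if (d > 0.5)
            ok = false;
        }
        if (ok)
          ++converged;
      }

      std::cout << "converged in " << converged << " / " << runs
                << " runs" << std::endl;
    }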
< sumedhghaisas>
Ahh, and another thing... when the initial phi was zero, the final phi stays zero, right?
< KimSangYeon-DGU>
Yeah
< sumedhghaisas>
but when you put initial phi 90 what happens?
< sumedhghaisas>
does it diverge?
< KimSangYeon-DGU>
Some cases are good, but one specific case is bad
< sumedhghaisas>
which specific case?
< KimSangYeon-DGU>
Yeah, despite converging
< KimSangYeon-DGU>
I'll upload it
< KimSangYeon-DGU>
The two clusters are overlaid
< KimSangYeon-DGU>
It seems to be hard for them to move away from each other
< KimSangYeon-DGU>
when the phi is 90
< KimSangYeon-DGU>
So, it increased to near 180
< sumedhghaisas>
oooh okay. but they converged to both clusters??
< Toshal>
If we look at the second solution, it can be done. However, we will need to add some code to every existing visitor. Just let me know what you think.
< rcurtin>
Toshal: sorry for the slow response on #1975, but it's merged now so that should solve everyone's issue :)
< Toshal>
rcurtin: No worries.
< rcurtin>
:)
< rcurtin>
I realized two things yesterday:
< rcurtin>
1) the build matrix on Jenkins doesn't use up-to-date versions of all the dependencies
< rcurtin>
2) all the tags for releases are of the form mlpack-x.y.z, but then GitHub packages these as mlpack-mlpack-x.y.z.tar.gz
< rcurtin>
(same with ensmallen)
< rcurtin>
so I'm rebuilding the build matrix docker images with new versions of gcc/boost/armadillo,
< rcurtin>
but I'm not totally sure what the effects would be of deleting and re-tagging every mlpack release (also it would be tedious...)
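For what it's worth, re-tagging a single release so that the generated tarball comes out as mlpack-x.y.z.tar.gz would look roughly like this (a sketch, using 3.1.1 as an example version; old download links would still break, which is part of the concern above):

    git tag 3.1.1 mlpack-3.1.1^{}            # new tag at the same commit ( ^{} peels an annotated tag)
    git push origin 3.1.1
    git tag -d mlpack-3.1.1                  # drop the old tag locally...
    git push origin :refs/tags/mlpack-3.1.1  # ...and from the remote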
< rcurtin>
jeffin143: awesome, is that something you will have access to? I've always wondered how powerful they really are when training deep neural networks, etc.
< zoq>
Toshal: Thanks for looking into the issue; I like the solution. If you need help with the adjustments to the visitor classes, please let me know.
< zoq>
rcurtin: Do you think we should re-tag the releases or just use a correct tag for the next ones?
ImQ009 has quit [Quit: Leaving]
< zoq>
jeffin143: Looked into the CMake file download; it turns out you can only turn off the progress output, but you can't adjust the step size.
< zoq>
jeffin143: I think we all like to see at least some progress.