ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
travis-ci has joined #mlpack
< travis-ci> robertohueso/mlpack#57 (pca_tree - 0fd7130 : Roberto Hueso Gomez): The build is still failing.
travis-ci has left #mlpack []
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
< rcurtin> abernauer: so, what happens if you call CLIRestoreSettings() with "Principal Components Analysis" as an argument?
< rcurtin> I'm looking at r_util.R, r_util.h, and r_util.cpp
< rcurtin> the h/cpp files look fine to me
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
abernauer has joined #mlpack
xiaohong has quit [Remote host closed the connection]
< abernauer> rcurtin: Passed "Principal Components Analysis" and got the following: terminate called after throwing an instance of 'std::invalid_argument' what(): no settings stored under the name '�' Aborted (core dumped).
xiaohong has joined #mlpack
< rcurtin> if it printed a unicode character in the error message, then it looks like the string is not being passed back and forth correctly
< rcurtin> you could consider adding some debugging to the C++ implementation of CLIRestoreSettings() to print the string that was received
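(A minimal sketch of the kind of debug print being suggested here; the function name matches the binding call discussed above, but the actual signature in r_util.cpp may differ, so treat this as illustrative only.)

    // Hypothetical debugging inside the C++ side of CLIRestoreSettings();
    // the real r_util.cpp signature may differ.
    #include <iostream>
    #include <string>

    void CLIRestoreSettings(const std::string& programName)
    {
      // Print exactly what arrived from R, plus its byte length, to spot
      // string-marshalling problems before the settings lookup throws.
      std::cerr << "CLIRestoreSettings() received: '" << programName << "' ("
                << programName.size() << " bytes)" << std::endl;

      // ... the existing lookup of the stored settings would follow here ...
    }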
< abernauer> Ok I will do that. Any chance the R bit architecture could be contributing to the problem?
abernauer has quit [Remote host closed the connection]
abernauer has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
< rcurtin> abernauer: I'm not familiar enough with R to say, but in either case, getting some better printed output could help lead in the right direction
abernauer has quit [Ping timeout: 260 seconds]
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit []
< jenkins-mlpack2> Project docker mlpack nightly build build #416: STILL UNSTABLE in 3 hr 50 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/416/
KimSangYeon-DGU has joined #mlpack
KimSangYeon-DGU has quit [Ping timeout: 260 seconds]
KimSangYeon-DGU has joined #mlpack
KimSangYeon-DGU has quit [Ping timeout: 260 seconds]
xiaohong has joined #mlpack
sumedhghaisas has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
< sakshamB> ShikharJ: yes I have written the template code for spectral norm so far. Will open a PR in two or three days.
KimSangYeon-DGU has joined #mlpack
xiaohong has quit [Remote host closed the connection]
sumedhghaisas has left #mlpack []
sumedhghaisas has joined #mlpack
< KimSangYeon-DGU> sumedhghaisas: Hey Ghaisas, I'm ready.
< sumedhghaisas> KimSangYeon-DGU: just give me 2 mins
< KimSangYeon-DGU> Yeah :)
< sumedhghaisas> Hey Kim. How's it going? Sorry for the delay
< KimSangYeon-DGU> No worries!
< sumedhghaisas> I looked at the document. The comparison looks amazing. Good effort there
< KimSangYeon-DGU> Oh, thanks
< KimSangYeon-DGU> I applied the augmented Lagrangian method that I mentioned
< KimSangYeon-DGU> it's interesting.
< sumedhghaisas> ahh that's in the book right?
< sumedhghaisas> are these experiments based on the augmented lagrangian method
< sumedhghaisas> ?
< sumedhghaisas> or our previous method?
< sumedhghaisas> ahh sorry it is augmented lagrangian
< sumedhghaisas> great.
< KimSangYeon-DGU> Thanks!
< sumedhghaisas> just for clarity. Did you try changing the initial phi?
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> 0 and 90
< KimSangYeon-DGU> however the results were bad when the initial phi was 90.
< KimSangYeon-DGU> for some cases
< KimSangYeon-DGU> Almost all are good, but for a specific case the result was bad when phi was 90.
< sumedhghaisas> I see. When you say bad were they converging?
< sumedhghaisas> and did you see if the constraint was bounded when the results were bad?
< KimSangYeon-DGU> I'll check it
< KimSangYeon-DGU> Wait a moment.
< KimSangYeon-DGU> Can you give me 3 mins?
< KimSangYeon-DGU> Actually, I think it's deleted...
< KimSangYeon-DGU> I'll reproduce it now
< KimSangYeon-DGU> Sorry..
< sumedhghaisas> no worries. Just something to keep in mind. Did you observe that the augmented Lagrangian method is more stable than the normal method?
< KimSangYeon-DGU> Hmm, actually, with the augmented Lagrangian method it's easier to set the initial lambda
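(For reference, the textbook augmented Lagrangian for minimizing f(\theta) subject to an equality constraint c(\theta) = 0 has the form below; whether the experiments use exactly this form is an assumption.)

    L_\rho(\theta, \lambda) = f(\theta) + \lambda\, c(\theta) + \frac{\rho}{2}\, c(\theta)^2,
    \qquad \lambda \leftarrow \lambda + \rho\, c(\theta)

The quadratic penalty keeps the constraint violation bounded even with a rough initial \lambda, which is consistent with the observation above that the initial lambda is easier to choose.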
< sumedhghaisas> I also expected that
< KimSangYeon-DGU> however, I made some data for comparing them
< KimSangYeon-DGU> ah sorry
< KimSangYeon-DGU> I didn't
< KimSangYeon-DGU> so, I think I should make one.
< sumedhghaisas> later could you put this also in a small document? just a single experiment with both normal and augmented would do
< sumedhghaisas> ahh yes... Thanks :)
< KimSangYeon-DGU> Yeah, definitely
< sumedhghaisas> so in this research you have documented the edge cases. It would be nice if we also just perform QGMM and GMM on a bunch of normal cases and show which one tends to do better
< KimSangYeon-DGU> Ah, okay
< sumedhghaisas> Like take all experiments from Validity of objective function and run them for QGMM and GMM
< sumedhghaisas> maybe add some more
< KimSangYeon-DGU> I agree
< KimSangYeon-DGU> Thanks for pointing that out.
< sumedhghaisas> and just report the percentage of cases in which they converged
< KimSangYeon-DGU> Ahh
< KimSangYeon-DGU> Okay
< sumedhghaisas> I mean converged close to the initial clusters
< sumedhghaisas> Ahh and another thing... when the initial phi is zero, the final phi stays zero, right?
< KimSangYeon-DGU> Yeah
< sumedhghaisas> but when you put initial phi 90 what happens?
< sumedhghaisas> does it diverge?
< KimSangYeon-DGU> Some cases are good, but one specific case is bad
< sumedhghaisas> which specific case?
< KimSangYeon-DGU> Yeah, despite converging
< KimSangYeon-DGU> I'll upload it
< KimSangYeon-DGU> The two clusters are overlaid
< KimSangYeon-DGU> It seems hard for them to move away from each other
< KimSangYeon-DGU> when phi is 90
< KimSangYeon-DGU> So, it increased to near 180
< sumedhghaisas> oooh okay. but they converged to both clusters??
< KimSangYeon-DGU> Yeah, it's converged
< KimSangYeon-DGU> But they converged at some strange points.
< KimSangYeon-DGU> When cos(phi) is negative, they tend to be close to each other.
< KimSangYeon-DGU> What if we set the inverse of phi?
< sumedhghaisas> okay. but its still better than GMM right?
< KimSangYeon-DGU> Yeah
< sumedhghaisas> what do you mean inverse of Phi?
< KimSangYeon-DGU> QGMM has a wider possibility of being trained well
< KimSangYeon-DGU> Actually, when the two clusters were close, there are cases where they should move apart.
< KimSangYeon-DGU> So, I thought we could set phi negative (oops sorry, by inverse I meant negative)
< KimSangYeon-DGU> Hmm. but I'm not sure it's a good way.
< sumedhghaisas> yeah but it will still remain a free variable
< sumedhghaisas> but I think we can prove a point that it may be better than GMM in certain cases
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> d
< KimSangYeon-DGU> Oops... sorry
< sumedhghaisas> okay so after you perform normal QGMM vs GMM tasks I think we can say the comparison part is over
< KimSangYeon-DGU> Yeah
< sumedhghaisas> Although have you tried running GMM on the crazy dataset to see what happens?
< sumedhghaisas> that might be a good experiment to try
< KimSangYeon-DGU> Ahh, yeah, I'll try
< sumedhghaisas> Another thing I am thinking is there might be a fixed point iteration for the Phi update, but it might take some time to figure out
< sumedhghaisas> seems like phi is a very important variable here
< sumedhghaisas> which needs to be updated very carefully
< KimSangYeon-DGU> Definitely, actually, I spent a lot of time figuring out the variation of phi
< KimSangYeon-DGU> It was tricky
< sumedhghaisas> continuously updating Phi like we are doing is not sufficient, as Phi changes the objective very rapidly
< sumedhghaisas> but thats an extended work for sure
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> Sumedh, would it be a bad idea to use distance when updating phi?
< sumedhghaisas> what do you mean distance?
< sumedhghaisas> momentum?
< KimSangYeon-DGU> Distance between clusters.
< sumedhghaisas> ahh we might not have that information every time
< KimSangYeon-DGU> Ahh
< KimSangYeon-DGU> Right
< sumedhghaisas> our method should be generic enough to be used everywhere
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> I agree
< sumedhghaisas> Okay lets try QGMM vs GMM on crazy dataset as well and see if we can compare it
< KimSangYeon-DGU> Yeah
< sumedhghaisas> thats gonna be tricky i think
< sumedhghaisas> as you would need to run QGMM with different Phi's to get ideal results
< KimSangYeon-DGU> Ahh, yeah, I'll keep that in mind
< sumedhghaisas> okay. We have good results so far. I'd say our effort is going in the correct direction.
< KimSangYeon-DGU> Ah, thanks!
< sumedhghaisas> Was the book okay to read?
< KimSangYeon-DGU> Really, insightful
< sumedhghaisas> it's very interesting, right?
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> Very very interesting
< sumedhghaisas> ahh yes don't forget the normal vs augmented comparison
< sumedhghaisas> small one will also do
< KimSangYeon-DGU> Okay
< KimSangYeon-DGU> I think the update step is an important part
< KimSangYeon-DGU> when comparing them.
jeffin143 has joined #mlpack
< jeffin143> New dgx workstation :)
sumedhghaisas has quit [Ping timeout: 260 seconds]
jeffin143 has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
ImQ009 has joined #mlpack
vivekp has joined #mlpack
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
KimSangYeon-DGU has quit [Remote host closed the connection]
< jenkins-mlpack2> Yippee, build fixed!
< jenkins-mlpack2> Project mlpack - git commit test build #214: FIXED in 47 min: http://ci.mlpack.org/job/mlpack%20-%20git%20commit%20test/214/
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#7823 (master - 1174e84 : Ryan Curtin): The build passed.
travis-ci has left #mlpack []
< Toshal> ShikharJ: Apologies for the delay in posting the blog post.
< Toshal> zoq: I was working on the workaround for adding more layers in boost::variant. I found this https://stackoverflow.com/questions/34702612/how-to-increase-the-number-of-types-that-can-handled-by-boostvariant.
< Toshal> If we look at the second solution, it can be done. However, we will need to add some code to every existing visitor. Just let me know what you think.
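(A minimal sketch of the nested-variant workaround, which may be what the "second solution" refers to: the overflow types live in an inner variant that occupies one slot of the outer variant, and every existing visitor gains one extra overload that re-applies itself to the inner variant. This is illustrative only, not mlpack's actual layer/visitor code.)

    // Illustrative only; the type and visitor names here are made up.
    #include <boost/variant.hpp>
    #include <iostream>
    #include <string>

    // Extra types that no longer fit in the outer variant.
    using MoreTypes = boost::variant<double, std::string>;
    // Outer variant; the inner variant takes up a single slot.
    using LayerTypes = boost::variant<int, char, MoreTypes>;

    struct PrintVisitor : public boost::static_visitor<void>
    {
      // The extra code each existing visitor would need: forward the nested
      // variant back through the same visitor.
      void operator()(const MoreTypes& inner) const
      {
        boost::apply_visitor(*this, inner);
      }

      template<typename T>
      void operator()(const T& t) const { std::cout << t << std::endl; }
    };

    int main()
    {
      LayerTypes a = 3;
      LayerTypes b = MoreTypes(std::string("nested"));
      boost::apply_visitor(PrintVisitor(), a);  // prints 3
      boost::apply_visitor(PrintVisitor(), b);  // prints "nested"
    }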
< rcurtin> Toshal: sorry for the slow response on #1975, but it's merged now so that should solve everyone's issue :)
< Toshal> rcurtin: No worries.
< rcurtin> :)
< rcurtin> I realized two things yesterday:
< rcurtin> 1) the build matrix on Jenkins doesn't use up-to-date versions of all the dependencies
< rcurtin> 2) all the tags for releases are of the form mlpack-x.y.z, but then GitHub packages these as mlpack-mlpack-x.y.z.tar.gz
< rcurtin> (same with ensmallen)
< rcurtin> so I'm rebuilding the build matrix docker images with new versions of gcc/boost/armadillo,
< rcurtin> but I'm not totally sure what the effects would be of deleting and re-tagging every mlpack release (also it would be tedious...)
< rcurtin> jeffin143: awesome, is that something you will have access to? I've always wondered how powerful they really are when training deep neural networks, etc.
< zoq> Toshal: Thanks for looking into the issue; I like the solution. If you need help with the adjustments to the visitor classes, please let me know.
< zoq> rcurtin: Do you think we should retag the release or just use a correct tag for the next ones?
ImQ009 has quit [Quit: Leaving]
< zoq> jeffin143: Looked into the CMake file download; it turns out you can only turn off the progress, but you can't adjust the step size.
< zoq> jeffin143: I think we all like to see at least some progress.
lozhnikov has quit [Quit: ZNC 1.7.3 - https://znc.in]
lozhnikov has joined #mlpack