ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 245 seconds]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
< jenkins-mlpack2> Project docker mlpack weekly build build #60: STILL UNSTABLE in 7 hr 5 min: http://ci.mlpack.org/job/docker%20mlpack%20weekly%20build/60/
karmabeach24 has joined #mlpack
karmabeach24 has quit [Ping timeout: 268 seconds]
< jenkins-mlpack2> Project docker mlpack nightly build build #412: STILL UNSTABLE in 3 hr 21 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/412/
xiaohong has joined #mlpack
ImQ009 has joined #mlpack
xiaohong_ has joined #mlpack
xiaohong has quit [Read error: No route to host]
xiaohon__ has joined #mlpack
xiaohong_ has quit [Ping timeout: 268 seconds]
KimSangYeon-DGU has joined #mlpack
KimSangYeon-DGU has quit [Ping timeout: 260 seconds]
xiaohon__ has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 246 seconds]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 245 seconds]
favre49 has joined #mlpack
< favre49> It turns out that the cause may be a shorted IC. It'll take at least two days to fix; I'll try to use someone else's system till then
favre49 has left #mlpack []
karmabeach24 has joined #mlpack
KimSangYeon-DGU has joined #mlpack
< zoq> favre49: Okay, thanks for the update.
karmabeach24 has quit [Ping timeout: 245 seconds]
< rcurtin> favre49: that sounds a lot better than it could be :)
karmabeach24 has joined #mlpack
sumedhghaisas has joined #mlpack
< sumedhghaisas> KimSangYeon-DGU: Hey Kim
< KimSangYeon-DGU> Hi!
< sumedhghaisas> How are things?
< KimSangYeon-DGU> Have you seen the message on Hangouts?
< KimSangYeon-DGU> I sent a link to the document about multi clusters
< sumedhghaisas> Ahh yes. I just went through that. Those results are complicated to analyze, though. What's your conclusion on that?
< KimSangYeon-DGU> So, the conclusion is that the initial phi matters, and the initial cluster means matter as well.
< KimSangYeon-DGU> and the probability calculation takes more time than GMM's
< sumedhghaisas> hmm.. Okay, let's look into that more later. Ahh yes, GMM. Did you run the preliminary experiments with GMM?
karmabeach24 has quit [Ping timeout: 272 seconds]
< KimSangYeon-DGU> Yes, I found an edge case for QGMM
< KimSangYeon-DGU> Wait a moment, I'll upload the result on drive
< sumedhghaisas> edge case of QGMM? or GMM?
< KimSangYeon-DGU> QGMM
xiaohong has joined #mlpack
< KimSangYeon-DGU> I ran GMM without an initial clustering algorithm like K-means, and I observed a case where GMM doesn't find the centers while QGMM does
< sumedhghaisas> I'm a little confused... I thought we were running the experiments with GMM
< sumedhghaisas> ohh... So not an edge case but a plus point :)
< KimSangYeon-DGU> Ahh
< KimSangYeon-DGU> sorry
< sumedhghaisas> no worries
< sumedhghaisas> that's an amazing result
< KimSangYeon-DGU> But the current QGMM is a bit slow in the training process
< KimSangYeon-DGU> GMM's probability time complexity is
< KimSangYeon-DGU> O(n), while QGMM's is O(n^2)
< KimSangYeon-DGU> I wrote the point as a drawback in the document.
< KimSangYeon-DGU> Of course, if we use a GPU when optimizing, I think it will be mitigated
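On the O(n) vs O(n^2) point: a GMM density is a single sum over clusters, while a QGMM-style density with phases picks up a term for every pair of clusters. A minimal sketch of that difference, assuming an interference form P(x) = sum_{i,j} sqrt(a_i a_j N_i(x) N_j(x)) cos(phi_i - phi_j); the function names are illustrative, not the project's code:

    import numpy as np
    from scipy.stats import multivariate_normal

    def gmm_density(x, weights, means, covs):
        # O(K): one Gaussian evaluation per cluster.
        return sum(w * multivariate_normal.pdf(x, m, c)
                   for w, m, c in zip(weights, means, covs))

    def qgmm_density(x, weights, means, covs, phis):
        # O(K^2): every pair of clusters contributes an interference term
        # weighted by cos(phi_i - phi_j).
        K = len(weights)
        N = [multivariate_normal.pdf(x, means[k], covs[k]) for k in range(K)]
        return sum(np.sqrt(weights[i] * weights[j] * N[i] * N[j])
                   * np.cos(phis[i] - phis[j])
                   for i in range(K) for j in range(K))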
xiaohong has quit [Ping timeout: 258 seconds]
< KimSangYeon-DGU> sumedhghaisas: Could you check it? https://drive.google.com/open?id=1Ja2Dezcpg_8IIrgrCNv_rl-E-ZgdrrLG
< KimSangYeon-DGU> Because I set the learning rate low, the training process is a bit long in QGMM
< KimSangYeon-DGU> 0.001
< sumedhghaisas> wow... this is super interesting
< sumedhghaisas> so the cluster slowly shifts to the other points
< KimSangYeon-DGU> Right
< sumedhghaisas> could you show me the phi graph for this?
< KimSangYeon-DGU> Yeah, wait a moment
< sumedhghaisas> maybe there is something there that we can analyze
< KimSangYeon-DGU> I uploaded
< KimSangYeon-DGU> You can check the Graphs for QGMM directory
< KimSangYeon-DGU> the initial phi_{k} are phi_{1} = 45 and phi_{2} = -45
< sumedhghaisas> oooooh this is super interesting
< KimSangYeon-DGU> oh..
< sumedhghaisas> so initially the angle was 90
< KimSangYeon-DGU> Yes
< sumedhghaisas> then it started to increase
< sumedhghaisas> which shows negative cos
< sumedhghaisas> that's why they became closer and closer
< KimSangYeon-DGU> Yes
< KimSangYeon-DGU> I agree, and I wrote the point in the document
< sumedhghaisas> but later the phi returned to 90 separating them again
< KimSangYeon-DGU> Ahh right
< KimSangYeon-DGU> I'm confused about that
< sumedhghaisas> can you try what happens in the same experiment if you initialize phi to 0 and 180?
< KimSangYeon-DGU> I'll find it, wait a moment
< sumedhghaisas> Thanks :)
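For the two-cluster case being discussed, the interference density assumed in the sketch above (not necessarily the project's exact objective) reduces to a single cross term, which makes the role of the phase difference explicit:

    P(x) = \alpha_1 N_1(x) + \alpha_2 N_2(x) + 2 \sqrt{\alpha_1 \alpha_2 N_1(x) N_2(x)} \cos(\phi_1 - \phi_2)

At |phi_1 - phi_2| = 90 degrees the cross term vanishes and the density is a plain mixture; past 90 degrees the cosine is negative, so the cross term subtracts from the density.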
travis-ci has joined #mlpack
< travis-ci> mlpack/ensmallen#406 (ensmallen-1.16.0 - b46986d : Ryan Curtin): The build passed.
< travis-ci> Change view : https://github.com/mlpack/ensmallen/compare/89bf4057a5aa^...b46986d13445
travis-ci has left #mlpack []
< KimSangYeon-DGU> I'm uploading
< KimSangYeon-DGU> but the experiments with phi 0 and 180 have a higher lambda of 1500, while the previous one with phi 90 has lambda 1.
< KimSangYeon-DGU> I'll also upload the experiment with phi 90 and lambda 1500
< sumedhghaisas> so we need more constraint
< sumedhghaisas> that's okay, but do they still converge with the higher lambda?
< sakshamB> ShikharJ: I am here.
< sumedhghaisas> I can see the phi 0 thing... that's very nice. Could you also upload the phi change for that?
< KimSangYeon-DGU> Yeah
< sumedhghaisas> If these experiments are correct, we need to work on the constraint optimization a little more
< sumedhghaisas> These experiments prove that our objective is good enough
< KimSangYeon-DGU> I can't find the experiment with phi 0, 180 and lambda 1
< KimSangYeon-DGU> Hmm.. I think I removed them
< ShikharJ> sakshamB: Great, let's start then.
< ShikharJ> Toshal: Are you here?
< KimSangYeon-DGU> sumedhghaisas: Currently, we use the constraint equation in the middle of page 4 of the paper.
< sakshamB> ShikharJ: alright. I think my work for regularizers PR and CGAN PR is almost complete.
< KimSangYeon-DGU> sumedhghaisas: How can we optimize it??
< sakshamB> ShikharJ: So, I think that I will start working on the spectral norm layer if you don't mind.
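The spectral norm layer mentioned here refers to spectral normalization (Miyato et al., 2018), which rescales a weight matrix by an estimate of its largest singular value obtained via power iteration. A minimal sketch of the technique, illustrative only and not mlpack's API:

    import numpy as np

    def spectral_norm(W, u, n_iters=1, eps=1e-12):
        # A few power-iteration steps estimate the largest singular value
        # sigma(W); u is the running left-singular-vector estimate carried
        # across calls.
        for _ in range(n_iters):
            v = W.T @ u
            v /= (np.linalg.norm(v) + eps)
            u = W @ v
            u /= (np.linalg.norm(u) + eps)
        sigma = u @ W @ v
        return W / sigma, u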
< sumedhghaisas> KimSangYeon-DGU: Ahh, I mean finding a better constrained optimization approach than just the Lagrangian
< sumedhghaisas> the Lagrangian is very soft
< KimSangYeon-DGU> Aha~
< sumedhghaisas> I have been reading up on this some but failing to get more time
< sumedhghaisas> would you be interested in a reading assignment?
< ShikharJ> sakshamB: Yeah, I took a look yesterday. I would appreciate it if you could provide the rationale behind the orthogonal regularizer's implementation. That way it would be easier for me to review, as I feel it is the only thing I'm not confident about.
< KimSangYeon-DGU> Really
< KimSangYeon-DGU> I uploaded the graphs
< sumedhghaisas> cool let me find the chapter
< KimSangYeon-DGU> Really thanks!!
< ShikharJ> sumedhghaisas: Else everything looks good. I'm glad you could get the ball rolling with CGAN. Do you have access to savannah?
< ShikharJ> sumedhghaisas: Ouch that was meant for sakshamB, sorry!
< sakshamB> ShikharJ: you mean the derivation of the gradient for the orthogonal regularizer? Yes, I will try to comment that on the PR.
< sumedhghaisas> ShikharJ: That's fine. Our meetings always collide :P
< ShikharJ> sakshamB: The evaluate method and the gradient test to be precise.
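For reference on the gradient question: one common form of the orthogonal regularizer penalizes R(W) = lambda * ||W^T W - I||_F^2, whose gradient is 4 * lambda * W (W^T W - I). Whether the PR uses exactly this form is an assumption; this is only a sketch of the usual derivation, not the PR's code:

    import numpy as np

    def orthogonal_regularizer(W, lam=1e-4):
        # R(W)   = lam * ||W^T W - I||_F^2      (the evaluate value)
        # dR/dW  = 4 * lam * W (W^T W - I)      (the gradient value)
        D = W.T @ W - np.eye(W.shape[1])
        return lam * np.sum(D ** 2), 4.0 * lam * (W @ D)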
< ShikharJ> sakshamB: Do you have access to savannah servers? If not, maybe I can schedule a job for you?
< sakshamB> ShikharJ: yes, I can try to explain that. No, I don't have access to the savannah servers
< ShikharJ> sakshamB: Okay, maybe we should help you in that case, since you'll need to test out your implementation and make minor changes. Have you tried running a 5-minute test on your machine for CGAN?
Blizzard57 has joined #mlpack
< sakshamB> ShikharJ: so far I have only run the test that I included in the PR. It does not run for 5 minutes though. :)
Blizzard57 has quit [Remote host closed the connection]
< sumedhghaisas> KimSangYeon-DGU: It's this book
< sumedhghaisas> but it's super duper big
< sumedhghaisas> I am trying to dig into it to find stuff related to Lagrange multipliers and how to improve them when used with gradient descent
< KimSangYeon-DGU> Wow
< ShikharJ> sakshamB: Haha, yeah. When I used to do my runs, I once ended up keeping my laptop warm for a whole week :) Until zoq told me about savannah.
< sumedhghaisas> try reading chapter 17
< KimSangYeon-DGU> Yeah
< sumedhghaisas> thats the most important chapter for us
< KimSangYeon-DGU> Oh..
< sumedhghaisas> especially something called the quadratic penalty method
< Toshal> ShikharJ : I am here
< ShikharJ> sakshamB: Okay, since it is a minor thing, I'll schedule that.
< ShikharJ> sakshamB: Are you confident on the hyper-parameters?
< KimSangYeon-DGU> sumedhghaisas: I'll read this
< ShikharJ> Toshal: Great, I think most of your time was spent completing PRs? Are you still working on some of them?
< sumedhghaisas> KimSangYeon-DGU: And don't worry if you get swamped when reading this book. It's normal.
< Toshal> Yes it will continue for some time. I will add LSGAN today itself.
< KimSangYeon-DGU> Thanks :)
< sumedhghaisas> once you compare what they are saying to what we are doing, it's a little easier
< Toshal> Just running its test will take some time.
< KimSangYeon-DGU> Ahh~
< KimSangYeon-DGU> sumedhghaisas: Okay!
< ShikharJ> Toshal: I think you have some prior experience with savannah?
< sumedhghaisas> I think we might be able to improve our method using the quadratic penalty method and the augmented Lagrangian method
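For a generic equality constraint c(theta) = 0, the methods mentioned here differ as follows: the plain Lagrangian adds lambda * c(theta), the quadratic penalty adds (mu / 2) * c(theta)^2, and the augmented Lagrangian combines both and updates lambda <- lambda + mu * c(theta) after each inner minimization. A toy sketch with a stand-in objective and constraint, not the QGMM loss:

    import numpy as np

    def f(theta):              # stand-in objective (not the QGMM loss)
        return np.sum(theta ** 2)

    def grad_f(theta):
        return 2.0 * theta

    def c(theta):              # stand-in equality constraint c(theta) = 0
        return np.sum(theta) - 1.0

    def grad_c(theta):
        return np.ones_like(theta)

    theta, lam, mu, lr = np.zeros(2), 0.0, 10.0, 0.01
    for outer in range(20):
        # Inner loop: gradient descent on the augmented Lagrangian
        #   A(theta) = f(theta) + lam * c(theta) + (mu / 2) * c(theta)**2
        for _ in range(200):
            theta -= lr * (grad_f(theta) + (lam + mu * c(theta)) * grad_c(theta))
        lam += mu * c(theta)   # multiplier update; mu can also be increased
    print(theta)               # converges to roughly [0.5, 0.5]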
< KimSangYeon-DGU> I'm excited
< sumedhghaisas> great :)
< Toshal> Yes, I have it.
< KimSangYeon-DGU> So interesting.
< sakshamB> ShikharJ: No I am not quite sure about all the parameters.
< ShikharJ> sakshamB: You don't have to be sure of all of them; most are well tuned for regular GANs. I'm asking about the parameters your PR has added.
< sumedhghaisas> KimSangYeon-DGU: Optimization algorithms are super difficult to understand but they are very interesting
< KimSangYeon-DGU> Oh, really really interesting
< ShikharJ> Toshal: Okay, seems like you have a set task ahead of you. Feel free to ask questions.
< sakshamB> ShikharJ: hmm my PR doesn’t add any additional hyper-parameters. There is just some additional input to the CGAN which should be fine.
< KimSangYeon-DGU> sumedhghaisas: This book seems to be really popular
< KimSangYeon-DGU> Amazing..
< sumedhghaisas> okay, let's keep working on the QGMM vs GMM comparison a little more?
< KimSangYeon-DGU> Yes, actually I haven't written any document about that yet.
< sumedhghaisas> let's run all the experiments we ran when checking the validity of the objective function with GMM
< KimSangYeon-DGU> So, I think I need some time.
< sumedhghaisas> ahh yes. Take your time with that :)
< KimSangYeon-DGU> And I have a question briefly
< sumedhghaisas> Ahh yes. Also sorry for missing your mail. :(
< KimSangYeon-DGU> I have a plan to update the whole paper
< KimSangYeon-DGU> Ah~
< ShikharJ> sakshamB: Okay, I'll schedule a job later today, It should be done by tomorrow evening in India time.
< KimSangYeon-DGU> No worries
< sumedhghaisas> I will get to it today
< KimSangYeon-DGU> Actually, our QGMM has improved more and more, so I intend to update the whole paper
< KimSangYeon-DGU> Yes
< sakshamB> ShikharJ: also maybe we should take a look at the gradient error that Toshal had pointed out.
< KimSangYeon-DGU> Is it desirable?
< ShikharJ> sakshamB: Yeah, I think I should give it a look.
< ShikharJ> sakshamB: Toshal: I'm glad we're making steady progress. I should ideally spend more time now, getting your work merged in. Have a good weekend guys :)
< sakshamB> ShikharJ: thanks. Hope you have a great weekend too! :)
< Toshal> ShikharJ: Have a good day and weekend.
< sumedhghaisas> KimSangYeon-DGU: ummm... depends on how much bandwidth you have. I would suggest creating a new document where we compare QGMM and GMM, and in it we write down both the QGMM and GMM results
< sumedhghaisas> this way it will document the new results and also the QGMM and GMM comparison
< KimSangYeon-DGU> Ahh, okay!!
< sumedhghaisas> Also less work :)
< KimSangYeon-DGU> :)
< sumedhghaisas> KimSangYeon-DGU: Need to run to another meeting. Have a great weekend.
< KimSangYeon-DGU> sumedhghaisas: Okay, Have a nice weekend! Thanks!! :)
karmabeach24 has joined #mlpack
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 244 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 245 seconds]
vivekp has joined #mlpack
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
travis-ci has joined #mlpack
< travis-ci> robertohueso/mlpack#56 (pca_tree - 6e02790 : Roberto Hueso Gomez): The build failed.
travis-ci has left #mlpack []
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
sumedhghaisas has quit [Ping timeout: 260 seconds]
vivekp has quit [Ping timeout: 244 seconds]
vivekp has joined #mlpack
sreenik[m] has quit [Remote host closed the connection]
Sergobot has quit [Remote host closed the connection]
chandramouli_r has quit [Remote host closed the connection]
aleixrocks[m] has quit [Remote host closed the connection]
chandramouli_r has joined #mlpack
robertoh1eso has quit [Ping timeout: 272 seconds]
robertohueso has joined #mlpack
aleixrocks[m] has joined #mlpack
Sergobot has joined #mlpack
sreenik[m] has joined #mlpack
ImQ009 has quit [Quit: Leaving]
KimSangYeon-DGU has quit [Remote host closed the connection]
karmabeach24 has quit [Ping timeout: 248 seconds]