verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< rcurtin> ok, I changed the configuration so it should only send one email and one IRC notification on failing tests
< rcurtin> let's see if that actually works...
< rcurtin> or maybe we will get 1000 more messages :)
< jenkins-mlpack> Project docker mlpack nightly build build #318: STILL UNSTABLE in 2 hr 14 min: http://masterblaster.mlpack.org/job/docker%20mlpack%20nightly%20build/318/
ImQ009 has joined #mlpack
ImQ009_ has joined #mlpack
ImQ009 has quit [Ping timeout: 240 seconds]
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#144 (AtrousConv - 1d0ce3f : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 248 seconds]
sumedhghaisas2 has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
< jenkins-mlpack> Project docker mlpack nightly build build #319: STILL UNSTABLE in 2 hr 31 min: http://masterblaster.mlpack.org/job/docker%20mlpack%20nightly%20build/319/
P has joined #mlpack
P is now known as Guest26708
Guest26708 has quit [Client Quit]
sumedhghaisas2 has joined #mlpack
sumedhghaisas2 has quit [Read error: Connection reset by peer]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 256 seconds]
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#145 (LayerNorm - 11cbce4 : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
sumedhghaisas2 has quit [Ping timeout: 256 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has joined #mlpack
sumedhghaisas2 has quit [Read error: Connection reset by peer]
sumedhghaisas has quit [Ping timeout: 256 seconds]
sumedhghaisas has joined #mlpack
< ShikharJ> zoq: Are you there?
< zoq> ShikharJ: Currently in a meeting, but I'll answer as soon as possible.
< ShikharJ> Sure, just wanted to give an update regarding future work. Since the PRs on Layer Norm and Dilated Convolutions are complete now (I don't think any major changes would be required), I'll start debugging the GAN PR and see if I can get that merged.
< ShikharJ> Most of the smaller changes are now done (except for Weight Clipping and Gradient Penalty methods, but that'll have to wait for WGAN work to commence). So we can start off with the big tasks now.
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#146 (LayerNorm - 1ea8586 : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
< zoq> That sounds like a really good plan to me; I don't think there is much work left for the conv operations, if any, so I guess this could be merged by the end of the week.
< zoq> Getting stable results for the existing GAN code is tricky, but I'm sure we can figure out what went wrong.
< ShikharJ> One issue was with the Naive Convolution Rule itself. So maybe we can expect different results now.
< ShikharJ> Also, now that I have understood a lot about the existing codebase, I'm hoping this would be easier now :)
< zoq> ShikharJ: Right, that was definitely an issue, perhaps that solves all the issues :)
sumedhghaisas has quit [Ping timeout: 256 seconds]
sumedhghaisas2 has joined #mlpack
< ShikharJ> zoq: I was wrong, I found another bug in the Batch Normalization layer (I'm guessing it didn't affect the outputs much). I'll recheck the tests against tensorflow's output.
< zoq> ShikharJ: Oh, okay.
< ShikharJ> This would also need to be corrected in Layer Normalization.
< zoq> right, hopefully you don't have to refactor the complete layer
manish7294 has joined #mlpack
< manish7294> rcurtin: Are you available for a short talk?
< zoq> hm, looks like the job prioritization doesn't work as I thought, perhaps it's because the matrix build is a single job ... not sure.
< zoq> I guess an easy solution is to use one of the benchmark systems in such a situation.
< zoq> If we agree on that, I'll go ahead and install the necessary packages.
< manish7294> rcurtin: I assume you are not here. Well, I am leaving some of my findings here, so that you can have a look when you are back.
< manish7294> rcurtin: The same SDP which was giving bad_alloc() for primal dual works seamlessly with LRSDP, though it didn't converge when I initialized the impostors part.
< manish7294> rcurtin: Yesterday while calculating the memory for the matrix, we missed the SparseA matrices vector, which has size (number of constraints) times (objective function size).
< ShikharJ> zoq: No major issues, just had to add a pair of braces. It didn't affect the tests at all, probably the reason why it remained undetected.
< manish7294> rcurtin: Currently, I have created a pull request w.r.t inequality constraints, though some tests are failing; I will look into that. All that remains in this part is to adapt primal dual with inequality constraints.
< manish7294> rcurtin: The main problem I am facing is how we are going to initialize the Eijl (slack) part of the matrix from yesterday's representation.
< manish7294> rcurtin: As we take impostors and target neighbors constraints independently (leaving us with 2 * k * n constraints), how will we be able to initialize sparseA matrix, as it depends on the combination of triplets ijl
< manish7294> rcurtin: It will be very helpful, if you could look into this. Thanks!
< rcurtin> manish7294: in a meeting now, available later, and I will answer then :(
< manish7294> rcurtin: It's fine. Just leaving the message here :)
manish7294 has quit [Quit: Page closed]
< ShikharJ> zoq: Can I also squash Kris' commits into one (there are more than 45, and every time I do a rebase, some or the other conflict crops up)?
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#147 (LayerNorm - 7ad07a8 : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
< rcurtin> zoq: sure, using the benchmark system for the PR jobs is just fine with me
< rcurtin> even multiple benchmark systems would be fine
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#148 (master - d77f040 : Marcus Edel): The build has errored.
travis-ci has left #mlpack []
< rcurtin> manish7294: have you taken a look at the original LMNN implementation and how they handle this?
< rcurtin> I could see that you could initialize the slack variables to whatever is necessary to make the current matrix M a feasible solution
< rcurtin> then you could run the SDP holding the slack variables constant, then you could update the slack variables, etc.
< rcurtin> but you should probably take a look at the existing LMNN or BoostMetric implementation to get an idea
< zoq> ShikharJ: Sure, fine with me.
manish7294 has joined #mlpack
< manish7294> rcurtin: The original implementation calculates an N * N * k slack matrix at each gradient step, which they use to calculate the loss part. That's the largest-size matrix they use.
< manish7294> rcurtin: And if we do something like that, we will end up with a sparse matrix that combines a diagonal matrix of size N * N * k with a dense matrix of order d.
< rcurtin> manish7294: thanks for the link, I will study it soon when I have a chance
< manish7294> rcurtin: Sure! Till then I will be updating the primal dual part too :)
haritha1313 has joined #mlpack
< haritha1313> zoq: Thought it would be better to keep you updated on progress. I am currently refactoring CF class wrt PR #1355. Working on policy classes for various factorizers as of now.
< haritha1313> I guess it should be done by tomorrow, and once it's settled I can start working on NCF and introduce it as a new DecompositionPolicy.
< haritha1313> As you know, a PR has been made for multiplymerge layer, and as discussed, embedding layer can be implemented by an alias in lookup layer. Should I make that addition in the existing PR itself?
sumedhghaisas2 has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
manish72942 has joined #mlpack
manish7294 has quit [Quit: Page closed]
manish72942 has quit [Client Quit]
manish7294 has joined #mlpack
manish7294 has quit [Client Quit]
manish7294 has joined #mlpack
haritha1313 has quit [Ping timeout: 260 seconds]
manish7294 has quit [Quit: Yaaic - Yet another Android IRC client - http://www.yaaic.org]
manish7294 has joined #mlpack
manish7294 has quit [Client Quit]
manish7294 has joined #mlpack
< manish7294> "/home/travis/build/mlpack/mlpack/src/mlpack/tests/lrsdp_test.cpp(270): fatal error in "GaussianMatrixSensingSDP": difference{0.0514407%} between measurement{0.00023853556661135311} and b(i){0.00023841292538047398} exceeds 0.050000000000000003%"
< manish7294> Is this some kind of random error? Two of the Travis jobs failed and one passed. Moreover, I can't seem to reproduce it on my system.
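For context, the failure above is a relative-tolerance check: the test compares the relative difference of the two values (as a percentage) against a 0.05% bound, and 0.0514% just exceeds it. A minimal sketch of that arithmetic, using the values copied from the failure message:

```shell
# Recompute the relative difference from the GaussianMatrixSensingSDP failure
# message and compare it against the test's 0.05% tolerance, the same
# comparison the failure message reports (values copied from the log above).
m=0.00023853556661135311   # measurement
b=0.00023841292538047398   # b(i)
awk -v m="$m" -v b="$b" 'BEGIN {
  d = (m - b) / b * 100          # relative difference, in percent
  if (d < 0) d = -d
  printf "difference %.4f%% vs tolerance 0.0500%%\n", d
  exit (d > 0.05) ? 0 : 1       # exit 0 means the check would fail
}'
```

A near-boundary failure like this on some runs but not others is consistent with a random seed changing between jobs, which matches the suggestion below to control the seed.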
manish7294 has quit [Quit: Page closed]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 256 seconds]
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas2 has quit [Ping timeout: 256 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has joined #mlpack
sumedhghaisas2 has quit [Read error: Connection reset by peer]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 256 seconds]
ImQ009_ has quit [Quit: Leaving]
< zoq> haritha1313: Sounds great; about the adjustments, let's open another PR.
< zoq> manish7294: Did you use mlpack::math::RandomSeed(time(NULL)); inside the test case?
< zoq> while true; do bin/mlpack_test -t TestSuite/TestCase; sleep 1; done
< zoq> might be helpful as well
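Spelled out as a complete script, the rerun-until-failure idea looks like the sketch below. This is a self-contained illustration: `fake_test` is a stand-in that passes three times and then fails, so the script runs anywhere; in practice the loop condition would be the real command from the message above, `bin/mlpack_test -t TestSuite/TestCase`, with the `sleep 1` kept between runs.

```shell
#!/bin/sh
# Rerun a flaky test case until it fails, counting passing runs.
# fake_test is a stand-in for: bin/mlpack_test -t TestSuite/TestCase
runs=0
fake_test() { [ "$runs" -lt 3 ]; }   # passes 3 times, then fails
while fake_test; do
  runs=$((runs + 1))
done
echo "failed after $runs passing run(s)"
```

Running it prints "failed after 3 passing run(s)"; with the real test binary, the loop keeps going until the intermittent failure reproduces.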
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 256 seconds]
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas2 has joined #mlpack
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
< zoq> okay, the benchmark systems are configured; they are not as fast as masterblaster, but they are still fast enough, and since we have more than one we can build in parallel.