verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 256 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 264 seconds]
vivekp has joined #mlpack
manish7294 has joined #mlpack
manish72942 has joined #mlpack
< manish7294> rcurtin: If I am correct, then it's a good step for separating the two parts, and even the low-rank property remains intact within the first part
< manish7294> I think there is just a small typo with the + sign in the second part :)
< rcurtin> yeah, probably, I was not too exact when I did it, I just wanted to get the idea across
< manish7294> Right, I understand that
< manish7294> I assume that the last M expression, written in red, is not related to the new formulation
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
< rcurtin> ah hang on, let me look again
manish72942 has quit [Read error: Connection reset by peer]
manish72942 has joined #mlpack
< rcurtin> manish72942: yeah, right, that last line in red and blue is something I forgot to erase
< rcurtin> the basic idea here is that the slack variables don't need to be part of the optimization; they can be expressed entirely as a penalty term
< rcurtin> (again, I *think*; I am not 100% sure yet, but I am fairly sure)
< manish72942> rcturin: so we will be having a single positive semidefinite constraint, right?
< rcurtin> to minimize the objective function for _any_ M, we simply set each slack variable e_ijk = max{ 0, 1 - ((x_i - x_j)^T M (x_i - x_j) - ...) }
< manish72942> sorry for the name typo, typing half asleep :)
< rcurtin> actually we don't have any constraints at all! if we optimize for L where M = L^T L, then for _any_ L we have that M is positive semidefinite
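For reference, the unconstrained reformulation being discussed can be sketched as follows (a sketch only: the indices and weighting follow the JMLR LMNN paper's convention rather than the whiteboard's e_ijk notation, so details may differ):

```latex
% Hinge-loss (penalty) form of LMNN, with the slack variables eliminated.
% [z]_+ = max(z, 0); j \rightsquigarrow i denotes target neighbors of x_i.
\min_{L} \; \sum_{j \rightsquigarrow i} \| L (x_i - x_j) \|^2
  \;+\; \mu \sum_{j \rightsquigarrow i} \sum_{l} (1 - y_{il})
    \Big[ 1 + \| L (x_i - x_j) \|^2 - \| L (x_i - x_l) \|^2 \Big]_+
```

Since v^T M v = v^T L^T L v = ||Lv||^2 >= 0 for every v, M = L^T L is positive semidefinite for any L, which is why no explicit constraint is needed.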
< rcurtin> ah, sorry, I hope you are not up too late or something
< manish72942> ya, that's correct
< manish72942> no, I just woke up:)
< rcurtin> ah, ok :) I was thinking it should be the morning there
< rcurtin> you are... UTC + 8.5 ?
manish7294 has quit [Ping timeout: 260 seconds]
< rcurtin> ah sorry UTC+5.5 I think
< manish72942> +5:30
< rcurtin> so it's like 6:15am there... I definitely do not wake up that early :)
< manish72942> I usually don't but today somehow I got up :)
< rcurtin> :)
< manish72942> so, I think we have a fair formulation now
< manish72942> Just need to figure out the update step
< manish72942> I think that can be done by running LRSDP with a max iteration count of 1
< manish72942> and changing the coordinates matrix according to the update from the second optimizer
< manish72942> So, if whatever I said is close to what you are thinking, then which optimizer will be our second one?
< manish72942> rcurtin: Going back to sleep, see you soon :)
< rcurtin> ah, sorry, I stepped out
< rcurtin> if that reformulation is correct there is no need for an SDP solver at all
< rcurtin> we can just use SGD or L-BFGS or anything else
< rcurtin> since there are no constraints, it's just a regular objective to be optimized, not an SDP
< manish72942> rcurtin: Yeah! Right, now it's not an SDP :)
< manish72942> rcurtin: So, shall I start implementing it?
< manish72942> rcurtin: If yes, could you please leave some starting comments? :)
< manish72942> bye :)
vivekp has quit [Ping timeout: 248 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 268 seconds]
vivekp has joined #mlpack
manish72942 has quit [Ping timeout: 255 seconds]
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 255 seconds]
vivekp has joined #mlpack
< rcurtin> manish7294: sorry, I am not always at the computer in the evenings :)
< rcurtin> I think it would be wise to do a little bit of theory work to make sure that these objectives really are doing the same thing
< rcurtin> but if we can show that, you could actually implement it pretty quickly (just need an Evaluate() and Gradient(), then plug it into L_BFGS or whatever!)
< rcurtin> another idea could be to implement it and empirically check whether the results it gives are the same as the SDP's
vivekp has quit [Ping timeout: 264 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 260 seconds]
govg has joined #mlpack
manish7294 has joined #mlpack
< manish7294> rcurtin: I think the formulation you developed is similar to what the authors have done. Instead of solving the SDP, they optimized the linear equation in M with standard optimizers.
< manish7294> you can refer to page 33, Appendix A. Solver: http://www.cs.cornell.edu/~kilian/papers/jmlr08_lmnn.pdf
< manish7294> So, I think the proof will not be needed.
< jenkins-mlpack> Project docker mlpack nightly build build #320: FAILURE in 22 hr: http://masterblaster.mlpack.org/job/docker%20mlpack%20nightly%20build/320/
< manish7294> rcurtin: I went over the algorithm and it's really a well-developed algorithm with some nice tricks employed. I think we could build on it to get more speedups in various computational parts.
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#151 (LayerNorm - 8405801 : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
manish7294 has quit [Quit: Page closed]
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#152 (LayerNorm - 7155ab1 : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#153 (AtrousConv - aafc3cc : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
mayank98 has joined #mlpack
ankit has joined #mlpack
< ankit> Hello, I want to contribute. How should I start ?
< jenkins-mlpack> Project docker mlpack nightly build build #321: NOW UNSTABLE in 2 hr 37 min: http://masterblaster.mlpack.org/job/docker%20mlpack%20nightly%20build/321/
< zoq> ankit: Hello, http://www.mlpack.org/involved.html should be helpful.
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 260 seconds]
sumedhghaisas2 has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#154 (LayerNorm - e2f49ed : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
< zoq> ShikharJ: Looks like you solved the build issue; once Travis confirms this one, we can merge it.
< ShikharJ> zoq: Are we talking about the Atrous Convolution PR or the Layer Norm PR?
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#155 (AtrousConv - 5b215ec : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
< ShikharJ> zoq: I'd prefer if the Atrous Convolution PR is merged first, as it is of higher priority now. It will otherwise lead to a merge conflict in the LayerNorm PR (as they both edit the ann_layer_test.cpp file).
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#4915 (master - a37c9ca : Marcus Edel): The build has errored.
travis-ci has left #mlpack []
mayank98 has quit [Quit: Connection closed for inactivity]
< rcurtin> manish7294: right, thanks for finding that
< rcurtin> the only additional trick would be the decomposition of M into L^T L, and then optimize over L; I think this may break the convexity of the algorithm but I would need to think about it
< rcurtin> in any case, I think it would be fine to start with the solver given in that paper
< rcurtin> if that's how you'd like to proceed, you should take a look at the existing optimizers like L_BFGS or SGD or others and see how you can use those in your implementation
< rcurtin> ideally you wouldn't want to implement the gradient step yourself, you would want to make it as generic as possible so that we could plug in different optimizers
manish7294 has joined #mlpack
< manish7294> rcurtin: Thanks for reviewing it and working through this idea.
< manish7294> I think now we have a good enough idea to work upon and should follow it now :)
< manish7294> Yeah, you are absolutely right, we do want a generic implementation rather than one dependent on a single optimizer.
< manish7294> Let's try to make best out of it :)
< manish7294> And regarding the M decomposition part, I think it is similar to page 34, A.2 Projection
< manish7294> The positive semidefinite constraint is being handled by projection
govg has quit [Ping timeout: 240 seconds]
sumedhghaisas has quit [Read error: Connection reset by peer]
< rcurtin> there are definitely similarities, but the projection step would be a different approach than optimizing L directly
sumedhghaisas has joined #mlpack
ImQ009 has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
< manish7294> rcurtin: Right! The projection includes a diagonalization step, which can be avoided by optimizing L directly.
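The two approaches being contrasted can be sketched as follows (assuming f is the objective as a function of M, with symmetric gradient G = \nabla_M f):

```latex
% Projected gradient (paper, A.2): step in M, then project back onto the
% PSD cone via an eigendecomposition M = V \,\mathrm{diag}(\lambda)\, V^\top:
P_{\mathrm{PSD}}(M) = V \,\mathrm{diag}\big(\max(\lambda_1, 0), \dots,
  \max(\lambda_d, 0)\big)\, V^\top

% Direct optimization over L (M = L^\top L): no projection needed;
% by the chain rule,
\nabla_L f = L\,(G + G^\top) = 2 L G \quad \text{for symmetric } G.
```

The projection costs an eigendecomposition per step, which optimizing L directly avoids, at the price of losing convexity in L.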
manish7294 has quit [Quit: Page closed]
sumedhghaisas has quit [Ping timeout: 256 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas2 has joined #mlpack
sumedhghaisas2 has quit [Read error: Connection reset by peer]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 265 seconds]
sumedhghaisas2 has joined #mlpack
sumedhghaisas2 has quit [Read error: Connection reset by peer]
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 256 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 256 seconds]
mayank98 has joined #mlpack
haritha1313 has joined #mlpack
< haritha1313> zoq: I am writing tests for cf in compliance with the new design. Right now there are 6 decomposition policies. Should each function be tested with all decomposition policies, or would a few representative samples be enough?
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 256 seconds]
haritha1313 has quit [Ping timeout: 260 seconds]
sumedhghaisas2 has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
witness_ has joined #mlpack
ImQ009 has quit [Quit: Leaving]
mayank98 has quit [Quit: Connection closed for inactivity]
witness_ has quit [Quit: Connection closed for inactivity]