verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
yaswagner has quit [Quit: Page closed]
Atharva has quit [Ping timeout: 260 seconds]
witness_ has quit [Ping timeout: 256 seconds]
gtank has quit [Ping timeout: 256 seconds]
gtank_ has joined #mlpack
witness_ has joined #mlpack
witness_ has quit [Ping timeout: 240 seconds]
gtank_ has quit [Ping timeout: 256 seconds]
Atharva has joined #mlpack
witness_ has joined #mlpack
gtank_ has joined #mlpack
< Atharva> sumedhghaisas: Hey Sumedh
< Atharva> I have been trying to debug the gradient check for the reconstruction loss since yesterday, with no luck. Will you please check the LogProbBackward function once?
< Atharva> I checked it multiple times and can't find any mistake
< Atharva> zoq: Do you think there can be any other reason for the gradient check to fail?
< Atharva> sumedhghaisas: zoq: No worries, it just passed ! :D
< jenkins-mlpack> Project docker mlpack nightly build build #355: STILL UNSTABLE in 2 hr 51 min: http://masterblaster.mlpack.org/job/docker%20mlpack%20nightly%20build/355/
< zoq> Atharva: Good, how did you solve the issue?
< Atharva> Turns out I had changed the loss function from NegativeLogLikelihood to ReconstructionLoss but was still using the LogSoftMax activation. Also, I wasn't applying softplus before using the standard deviation.
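For context, softplus maps any real-valued network output to a strictly positive value, which is what makes it safe to interpret that output as a standard deviation inside a Gaussian log-probability. A minimal sketch; the name and usage are illustrative, not mlpack's exact API:

    #include <cmath>

    // Softplus: log(1 + exp(x)) > 0 for all real x, so the raw network
    // output can safely be used as a standard deviation.
    inline double Softplus(const double x)
    {
      return std::log(1.0 + std::exp(x));
    }

    // e.g. const double stdDev = Softplus(rawOutput);  // always > 0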
< Atharva> zoq: I have a question.
< Atharva> I want to add this gradient check test for the reconstruction loss; should I add it in the layer test file or the loss test file?
< Atharva> If the loss test file, then we would have to define the CheckGradient function again.
< zoq> We could write a new file, something like ann_test_tools.hpp, that implements the gradient check function and include it inside the tests, but I'm fine with either one.
< Atharva> Yes, I think ann_test_tools.hpp is a good option. That way it can be used in both ann_layer_test and loss_function_test.
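A minimal sketch of the shared helper ann_test_tools.hpp could hold, based on the central-difference gradient check used in mlpack's ANN tests (the exact name, signature, and tolerance here are assumptions):

    #include <mlpack/core.hpp>

    // Compare an analytic gradient against a central-difference estimate.
    // FunctionType is assumed to expose Parameters() and a Gradient(arma::mat&)
    // that returns the loss at the current parameters.
    template<typename FunctionType>
    double CheckGradient(FunctionType& function, const double eps = 1e-7)
    {
      arma::mat orgGradient, gradient;
      function.Gradient(orgGradient);

      arma::mat estGradient = arma::zeros(orgGradient.n_rows,
          orgGradient.n_cols);

      // Numerically approximate each partial derivative.
      for (size_t i = 0; i < orgGradient.n_elem; ++i)
      {
        const double tmp = function.Parameters()(i);

        function.Parameters()(i) = tmp + eps;
        const double costPlus = function.Gradient(gradient);

        function.Parameters()(i) = tmp - eps;
        const double costMinus = function.Gradient(gradient);

        function.Parameters()(i) = tmp;  // Restore the parameter.

        estGradient(i) = (costPlus - costMinus) / (2 * eps);
      }

      // Relative error; a test would require this to be small (e.g. < 1e-4).
      return arma::norm(orgGradient - estGradient) /
          arma::norm(orgGradient + estGradient);
    }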
manish7294 has joined #mlpack
< manish7294> zoq: Is it necessary to have labels in integer format for mlpack?
< zoq> manish7294: That depends on the method; some store the labels as arma::Row<size_t>.
< zoq> manish7294: You could map the labels before passing them and remap the results afterwards; not sure if that is an option.
< manish7294> zoq: Thanks, I am doing that for now.
< zoq> this function does exactly that
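Presumably this refers to mlpack's data::NormalizeLabels()/data::RevertLabels() pair; a short sketch of the round trip (the label values here are made up):

    #include <mlpack/core.hpp>

    int main()
    {
      // Arbitrary numeric labels, e.g. loaded from a file.
      arma::Row<double> rawLabels("3 7 3 5");

      arma::Row<size_t> labels;   // Becomes {0, 1, 0, 2}.
      arma::Col<double> mapping;  // mapping(i) = original label of class i.
      mlpack::data::NormalizeLabels(rawLabels, labels, mapping);

      // ... train and predict using the size_t labels ...

      // Remap the (predicted) labels back to the original values.
      arma::Row<double> reverted;
      mlpack::data::RevertLabels(labels, mapping, reverted);
    }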
< manish7294> zoq: Then it seems strange that LMNN is throwing a labels-related error for the letters and balance datasets, while working on integer-format versions of the same.
< manish7294> I shall look more into why it's happening.
< zoq> manish7294: For the balance dataset, isn't the label a string?
< manish7294> zoq: It's a char
< zoq> manish7294: And if you load the dataset it's converted to int?
< manish7294> zoq: verifying it.
< manish7294> zoq: Somehow, after normalize(), all labels are turning to 0.
< manish7294> Before normalize too, rawLabels has all-0 entries.
< zoq> Are you passing a separate labels file, or do you use the last column?
< manish7294> last column
< zoq> Okay, what does the data look like?
< zoq> https://github.com/manish7294/mlpack/blob/8a6709f089b72001bee41f23989205fda694a113/src/mlpack/methods/lmnn/lmnn_main.cpp#L254 already converts the labels so I'm not sure I see the reason for NormalizeLabels
< manish7294> zoq: Initially we have A, B, C, D as labels, and finally we are getting 0, 0, 0, ...
< zoq> same in the data matrix before the split?
< manish7294> zoq: just a sec
< zoq> manish7294: I guess an easy solution would be to manually convert the dataset.
< manish7294> zoq: I was thinking the same; shall I drop a converted version here? Maybe you or Ryan can upload it.
< manish7294> zoq: After just loading that data we are getting 0's as the last column.
< zoq> manish7294: Sure, I think it would be great to get this one working but for now I guess it makes sense to upload a new dataset.
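For reference, the underlying problem is that a plain numeric load parses non-numeric fields (such as char labels) as 0. Loading with a data::DatasetInfo instead maps categorical values to 0, 1, 2, ...; a sketch, assuming the labels are the last column of balance.csv:

    #include <mlpack/core.hpp>

    int main()
    {
      arma::mat dataset;
      mlpack::data::DatasetInfo info;

      // With a DatasetInfo, non-numeric entries are dimension-mapped to
      // 0, 1, 2, ... instead of silently becoming 0.
      mlpack::data::Load("balance.csv", dataset, info, true);

      // mlpack stores points as columns, so the CSV's last column ends up
      // as the matrix's last row.
      arma::Row<size_t> labels = arma::conv_to<arma::Row<size_t>>::from(
          dataset.row(dataset.n_rows - 1));
    }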
< manish7294> zoq: I will send a link in a little while.
< zoq> okay
< zoq> manish7294: okay, uploaded
< manish7294> zoq: great :)
manish7294 has quit [Ping timeout: 260 seconds]
ImQ009 has joined #mlpack
K4k has quit [Ping timeout: 256 seconds]
K4k has joined #mlpack
manish7294 has joined #mlpack
< manish7294> rcurtin: I performed some benchmarking and here are the results. Please have a look at them when you get a chance: https://github.com/mlpack/mlpack/pull/1407#issuecomment-398772089
< rcurtin> I saw your post; let me finish this other thing first. Just at a quick glance it looks great so far (but I need to look closer).
< manish7294> rcurtin: sure
manish7294 has quit [Ping timeout: 260 seconds]
manish7294 has joined #mlpack
< manish7294> rcurtin: I saw your comment. Additionally, I would like to say that we have quite a number of parameters, and I strongly feel we can get accuracy comparable to Shogun's with some tuning.
< manish7294> And then there's the balance dataset, on which mlpack performs on a totally different level :)
< manish7294> And I think Shogun is using PCA for the distance initialization process, since I have not been passing anything to Shogun's LMNN.
manish7294 has quit [Ping timeout: 260 seconds]
manish7294 has joined #mlpack
< manish7294> zoq: rcurtin: MATLAB doesn't seem to be accessible from the benchmarks. The error this time is [FATAL] Exception: 'MATLAB_BIN'. Can you help with this?
< manish7294> zoq: I saw the MATLAB scripts, and looking at the way you implemented them, I think it is possible to take the LMNN implementation out of drtoolbox and include it similarly to the existing ones.
< rcurtin> try 'export MATLAB_BIN=/opt/matlab/bin/matlab', I think that will fix it
< rcurtin> and I agree, I think we can just take lmnn.m and drop it into place
< rcurtin> I agree with your comments too; I don't think we need to exactly match Shogun's accuracy everywhere,
< rcurtin> just show that we perform roughly the same and that we could tune to match the accuracy.
< manish7294> rcurtin: Thanks! It looks like there is some progress now.
< manish7294> Ah! The letters dataset finally comes to a stop with a timing of 6416.926464s against ours of 19.975593s, with accuracies almost the same :)
manish7294 has quit [Ping timeout: 260 seconds]
< ShikharJ> rcurtin: Could you review the BatchSupport PR, so that we may merge it?
< rcurtin> sure, give me a little while and I will do that
sumedhghaisas_ has joined #mlpack
< sumedhghaisas_> Atharva: Hi Atharva, could you fix the third static code check error so that we can merge that PR? :)
< Atharva> sumedhghaisas_: Yes, I will do it right now.
< Atharva> Do you mean this one - "Not all members of a class are initialized inside the constructor." ?
< zoq> Atharva: Yes, that's the one.
< Atharva> zoq: Okay, so in the constructor I will just initialize them to zero.
< zoq> Atharva: Or you can use the constructor's member initializer list.
< Atharva> Okay
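A quick sketch of the two options on a hypothetical layer class (the class and member names are made up):

    #include <cstddef>

    class ExampleLayer
    {
     public:
      // Option 1: member initializer list, as zoq suggests.
      ExampleLayer() : inSize(0), outSize(0) { }

      // Option 2: assign inside the constructor body instead:
      //   ExampleLayer() { inSize = 0; outSize = 0; }

     private:
      size_t inSize;
      size_t outSize;
    };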
ImQ009 has quit [Quit: Leaving]
< Atharva> sumedhghaisas_: Are you free right now?
sumedhghaisas_ has quit [Ping timeout: 260 seconds]