verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
vivekp has joined #mlpack
ImQ009 has joined #mlpack
ImQ009 has quit [Ping timeout: 276 seconds]
ImQ009 has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#5086 (master - 86219b1 : Mikhail Lozhnikov): The build passed.
travis-ci has left #mlpack []
< ShikharJ> zoq: Could you mention a resource where I can read up on the visitor-pattern structure of mlpack (specifically within the FFN class)?
manish7294 has joined #mlpack
< manish7294> zoq: Thanks for helping out :)
< manish7294> rcurtin: zoq: As of now I can use the benchmark metrics with mlpack's lmnn script by adding a prediction option in lmnn_main.cpp (here I am using my own weighted knn predictor), but that can't be done for shogun (there we can use shogun's knn classifier to get predictions). So I was wondering: instead of using two different predictors, why don't we just use shogun's, so that we have the same basis for accuracy comparisons?
manish7294 has quit [Ping timeout: 260 seconds]
< zoq> ShikharJ: There's not much, but for me https://www.boost.org/doc/libs/1_55_0/doc/html/variant.html was helpful.
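As background for the pattern zoq points to: a boost::variant holds one of several concrete types, and a static visitor dispatches an operation to whichever type is currently stored; mlpack's FFN holds its layers in a variant in much the same way. A minimal, self-contained sketch (the layer names are illustrative stand-ins, not mlpack's actual classes):

```cpp
#include <boost/variant.hpp>
#include <iostream>

// Two stand-in layer types; a real network would have many more.
struct LayerA { void Forward() { std::cout << "LayerA::Forward\n"; } };
struct LayerB { void Forward() { std::cout << "LayerB::Forward\n"; } };

using LayerTypes = boost::variant<LayerA*, LayerB*>;

// A static visitor: one operation, dispatched to whichever concrete
// type the variant currently holds, with no virtual functions needed.
struct ForwardVisitor : public boost::static_visitor<void>
{
  template<typename LayerType>
  void operator()(LayerType* layer) const { layer->Forward(); }
};

int main()
{
  LayerA a;
  LayerB b;
  LayerTypes layer = &a;
  boost::apply_visitor(ForwardVisitor(), layer);  // Prints LayerA::Forward.
  layer = &b;
  boost::apply_visitor(ForwardVisitor(), layer);  // Prints LayerB::Forward.
}
```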
< zoq> ShikharJ: I'll take a look at the issue later today.
< zoq> manish7294: Sounds like a reasonable option to me, if it's just used to get the accuracy.
< ShikharJ> zoq: Ah okay, I'll get to implementing the GANOptimizer class then. I'll begin with formulating a structure for that.
< ShikharJ> For the time being.
< zoq> ShikharJ: Yeah, let's start with a basic structure.
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
manish7294 has joined #mlpack
< rcurtin> manish7294: I think using shogun's knn classifier is reasonable
< rcurtin> you could do that for mlpack too
< rcurtin> that seems like a reasonable solution to me
< rcurtin> I think inside the mlpack script we should not assume that shogun is available
manish7294 has quit [Ping timeout: 260 seconds]
manish7294 has joined #mlpack
< manish7294> rcurtin: Right, I pushed the changes regarding the classifier.
manish72942 has joined #mlpack
< manish7294> But I don't know why I am stuck setting up the benchmarks; this time mlpack's lmnn script is continually throwing a "can't execute command" error. Everything was working last night, don't know what happened :(
< manish72942> I have done several rebuilds of mlpack too but can't figure out what's happening here
manish7294 has quit [Client Quit]
< rcurtin> hmmm, can you tell me more about what you've done?
< rcurtin> if the execution failed, the error message does show the command, so you could try running that command by hand
< manish72942> the command works
< manish72942> even mlpack_lmnn is there in bin
< rcurtin> can you provide any more detail? maybe print stderr and stdout from the subprocess call?
< manish72942> Will you be available an hour from now? I am not at my PC at the moment
< manish72942> sorry for that
< rcurtin> unfortunately no, I will be racing :)
< rcurtin> but if you can print the errors from stderr or stdout I think it will make it clear what is going wrong
< manish72942> no problem, I will post them as soon as I reach home
< manish72942> today is practice, right?
< rcurtin> no, that was yesterday, today is racing
manish7294 has joined #mlpack
< manish7294> rcurtin: great, have fun :)
< manish7294> I am posting the error here; please don't take the trouble to reply asap, it can be done after the race :)
< manish7294> all the executions are giving results similar to this: [FATAL] Could not execute command: ['/home/manish/benchmarks/libraries/bin/mlpack_lmnn', '-i', 'datasets/iris.csv', '-v', '-o', 'distance.csv', '-R', '100', '-p', '3', '--seed', '42']
manish7294 has quit [Quit: Page closed]
< zoq> manish7294: And if you run /home/manish/benchmarks/libraries/bin/mlpack_lmnn -i datasets/iris.csv -v -o distance.csv -R 100 -p 3 --seed 42 by hand it works just fine?
< ShikharJ> zoq: The DCGAN PR is now completely debugged. However, I have been unable to get the CelebA dataset into hdf5 format, because the conversion is running out of space on my system for some reason. Do you think we can merge the PR?
< zoq> ShikharJ: hm, do you think we could create a CelebA subset just to see if we could get some results?
< ShikharJ> zoq: It'd be possible if I could get the dataset; the rest of the work should be easy.
< zoq> ShikharJ: Ah, I'll see if I can create a subset.
< ShikharJ> zoq: We need to have one dataset in the mlpack repository as well to give people an incentive.
< zoq> ShikharJ: But since it works on the MNIST dataset, I see no problem merging the code on that basis.
< zoq> ShikharJ: Right
< ShikharJ> zoq: We must also keep in mind that the images would be 178x218 with CelebA, so we'll have to crop them to 64x64 as well before setting them up for training.
< ShikharJ> zoq: I'm not sure how I could do that without removing part of the face in some cases.
< zoq> ShikharJ: One solution would be to pad the image with zeros.
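A rough sketch of that zero-padding suggestion using Armadillo, treating one image channel as an arma::mat (the function name and usage are illustrative, not part of the PR). Padding a 178x218 image to 218x218 makes it square without cutting into the face; it can then be rescaled to 64x64:

```cpp
#include <armadillo>

// Center `image` on a zero-filled canvas of outRows x outCols, so no
// part of the original image is lost before any later rescaling.
arma::mat ZeroPad(const arma::mat& image,
                  const arma::uword outRows,
                  const arma::uword outCols)
{
  arma::mat padded(outRows, outCols, arma::fill::zeros);
  const arma::uword rowOffset = (outRows - image.n_rows) / 2;
  const arma::uword colOffset = (outCols - image.n_cols) / 2;
  padded.submat(rowOffset, colOffset,
                rowOffset + image.n_rows - 1,
                colOffset + image.n_cols - 1) = image;
  return padded;
}
```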
< ShikharJ> zoq: I'd say it's your call on the merging. As I said, when I run the script for the conversion to hdf5, it crashes my system.
manish7294 has joined #mlpack
< manish7294> zoq: I got [FATAL] Cannot open file 'datasets/iris.csv'. terminate called after throwing an instance of 'std::runtime_error' what(): fatal error; see Log::Fatal output Aborted
< manish7294> It seems the error is in the dataset path
< manish7294> I tried replacing it with /home/manish/benchmarks/datasets/iris.csv and it works just fine
< zoq> Can you set the library path and try again?
< zoq> export LD_LIBRARY_PATH=/home/manish/benchmarks/libraries/lib/
< manish7294> zoq: not working
< zoq> ShikharJ: I don't mind merging the code, perhaps after the BatchSupport PR?
< manish7294> zoq: I think the cmd should be /libraries/bin/mlpack_lmnn -i datasets/iris.csv -v -o distance.csv -R 100 -p 3 --seed 42
< manish7294> as we are running from inside benchmarks
< manish7294> and the above works too
< zoq> Using the full path should be fine.
< zoq> can you add print(e) to each except block
< manish7294> zoq: right, it's working
< zoq> manish7294: the benchmark script?
< manish7294> no, just the cmd
< manish7294> I have started a check of the script
< zoq> The actual error message (print(e)) should be helpful.
< zoq> perhaps you already fixed that part?
< manish7294> Right, this is because I am doing all the debugging and work on slake.
< manish7294> Now it seems to be working, as I just got past that error; earlier I was directly getting -2
< zoq> manish7294: Not sure I can help you at this stage; if you get stuck at some point, please push the code and I'll take a look.
< manish7294> zoq: no problem, if I get stuck again I will let you know.
< zoq> manish7294: Okay, sounds good :)
< Atharva> zoq: I think the reason negative_log_likelihood wasn't moved to loss_functions is because it's the default for many objects. Other files use NegativeLogLikelihood by just importing layer_types.hpp.
< Atharva> Should I add loss_functions/negative_log_likelihood.hpp to all of them or just keep it in the layer folder?
< zoq> Atharva: hm, what about including negative_log_likelihood.hpp inside layer_types.hpp?
< Atharva> Yeah, that's the easiest solution, should I add the other loss_functions as well?
< zoq> Atharva: hm, as long as the build time stays almost the same.
< Atharva> Okay, I will add just negativelog right now
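The include being discussed would amount to a one-line change along these lines (the exact path assumes the header's new home under loss_functions):

```cpp
// In layer_types.hpp: pulling in the moved header keeps
// NegativeLogLikelihood visible to every file that already
// includes layer_types.hpp.
#include <mlpack/methods/ann/loss_functions/negative_log_likelihood.hpp>
```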
manish7294 has quit [Quit: Page closed]
< ShikharJ> zoq: Sure.
manish72942 has quit [Ping timeout: 240 seconds]
sumedhghaisas_ has joined #mlpack
< sumedhghaisas_> Atharva: Hey Atharva
< sumedhghaisas_> I saw your PR. Good work :)
< Atharva> Thanks Sumedh!
< Atharva> Any comments on it?
< sumedhghaisas_> Yeah. I didn't quite understand the Reconstruction loss function
< Atharva> forward?
< sumedhghaisas_> Shouldn't you be templatizing it with distribution?
< Atharva> Okay, yes, I don't know why I forgot that.
< Atharva> I will push a commit
< sumedhghaisas_> Sure :)
< Atharva> default will be normal, right?
< sumedhghaisas_> No rush
< sumedhghaisas_> I was just a little confused while reading the code
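A rough illustration of the templatization being suggested, with a hypothetical NormalDistribution as the default; this is a sketch of the idea, not mlpack's final API:

```cpp
#include <armadillo>
#include <cmath>

// Sketch of a hypothetical output distribution built from the raw
// network output: one half of `param` is read as means, the other as
// standard deviations (the exact split is illustrative).
class NormalDistribution
{
 public:
  NormalDistribution(const arma::mat& param = arma::mat()) :
      mean(param.head_rows(param.n_rows / 2)),
      stddev(param.tail_rows(param.n_rows / 2)) { }

  // Diagonal-Gaussian log-likelihood of the observation `x`.
  double LogProbability(const arma::mat& x) const
  {
    return arma::accu(-arma::log(stddev)
        - 0.5 * arma::square((x - mean) / stddev)
        - 0.5 * std::log(2.0 * arma::datum::pi));
  }

 private:
  arma::mat mean, stddev;
};

// The loss itself defers to whatever distribution it is given,
// defaulting to the (hypothetical) NormalDistribution above.
template<typename DistType = NormalDistribution>
class ReconstructionLoss
{
 public:
  double Forward(const arma::mat& input, const arma::mat& target)
  {
    // Build the distribution from the network output, score the target.
    dist = DistType(input);
    return -dist.LogProbability(target);
  }

 private:
  DistType dist;
};
```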
< Atharva> Also, we don't have a gradient check now, so we are stuck with simple tests
< Atharva> Do you think those are enough?
< sumedhghaisas_> Atharva: Sorry didn't get that. Why don't we have gradient checks?
< Atharva> Hmm, no other loss functions have employed gradient checks in their tests; even I was wondering why
< sumedhghaisas_> Ohh... that's weird
< sumedhghaisas_> Maybe they are part of networks tested in ann_layer_test?
< sumedhghaisas_> but I think only one loss function is used everywhere in those tests
< Atharva> Yeah, not all of them have been tested with gradient check
< sumedhghaisas_> that's not good
< sumedhghaisas_> okay, we should definitely test the Reconstruction loss though
< sumedhghaisas_> just use a tested network from ann_layer_test and replace the loss with the reconstruction loss
< Atharva> Okay, with or without a repar layer? I don't think it should matter
< Atharva> Yeah
< sumedhghaisas_> you are right
< sumedhghaisas_> it won't matter
< sumedhghaisas_> actually I would prefer if you test it with just a linear layer
< sumedhghaisas_> that's much cleaner
< sumedhghaisas_> what do you think?
< Atharva> Yeah, it's better. Any other suggestions for testing?
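For reference, the core of such a gradient check is a central-difference comparison of the analytic gradient against a numerical estimate. This generic sketch conveys the idea; it is not mlpack's actual test helper:

```cpp
#include <algorithm>
#include <armadillo>
#include <functional>

// Compare an analytic gradient `g` against central differences of the
// loss `f` at `params`; returns true if they agree within `tol`.
bool CheckGradient(const std::function<double(const arma::mat&)>& f,
                   const std::function<arma::mat(const arma::mat&)>& g,
                   arma::mat params,
                   const double eps = 1e-6,
                   const double tol = 1e-4)
{
  const arma::mat analytic = g(params);
  arma::mat numeric(arma::size(params));
  for (arma::uword i = 0; i < params.n_elem; ++i)
  {
    // Perturb one parameter at a time in both directions.
    const double orig = params(i);
    params(i) = orig + eps;
    const double fPlus = f(params);
    params(i) = orig - eps;
    const double fMinus = f(params);
    params(i) = orig;
    numeric(i) = (fPlus - fMinus) / (2.0 * eps);
  }
  // Relative Frobenius-norm error between the two gradients.
  return arma::norm(analytic - numeric, "fro") /
      std::max(1.0, arma::norm(numeric, "fro")) < tol;
}
```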
< sumedhghaisas_> umm. Haven't gone through the whole PR yet :)
< sumedhghaisas_> I will try to go through it today and will comment on it.
< Atharva> Okay, whenever you are free :)
< Atharva> Btw, our next task is to support generation through the Predict function, right?
< sumedhghaisas_> umm... The primary task is to merge the Repar layer :)
< Atharva> Oh, sorry I haven't rebased it yet. I will do it first thing tomorrow.
< sumedhghaisas_> Sure thing :)
< sumedhghaisas_> Let's get all this code ready, then we can move on to MNIST
< sumedhghaisas_> Generation we can support as soon as we define the distribution over it
< sumedhghaisas_> so that's easy
< Atharva> I pushed a commit templatizing the distribution, but I just realized that in the forward function, we cannot construct every dist by passing the upper half as std and the lower half as mean
< Atharva> for example, bernoulli will have a different constructor
< Atharva> I will look into it
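One possible way around the constructor mismatch, sketched purely as an illustration: let each distribution interpret the raw network output in its own constructor, so the loss never performs the mean/std split itself. The class below is hypothetical, not mlpack's API:

```cpp
#include <armadillo>

// Sketch only: the distribution owns the interpretation of the raw
// network output, so ReconstructionLoss<DistType> can construct any
// DistType from the same single matrix.
class BernoulliDistribution
{
 public:
  // `param` holds per-element probabilities (e.g. sigmoid outputs),
  // so no mean/std split is needed here.
  explicit BernoulliDistribution(const arma::mat& param) : prob(param) { }

  // Bernoulli log-likelihood of a binary observation `x`.
  double LogProbability(const arma::mat& x) const
  {
    return arma::accu(x % arma::log(prob) +
                      (1.0 - x) % arma::log(1.0 - prob));
  }

 private:
  arma::mat prob;
};
```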
ImQ009 has quit [Quit: Leaving]