verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
ImQ009 has joined #mlpack
sds has joined #mlpack
sds is now known as Guest65307
Guest65307 has quit [Client Quit]
vivekp has joined #mlpack
wenhao has quit [Quit: Page closed]
< ShikharJ> zoq: I received some good results on the full dataset as well :) Posting in the PR now.
< ShikharJ> zoq: It took about 3 days for the GAN to converge, but the results we got were also better than what is posted as the output for the O'Reilly example here (https://github.com/mlpack/mlpack/pull/1066#issuecomment-322415665).
< ShikharJ> zoq: I think we can cut down a lot on the elapsed time once support for batches is implemented. We are all set to merge the GAN PR now!
< ShikharJ> zoq: Take a look at our outputs here (https://github.com/mlpack/mlpack/pull/1204#issuecomment-395187579).
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
< zoq> ShikharJ: The results are really promising, great.
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#173 (GAN - 2635d6b : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
< ShikharJ> zoq: Thanks for your help. I was able to debug our DCGAN MNIST test; I'll run the 10,000 image and 70,000 image tests on that as well.
< jenkins-mlpack> Project docker mlpack nightly build build #345: STILL UNSTABLE in 3 hr 4 min: http://masterblaster.mlpack.org/job/docker%20mlpack%20nightly%20build/345/
< ShikharJ> zoq: Do you think we can merge the GAN code now?
< zoq> ShikharJ: Yes, let's merge the code. One last thing: can you remove the output from the test (https://github.com/mlpack/mlpack/pull/1204/files#diff-7ad5b550c447d8de902a03acc4d31736R90)?
< ShikharJ> zoq: Do you mean the `std::cout << "Loading Parameters" << std::endl;` statement?
< zoq> yeah
< zoq> If you'd like to keep the output, we should use Log::Debug.
< zoq> That way a user can disable the output.
< ShikharJ> zoq: Should I make use of Log::Info instead?
< zoq> yeah, that's fine as well.
< ShikharJ> zoq: Done!
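(A minimal sketch of the logging change discussed above, assuming mlpack's standard Log facility; the message text mirrors the one quoted from the test:)

    #include <mlpack/core.hpp>

    // Route the message through mlpack's logger instead of std::cout.
    // Log::Info output is only printed when verbose output is enabled,
    // so a user can silence it; Log::Debug output only exists in debug
    // builds at all.
    mlpack::Log::Info << "Loading Parameters" << std::endl;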
< zoq> ShikharJ: Okay, once the Travis build finishes, I'll hit the merge button :)
< ShikharJ> zoq: Great. Could you also point me to a source where I can find the CelebA dataset in csv format? The original author's link seems to have gone dead, and I'm only able to find the jpg or png files.
< zoq> ShikharJ: Not sure the dataset exists as csv, but we could use the hdf5 format: https://github.com/tdeboissiere/DeepLearningImplementations/tree/master/GAN/src/data
< zoq> ShikharJ: Another idea is to create one as csv; I can do this if you like.
< ShikharJ> zoq: Does armadillo::load support hdf5 format?
< ShikharJ> zoq: Ah, I see now, it does. Thanks for that.
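(A rough sketch of loading an HDF5 file with Armadillo, as discussed above; it assumes Armadillo was built with HDF5 support (ARMA_USE_HDF5), and the filename is only a placeholder:)

    #include <mlpack/core.hpp>

    // Load a dataset stored in HDF5 format into an Armadillo matrix.
    arma::mat celebA;
    const bool ok = celebA.load("celebA.h5", arma::hdf5_binary);
    if (!ok)
      mlpack::Log::Fatal << "Could not load the CelebA HDF5 file!" << std::endl;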
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#174 (GAN - b294b31 : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
vivekp has quit [Ping timeout: 248 seconds]
vivekp has joined #mlpack
< ShikharJ> zoq: I've tmux'd the two DCGAN MNIST builds as well. I'll also get to debugging the CelebA test.
manish7294 has joined #mlpack
< zoq> ShikharJ: Good, I guess we could use a subset for the DCGAN as well to see some initial results.
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#175 (DCGAN - 5a5451c : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
< manish7294> zoq: I was using BigBatchSGD with the adaptive step size, and it turns out that effectiveBatchSize exceeds the permissible value, leading to errors, whereas backtracking line search works fine. Here is the backtrace: https://pastebin.com/q0LWjM4V
< manish7294> Initially I kept a batch size of 50.
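(A rough sketch of how the two BigBatchSGD variants compared above might be set up; the include path, the typedef names BBS_BB (adaptive step size) and BBS_Armijo (backtracking line search), and the constructor arguments are assumptions based on mlpack's optimizer interface, and every value other than the batch size of 50 is a guess:)

    #include <mlpack/core/optimizers/bigbatch_sgd/bigbatch_sgd.hpp>

    using namespace mlpack::optimization;

    // Adaptive step size variant: the one reported to exceed the
    // permissible effectiveBatchSize.
    BBS_BB adaptive(50 /* batchSize */, 0.01 /* stepSize */,
                    0.1 /* batchDelta */, 100000 /* maxIterations */,
                    1e-5 /* tolerance */);

    // Backtracking line search variant, reported to work fine.
    BBS_Armijo backtracking(50, 0.01, 0.1, 100000, 1e-5);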
manish7294 has quit [Ping timeout: 260 seconds]
< Atharva> zoq: In ffn_impl.hpp line 188: res += Evaluate(parameters, i, true); is there some reason for passing the third parameter as a boolean? There is no definition of Evaluate with the third parameter as a boolean, and I think it's just getting converted to size_t = 1. Am I missing something?
< ShikharJ> zoq: One build runs the 10,000 image subset; it should be done about 8 hours from now.
< zoq> manish7294: Ahh, can I use the code from the PR to reproduce the issue?
< zoq> Atharva: There is an Evaluate function (https://github.com/mlpack/mlpack/blob/master/src/mlpack/methods/ann/ffn.hpp#L159) which takes a boolean as the last parameter; it's used to distinguish between training and testing.
< zoq> ShikharJ: Nice, let's see if we get some reasonable results.
< zoq> manish7294: Another optimizer you could test is SGDR with CyclicalDecay.
< Atharva> zoq: That's true, but that function has four parameters; the one I pointed to has three.
< Atharva> The Evaluate function with three parameters has a size_t as the last parameter.
< zoq> ohh, you are right, nice catch
< zoq> this should be Evaluate(parameters, i, 1, true);
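(For reference, the fix stated above as it would apply to ffn_impl.hpp; the parameter roles, a batch size and the deterministic flag, are inferred from the discussion:)

    // Before: the boolean true silently converts to the size_t third
    // parameter of the three-argument Evaluate overload.
    res += Evaluate(parameters, i, true);

    // After: call the four-argument overload explicitly, with a batch
    // size of 1 and deterministic = true.
    res += Evaluate(parameters, i, 1, true);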
< zoq> Would you like to open a PR, or should I fix this?
< Atharva> It's just one line; if you can push it directly, that would be better.
< Atharva> Is it okay?
sulan_ has joined #mlpack
manish7294 has joined #mlpack
< manish7294> zoq: I haven't pushed the optimizer change to the PR yet because of the issue, but you can reproduce it by just passing BigBatchSGD through lmnn_main.cpp; you will only have to make a very short change in the last part of lmnn_main.cpp.
< zoq> Atharva: Yeah, no problem.
< manish7294> zoq: I tried SGDR. The results were similar to SGD, and it comes up with the same coordinates matrix divergence problem we are facing with SGD. So, I guess BigBatchSGD and L-BFGS are the best for us.
< zoq> manish7294: Okay, I'll take a look into the issue.
< manish7294> zoq: Great, thanks for the help.
manish7294 has quit [Quit: Page closed]
< ShikharJ> zoq: You there?
< zoq> ShikharJ: yes
< ShikharJ> I think we can close off the old PR and issue now, like (https://github.com/mlpack/mlpack/pull/1066) and (https://github.com/mlpack/mlpack/issues/1206).
< zoq> agreed, let me close the issue/pr
< ShikharJ> zoq: For the next week, I would like to focus on getting support for batches and the optimizer separation done (and debug the DCGAN tests on the side). Would that be fine?
< zoq> ShikharJ: Absolutely, let us put some time into tuning the existing code.
witness_ has joined #mlpack
vivekp has quit [Ping timeout: 260 seconds]
sulan_ has quit [Quit: Leaving]
ImQ009 has quit [Quit: Leaving]