verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
cjlcarvalho has joined #mlpack
caiojcarvalho has quit [Ping timeout: 256 seconds]
cjlcarvalho has quit [Ping timeout: 244 seconds]
cjlcarvalho has joined #mlpack
ShikharJ has quit [Quit: EliteBNC 1.6.5 - http://elitebnc.org]
ShikharJ has joined #mlpack
caiojcarvalho has joined #mlpack
cjlcarvalho has quit [Ping timeout: 256 seconds]
caiojcarvalho has quit [Ping timeout: 240 seconds]
< jenkins-mlpack2> Project docker mlpack nightly build build #3: STILL FAILING in 3 hr 26 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/3/
travis-ci has joined #mlpack
< travis-ci> manish7294/mlpack#60 (evalBounds - cb36f2d : Manish): The build was broken.
travis-ci has left #mlpack []
caiojcarvalho has joined #mlpack
ImQ009 has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> manish7294/mlpack#61 (impBounds - 1b7da33 : Manish): The build is still failing.
travis-ci has left #mlpack []
ShikharJ has quit [Quit: ZNC 1.6.5-elitebnc:6 - http://elitebnc.org]
ShikharJ has joined #mlpack
< ShikharJ> zoq: Are you there?
< zoq> ShikharJ: About to step out, but I'm here now.
< ShikharJ> zoq: I implemented the EvaluateWithGradient function for the FFN class, but I don't see an improvement in the runtime, since it barely saves more than a single if-check. Should I close the PR?
< ShikharJ> zoq: FFN's Evaluate and Gradient functions are pretty disjoint, so I don't think implementing EvaluateWithGradient would make much sense here.
< zoq> ShikharJ: It should at least save one Evaluate call.
< zoq> ShikharJ: What do you mean by "it barely saves more than a single if-check"?
< ShikharJ> zoq: Yes, but whether we call Evaluate() and Gradient() independently or just call EvaluateWithGradient(), there's practically no difference in the number and type of functions being called.
< ShikharJ> zoq: This was different for the GAN class, where both Evaluate() and Gradient() had a Forward() call, which we could reduce to one in EvaluateWithGradient().
< zoq> If EvaluateWithGradient() isn't implemented, the optimizer should call:
< zoq> Evaluate() for the loss calculation and Gradient() for the gradient calculation; Gradient() also calls Evaluate() to perform the forward pass for the backward and gradient steps. EvaluateWithGradient() will reuse the loss from the gradient step, so this should save us at least one call of Evaluate() in every epoch.
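A minimal, self-contained sketch of the call pattern zoq describes above; ToyFunction, Forward(), and Backward() are illustrative placeholder names, not the real mlpack or ensmallen API.

// Minimal sketch, not actual mlpack/ensmallen code: it only illustrates why
// EvaluateWithGradient() can save one forward pass per iteration.
#include <armadillo>

class ToyFunction
{
 public:
  // Forward pass: compute the loss for the given parameters
  // (here just the squared norm, standing in for a network forward pass).
  double Forward(const arma::mat& parameters)
  {
    return arma::accu(arma::square(parameters));
  }

  // Backward pass: fill in the gradient (2 * parameters for this toy loss).
  void Backward(const arma::mat& parameters, arma::mat& gradient)
  {
    gradient = 2 * parameters;
  }

  // Without EvaluateWithGradient() the optimizer calls these two separately,
  // so the forward pass runs twice per iteration:
  double Evaluate(const arma::mat& parameters)
  {
    return Forward(parameters);
  }

  void Gradient(const arma::mat& parameters, arma::mat& gradient)
  {
    Forward(parameters);            // forward pass needed before the backward step
    Backward(parameters, gradient);
  }

  // With EvaluateWithGradient() the forward pass runs once and its loss is
  // reused, saving one Evaluate()-style call per iteration.
  double EvaluateWithGradient(const arma::mat& parameters, arma::mat& gradient)
  {
    const double loss = Forward(parameters);
    Backward(parameters, gradient);
    return loss;
  }
};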
< zoq> let me check the PR
< ShikharJ> zoq: Ah, I see it now. I had thought calling Evaluate() would be unnecessary here, but yeah, that would be an improvement. Please take a look whenever you're free.
< zoq> ShikharJ: Okay, see my comment on the PR.
caiojcarvalho has quit [Ping timeout: 244 seconds]
caiojcarvalho has joined #mlpack
caiojcarvalho has quit [Read error: Connection reset by peer]
caiojcarvalho has joined #mlpack
< ShikharJ> zoq: I think we can also make use of FFN::EvaluateWithGradient() internally inside GAN::EvaluateWithGradient(). I'll see what can be done.
caiojcarvalho has quit [Ping timeout: 256 seconds]
caiojcarvalho has joined #mlpack
< ShikharJ> zoq: Seems like it is possible! This should give a good speedup as well!
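A rough, hypothetical sketch of the delegation idea discussed above, not the actual mlpack GAN implementation: the outer EvaluateWithGradient() simply forwards to the inner network's EvaluateWithGradient() instead of calling Evaluate() and Gradient() separately. ToyGAN and the Network template parameter are placeholder names.

// Hypothetical sketch only; reuses the ToyFunction from the earlier sketch.
#include <armadillo>

template<typename Network>
class ToyGAN
{
 public:
  explicit ToyGAN(Network& network) : network(network) { }

  double EvaluateWithGradient(const arma::mat& parameters, arma::mat& gradient)
  {
    // A single call produces both the loss and the gradient of the inner
    // network, so the extra forward pass of a separate Evaluate() is avoided.
    return network.EvaluateWithGradient(parameters, gradient);
  }

 private:
  Network& network;
};

// Usage, assuming the ToyFunction class from the earlier sketch:
//   ToyFunction f;
//   ToyGAN<ToyFunction> gan(f);
//   arma::mat params(3, 1, arma::fill::randu), grad;
//   double loss = gan.EvaluateWithGradient(params, grad);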
ImQ009 has quit [Quit: Leaving]
caiojcarvalho has quit [Ping timeout: 265 seconds]
< zoq> ShikharJ: Great, I'll take a closer look at the PR tomorrow.