verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
cjlcarvalho has joined #mlpack
caiojcarvalho has quit [Ping timeout: 256 seconds]
< zoq>
ShikharJ: About to step out, but I'm here now.
< ShikharJ>
zoq: I implemented the EvaluateWithGradient function for the FFN class, but I don't see an improvement in the runtime, since it only saves a single if-check. Should I close the PR?
< ShikharJ>
zoq: FFN's Evaluate and Gradient functions are pretty disjoint, so I don't think implementing EvaluateWithGradient would make much sense here.
< zoq>
ShikharJ: It should at least save one Evaluate call.
< zoq>
ShikharJ: What do you mean by "it only saves a single if-check"?
< ShikharJ>
zoq: Yes, but whether we call Evaluate and Gradient independently or just call EvaluateWithGradient, there's practically no difference in the number and type of functions being called.
< ShikharJ>
zoq: This was different in the case of the GAN class, where both Evaluate and Gradient had a Forward() call, which we could eliminate in EvaluateWithGradient().
< zoq>
If EvaluateWithGradient isn't implemented, the optimizer will call:
< zoq>
Evaluate() for the loss calculation and Gradient() for the gradient calculation; Gradient() also calls Evaluate() to perform the forward pass for the backward and gradient step. EvaluateWithGradient() will reuse the loss from the gradient step, so this should save us at least one call to Evaluate() in every epoch.
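(A minimal sketch of the two call patterns described above, written against the separable (parameters, begin, batchSize) style of signature that mlpack's optimizers use; the exact FFN parameter lists are an assumption here for illustration, not quoted from the library.)

    #include <mlpack/core.hpp>

    // Minimal sketch of the two call patterns described above. The
    // (parameters, begin, batchSize) signatures follow mlpack's separable
    // function API, but the exact FFN parameter lists are an assumption
    // here, not quoted from the library.
    template<typename NetworkType>
    void IllustrateCallPatterns(NetworkType& network,
                                arma::mat& parameters,
                                const size_t begin,
                                const size_t batchSize)
    {
      arma::mat gradient;

      // Without EvaluateWithGradient(): two calls per batch. Gradient()
      // runs its own forward pass before the backward step, so the forward
      // computation effectively happens twice.
      const double lossSeparate = network.Evaluate(parameters, begin, batchSize);
      network.Gradient(parameters, begin, gradient, batchSize);

      // With EvaluateWithGradient(): one call returns the loss from the
      // same forward pass that feeds the backward and gradient step, saving
      // an Evaluate() per batch.
      const double lossCombined =
          network.EvaluateWithGradient(parameters, begin, gradient, batchSize);

      (void) lossSeparate;
      (void) lossCombined;
    }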
< zoq>
let me check the PR
< ShikharJ>
zoq: Ah, I see it now. I had thought calling Evaluate() would be unnecessary here, but yeah, that would be an improvement. Please take a look whenever you're free.
< zoq>
ShikharJ: Okay, see my comment on the PR.
caiojcarvalho has quit [Ping timeout: 244 seconds]
caiojcarvalho has joined #mlpack
caiojcarvalho has quit [Read error: Connection reset by peer]
caiojcarvalho has joined #mlpack
< ShikharJ>
zoq: I think we can also make use of FFN::EvaluateWithGradient() internally inside GAN::EvaluateWithGradient(). I'll see what can be done.
caiojcarvalho has quit [Ping timeout: 256 seconds]
caiojcarvalho has joined #mlpack
< ShikharJ>
zoq: Seems like it is possible! This should give a good speedup as well!
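(A rough sketch, under the same assumed signatures as above, of how a GAN-style EvaluateWithGradient() could delegate to the underlying network's EvaluateWithGradient(); ToyGAN and its members are hypothetical names for illustration, not the actual GAN implementation.)

    #include <mlpack/core.hpp>

    // Rough sketch of the delegation idea only: a GAN-like wrapper whose
    // EvaluateWithGradient() hands the discriminator's forward/backward work
    // to the network's own EvaluateWithGradient() instead of calling
    // Evaluate() and Gradient() separately. The real GAN class carries far
    // more state (noise function, generator gradients, batching bookkeeping),
    // and the member and parameter names below are hypothetical.
    template<typename DiscriminatorType>
    class ToyGAN
    {
     public:
      explicit ToyGAN(DiscriminatorType& discriminator) :
          discriminator(discriminator) { }

      double EvaluateWithGradient(const arma::mat& parameters,
                                  const size_t begin,
                                  arma::mat& gradient,
                                  const size_t batchSize)
      {
        // One forward pass yields both the loss and the gradient, instead of
        // Evaluate() + Gradient() each running a forward pass.
        return discriminator.EvaluateWithGradient(parameters, begin, gradient,
            batchSize);
      }

     private:
      DiscriminatorType& discriminator;
    };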
ImQ009 has quit [Quit: Leaving]
caiojcarvalho has quit [Ping timeout: 265 seconds]
< zoq>
ShikharJ: Great, I'll take a closer look at the PR tomorrow.