verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
kaushik_ has quit [Quit: Connection closed for inactivity]
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
govg has joined #mlpack
kaushik_ has joined #mlpack
mikeling has joined #mlpack
< rcurtin> kaushik_: check out commit 623813ab on my branch, that has some further improvements for tree serialization you can incorporate
< kaushik_> rcurtin: ok, will look into it. I am done with methods/ but need to fix a few things I missed.
< rcurtin> kaushik_: sounds good; when you push that, I can fix the other files that won't currently compile
govg has quit [Ping timeout: 246 seconds]
govg has joined #mlpack
govg has quit [Ping timeout: 240 seconds]
mikeling has quit [Quit: Connection closed for inactivity]
< rcurtin> zoq: I spent a while thinking about how we can avoid duplicating work in calls to Evaluate() and Gradient()
< rcurtin> I think the best solution I can come up with is to add a function 'double EvaluateWithGradient(const arma::mat& parameters, const size_t start, arma::mat& gradient, const size_t batchSize)'
< rcurtin> and then adapt the optimizers to use that
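(A minimal sketch of a FunctionType implementing the combined signature proposed above; the class name and the least-squares objective are purely illustrative, not actual mlpack code.)

    #include <armadillo>

    class ExampleFunction
    {
     public:
      ExampleFunction(const arma::mat& X, const arma::rowvec& y) : X(X), y(y) { }

      // Combined objective + gradient over the batch [start, start + batchSize).
      double EvaluateWithGradient(const arma::mat& parameters,
                                  const size_t start,
                                  arma::mat& gradient,
                                  const size_t batchSize)
      {
        const arma::mat batch = X.cols(start, start + batchSize - 1);
        const arma::rowvec targets = y.subvec(start, start + batchSize - 1);

        // The residuals are computed once and reused for both the objective
        // value and the gradient -- this is the duplicated work that separate
        // Evaluate()/Gradient() calls would otherwise repeat.
        const arma::rowvec residuals = parameters.t() * batch - targets;
        gradient = batch * residuals.t();
        return arma::dot(residuals, residuals);
      }

      size_t NumFunctions() const { return X.n_cols; }

     private:
      const arma::mat& X;
      const arma::rowvec& y;
    };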
< zoq> rcurtin: Sounds reasonable to me; not sure I would go with EvaluateWithGradient, though. We could just name it Evaluate and have it take the gradient as another parameter?
< rcurtin> yeah, I was wondering about the name
< rcurtin> if you think just Evaluate() is fine, I'll go with that
< rcurtin> I think that as a result of Shikhar's project we have a lot of different FunctionType implementations that each need somewhat different signatures
< rcurtin> I was also thinking it could be possible to make an "adapter" so that any FunctionType can be used with any optimizer
< rcurtin> i.e. if the FunctionType supplies only Evaluate() with a gradient calculation, it is possible to write an adapter that provides both an Evaluate() and a Gradient() call
< rcurtin> similarly if the function takes a sparse matrix for Gradient(), we can provide an adapter to convert back and forth (although we should issue a warning in a situation like that)
< rcurtin> and in this way each optimizer could be used with each function type, without requiring the user to write tons of different methods for the FunctionType to make it all work
< rcurtin> I think that idea might need some more thought, so I'll keep thinking about it... in some cases it would be such a slowdown to do automatic type conversion that it's not even worth doing
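(A rough sketch of the adapter idea being discussed; the SplitFunctionAdapter name is hypothetical and not an existing mlpack class. It wraps a FunctionType that only offers the combined call and exposes the separate Evaluate()/Gradient() interface that existing optimizers expect.)

    #include <armadillo>

    template<typename FunctionType>
    class SplitFunctionAdapter
    {
     public:
      SplitFunctionAdapter(FunctionType& function) : function(function) { }

      double Evaluate(const arma::mat& parameters,
                      const size_t start,
                      const size_t batchSize)
      {
        // The gradient is computed and thrown away here -- the potential
        // slowdown rcurtin mentions when conversion/adaptation isn't free.
        arma::mat unusedGradient;
        return function.EvaluateWithGradient(parameters, start, unusedGradient,
                                             batchSize);
      }

      void Gradient(const arma::mat& parameters,
                    const size_t start,
                    arma::mat& gradient,
                    const size_t batchSize)
      {
        function.EvaluateWithGradient(parameters, start, gradient, batchSize);
      }

      size_t NumFunctions() const { return function.NumFunctions(); }

     private:
      FunctionType& function;
    };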
< zoq> Sounds like a good idea, would simplify the overall optimizer classes.
< rcurtin> yeah, they are a bit overcomplex right now I think
< rcurtin> but it seems like the complexity is absolutely necessary for speed
< zoq> So what is the plan? I think it's reasonable to finish #1073 first before implementing the EvaluateWithGradient idea.
< rcurtin> yeah, I agree, I think what I will do is focus my efforts on fixing the rest of the batch support for #1073, then I can help Rajiv through the process of incorporating those changes
< zoq> I was planning the same; maybe we can split the work.
< rcurtin> yeah, sounds good
< rcurtin> do you want to handle ann/ and I'll take the rest? I think there are only a few function types left
< zoq> sounds good
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
govg has joined #mlpack
keonkim has quit [Quit: PanicBNC - http://PanicBNC.net]
keonkim has joined #mlpack
kaushik_ has quit [Quit: Connection closed for inactivity]