ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
< sreenik[m]> rcurtin: I agree with you. Mangled names are kind of unreliable. I just checked the name() function and it clearly states that it is not portable between compilers. Right now I'll proceed with the rest of the work and come back to this later.
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Read error: No route to host]
xiaohong has joined #mlpack
xiaohong has quit []
< jenkins-mlpack2> Project docker mlpack nightly build build #406: STILL UNSTABLE in 3 hr 31 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/406/
vivekp has joined #mlpack
xiaohong has joined #mlpack
ImQ009 has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
< xiaohong> Hi, does anybody know how to get one row of an arma::mat? When I use the Deriv function, I get this error.
< xiaohong> note: candidate function [with InputVecType = arma::Op<arma::Mat<double>, arma::op_vectorise_all>,
< xiaohong> OutputVecType = arma::subview_row<double>] not viable: expects an l-value for 2nd argument
< sreenik[m]> xiaohong: What I know is that armadillo stores matrices in column-major form, so you can obtain the columns with colptr(). For rows, you will either need to transpose and extract columns (brute force) or extract the rows on your own (through colptr() or memptr()).
< xiaohong> sreenik: So row(0) is a subview of the matrix?
< xiaohong> Do I need to extract it on my own?
< sreenik[m]> Yeah, you'll get a subview but not an iterable pointer
< xiaohong> sreenik: Thank you~ I have solved it now.
< sreenik[m]> :)
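A minimal sketch of the row-extraction options discussed above, using plain Armadillo; the variable names are illustrative only:

#include <armadillo>
#include <iostream>

int main()
{
  arma::mat m(3, 4, arma::fill::randu);

  // m.row(0) returns a subview, which works fine inside Armadillo expressions:
  double rowSum = arma::accu(m.row(0));

  // But when a function expects an l-value vector (as in the compiler note
  // above), copy the row into a real object first:
  arma::rowvec r = m.row(0);

  // Columns are contiguous because Armadillo stores matrices column-major,
  // so a raw pointer to a column is available directly:
  const double* col0 = m.colptr(0);

  std::cout << "sum of row 0: " << rowSum << ", first element of col 0: "
            << col0[0] << std::endl;
  r.print("row 0:");
  return 0;
}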
< xiaohong> sreenik: Hi, I have another question, and I hope to discuss it here to get it solved. Do you have time to discuss it?
< sreenik[m]> Yes I will be here for a while. I can try
< xiaohong> Thank you. I have spent several days trying to figure it out, but I seem to be stuck.
< xiaohong> I have a model whose output has two dimensions: one for a normal distribution's mean, the other for its variance.
< xiaohong> I use this distribution to calculate the log_prob of the observation, then calculate the loss from that in some way.
< xiaohong> If we need to take the derivative with respect to the model's output, do I need to backpropagate through the distribution?
< xiaohong> Am I clear?
< sreenik[m]> Let me first confirm that I understand your point completely. So are you creating something similar to an activation function?
< xiaohong> Yeah, maybe some part of them is similar.
< sreenik[m]> So are you normalizing (or doing something similar to) the values of the output of an ANN model?
< sreenik[m]> If yes then are the parameters of the distribution learnable?
< xiaohong> Actually, I am predicting the normal distribution's parameters.
< sreenik[m]> So is it that you are feeding some values into the model, and the model predicts the mean and variance of a normal distribution?
< xiaohong> I want to optimize the actorNetwork.
< xiaohong> In PyTorch, the gradients are backpropagated automatically.
< xiaohong> But I am having some difficulty getting a clear picture of how the backward pass works here.
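As a side note on what backpropagating "through the distribution" amounts to here: a small sketch of the analytic gradients of a Gaussian negative log-likelihood with respect to the predicted mean and variance, assuming a univariate normal; the variable names are illustrative, not mlpack API:

#include <cmath>
#include <cstdio>

int main()
{
  const double mu = 0.3, sigmaSq = 0.5;  // the model's two outputs
  const double x = 1.0;                  // the observation
  const double pi = 3.14159265358979323846;

  // loss = -log p(x | mu, sigmaSq) for a univariate Gaussian.
  const double loss = 0.5 * std::log(2.0 * pi * sigmaSq)
      + (x - mu) * (x - mu) / (2.0 * sigmaSq);

  // Gradients of the loss with respect to the two model outputs; these are
  // the values that would be fed into the network's backward pass.
  const double dMu = (mu - x) / sigmaSq;
  const double dSigmaSq = 0.5 / sigmaSq
      - (x - mu) * (x - mu) / (2.0 * sigmaSq * sigmaSq);

  std::printf("loss = %f, dL/dmu = %f, dL/dsigma^2 = %f\n", loss, dMu, dSigmaSq);
  return 0;
}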
< sreenik[m]> Yeah mlpack models also backpropagate (automatically)
< sreenik[m]> When you run Train() it obtains the gradients and backprops.
< sreenik[m]> But I see that you are not calling Train() but doing it manually. In that case you might have to call the Backward() function of the relevant layers, I suppose.
< xiaohong> Yes, I am not creating a standard model, so it is a little different.
< sreenik[m]> You can take a look at the FFN class, as it calls the Backward function for the layers, and maybe implement it manually as you require in your case.
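For reference, a minimal sketch of the standard path described above, where Train() handles the forward and backward passes internally (mlpack 3.x-style API; exact headers, layer names, and default optimizer may differ between versions, and this is not the manual-backward setup xiaohong describes):

#include <mlpack/core.hpp>
#include <mlpack/methods/ann/ffn.hpp>
#include <mlpack/methods/ann/layer/layer.hpp>
#include <mlpack/methods/ann/loss_functions/mean_squared_error.hpp>

using namespace mlpack::ann;

int main()
{
  arma::mat inputs(4, 100, arma::fill::randu);   // 4 features, 100 samples
  arma::mat targets(2, 100, arma::fill::randu);  // 2 outputs, e.g. mean and variance

  FFN<MeanSquaredError<>> model;
  model.Add<Linear<>>(4, 16);
  model.Add<ReLULayer<>>();
  model.Add<Linear<>>(16, 2);

  // Train() runs the forward pass, computes the loss, and backpropagates
  // internally; for a manual backward pass, the FFN implementation shows how
  // the per-layer Backward() calls are chained.
  model.Train(inputs, targets);

  arma::mat predictions;
  model.Predict(inputs, predictions);
  return 0;
}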
favre49 has joined #mlpack
< favre49> zoq: I haven't been communicating much recently; things have been hectic since college started. I'll write a blog post tomorrow, but I wanted to keep you updated on my status.
< favre49> I'm midway through writing the NSGA-III optimizer. I want to finish it within the next three or four days at most, so that I'll be able to test it.
< sreenik[m]> xiaohong: Perhaps we should consult Ryan or Marcus as I am not too confident on this
< favre49> I also have some comments to address on the NEAT PR; I'll do that today or tomorrow at the latest.
< xiaohong> Yes, I am also not sure my understanding is correct, but thank you for your time. My thinking is clearer now.
< favre49> I was also thinking, how important is evolutionary computation to ensmallen? If we build the multi-objective codebase more over time, we will most probably see reuse of the same genetic operators
< sreenik[m]> You're welcome :)
< xiaohong> I will consult Marcus or Ryan to figure it out. I have spent a lot of time on it but made only small progress.
< favre49> It sounds like it would make sense to templatize and make policies now
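A hypothetical illustration of the policy-based design favre49 suggests, where genetic operators become template parameters shared by several multi-objective optimizers; none of these names are ensmallen API:

#include <utility>

// Hypothetical sketch only; not ensmallen API.
template<typename CrossoverPolicy, typename MutationPolicy>
class MultiObjectiveOptimizer
{
 public:
  MultiObjectiveOptimizer(CrossoverPolicy crossover = CrossoverPolicy(),
                          MutationPolicy mutation = MutationPolicy()) :
      crossover(std::move(crossover)),
      mutation(std::move(mutation))
  { }

  // NSGA-III and future multi-objective optimizers could reuse the same
  // operators here instead of reimplementing them per optimizer.

 private:
  CrossoverPolicy crossover;
  MutationPolicy mutation;
};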
favre49 has quit [Remote host closed the connection]
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
KimSangYeon-DGU has joined #mlpack
xiaohong has joined #mlpack
xiaohong has quit [Read error: Connection reset by peer]
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 276 seconds]
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
KimSangYeon-DGU has quit [Ping timeout: 260 seconds]
ImQ009 has quit [Quit: Leaving]
vivekp has quit [Ping timeout: 248 seconds]
vivekp has joined #mlpack