ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
jeffin143 has quit [Ping timeout: 246 seconds]
favre49 has joined #mlpack
< favre49> zoq: I am implementing multiple pole cart balancing right now
< favre49> In all our RL environments, we return the State as an arma::colvec. I was wondering if we could instead return an arma::mat, where the 0th column is the state of the cart and the ith column is the state of the ith pole
< favre49> If we want to return only a colvec, we could of course just keep appending the states of the next pole at the end of the colvec. I just thought an arma::mat would be somewhat more intuitive. Let me know what you think.
favre49 has quit [Quit: Page closed]
xiaohong has joined #mlpack
mlpackuser100 has joined #mlpack
< mlpackuser100> Hi. Does anybody have any idea why NMFPolicy with CFType might be giving systematically low values?
< mlpackuser100> I figured it out, potentially, if anybody's interested.
mlpackuser100 has quit [Quit: Page closed]
xiaohong has quit [Ping timeout: 256 seconds]
< rcurtin> mlpackuser100: what do you mean by low values? like in the recovered matrices?
sreenik has joined #mlpack
< rcurtin> sreenik: I think maybe the numeric_limits<> solution would be fine here, so long as you comment sufficiently to indicate that max() or nan() or whatever you use indicates that it's uninitialized
< rcurtin> if you need multiple values per key, you could use map<string, vector<double>>, then the length of the vector could indicate if it's initialized
< rcurtin> but if you have only one value per key, that idea would probably be slower than necessary
< rcurtin> hope that helps, I don't fully know the context though :)
cult- has left #mlpack []
jeffin143 has joined #mlpack
mlpackuser100 has joined #mlpack
< sreenik> rcurtin: Thanks. The context was regarding the json parser, where, for each layer type there are some default parameters in a map and there are a few user-specified params in another map that would update the default ones.
< sreenik> Previously there was no provision for checking if the user's input is valid, but now I am implementing that. Anyway, the approach zoq and you suggested would be good enough for this
< mlpackuser100> rcurtin: The low values are in the predictions given for a CF application (with NoNormalization), both on actual and generated data. The normalization policy dramatically impacts the results too.
< mlpackuser100> Certain normalization policies, like OverallMeanNormalization, give results that don't seem systematically biased - but they are still highly inaccurate.
sreenik has quit [Ping timeout: 256 seconds]
mlpackuser100 has quit [Quit: Page closed]
mlpackuser100 has joined #mlpack
mlpackuser100 has quit [Client Quit]
mlpackuser100 has joined #mlpack
mlpackuser100 has quit [Ping timeout: 256 seconds]
sreenik has joined #mlpack
xiaohong has joined #mlpack
< sumedhghaisas_> KimSangYeon-DGU: Hi Kim. Sorry for the delay. There is a NeurIPS deadline tomorrow and I have a paper which is still unfinished. I will get to the plot as soon as I can. :)
xiaohong has quit [Ping timeout: 256 seconds]
sreenik has quit [Ping timeout: 256 seconds]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
< zoq> favre49: We can use arma::mat, don't see an issue here.
jeffin143 has quit [Ping timeout: 258 seconds]
jeffin143 has joined #mlpack
sreenik has joined #mlpack
< rcurtin> sumedhghaisas_: still 28.75 hours left ;)
akhandait has joined #mlpack
KimSangYeon-DGU_ has joined #mlpack
< KimSangYeon-DGU_> sumedhghaisas_: Hey Sumedh, no worries. Please take as much time as you need :)
< KimSangYeon-DGU_> Best wishes for your work :)
KimSangYeon-DGU_ has quit [Quit: Page closed]
< rcurtin> mlpackuser100: the rank of the decomposition can make a huge difference, as can the actual algorithm that's used for the decomposition--try some different ones (I have had good success with RegSVD in the past)
< rcurtin> sreenik: ah, got it, that sounds good to me :)
vivekp has joined #mlpack
jeffin143 has quit [Quit: AndroIRC - Android IRC Client ( http://www.androirc.com )]
Suryo has joined #mlpack
< Suryo> zoq: I've created a PR for the test functions. For now, I have included the Ackley function. It would be great if you could tell me if it looks okay; if it does, then I'll program the rest.
< zoq> Suryo: Will take a look later today.
< Suryo> Apologies for the delay in this: I had messed up the branches of my fork for the development of PSO - I had committed one change to master and things were out of sync for a while. I got that resolved and now things look good on my end.
< Suryo> zoq: thanks!!
Suryo has quit [Client Quit]
< zoq> No worries, glad you could figure it out.
Suryo has joined #mlpack
< Suryo> zoq: also, how are we going to test these functions? As a part of the test runs of other optimization methods?
< Suryo> Let me know whenever you get time.
< zoq> Ahh good point, yeah for now let's use SGD or Adam?
< zoq> We could adjust the initial solution if SGD isn't able to find a solution in a reasonable time.
< Suryo> Understood. I was wondering if CNE could work here.
< rcurtin> CNE can be super slow because it's zeroth-order
< zoq> That should work as well.
< rcurtin> actually I need to adjust the ensmallen tests because the CNE tests can take orders of magnitude longer than others
< zoq> good point
< rcurtin> but perhaps the function you are optimizing is more amenable to CNE, I don't know :)
< Suryo> rcurtin: that's the point I was trying to make
< Suryo> The ackley function is interesting
< Suryo> It's unlikely that a gradient-based method will be able to optimize it
< Suryo> Very likely to get stuck in a local minimum
< zoq> In any case let's run the optimizer on the test like 1000 times and see what the fail rate is
< rcurtin> yeah, for the Ackley function, I think you're right
< Suryo> So what we can do for the time being is treat these new functions as stubs and try them with SGD or Adam with good initial points, and as we build up CNE or PSO, test them with initial points that are far away from the actual global solution.
< Suryo> Does that sound reasonable?
< zoq> Sounds good.
< Suryo> Great!
Suryo has quit [Quit: Page closed]
akhandait has quit [Quit: Connection closed for inactivity]
< sreenik> How can I use constant initialization, since FFN<MeanSquaredError<>, ConstInitialization> won't work as this particular initialization requires an init value parameter?
< zoq> sreenik: Not sure I get the issue, ConstInitialization does have a default value.
< sreenik> I mean, say I want to assign a value of 30 to it
< zoq> FFN<NegativeLogLikelihood<>, ConstInitialization> model(NegativeLogLikelihood<>(), ConstInitialization(30.0));
< zoq> or
< zoq> ConstInitialization initRule(30.0);
< zoq> FFN<NegativeLogLikelihood<>, ConstInitialization> model(NegativeLogLikelihood<>, initRule);
< zoq> Does that help?
< sreenik> zoq: Got it. Thanks :)
mlpackuser100 has joined #mlpack
< mlpackuser100> rcurtin: Thanks! I've had good results with RegSVD too. The thing that's troubling me is that I haven't been able to get a single result with NMF that appeared to be even correlated with the data. I must be doing something wrong, right?
< mlpackuser100> Unrelatedly, how can one use mlpack to do knn (for CF) with Pearson correlation? Right now I'm using RegSVD with a high learning rate and a low regularization coefficient and then using PearsonSearch for the neighborhood search. Are there any other approaches I should consider?
mlpackuser100 has quit [Quit: Page closed]
Suryo has joined #mlpack
< Suryo> zoq: thanks for your comments on PR#117. I'll fix everything that you've pointed out and get on with the rest.
Suryo has quit [Client Quit]
sreenik has quit [Quit: Page closed]
jenkins-mlpack2 has quit [Read error: Connection reset by peer]
jenkins-mlpack2 has joined #mlpack