ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
< himanshu_pathak[>
<zoq "himanshu_pathak: You can ignore "> Oh ok. when you get time can you do a review on my It will be a quick one . I can rebase my DBN pr after merging of this :)
ImQ009 has quit [Quit: Leaving]
< zoq>
himanshu_pathak[: The RBM one?
< himanshu_pathak[>
> himanshu_pathak: The RBM one?
< say4n>
For some reason the release script for ensmallen doesn't work on macOS. I think it is because `git diff | wc -l` on BSD outputs a tab followed by 0, which, when compared to the string "0", evaluates to false. A quick fix would be to strip any whitespace characters from the string before comparing.
< say4n>
Should I add it to the current PR updating the release script or make a separate one?
< rcurtin>
say4n: sure, feel free :)
< rcurtin>
I certainly haven't tested it on OS X so any fixes are totally appreciated
< say4n>
Alrighty! :)
< say4n>
Also, sed is pretty mysterious on BSD. I remember the last time I was playing with the release script, BSD sed was missing some of the flags the script was using.
ak has joined #mlpack
< ak>
hello, I am having some difficulty training a logistic regression model one point at a time; it seems like the model never gets better / the parameters reset after each point
< ak>
Say my model has 5 variables; I am calling model.Train(input, labels), where input is a column vector of doubles and labels is a Row<size_t> of size 1.
< zoq>
Hm, depending on the optimizer you might want to adjust the step size and batch size.
< zoq>
But the default settings should work as well.
< zoq>
Actually wondering if we should just add that line, will test it out tomorrow.
< rcurtin>
ak: it sounds like you are only giving one point for `input` and one label; maybe you should pass the entire dataset in the call to `Train()`?
< rcurtin>
it seems like maybe the documentation is incorrect! the code that zoq linked to resets the parameters in each call to Train()
< rcurtin>
give me just a second---I will open a PR to fix the behavior
< ak>
is there another ML algorithm that I could implement that would work with that flow? I am kind of new to this work and want something that will get better whenever I have new data
< rcurtin>
so, after I open this PR, LogisticRegression will behave like you expect (and like the documentation says it should)
< rcurtin>
not all machine learning models can be trained incrementally, but logistic regression and neural networks can, at least
< rcurtin>
I believe SoftmaxRegression (which is just an extension of logistic regression for more than 2 classes) will work that way too
< rcurtin>
(and, glancing at the code there, I believe that incremental training will work correctly)
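(For readers following along: a minimal sketch of the one-point-at-a-time flow being discussed, assuming mlpack's LogisticRegression and Armadillo types; exact headers and namespaces may differ between mlpack versions.)

```cpp
#include <mlpack/methods/logistic_regression/logistic_regression.hpp>

using namespace mlpack::regression;

int main()
{
  // A 5-dimensional model, matching the example above; lambda = 0 (no regularization).
  LogisticRegression<> model(5, 0.0);

  // Each new observation arrives as a single column with one label.
  arma::mat point(5, 1, arma::fill::randu);
  arma::Row<size_t> label(1);
  label[0] = 1;

  // With the fix discussed here, repeated calls to Train() refine the
  // existing parameters instead of resetting them.
  model.Train(point, label);
}
```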
< ak>
that's great, thank you! Ha, it is a relief to hear that!
< rcurtin>
the bugs always sneak in somehow when we aren't looking :) thank you for reporting this!
< rcurtin>
I believe that the model will train and perform best, though, when you can give it as many points as possible at a time when calling Train()
< rcurtin>
("perform best" both in terms of accuracy and speed)
< ak>
that makes sense, I am just glad to know it is possible. The plan is to start the program with a fairly well-trained model, and as it goes it can only get better
< rcurtin>
I was reading that comment too, I think that is also inaccurate according to the code I am seeing
< rcurtin>
oops, sorry, for some reason my IRC client was scrolled to the wrong message
< rcurtin>
ignore what I just wrote :)
< rcurtin>
I'm waiting on tests to compile and pass here, then I'll have a patch posted
< ak>
=D I may not be able to update my installs for a bit; should I just add zoq's if-statement to enclose those two lines?
< rcurtin>
ak: yeah, that's basically exactly what my patch does
< rcurtin>
so you could just add that and it should work
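(For anyone patching locally in the meantime, a hypothetical sketch of the kind of guard being discussed; the actual member names and the merged patch may differ.)

```cpp
#include <armadillo>

// Hypothetical sketch only: keep existing parameters across calls to Train(),
// and reinitialize them only when the data dimensionality does not match.
class IncrementalModel
{
 public:
  void Train(const arma::mat& predictors)
  {
    // Previously the parameters were reset unconditionally here; the guard
    // makes repeated calls continue from the current solution instead.
    if (parameters.n_elem != predictors.n_rows + 1)
      parameters = arma::rowvec(predictors.n_rows + 1, arma::fill::zeros);

    // ... run the optimizer starting from `parameters` ...
  }

 private:
  arma::rowvec parameters;
};
```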