verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
witness has joined #mlpack
sulan_ has joined #mlpack
danR_ has joined #mlpack
< danR_>
hi everyone! Installed mlpack3 from pip and I am trying to pass a numpy array of features to hmm_train (which I think I read somewhere should be possible), but it is not working. With the current python bindings, is it only possible to pass a string with a file location? Thanks
witness has quit [Quit: Connection closed for inactivity]
< rcurtin>
danR_: hi there, unfortunately you are right, it is currently only possible to pass a string with a file location
< rcurtin>
the file can contain the data itself, or a list of other files, each of which is a training sequence to train on
< rcurtin>
I agree this isn't an optimal interface, but it's what we currently have. it could be improved, it would just take a little bit of implementation to do it...
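The file-based workaround rcurtin describes can be sketched as follows: write the NumPy array out to a CSV file and pass the path instead. This is a minimal sketch, assuming the Python `hmm_train` binding mirrors the CLI parameters; the names `input_file`, `type`, and `states` are assumptions and may differ, so the mlpack call is guarded.

```python
import os
import tempfile

import numpy as np

# Hypothetical training sequence: 100 one-dimensional observations.
seq = np.random.default_rng(0).normal(size=(100, 1))

# Write the array to a temporary CSV file so it can be passed by path.
path = os.path.join(tempfile.mkdtemp(), "seq.csv")
np.savetxt(path, seq, delimiter=",")

try:
    # Assumed parameter names, mirroring the mlpack CLI binding.
    from mlpack import hmm_train
    model = hmm_train(input_file=path, type="gaussian", states=2)
except ImportError:
    model = None  # mlpack not installed; the CSV round-trip still applies

# The array survives the round-trip through the file.
loaded = np.loadtxt(path, delimiter=",").reshape(seq.shape)
```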
< daivik>
It seems that the mlpack optimizer stops after some 3000 odd iterations (and doesn't reach the optimum), quite a bit earlier than the specified maxIterations. I'm looking into this, but any leads would be greatly appreciated.
< daivik>
While the tensorflow optimizer -- under the same parameters and initial value of x -- runs through all maxIterations iterations and reaches the optimal value of y.
< zoq>
daivik: Note the optimizer will stop once the change in the objective between two iterations falls below the tolerance; the default tolerance is 1e-5. So you can either set the tolerance at construction time or afterwards with optimizer.Tolerance() = -1;
< zoq>
daivik: Let me know if that solves the issue.
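The convergence check zoq describes can be illustrated with a toy gradient-descent loop (plain Python, not mlpack's actual implementation): the loop exits early once the improvement in the objective between two iterations drops below the tolerance, and a negative tolerance can never be exceeded by an absolute difference, so all maxIterations run.

```python
import math

def minimize(grad, x0, step=0.1, max_iterations=3000, tolerance=1e-5):
    """Toy gradient descent with an mlpack-style convergence check."""
    x = float(x0)
    last = math.inf
    iterations = 0
    for _ in range(max_iterations):
        x -= step * grad(x)
        iterations += 1
        current = x * x  # objective f(x) = x^2 for this sketch
        # Stop early when the objective change falls below the tolerance.
        if abs(last - current) < tolerance:
            break
        last = current
    return x, iterations

grad = lambda x: 2.0 * x  # gradient of f(x) = x^2

# Default tolerance: converges and stops well before max_iterations.
_, early = minimize(grad, x0=10.0)

# A negative tolerance disables the check: every iteration runs.
_, full = minimize(grad, x0=10.0, tolerance=-1)
```

With the default tolerance the loop stops after a few dozen iterations, matching the behavior daivik observed; with tolerance = -1 it runs the full budget.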
< daivik>
zoq: Yes, you're right -- that is why the mlpack optimizer stops. I actually wanted to see where the CNN model I wrote (models repo PR #12) differs from a tensorflow implementation -- because I've tried hyperparameter tuning for a while now and I'm not able to get any performance improvement.
< daivik>
Let me check what the termination criteria for TF's AdamOptimizer are -- they don't seem to have a tolerance option.
< daivik>
zoq: yes, optimizer.Tolerance() = -1 did solve the issue. Thanks a lot. Any hints on what could be the issue with why the CNN model is not performing as well as expected?
< zoq>
daivik: Not sure, I'll have to take a closer look at the code; will do that tomorrow.