verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
tham has joined #mlpack
< tham> Hi, zoq, thanks for your help
< tham> I ran the example you provided, and the results are much better now (around 92%)
< tham> I posted the results here (http://pastebin.com/Z3nuWAbd); sometimes the accuracy is over 95%, it is quite random
< tham> Besides, since the training set and test set are the same (training set == test set), I think this is not overfitting but underfitting
< tham> Changing the weight initialization policy fixed the problem
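[A minimal sketch of swapping the weight initialization policy, written against the modern mlpack 4 FFN API; the 2015-era ann code discussed here named these types differently, so treat the class names below as illustrative assumptions, not the exact API tham used:]

    #include <mlpack.hpp>

    using namespace mlpack;

    int main()
    {
      arma::mat trainData, trainLabels; // assume these are loaded elsewhere

      // The second template parameter of FFN selects the weight
      // initialization policy; swapping the default RandomInitialization
      // for NguyenWidrowInitialization changes how the starting weights
      // are drawn, which can fix underfitting caused by a bad start.
      FFN<NegativeLogLikelihood, NguyenWidrowInitialization> model;
      model.Add<Linear>(64);
      model.Add<Sigmoid>();
      model.Add<Linear>(10);
      model.Add<LogSoftMax>();

      model.Train(trainData, trainLabels);
    }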
< tham> About the Train function of the trainer, is it OK to declare the training data, training labels, validationData, and validationLabels as const?
< tham> If those inputs would not be changed, but the implementation details need them to be non-const
< tham> is it safe to do the const_cast?
< tham> What I mean is, change the Train API as follows
< tham> So users would know that the training data and test data will not be changed after training
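[The snippet tham refers to does not appear in the log; the following is a hedged sketch of the proposed const-correct Train() signature, where LegacyTrainStep is a hypothetical stand-in for the trainer's non-const internals:]

    #include <armadillo>

    // Hypothetical internal routine that takes a non-const reference but
    // only reads from it, never writes.
    void LegacyTrainStep(arma::mat& data) { /* reads data only */ }

    // Proposed interface: const references document to callers that
    // Train() will not modify their data.
    void Train(const arma::mat& trainingData,
               const arma::mat& trainingLabels,
               const arma::mat& validationData,
               const arma::mat& validationLabels)
    {
      // The const_cast is well-defined as long as LegacyTrainStep really
      // never writes through the reference; writing to an object that was
      // originally declared const would be undefined behavior.
      LegacyTrainStep(const_cast<arma::mat&>(trainingData));
    }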
tham has quit [Ping timeout: 246 seconds]
KTL has joined #mlpack
KTL has quit [Ping timeout: 256 seconds]
KTL has joined #mlpack
KTL has quit [Quit: Leaving]
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#302 (master - 6e86d10 : Marcus Edel): The build was fixed.
travis-ci has left #mlpack []
< zoq> wow, the Travis build runs without any failures ... reminds me of the good old days :)