verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
tham has joined #mlpack
< tham> About fine-tuning the stacked autoencoder: is it possible to design a generic class or function for it?
< tham> Right now the UFLDL tutorial uses softmax as the classifier, and the fine-tuning steps also assume softmax as the classifier
< tham> Is it possible to design a generic class for the fine-tuning task that could use any classifier?
< tham> To tell you the truth, I do not understand the equations listed here (http://deeplearning.stanford.edu/wiki/index.php/Fine-tuning_Stacked_AEs)
< tham> Does anyone know how to do it?
< tham> I would like to design a fine-tuning algorithm for the stacked autoencoder based on softmax first
< tham> but it would be better if it could accept any classifier as the final layer
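[Editor's note: for reference, the fine-tuning step described on that UFLDL page is ordinary backpropagation through the stacked encoder layers, with the classifier supplying the error signal at the top. A sketch of the equations in (roughly) the page's notation, reconstructed here and to be treated as a paraphrase rather than a quote:

    \delta^{(n_l)} = -\left( \nabla_{a^{(n_l)}} J \right) \bullet f'(z^{(n_l)})
        (for softmax, \nabla J = \theta^T (I - P), with I the label indicators and P the predicted class probabilities)
    \delta^{(l)} = \left( (W^{(l)})^T \delta^{(l+1)} \right) \bullet f'(z^{(l)}),  for l = n_l - 1, \dots, 2
    \nabla_{W^{(l)}} J = \delta^{(l+1)} (a^{(l)})^T, \qquad \nabla_{b^{(l)}} J = \delta^{(l+1)}

The only softmax-specific part is the top-level gradient \nabla_{a^{(n_l)}} J; any classifier that can report the gradient of its loss with respect to its input features could be substituted there.]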
govg has quit [Ping timeout: 256 seconds]
< tham> There are implementation details for softmax, but what if the last classifier is an SVM or another classifier?
< tham> I mean, implementation details of using softmax to fine-tune the whole network
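[Editor's note: a minimal C++ sketch of the generic design being discussed, assuming the classifier is passed as a template parameter and exposes a hypothetical InputDelta() method returning d(loss)/d(input). This is not mlpack's API, just an illustration; Armadillo is used for linear algebra, as in mlpack.

    #include <armadillo>
    #include <vector>

    // Hypothetical fine-tuner for a stack of sigmoid encoder layers; the
    // top-level classifier (softmax, SVM, ...) is a template parameter and
    // only needs to provide InputDelta(), the gradient of its loss with
    // respect to its input features.
    template<typename ClassifierType>
    class StackedAutoencoderFineTuner
    {
     public:
      StackedAutoencoderFineTuner(std::vector<arma::mat>& weights,
                                  std::vector<arma::vec>& biases,
                                  ClassifierType& classifier) :
          weights(weights), biases(biases), classifier(classifier) { }

      // One gradient-descent step on (data, labels) with learning rate alpha.
      // Updating the classifier's own parameters is omitted for brevity.
      void Step(const arma::mat& data, const arma::Row<size_t>& labels,
                const double alpha)
      {
        // Feedforward pass through the stacked encoder layers.
        std::vector<arma::mat> a(weights.size() + 1);
        a[0] = data;
        for (size_t l = 0; l < weights.size(); ++l)
          a[l + 1] = Sigmoid(weights[l] * a[l] +
              arma::repmat(biases[l], 1, data.n_cols));

        // The classifier supplies the error signal at the top of the stack;
        // for softmax this would be -theta^T (I - P).  InputDelta() is a
        // hypothetical interface, not an existing mlpack method.
        arma::mat delta = classifier.InputDelta(a.back(), labels);

        // Backpropagate through the encoder layers and update parameters.
        for (size_t l = weights.size(); l-- > 0; )
        {
          delta %= a[l + 1] % (1.0 - a[l + 1]);   // chain in f'(z).
          const arma::mat gradW = delta * a[l].t() / (double) data.n_cols;
          const arma::vec gradB = arma::sum(delta, 1) / (double) data.n_cols;
          delta = weights[l].t() * delta;         // propagate down before updating.
          weights[l] -= alpha * gradW;
          biases[l] -= alpha * gradB;
        }
      }

     private:
      static arma::mat Sigmoid(const arma::mat& z)
      { return 1.0 / (1.0 + arma::exp(-z)); }

      std::vector<arma::mat>& weights;
      std::vector<arma::vec>& biases;
      ClassifierType& classifier;
    };

Swapping softmax for an SVM (or anything else) then only means providing another type with an InputDelta() of the same shape.]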
tham has quit [Ping timeout: 246 seconds]
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#282 (master - dc2c5c6 : Ryan Curtin): The build passed.
travis-ci has left #mlpack []
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#283 (master - 7a8b0e1 : Ryan Curtin): The build was broken.
travis-ci has left #mlpack []
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Ping timeout: 272 seconds]