verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
tham has joined #mlpack
< tham> Writing a fine-tune class to fine-tune a stacked autoencoder
< tham> When I want to reuse the StackAutoencoderFunction and SoftmaxFunction, I run into two problems
< tham> The first one is that the input of StackAutoencoderFunction and SoftmaxFunction needs to be changed when you fine-tune the parameters
< tham> The second is that SoftmaxFunction needs to cache the probabilities; otherwise you have to recalculate the probabilities two more times
< tham> What would you do?
< tham> I would like to contribute this class back to the community (if it works), but with the current design of SoftmaxFunction and StackAutoencoderFunction
< tham> the fine-tune algorithm cannot be finished without solving the first obstacle (the second one only affects performance)
< tham> Do you think allowing users to change the underlying data of StackAutoencoderFunction and SoftmaxFunction is a good idea?
< tham> change the arma::mat const &data to arma::mat *data
< tham> About the second one: when you call the Gradient function, the probabilities should already have been calculated, so you do not need to recalculate them (if I did not make any mistake)
< tham> if you cache them. When you fine-tune the parameters, you need to use this probabilities matrix one more time; without caching, you would need to calculate it three times
< tham> Please give me some suggestions about this one, thanks
< tham> There is another solution
< tham> Pass a reference to the input to the fine-tune class
< tham> This discussion looks bizarre without code; I think I had better open a new ticket on GitHub
tham has quit [Quit: Page closed]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Ping timeout: 260 seconds]