verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
tham has joined #mlpack
< tham>
Hi, I did some refactoring on sparse_autoencoder
< tham>
the speed improved by about 28.795%
< tham>
I compared the results with the original implementation; the outputs are the same (the final output optimized by the L-BFGS optimizer)
< tham>
What I did was reduce the temporary objects and avoid recalculating the activations of the hidden layer, the output layer, and rhoCap
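A minimal sketch of the caching idea being described: compute the activations once per parameter vector and reuse them, rather than recomputing them in both the objective and gradient evaluations. The class and member names here are hypothetical illustrations, not mlpack's actual code.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch: cache the sigmoid activations so they are
// computed once and reused, instead of being recomputed in both
// Evaluate() and Gradient().
class CachedSigmoid {
 public:
  // Compute sigmoid activations for the inputs, caching the result.
  const std::vector<double>& Activations(const std::vector<double>& inputs) {
    if (!cached) {
      cache.clear();
      for (double x : inputs)
        cache.push_back(1.0 / (1.0 + std::exp(-x)));
      cached = true;
    }
    return cache;
  }

  // Invalidate the cache when the parameters change (e.g. between
  // optimizer iterations).
  void Invalidate() { cached = false; }

 private:
  std::vector<double> cache;
  bool cached = false;
};
```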
< tham>
I intend to add two more template parameters to SparseAutoencoderFunction to try to make it more versatile
< tham>
Things like "template<typename HiddenLayer, typename OutputLayer> class SparseAutoencoderFunction"
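A sketch of what that parameterization might look like, assuming policy-style activation classes; `LogisticFunction` and `IdentityFunction` are illustrative names, not necessarily mlpack's actual classes.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical activation policies (names are illustrative).
struct LogisticFunction {
  static double Fn(const double x) { return 1.0 / (1.0 + std::exp(-x)); }
};

struct IdentityFunction {
  static double Fn(const double x) { return x; }
};

// Sketch of the proposed change: the hidden and output layer
// activations become template parameters instead of being hard-coded.
template<typename HiddenLayer = LogisticFunction,
         typename OutputLayer = LogisticFunction>
class SparseAutoencoderFunction {
 public:
  double HiddenActivation(const double x) const { return HiddenLayer::Fn(x); }
  double OutputActivation(const double x) const { return OutputLayer::Fn(x); }
};
```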
< tham>
The last thing is that I noticed the current implementation of SparseAutoencoderFunction does not check for division by zero
< tham>
nor does it prevent the log function from receiving a zero argument (e.g. log(0))
< tham>
Was the check forgotten, or do we not need to worry about these in practice?
< tham>
Should I add the checks, or can I safely omit them? Thanks
< tham>
I would like to contribute it back to the community if the results are reasonable
< naywhayare>
hi tham, I'd love to include your contributions
< naywhayare>
you can submit a pull request on github or send an email to the mailing list with the patches and I'll look over them tomorrow and commit them
< naywhayare>
as for the division by zero, often mlpack methods will assume that the data is not going to cause problems like this, in order to be faster
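If such a guard were wanted anyway, one common approach is to clamp the argument away from zero before taking the log. A minimal sketch (not mlpack's actual code; the epsilon value is an arbitrary illustrative choice):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Defensive guard sketch: clamp the argument away from zero before
// taking the log, so log(0) never produces -inf. The epsilon is an
// arbitrary illustrative choice.
inline double SafeLog(const double x, const double eps = 1e-10) {
  return std::log(std::max(x, eps));
}
```

The trade-off is exactly the one mentioned above: the extra `max` on every call costs a little speed in exchange for robustness on degenerate inputs.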
< tham>
ok, I will learn how to submit a pull request today, and thanks for your reply
< tham>
I also sent an email to the mailing list yesterday; it mentions a compilation issue in the current code
< tham>
I hope this can save some trouble for other users when they try to compile the next release of mlpack on Windows
< naywhayare>
tham: I didn't see the email... I will see if the list is having any issues
< naywhayare>
(tomorrow though. I need to sleep now...)
< tham>
bye, I do not know why you haven't seen the email. Anyway, I posted the contents here (http://pastebin.com/3xFhzEeZ)
< tham>
Have nice dreams
tham has quit [Ping timeout: 246 seconds]
< naywhayare>
zoq: I didn't realize he was talking about writing a sparse autoencoder with the ann code, I thought he meant the one Siddharth wrote in methods/sparse_autoencoder/
< naywhayare>
I can still take a look through the pull request, but presumably you have a plan for the ann code so the decision is yours :)
< zoq>
I just glanced at the code. I guess I can look deeper into the pull request in the next few days. It would be nice to see a test for the code; it would save me the time of writing my own test suite. Maybe tham is kind enough to write some test cases.
< naywhayare>
we could also probably adapt the sparse autoencoder tests that siddharth wrote... I'll look into that
< zoq>
ah right, that would be the best idea, ideally it should work without any modification
< naywhayare>
yeah it will probably need some minor API changes but not anything else