cameron.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< stephentu>
billLiu: for numerical stability I think the better thing to do is to have Probability() be LogProbability() and then do a logsumexp at the end
< billLiu>
Yes, of course. And I think using LogProbability() instead of Probability() is the basic thing to do.
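For context, a minimal sketch of the log-sum-exp trick stephentu is suggesting, written with Armadillo (the linear algebra library mlpack builds on); the function name and signature here are illustrative, not mlpack's actual API:

    #include <armadillo>
    #include <cmath>

    // Numerically stable log(sum(exp(logValues))): shift by the maximum
    // element so the exponentials cannot overflow or all underflow.
    double LogSumExp(const arma::vec& logValues)
    {
      const double maxVal = logValues.max();
      return maxVal + std::log(arma::accu(arma::exp(logValues - maxVal)));
    }

Summing the probabilities of, say, mixture components then becomes LogSumExp(logProbs) instead of exponentiating tiny log-probabilities directly, which would lose precision or underflow to zero.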
stephentu has quit [Ping timeout: 264 seconds]
billLiu has quit [Quit: Page closed]
decltype_me has joined #mlpack
jbc_ has joined #mlpack
islamfaisal has joined #mlpack
decltype_me has quit [Ping timeout: 250 seconds]
govg has quit [Quit: leaving]
govg has joined #mlpack
islamfaisal has quit [Ping timeout: 240 seconds]
islamfaisal has joined #mlpack
islamfm has joined #mlpack
islamfaisal has quit [Ping timeout: 240 seconds]
islamfm is now known as decltype_me
stephentu has joined #mlpack
danny474 has joined #mlpack
< danny474>
Hi... I started some work on MLPs... I guess zoq has recently added some code for them as well.
< zoq>
Right now there isn't any code to create an MLP out of the box; it needs some extra glue code to stitch everything together. I will add the code needed in the next few days. However, the code already committed allows you to create a bunch of different networks, including feed-forward neural networks (MLPs) and recurrent neural networks, with a bunch of layers (sigmoid, tanh, linear, sign, rectifier, LSTM).
< zoq>
You can also specify the method used to initialize the weights and choose between different optimizers to update them.
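The glue code zoq mentions didn't exist yet at the time of this conversation; to give a sense of what stitching those pieces together looks like, here is a rough sketch against the later mlpack 3.x FFN interface (layer and class names may differ from the code that was actually committed back then):

    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>
    #include <mlpack/methods/ann/init_rules/random_init.hpp>
    #include <ensmallen.hpp>

    using namespace mlpack::ann;

    int main()
    {
      // Toy data: 100 points with 10 features each, 3 classes.
      arma::mat data(10, 100, arma::fill::randu);

      // Class labels in {1, 2, 3}; the exact label convention has varied
      // across mlpack versions.
      arma::mat labels(1, 100);
      for (size_t i = 0; i < labels.n_elem; ++i)
        labels[i] = 1 + (i % 3);

      // A small MLP: linear -> sigmoid -> linear -> log-softmax.  The
      // weight initialization rule is a template parameter of FFN.
      FFN<NegativeLogLikelihood<>, RandomInitialization> model;
      model.Add<Linear<>>(10, 16);
      model.Add<SigmoidLayer<>>();
      model.Add<Linear<>>(16, 3);
      model.Add<LogSoftMax<>>();

      // Any ensmallen optimizer can be passed to Train(); Adam with its
      // default settings is used here.
      ens::Adam opt;
      model.Train(data, labels, opt);
    }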
< danny474>
Oh okay, that's good.
< danny474>
I wanted to work on them, so I just asked.
< zoq>
On MLPs?
< danny474>
I started on it today.
< danny474>
But you've done a lot more. I'll look into the new commits you've added tomorrow.
< danny474>
Is there anything I can help with?
< zoq>
Oh, great, there is still a lot of work to do. If you are interested, I think we can find something you can work on. E.g., the plan is to integrate dropout and dropconnect.
< danny474>
Great.
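For the record, dropout itself is simple: during training each activation is dropped with some probability and the survivors are rescaled so the expected output is unchanged; at test time the layer is the identity. A hypothetical stand-alone helper using Armadillo (not the layer that eventually landed in mlpack):

    #include <armadillo>

    // Inverted dropout: zero each activation with probability (1 - keepProb)
    // and scale the survivors by 1 / keepProb, so no rescaling is needed at
    // test time.  During testing, return the input unchanged.
    arma::mat Dropout(const arma::mat& input, double keepProb, bool training)
    {
      if (!training)
        return input;

      // Bernoulli mask: 1 with probability keepProb, 0 otherwise.
      arma::mat mask = arma::conv_to<arma::mat>::from(
          arma::randu<arma::mat>(input.n_rows, input.n_cols) < keepProb);

      return (input % mask) / keepProb;
    }

DropConnect applies the same masking idea to the weight matrix instead of the activations.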
< zoq>
I'm also planning to implement another method to initialize the weights.
< danny474>
Which one?
< danny474>
If you have something I can look into, please let me know.
< zoq>
The plan is to train a deep belief network and use the unfolded weights for a feed-forward network. Another idea is to implement a method by Andrew Saxe which uses SVD and QR.
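The Saxe method referred to here is orthogonal weight initialization ("Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", Saxe et al.). A rough Armadillo sketch of the QR-based variant, purely as an illustration rather than the initializer that was eventually added to mlpack:

    #include <armadillo>

    // Orthogonal initialization in the spirit of Saxe et al.: draw a random
    // Gaussian matrix and keep the Q factor of its QR decomposition, so the
    // columns of the initial weight matrix are orthonormal.
    // Assumes rows >= cols; for the opposite case, initialize the transpose.
    arma::mat OrthogonalInit(const size_t rows, const size_t cols,
                             const double gain = 1.0)
    {
      arma::mat gaussian = arma::randn<arma::mat>(rows, cols);

      arma::mat Q, R;
      arma::qr_econ(Q, R, gaussian);

      return gain * Q;
    }

The deep-belief-network idea is separate: pre-train the network layer by layer without labels, then unfold the learned weights into a feed-forward network as its starting point.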