verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
tham has joined #mlpack
< tham> Hi, do you have any plans to support deep networks? Like a deep network built from sparse_autoencoder
< tham> The current API of the autoencoder doesn't expose the trained parameters, so it is impossible to construct a deep network
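To make tham's point concrete: building a deep (stacked) network greedily requires reading each trained autoencoder's weights so the next one can be trained on its hidden representation. Below is a minimal Armadillo sketch of that step; the TrainedAutoencoder type is hypothetical and stands in for the parameters the current API does not expose.

    #include <armadillo>

    // Hypothetical: the pieces tham needs the autoencoder API to expose.
    struct TrainedAutoencoder
    {
      arma::mat w1;  // encoder weights (hidden x visible)
      arma::vec b1;  // encoder bias

      // Sigmoid hidden representation; the next autoencoder in the stack
      // trains on this output, so w1 and b1 must be readable.
      arma::mat Encode(const arma::mat& x) const
      {
        return 1.0 / (1.0 + arma::exp(-(w1 * x + arma::repmat(b1, 1, x.n_cols))));
      }
    };

    int main()
    {
      arma::mat data(64, 100, arma::fill::randu);  // 64 features, 100 samples

      // Pretend these were produced by training two autoencoders greedily.
      TrainedAutoencoder first{arma::mat(32, 64, arma::fill::randu),
                               arma::vec(32, arma::fill::zeros)};
      TrainedAutoencoder second{arma::mat(16, 32, arma::fill::randu),
                                arma::vec(16, arma::fill::zeros)};

      // The deep network's features: second sees first's hidden output.
      arma::mat deepFeatures = second.Encode(first.Encode(data));
      deepFeatures.print("deep features:");
    }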
< tham> Besides, is anyone interested in enhancing the autoencoder on top of the ann modules?
< tham> The ann modules have FFN and a lot of layers; if users could choose the hidden and output layers via template parameters, it would be very cool
< tham> e.g. SparseAutoencoder<FFN<std::tuple<DropoutLayer>, , DropoutLayer>> -- is this possible?
< tham> What kind of optimizer should I use with FFN? Any examples?
< naywhayare> tham: I wish I could answer these questions, but I didn't write the neural network code
< naywhayare> I'm surprised zoq hasn't responded; you could try sending an email to the mlpack list and that might have better success
< zoq> tham: Sorry for the slow response. You're right, the current API doesn't work with the ann modules. That said, the ann modules/layers are flexible enough to easily write a sparse autoencoder that uses the existing functionality.
< zoq> So you could write: SparseAutoencoder<std::tuple<DropoutLayer>, .., DropoutLayer>
< zoq> Unfortunately my time is limited, so if you're interested in contributing a modified sparse autoencoder, I'm here to help out with any issues; if not, I'll definitely put it on my todo list :)
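A rough standalone sketch of the compile-time layer selection zoq describes (this is not mlpack's actual API; the layer types and the SparseAutoencoder signature are simplified stand-ins): the hidden layers travel in a std::tuple and are applied in order.

    #include <tuple>
    #include <iostream>

    struct DropoutLayer { void Forward() { std::cout << "dropout\n"; } };
    struct SigmoidLayer { void Forward() { std::cout << "sigmoid\n"; } };

    // Hidden layers are fixed at compile time via a std::tuple of layer
    // types; the output layer is a separate template parameter, mirroring
    // the SparseAutoencoder<std::tuple<...>, .., OutputLayer> idea above.
    template<typename HiddenLayers, typename OutputLayer>
    class SparseAutoencoder
    {
     public:
      void Forward()
      {
        // Apply each hidden layer in order, then the output layer (C++17).
        std::apply([](auto&... layer) { (layer.Forward(), ...); }, hidden);
        output.Forward();
      }

     private:
      HiddenLayers hidden;
      OutputLayer output;
    };

    int main()
    {
      SparseAutoencoder<std::tuple<DropoutLayer, SigmoidLayer>, SigmoidLayer> net;
      net.Forward();
    }

Fixing the layer list at compile time avoids virtual dispatch, which matches the template-heavy style of the existing ann modules.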
< zoq> Regarding the optimizer, it depends on the task and the network, but I often start with RMSprop or AdaDelta, because you don't have to tune a bunch of parameters, e.g. the learning rate.
< zoq> Sometimes people claim to have found another, more efficient method to train the parameters: http://sifter.org/~simon/journal/20150420.html -- I haven't had time to look into it.
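For context on why RMSprop needs little tuning: it keeps a running average of squared gradients and uses it to scale each step per parameter. A minimal sketch of the update rule in Armadillo (the general algorithm only; the function name and defaults here are illustrative, not mlpack's optimizer API):

    #include <armadillo>

    // One RMSprop step: meanSquared is the running average of squared
    // gradients; dividing by its square root normalizes the step size
    // per parameter, so the defaults below usually work unchanged.
    void RmsPropStep(arma::vec& params, const arma::vec& gradient,
                     arma::vec& meanSquared, const double stepSize = 0.01,
                     const double decay = 0.99, const double eps = 1e-8)
    {
      meanSquared = decay * meanSquared + (1.0 - decay) * (gradient % gradient);
      params -= stepSize * gradient / (arma::sqrt(meanSquared) + eps);
    }

    int main()
    {
      arma::vec params(10, arma::fill::randu);
      arma::vec meanSquared(10, arma::fill::zeros);

      // Toy objective f(p) = ||p||^2 / 2, whose gradient is simply p.
      for (int i = 0; i < 100; ++i)
      {
        arma::vec gradient = params;
        RmsPropStep(params, gradient, meanSquared);
      }
      params.print("params after 100 steps:");
    }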
< naywhayare> good news: my new employer, Symantec, is very interested in replacing the entire stack of build servers we have... it looks like they are going to sink a large amount of money into a handful of powerful servers that can do all the builds
< naywhayare> it will probably be about a month or a month and a half until everything is transferred to the new systems and the old stack can be decommissioned
< naywhayare> and I will probably keep several of the systems in the old stack for the purpose of benchmarking (like shoeshine does now)
< tham> thanks for your reply; I sent an email several hours ago. Yes, I want to implement a sparse_autoencoder based on the ann modules
< tham> If I run into any problems I will send an email or come here to ask questions
< tham> about the build servers, does this mean we can avoid the trouble of building mlpack on Windows ourselves in the future?
< tham> I tried to build mlpack (github master) and wrote down the problems I ran into; hope this helps. Bye
tham has quit [Quit: Page closed]