ChanServ changed the topic of #mlpack to: "Due to ongoing spam on freenode, we've muted unregistered users. See for more information, or join #mlpack-temp and chat there."
vivekp has quit [Ping timeout: 252 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 272 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 244 seconds]
vivekp has joined #mlpack
icecoolcat has joined #mlpack
< icecoolcat> do you guys have any dbscan tutorial that u recommend?
petris has quit [Ping timeout: 260 seconds]
petris_ has joined #mlpack
< rcurtin> icecoolcat: I don't think there is a tutorial, but 'mlpack_dbscan --help' (if you are using the command-line programs) should be pretty comprehensive in terms of what options are there and what it does, etc.
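As a quick illustration of the command-line program rcurtin points to: the first command is exactly what he suggests, and the second is a typical invocation whose flag names (`--input_file`, `--epsilon`, `--min_size`, `--assignments_file`) are recalled from the mlpack 3.x CLI bindings and should be double-checked against the `--help` output.

```shell
# Print the full option list and documentation for the DBSCAN binding.
mlpack_dbscan --help

# A typical run (flag names from memory; verify with --help):
# cluster the points in dataset.csv with neighborhood radius 0.5 and
# minimum cluster size 5, writing per-point cluster assignments out.
mlpack_dbscan --input_file dataset.csv \
              --epsilon 0.5 \
              --min_size 5 \
              --assignments_file assignments.csv
```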
Shravan has joined #mlpack
Shravan has quit [Ping timeout: 256 seconds]
cjlcarvalho has quit [Ping timeout: 272 seconds]
davida has joined #mlpack
< davida> zoq: Regarding training RNNs on datasets with different numbers of time steps: would it be possible to train the network one series at a time, keeping the weights between each training step so that the network learns across series? Previously you suggested padding the cube so that all the series have the same number of time steps, but that doesn't seem to be working in my case and I cannot get the network to converge. Do you have any thoughts?
< davida> zoq: my idea is to change rho on each run of Train so that it terminates nicely at the time-step length of each datapoint.
< zoq> davida: Yeah, that should work just fine; however, it will be slower than training the model on the whole (padded) dataset at once.
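The two ideas discussed above can be sketched in standalone C++ (no mlpack calls; this only illustrates the data preparation): zero-pad every series to the longest length, as zoq originally suggested, while also recording each series' true length — which is the per-series rho davida proposes passing to each Train call in the one-series-at-a-time scheme. `PadSeries` is a hypothetical helper name, not an mlpack function.

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Zero-pad variable-length series to a common length (the padded-cube
// approach), and return each series' original length alongside (the
// value one would use as rho for that series in per-series training).
// Standalone sketch; in mlpack the padded data would be packed into an
// arma::cube of size (dimensions x points x maxLen).
std::pair<std::vector<std::vector<double>>, std::vector<std::size_t>>
PadSeries(const std::vector<std::vector<double>>& series)
{
  std::size_t maxLen = 0;
  for (const auto& s : series)
    maxLen = std::max(maxLen, s.size());

  std::vector<std::vector<double>> padded;
  std::vector<std::size_t> lengths;
  for (const auto& s : series)
  {
    lengths.push_back(s.size());       // true length = per-series rho
    std::vector<double> p = s;
    p.resize(maxLen, 0.0);             // pad the tail with zeros
    padded.push_back(std::move(p));
  }
  return { std::move(padded), std::move(lengths) };
}
```

For the per-series scheme, one would then loop over the series, set the network's rho to `lengths[i]`, and call Train on series `i` alone — the weights persist across calls, which is why the network can still learn, at the cost of losing the speed of training on the whole cube at once.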