verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
abhishek0318 has joined #mlpack
abhishek0318 has quit [Quit: Page closed]
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
keonkim has joined #mlpack
wiking has joined #mlpack
brni has joined #mlpack
vivekp has quit [Ping timeout: 240 seconds]
brni has quit [Quit: Page closed]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 255 seconds]
vivekp has joined #mlpack
brni has joined #mlpack
vivekp has quit [Ping timeout: 246 seconds]
vivekp has joined #mlpack
alsc has joined #mlpack
alsc has quit [Client Quit]
alsc has joined #mlpack
govg has joined #mlpack
govg has quit [Ping timeout: 246 seconds]
govg has joined #mlpack
govg has quit [Quit: Lost terminal]
< rcurtin>
alsc: sure, if it's an easy fix maybe we can do it, but we do a lot of crazy CMake stuff so I don't know if it'll be easy
Techievena has joined #mlpack
scarecrow has joined #mlpack
< scarecrow>
Hello, are there examples/documentation on how to build a custom layer (ANN)?
alsc has quit [Quit: alsc]
Techievena has quit [Remote host closed the connection]
Techievena has joined #mlpack
alsc has joined #mlpack
killer_bee[m] has joined #mlpack
alsc has quit [Quit: alsc]
vivekp has quit [Ping timeout: 250 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 268 seconds]
vivekp has joined #mlpack
< zoq>
scarecrow: Unfortunately no, but you could look at the existing layers and use them as a basis.
< zoq>
scarecrow: Also, we are here if you need any help.
alsc has joined #mlpack
Shreya has joined #mlpack
< Shreya>
Hello! I am new here. I want to contribute through GSoC 2018.
< zoq>
Shreya: If you have any questions let us know.
< Shreya>
Thanks zoq. Can you please tell me how I should start? I went through the two links.
< zoq>
One option is to go through the issues listed on GitHub and see if you find something interesting; another option is to go through the code and find something that could be improved, or perhaps you'd like to write a new interesting method.
< Shreya>
Okay thank you!
pegasus has joined #mlpack
pegasus has quit [Read error: Connection reset by peer]
alsc has quit [Quit: alsc]
pegasus has joined #mlpack
pegasus has quit [Client Quit]
Shreya has quit [Ping timeout: 260 seconds]
alsc has joined #mlpack
alsc has quit [Quit: alsc]
scarecrow has joined #mlpack
< scarecrow>
Thank you zoq for responding. I tried to emulate the code in the Linear layer. I'm running into trouble while adding my layer to the FFN.
< scarecrow>
It fails to compile. It looks like I need to modify "layer_types.hpp" to include my type in the boost::variant.
< scarecrow>
The other workaround is to extend one of the layers already in the list, but since methods like Forward and Backward are template methods, I cannot override them.
< scarecrow>
suggestions?
< rcurtin>
scarecrow: you're right, it would need to be added to LayerTypes
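A minimal sketch of the change being described, for orientation (the file is src/mlpack/methods/ann/layer/layer_types.hpp; the exact list of layers varies by mlpack version, and MyCustomLayer is a hypothetical name):

    // layer_types.hpp (sketch): forward-declare the custom layer...
    template<typename InputDataType, typename OutputDataType>
    class MyCustomLayer;

    // ...and add a pointer to it in the LayerTypes variant, alongside the
    // existing layer types.
    using LayerTypes = boost::variant<
        Add<arma::mat, arma::mat>*,
        Linear<arma::mat, arma::mat>*,
        /* ... the other existing layer types ... */
        MyCustomLayer<arma::mat, arma::mat>*
    >;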
< rcurtin>
I had previously thought that we had some infrastructure to do this, so e.g. you could do something like
< rcurtin>
FFN<NegativeLogLikelihood<>, RandomInitialization, <set of extra custom layers>>
< rcurtin>
but I am not seeing code that does that
< rcurtin>
I do remember discussing it though, but maybe I remember the resolution incorrectly
< rcurtin>
maybe zoq has a different way to do it that I have forgotten, let's see what he says
< scarecrow>
Right, that is not possible with the current version. Thank you for your response. Let's wait for zoq.
< zoq>
Yeah, I think we discussed the <set of extra custom layers> solution and agreed on it, but unfortunately it's not implemented yet.
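For context, a hypothetical sketch of the "<set of extra custom layers>" interface discussed above; this was NOT implemented at the time of this log, and MyCustomLayer is a made-up name:

    // Hypothetical: extra custom layer types passed as template arguments,
    // so layer_types.hpp would not need to be edited by the user.
    FFN<NegativeLogLikelihood<>, RandomInitialization,
        MyCustomLayer<arma::mat, arma::mat>> network;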
< zoq>
Another option that somehow works with clang is to inherit from e.g. the LinearLayer.
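A sketch of the inheritance workaround mentioned here (signatures approximated from the mlpack 2.x Linear layer; reported to work with clang but not g++). Note that the template Forward() method is shadowed rather than overridden, which is exactly the limitation scarecrow raises:

    template<typename InputDataType = arma::mat,
             typename OutputDataType = arma::mat>
    class MyLinear : public Linear<InputDataType, OutputDataType>
    {
     public:
      // Reuse the base class constructor (inSize, outSize).
      using Linear<InputDataType, OutputDataType>::Linear;

      // Shadows Linear::Forward(); template member functions cannot be
      // virtual, so there is no dynamic dispatch here.
      template<typename eT>
      void Forward(const arma::Mat<eT>&& input, arma::Mat<eT>&& output)
      {
        // Custom forward pass here.
      }
    };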
< cult->
rcurtin: when are you expecting bandicoot to reach the alpha phase, and what percentage of mlpack routines can be covered by it?
< rcurtin>
do you want me to open an issue for it so we don't forget?
< rcurtin>
cult-: sorry, I saw your question; I was preparing a NIPS talk so I did not get to respond to it :(
< cult->
ok np
< rcurtin>
I am not sure when it will be ready; Conrad and I are currently working on some sparse matrix improvements when we can
< rcurtin>
and I assume after that we'll get back into Bandicoot development more
< rcurtin>
I'm excited about it, because every algorithm that takes a MatType template parameter (which eventually should be every mlpack algorithm) will be able to use a bandicoot matrix instead
< rcurtin>
so then all the operations can be done on the GPU, without needing to significantly change the code
< rcurtin>
there may be some places where the bandicoot matrix class doesn't work quite right for mlpack, but we can adapt that code when we get there
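As an illustration of the MatType idea (LogisticRegression really is templated on its matrix type; the coot::mat line is hypothetical, since bandicoot was far from ready at the time of this log):

    #include <mlpack/methods/logistic_regression/logistic_regression.hpp>

    using namespace mlpack::regression;

    int main()
    {
      // 10-dimensional data, 1000 random points with random binary labels.
      arma::mat data = arma::randu<arma::mat>(10, 1000);
      arma::Row<size_t> labels =
          arma::randi<arma::Row<size_t>>(1000, arma::distr_param(0, 1));

      // Today: the MatType template parameter is an Armadillo matrix.
      LogisticRegression<arma::mat> lr(data, labels);

      // The plan described above (hypothetical): swap in a bandicoot GPU
      // matrix with no other changes to the algorithm's code.
      // LogisticRegression<coot::mat> lrGpu(gpuData, gpuLabels);
    }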
< scarecrow>
Inheriting did not work for me. I'm using g++ though.
< cult->
just a quick note on the HMM implementation; maybe you recall there were some issues with it, and you fixed them in 2.1.x. The problem is that now, if the data is bad, I can still make it work with the old version via log(), but not with the new versions.
< rcurtin>
out of the box I expect things like linear and logistic regression and other typical algorithms to work fine, possibly neural networks too
< rcurtin>
oh, hm, would you mind opening a github issue so that I can look into it?
< rcurtin>
more complex things like nearest neighbor search might not get a speedup immediately but may instead require some tuning and modification; not sure yet
< rcurtin>
for the timeline, I dunno, I think 6 to 10 months is reasonable? neither of us is currently working on it full-time
< zoq>
Yeah, if you open an issue, I'll take a look at it once I get a chance.
< rcurtin>
zoq: right, let me do that now then
< zoq>
rcurtin: How was your presentation?
< rcurtin>
it's in about 5 or 6 hours, but I at least have the slides and talk ready finally
< rcurtin>
actually Shangtong is here at the same workshop, he's presenting in about an hour on his implementations of reinforcement learning
< rcurtin>
I'm looking forward to meeting and talking with him afterwards :)
< zoq>
ah right, I guess distributed rl
< cult->
do you have a list of companies/projects who are using mlpack in production?
< rcurtin>
cult-: no, typically people don't seem to tell me (or us) when they're using mlpack in production
< rcurtin>
we use it internally at Symantec for some experiments here and there but none of those projects have made it into production yet
< rcurtin>
I know that Sumedh's group was using it at eBay some time ago, but no idea if it made it into production or not
< rcurtin>
usually I see who is using it by looking at who is citing the mlpack paper.
< rcurtin>
I read a cool paper sometime back where some lab was using mlpack to do real-time control of a spacecraft
< cult->
nice
< cult->
rcurtin: so, in the HMM in 2.0.x, I still use the fix you gave a while ago, making the transition matrix random, to avoid errors in training. Now, in 2.1.x and above, you made the fix that works well.
< cult->
rcurtin: the problem is, when the data isn't good in 2.0.x, I just replace the datapoints with their logarithmic values and it's fine. In 2.1.0, if I do log(), the function runs without errors, but all states are the same with or without log().
< cult->
that means I can't fix the data in 2.1.x
< cult->
or 2.2.x
< rcurtin>
I don't remember the fix very well, can you explain why the log() is needed? (or maybe you have a link to the right IRC log?)
< cult->
yes 1 min
< rcurtin>
thanks
< cult->
so, when my data isn't proper for the number of states I've given, I simply use log(data) instead of data to make the series noisier and hence allow the algorithm to distinguish between states
< cult->
after this fix, that has been solved. Now if I want to fix my data with log() again, I can't do that anymore
< cult->
probably it's not clear what I am talking about, but with 2.0.x I used arma::mat c = arma::randu<arma::mat>(X, X); for (size_t i = 0; i < X; ++i) c.col(i) /= arma::accu(c.col(i)); hmm.Transition() = c; to overcome the issue
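For readability, the workaround quoted above as a block (X is the number of hidden states and hmm is the mlpack HMM object being trained):

    // Build a random column-stochastic transition matrix and install it
    // before training, instead of the default initialization.
    arma::mat c = arma::randu<arma::mat>(X, X);
    for (size_t i = 0; i < X; ++i)
      c.col(i) /= arma::accu(c.col(i));  // each column sums to 1
    hmm.Transition() = c;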
< cult->
those few lines above work well with 2.0.x, and I can also use log() on my datapoints so that I get proper states
< cult->
but in >=2.1.x, (a) I don't need to use those few lines because the fix has already been implemented, and (b) using log() on the data doesn't help anymore
< cult->
basically, the fix solved the chol decomposition errors, but since 2.1.x I can't modify my data anymore if the number of states is too high and the dataset isn't appropriate for that number of states
< cult->
again, with 2.0.x and the manual transition matrix hack, everything works. Since 2.1.x, without the transition matrix hack (because it's already fixed in the commit above), things behave differently now, and not in a good way
< cult->
it looks to me like the difference is the initial states part, which should be reverted to what it was before
< cult->
add back this: initial(arma::ones<arma::vec>(states) / (double) states), and remove: initial /= arma::accu(initial);
< cult->
because the transition matrix fix is the same as what I do manually; the only difference is the initial variable, IF there were no other modifications since
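The two initial-state initializations being contrasted, as a sketch (states is the number of hidden states; the expressions follow the snippets quoted above):

    // Pre-2.1.x behavior being asked for: uniform initial state
    // probabilities.
    arma::vec initial = arma::ones<arma::vec>(states) / (double) states;

    // Post-2.1.x behavior as described above: the initial vector is
    // normalized instead.
    // initial /= arma::accu(initial);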
< rcurtin>
(sorry, I am in a talk now; I will come back to this over lunch)
< cult->
thanks, I will dig into this and let you know
< cult->
interesting, I can't reproduce it with 2.2.x
< cult->
I don't have that data anymore, but that was what I was experiencing above.