verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
stephentu has joined #mlpack
sumedhghaisas_ has joined #mlpack
stephentu has quit [Ping timeout: 260 seconds]
stephentu has joined #mlpack
stephentu has quit [Ping timeout: 260 seconds]
stephentu has joined #mlpack
stephentu has quit [Ping timeout: 240 seconds]
stephentu has joined #mlpack
sumedhghaisas_ has quit [Read error: Connection reset by peer]
partobs-mdp has joined #mlpack
govg has quit [Ping timeout: 255 seconds]
govg has joined #mlpack
chenzhe has joined #mlpack
stephentu has quit [Ping timeout: 260 seconds]
kris1 has joined #mlpack
andrzejku has joined #mlpack
chenzhe has quit [Quit: chenzhe]
andrzejku has quit [Quit: Textual IRC Client: www.textualapp.com]
kris1 has quit [Ping timeout: 260 seconds]
kris1 has joined #mlpack
stephentu has joined #mlpack
stephentu has quit [Ping timeout: 246 seconds]
kris1 has quit [Quit: kris1]
mikeling has quit [Quit: Connection closed for inactivity]
stephentu has joined #mlpack
kris1 has joined #mlpack
stephentu has quit [Ping timeout: 246 seconds]
kris1 has quit [Quit: kris1]
shikhar has joined #mlpack
kris1 has joined #mlpack
partobs-mdp has quit [Ping timeout: 276 seconds]
kris1 has quit [Client Quit]
kris1 has joined #mlpack
stephentu has joined #mlpack
stephentu has quit [Ping timeout: 258 seconds]
kris1 has quit [Read error: Connection reset by peer]
kris1 has joined #mlpack
kris1 has quit [Client Quit]
kris1 has joined #mlpack
stephentu has joined #mlpack
stephentu has quit [Ping timeout: 255 seconds]
kris1 has quit [Quit: kris1]
stephentu has joined #mlpack
stephentu has quit [Ping timeout: 246 seconds]
stephentu has joined #mlpack
stephentu has quit [Ping timeout: 255 seconds]
stephentu has joined #mlpack
stephentu has quit [Ping timeout: 240 seconds]
govg has quit [Ping timeout: 260 seconds]
qwertea has joined #mlpack
< qwertea> hello everyone :)
< qwertea> I have some questions: I'm on Mac OS X and the Debug logs are never displayed, even though my program is compiled with debugging symbols
< qwertea> (-g -rdynamic)
< qwertea> any ideas?
< qwertea> and second question: how can I debug the mappings of a DatasetInfo object? just display what string maps to what numerical value; I didn't find any function that could help
< qwertea> thx in advance!
< zoq> qwertea: Hello there, have you built mlpack with -DDEBUG=ON? You could also do mlpack::Log::Debug.ignoreInput = false; before calling the specific method.
< zoq> qwertea: I guess one easy solution would be to serialize the model and take a look at the model file to see the mapping, e.g. pass "--output_model_file=model.xml", unless you'd like to write some C++ code to do that
< qwertea> zoq: oh, I didn't know mlpack itself had to be built with DEBUG=ON; I thought only my executable had to have debugging symbols...
< qwertea> zoq: ok, that makes sense, that would be the easiest solution haha
< qwertea> zoq: thanks for your answers!
< zoq> qwertea: If you build mlpack with DEBUG=ON, which isn't recommended for regular use since all optimizations are turned off, mlpack::Log::Debug.ignoreInput is already set, but you could also set the parameter manually.
< qwertea> zoq: yep, however I can't set it if mlpack hasn't been built with DEBUG=ON, because Log::Debug is a NullOutStream
< qwertea> zoq: it doesn't have the ignoreInput attribute
< zoq> qwertea: You are right, in this case you have to build with DEBUG=ON, which I think is fine if you use it just for debugging?
< qwertea> zoq: yep that's cool :) thanks!
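A minimal sketch combining both answers above, assuming mlpack was built with -DDEBUG=ON (otherwise Log::Debug is a NullOutStream and has no ignoreInput member); the file name data.csv is hypothetical:

    #include <mlpack/core.hpp>
    #include <iostream>

    using namespace mlpack;

    int main()
    {
      // With a -DDEBUG=ON build this is already false; setting it explicitly
      // just makes sure debug output is not suppressed.
      Log::Debug.ignoreInput = false;

      // Load a dataset with categorical dimensions; the string-to-value
      // mappings end up in the DatasetInfo object.
      arma::mat dataset;
      data::DatasetInfo info;
      data::Load("data.csv", dataset, info, true);

      // Print every string -> numeric value mapping, per categorical
      // dimension.
      for (size_t d = 0; d < info.Dimensionality(); ++d)
      {
        if (info.Type(d) != data::Datatype::categorical)
          continue;

        for (size_t v = 0; v < info.NumMappings(d); ++v)
        {
          std::cout << "dimension " << d << ": \"" << info.UnmapString(v, d)
              << "\" -> " << v << std::endl;
        }
      }
    }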
qwertea has quit [Ping timeout: 260 seconds]
kris1 has joined #mlpack
qwertea has joined #mlpack
shikhar has quit [Quit: WeeChat 1.7]
qwertea has quit [Ping timeout: 260 seconds]
stephentu has joined #mlpack
stephentu has quit [Ping timeout: 255 seconds]
kris1 has quit [Ping timeout: 240 seconds]
kris1 has joined #mlpack
kris1 has quit [Ping timeout: 258 seconds]
stephentu has joined #mlpack
kris1 has joined #mlpack
stephentu has quit [Ping timeout: 240 seconds]
kris1 has quit [Quit: kris1]
stephentu has joined #mlpack
stephentu has quit [Ping timeout: 240 seconds]
sumedhghaisas_ has joined #mlpack
< sumedhghaisas_> rcurtin, zoq: Hey Marcus, Ryan... I think we have reached the boost::variant size limit. I am not able to add more layer types. Do you have any idea how to solve this?
< sumedhghaisas_> ahh, got it. We have defined the limit in the prereqs.hpp header
< sumedhghaisas_> I will increase it to 50 for now...
< rcurtin> sumedhghaisas_: just out of curiosity, as you add more types to the boost::variant, do you notice any compile-time slowdown? (hopefully not)
< sumedhghaisas_> rcurtin: I have tried to measure it... but technically there should be some minor slowdown, because the MPL sequence will go through more recursion.
< sumedhghaisas_> but I don't think there is a big difference between recursion depth 20 and recursion depth 40
< sumedhghaisas_> maybe we will notice some slowdown at depth 100 or so
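A sketch of the kind of change being discussed, using Boost.MPL's documented limit macros (mlpack's actual prereqs.hpp may set a different combination of these; 50 mirrors the figure mentioned above):

    // These must be defined before any Boost header is included, e.g. at the
    // top of prereqs.hpp.  Raising an MPL limit beyond its default requires
    // disabling the preprocessed headers, and the value should be a multiple
    // of 10.
    #define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS
    #define BOOST_MPL_LIMIT_LIST_SIZE 50
    #define BOOST_MPL_LIMIT_VECTOR_SIZE 50

    #include <boost/variant.hpp>

boost::variant's own ceiling, BOOST_VARIANT_LIMIT_TYPES, defaults to BOOST_MPL_LIMIT_LIST_SIZE, which is why raising the MPL limit lets more layer types fit into the variant.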
< sumedhghaisas_> also, I have set up a testing framework for layers which handle memory. This framework creates fake memory from a linear layer and checks that the gradients w.r.t. the input and the memory are both correct.
< sumedhghaisas_> But it requires me to add an extra 'input' parameter for the layer to the BackwardVisitor... because the gradient w.r.t. the memory might depend on the input as well. Will that be okay? I have added a check for whether a layer's Backward function accepts 3 or 4 parameters, to detect whether it requires the 'input' or not
< sumedhghaisas_> this way we don't have to change the Backward function of existing layers
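A hypothetical sketch of that 3-vs-4 parameter check (the signatures are simplified; mlpack's real visitors and Backward() overloads differ in the details):

    #include <armadillo>
    #include <type_traits>
    #include <utility>

    // Compile-time trait: true if LayerType::Backward() can be called with
    // four arguments (input, output, error, gradient) rather than three.
    template<typename LayerType>
    struct HasInputBackward
    {
      template<typename T>
      static auto Check(int) -> decltype(
          std::declval<T&>().Backward(std::declval<const arma::mat&>(),
                                      std::declval<const arma::mat&>(),
                                      std::declval<arma::mat&>(),
                                      std::declval<arma::mat&>()),
          std::true_type());

      template<typename T>
      static std::false_type Check(...);

      static constexpr bool value = decltype(Check<LayerType>(0))::value;
    };

    // Tag dispatch: only layers whose Backward() wants the input receive it,
    // so existing three-parameter layers keep working unchanged.
    template<typename LayerType>
    void CallBackward(LayerType& layer, const arma::mat& input,
                      const arma::mat& output, arma::mat& error,
                      arma::mat& gradient, std::true_type)
    {
      layer.Backward(input, output, error, gradient);
    }

    template<typename LayerType>
    void CallBackward(LayerType& layer, const arma::mat& /* input */,
                      const arma::mat& output, arma::mat& error,
                      arma::mat& gradient, std::false_type)
    {
      layer.Backward(output, error, gradient);
    }

    template<typename LayerType>
    void CallBackward(LayerType& layer, const arma::mat& input,
                      const arma::mat& output, arma::mat& error,
                      arma::mat& gradient)
    {
      CallBackward(layer, input, output, error, gradient,
          std::integral_constant<bool, HasInputBackward<LayerType>::value>());
    }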
< rcurtin> sumedhghaisas_: I'd prefer not to add an extra parameter for the BackwardVisitor if possible, maybe there is some other way?
< rcurtin> but unfortunately I am on vacation now so I can't dig too deep today... need to go get some breakfast
< sumedhghaisas_> I am thinking about it too... but layers which also take memory as input might need the 'input' as a parameter to compute the gradient w.r.t. the memory. The other way is to store the inputs during the forward pass...
< sumedhghaisas_> but that's a major drawback... as we would be storing the same matrix twice in the network
< sumedhghaisas_> the RNN framework already stores all the inputs... as they are passed in the Gradient call
< sumedhghaisas_> rcurtin: Ohh, and backporting is just copying that function into mlpack's extra Armadillo headers, right?
< rcurtin> sumedhghaisas_: basically yes, and then wrapping it with an #ifdef to make sure it doesn't get backported to incorrect versions
< rcurtin> so you'll need to find the first version of armadillo that has what you need
< rcurtin> if you have trouble finding old armadillo versions, send Saurabh an email, he has been collecting them :)
< sumedhghaisas_> rcurtin: collecting them? haha... nice hobby...
< sumedhghaisas_> okay, I will send him a mail then. So I find the first version that has that function... and disable our backported code for that version and newer, right...?
< rcurtin> right
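The backporting pattern described above, sketched with Armadillo's real version macros (7.950 is a placeholder for whichever version first ships the needed function, which is left unspecified here):

    #include <armadillo>

    // Only provide the backported function when the installed Armadillo is
    // older than the first version that ships it; newer versions use their
    // own definition.
    #if (ARMA_VERSION_MAJOR < 7) || \
        ((ARMA_VERSION_MAJOR == 7) && (ARMA_VERSION_MINOR < 950))

    // ... implementation copied from the newer Armadillo goes here ...

    #endif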
stephentu has joined #mlpack
stephentu has quit [Ping timeout: 240 seconds]
stephentu has joined #mlpack
stephentu has quit [Ping timeout: 240 seconds]