verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
partobs-mdp has joined #mlpack
andrzejku has joined #mlpack
govg has joined #mlpack
witness_ has joined #mlpack
andrzejku has quit [Quit: My iMac has gone to sleep. ZZZzzz…]
andrzejku has joined #mlpack
mentekid has joined #mlpack
mentekid has left #mlpack []
andrzejku has quit [Quit: My iMac has gone to sleep. ZZZzzz…]
andrzejku has joined #mlpack
partobs-mdp has quit [Remote host closed the connection]
andrzejku has quit [Quit: Textual IRC Client: www.textualapp.com]
sumedhghaisas_ has quit [Ping timeout: 246 seconds]
kris1 has joined #mlpack
nikhilweee has quit [Remote host closed the connection]
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
shikhar has joined #mlpack
kris1 has quit [Quit: kris1]
rishabhgupta05 has joined #mlpack
< rishabhgupta05> help
rishabhgupta05 has quit [Ping timeout: 260 seconds]
govg has quit [Ping timeout: 248 seconds]
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
qwertea has joined #mlpack
< qwertea> hello everyone :) I need to access the mappings in a DatasetInfo to display a text representation of a decision tree; I didn't find any method I can call to access the mappings, any ideas?
< qwertea> I was advised before to serialize it, but I now need the mappings in my code
< qwertea> thanks in advance!
< zoq> If not, we could add a simple function that returns all mappings; maybe rcurtin can say more about this.
< qwertea> unfortunately I don't know the value :/ a function to get all mappings would be great!
< zoq> Do you mind opening an issue on GitHub for this? Maybe someone would like to work on it; I could also implement the addition.
vivekp has quit [Ping timeout: 268 seconds]
vivekp has joined #mlpack
< qwertea> sure no worries! it also looks relatively simple so I could also implement it
< zoq> qwertea: Great!
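The accessor discussed above could look something like the following simplified sketch. Note this is an illustration, not mlpack's actual API: the class name `MappingInfo` and the `Mappings()` accessor are hypothetical, and mlpack's real `DatasetInfo` tracks mappings per dimension, which is omitted here for brevity.

```cpp
#include <cstddef>
#include <map>
#include <string>

// Hypothetical sketch of a DatasetInfo-like class that exposes its
// string <-> value mappings directly (names are illustrative only).
class MappingInfo
{
 public:
  // Map a string to a numeric category, creating a new one if unseen.
  size_t MapString(const std::string& s)
  {
    auto it = mappings.find(s);
    if (it != mappings.end())
      return it->second;
    const size_t value = mappings.size();
    mappings[s] = value;
    return value;
  }

  // Reverse lookup: recover the string for a mapped value.
  std::string UnmapString(const size_t value) const
  {
    for (const auto& p : mappings)
      if (p.second == value)
        return p.first;
    return "";
  }

  // The accessor proposed in the discussion: return all mappings at
  // once, e.g. for printing a text representation of a decision tree.
  const std::map<std::string, size_t>& Mappings() const { return mappings; }

 private:
  std::map<std::string, size_t> mappings;
};
```

With such an accessor, iterating `Mappings()` gives every category label and its mapped value without serializing the object first.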
sumedhghaisas_ has joined #mlpack
shikhar has quit [Quit: WeeChat 1.7]
qwertea has quit [Ping timeout: 260 seconds]
< sumedhghaisas_> zoq: Hey Marcus, I am doing a clean build right now.
< sumedhghaisas_> while I do that, I had a couple of doubts
< sumedhghaisas_> I merged the copy task code into NTM to perform the copy task. Even though there were not many conflicts,
< sumedhghaisas_> I noticed that there is some extra code for the 'rho' parameter... how do you want to fix this issue?
< sumedhghaisas_> also, how large is the error in GradientNTMTest?
< zoq> We can remove that part, since you already fixed that in the GRU PR.
< sumedhghaisas_> zoq: shiiiiiiiiiit.... I found the problem
< sumedhghaisas_> it was in my initialization of controller in ann_layer_test
< sumedhghaisas_> I initialised the first layer of the controller wrong...
< sumedhghaisas_> should be '10 + 6'... not '10 + 5'... I was using 5 block memory before
< sumedhghaisas_> but then for testing I also tried 6...
< zoq> ah, I see, easy to fix
< sumedhghaisas_> I will commit now... let's see if the online gradients pass
< sumedhghaisas_> wait... sorry it should be '10 + 10'...
< sumedhghaisas_> I think those network checks we discussed that day are necessary :P
< zoq> Agreed :)
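The sizing bug above comes down to a simple rule: an NTM controller sees the external input concatenated with the vector read from memory, so its first layer must accept inputSize + memoryWordSize values (here 10 + 10, not 10 + 5 or 10 + 6). A minimal sketch of that concatenation, purely illustrative and not mlpack code:

```cpp
#include <vector>

// Build the controller's input by concatenating the external input
// with the read vector from memory. The controller's first layer must
// therefore be sized input.size() + readVector.size().
std::vector<double> ControllerInput(const std::vector<double>& input,
                                    const std::vector<double>& readVector)
{
  std::vector<double> joined(input);
  joined.insert(joined.end(), readVector.begin(), readVector.end());
  return joined;
}
```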
< sumedhghaisas_> also, could you help me set up the NTM copy-task tests on the server? my computer becomes too slow with those tests running
< sumedhghaisas_> another issue... I installed the merged NTM and benchmark tasks, and there was an error while building the models repo, something related to parallel_sgd... some implementation not being there. So I deleted the extra headers in sgd_impl.hpp and it worked.
< sumedhghaisas_> don't know if that's the right fix...
< sumedhghaisas_> but maybe those headers were added by mistake?
< zoq> The error should be fixed once https://github.com/mlpack/mlpack/pull/1077 is merged.
< sumedhghaisas_> ahh okay then
< zoq> About the copy task, do you want to use the code from the models repo, or would you like to write a boost test case?
< sumedhghaisas_> I like the idea of 'models' repo... I would like to use that.
< sumedhghaisas_> should I send a PR there?
< sumedhghaisas_> I also changed some parameters in the LSTM baseline... it is performing better now.
< sumedhghaisas_> I hope that's okay...
< sumedhghaisas_> I think the learning rate was quite high, because the learning was very unstable.
< zoq> I think it would be neat if we could provide some pretrained models, and also show some nice examples of how to use the model; the place for that would be the models repo.
< sumedhghaisas_> Also changed the optimizer to RMSProp...
< zoq> Ah nice :)
< sumedhghaisas_> So what do you propose?
< zoq> About the test?
< sumedhghaisas_> yeah. Add a simple copy task to the boost tests and the major ones to models?
< sumedhghaisas_> ohh... you were saying both in models. Sorry, I got the wrong idea...
< zoq> It would be great to have a unit test in mlpack for the NTM model (boost test case); we have to see if we can come up with something that runs reasonably fast on Travis...
< sumedhghaisas_> ahh... yes, I was thinking about that. You know we have the recursive Reber grammar test. Why not make it more recursive?
< sumedhghaisas_> So we define a recursion depth and get a Reber grammar.
< sumedhghaisas_> it would be cool to check at which recursion depth the LSTM fails...
< sumedhghaisas_> that would also be the NTM model test
< zoq> Sounds interesting, nice idea.
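A generator for the idea discussed above might look like the following sketch: depth 0 yields a plain Reber grammar string, and each additional level wraps it in the embedded grammar (B T ... T E or B P ... P E). The transition table follows the standard Reber automaton; this is an illustration, not mlpack's actual test code.

```cpp
#include <cstddef>
#include <random>
#include <string>

// Walk the standard Reber automaton once, producing one string.
std::string ReberString(std::mt19937& rng)
{
  // Per state: the two possible emitted symbols and successor states.
  static const char symbols[5][2] = {{'T', 'P'}, {'S', 'X'}, {'T', 'V'},
                                     {'X', 'S'}, {'P', 'V'}};
  static const int next[5][2] = {{1, 2}, {1, 3}, {2, 4}, {2, 5}, {3, 5}};

  std::string out = "B";
  std::uniform_int_distribution<int> coin(0, 1);
  int state = 0;
  while (state != 5)  // State 5 is the accepting state.
  {
    const int c = coin(rng);
    out += symbols[state][c];
    state = next[state][c];
  }
  return out + "E";
}

// Wrap a Reber string 'depth' times in the embedded grammar, so the
// recursion depth at which a model (LSTM vs. NTM) fails can be probed.
std::string EmbeddedReber(std::mt19937& rng, const size_t depth)
{
  if (depth == 0)
    return ReberString(rng);

  std::uniform_int_distribution<int> coin(0, 1);
  const char wrap = (coin(rng) == 0) ? 'T' : 'P';
  return std::string("B") + wrap + EmbeddedReber(rng, depth - 1) + wrap + "E";
}
```

Each embedding level adds four symbols and forces the network to remember the wrapping symbol across the whole inner string, which is what makes increasing depth a progressively harder memory test.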
< sumedhghaisas_> Okay, I will do that first. Then let's set up the models thing. Is that fine?
< zoq> Sure, sounds fine to me.
< sumedhghaisas_> okay. So did you get a look at the MemoryTest design? Do you think that's the correct design for testing memory gradients?
< zoq> Let me take a closer look at the MemoryTest tomorrow.
< zoq> About the models repo: basically all you have to do is copy the code from the ann/augmented/tasks folder, clone the models repo, adjust the GenerateModel function, and build with:
< zoq> cmake -DMLPACK_LIBRARY=/path/to/build/lib/libmlpack.dylib -DMLPACK_INCLUDE_DIR=/path/to/build/include/ && make
< zoq> and at the end run the run_copy_task.sh script.
< sumedhghaisas_> okay. Ahh yes... I think I have a login for the server.
< sumedhghaisas_> but how do I select the machine on which to run the task?
< zoq> I think you should be able to ssh into masterblaster.mlpack.org?
< sumedhghaisas_> yup. I can.
< sumedhghaisas_> ohh... I just ran lscpu there... it's a single 72-core machine... I thought they were separate machines. That's what we have here at the uni
< zoq> ah, yeah no it's a single machine
< ironstark> rcurtin: zoq: I will soon start working on benchmarking R, so we will need to set up R. I know how to implement the algorithms in R (after the dataset is read). I might need help with the part where we send the command from the Python scripts to run the R script, with the changes we need to make to the Makefile for setting up and ensuring smooth functioning of the R scripts, and with how to
< ironstark> return the calculated runtime back to the Python script.
< ironstark> The first thing I need to know is how to set up R on the system. Once that is done, I'll try to figure out the other problems on my own by running the scripts; if I can't, I'll ask for help.
< ironstark> but the setup is something I am not sure how to achieve. Since R is a language and not a library, how do we set it up using make setup?
< ironstark> I have also thought about benchmarking against some other libraries, like MachineLearning.jl (Julia), MLlib (Spark), and Shark (C++), after I am done with R. Please let me know your thoughts on this.
< sumedhghaisas_> zoq: Okay, the gradients are passing now. There is a compilation error on AppVeyor regarding the arma::each_col function. Is AppVeyor using an older version of Armadillo?
< zoq> sumedhghais: We build against 7.800.2, probably lack of lambda support ...
Sam_____ has joined #mlpack
< zoq> ironstark: We could build R from source: https://cran.r-project.org/doc/manuals/r-release/R-admin.html#Building-from-source but we could also just install R via the package manager. I guess since we build everything from source we could at least test it out.