verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< zoq> Never heard of D.A.F.; one band I really like is Moderat, kinda different and probably more modern.
delfo_ has joined #mlpack
< zoq> "A New Error"
< zoq> just a cool name for a song
< rcurtin> Moderat, I will add that to my list
< rcurtin> but I don't have any headphones on this trip, so it will have to wait until I am home...
delfo has joined #mlpack
delfo has quit [Client Quit]
thanhdng has joined #mlpack
delfo_ has quit [Quit: WeeChat 1.7]
delfo_ has joined #mlpack
delfo_ has quit [Client Quit]
delfo_ has joined #mlpack
thanhdng has quit [Quit: Page closed]
chvsp has quit [Ping timeout: 260 seconds]
arunreddy has joined #mlpack
arunreddy has quit [Client Quit]
mikeling has joined #mlpack
< mikeling> rcurtin: ping
delfo_ has quit [Quit: WeeChat 1.7]
< rcurtin> mikeling: I am here but not able to help with your compilation error, I have a lot of work I need to do and a paper deadline on Monday
< rcurtin> I would advise experimenting with different syntax possibilities and seeing how this changes the error, which could possibly help guide you to a solution
< mikeling> rcurtin: oh, I see.
< rcurtin> but I am assuming you were going to ask about that :)
< mikeling> yep, you are absolutely right :D No worries, I will keep working on it
< rcurtin> yeah, sorry that I can't help more right now
< mikeling> it's ok ;)
< rcurtin> I think, based on glancing at it, that the fix will probably be a simple syntax change, but figuring out the right change to make from gcc's errors can be very hard sometimes...
kaamos has joined #mlpack
< kaamos> hi
< zoq> kaamos: Hello there!
< kaamos> I'm an MSc student from Canada and I'm looking forward to doing GSoC this summer. This will be my first time doing GSoC and I'm really interested in some of the project ideas that you have posted. Could you please help me out with the application and the process?
< zoq> kaamos: Have you seen: http://mlpack.org/gsoc.html? The Student Manual is also quite helpful: https://developers.google.com/open-source/gsoc/resources/manual
< kaamos> I understand that your primary codebase is C++, and I don't have much background in C++. However, I have most of my undergrad in C, and I have used Java, Python, and JS throughout my internships, employment, and research. Is it still okay that I apply?
< kaamos> Thanks for the links! No, I hadn't seen that page before, but I did review your GitHub profile (that's what was linked from your GSoC profile).
< zoq> kaamos: Depending on the project, basic knowledge is sufficient; in either case you should be willing to dive into various aspects of C++.
< kaamos> Happy to learn!
< kaamos> One more thing: I don't have much planned for the summer; however, I am considering a two-week vacation. Is that okay?
< kaamos> Unfortunately, it falls within the 3-month work period and not May
< kaamos> I'm sure I can put in some extra hours over the weekends on other weeks and make up for it somehow :)
< zoq> kaamos: As long as you discuss that with your mentor upfront, show progress, and can make up the time, I think this is fine. Also, it's a good idea to note that in your proposal.
< zoq> mikeling: I just glanced over the patch and will probably take a closer look at it tomorrow, but since you changed the template parameters of SplitIfBetter, don't you have to specify the value of UseWeights when you call e.g. "double dimGain = NumericSplitType<FitnessFunction>::SplitIfBetter("
< kaamos> Cheers mate! I look forward to it!
kaamos has quit [Quit: Page closed]
< mikeling> zoq: yep, sorry, I should call it like "NumericSplitType<FitnessFunction>::SplitIfBetter<UseWeights>". I guess I just paid too much attention to the Evaluate functions :)
< mikeling> thank you!
govg has quit [Ping timeout: 260 seconds]
thyrix has joined #mlpack
govg has joined #mlpack
doublegamer26 has joined #mlpack
< doublegamer26> Hello. I would like to know how I can contribute to the community. All help is appreciated. :)
< rcurtin> doublegamer26: hi there, we get this question a lot, so we made a page for it: http://www.mlpack.org/involved.html
< rcurtin> maybe that will be helpful to you :)
< doublegamer26> Thank you.
vinayakvivek has joined #mlpack
doublegamer26 has quit [Quit: Page closed]
govg has quit [Ping timeout: 240 seconds]
govg has joined #mlpack
diehumblex has joined #mlpack
witness_ has joined #mlpack
thyrix has quit [Quit: Page closed]
adi_ has joined #mlpack
madhudeep has joined #mlpack
thyrix has joined #mlpack
witness_ has quit [Quit: Connection closed for inactivity]
madhudeep has quit [Quit: Page closed]
tejank10 has joined #mlpack
vinayakvivek has quit [Quit: Connection closed for inactivity]
adi_ has quit [Ping timeout: 260 seconds]
tejank10 has quit [Ping timeout: 260 seconds]
frankbozar has joined #mlpack
frankbozar has left #mlpack []
chvsp has joined #mlpack
thyrix has quit [Ping timeout: 260 seconds]
sicko has joined #mlpack
chvsp has quit [Ping timeout: 260 seconds]
govg has quit [Ping timeout: 258 seconds]
thyrix has joined #mlpack
govg has joined #mlpack
ironstark has joined #mlpack
ironstark has quit [Quit: Page closed]
tejank10 has joined #mlpack
ironstark has joined #mlpack
ironstark has quit [Quit: Leaving]
ironstark has joined #mlpack
ironstark has quit [Client Quit]
< tejank10> Hello, I was trying to compile and run the tests provided, but I am getting errors pertaining to the Boost library, specifically from its variant directory. Can anybody please help me?
mikeling has quit [Quit: Connection closed for inactivity]
< rcurtin> tejank10: I can try to help, but you'll need to provide more information like the error message, etc. :)
K4k has left #mlpack []
< zoq> mikeling: Two more issues; every time you use 'FitnessFunction::Evaluate<UseWeights>(...)' it should be 'FitnessFunction::template Evaluate<UseWeights>(...)'
< zoq> mikeling: and I think you missed the weights parameter in one of the SplitIfBetter function calls. Let us know if that solves the errors you see.
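For reference, a minimal self-contained sketch of the dependent-name rule zoq is pointing at; Gini and Gain here are illustrative stand-ins, not mlpack's actual classes:

    #include <iostream>

    // A stand-in fitness function with a member function template, analogous
    // to FitnessFunction::Evaluate<UseWeights> in the patch under review.
    struct Gini
    {
      template<bool UseWeights>
      static double Evaluate() { return UseWeights ? 1.0 : 0.0; }
    };

    template<typename FitnessFunction>
    double Gain()
    {
      // FitnessFunction is a dependent name here, so the 'template' keyword
      // is needed to tell the parser that Evaluate is a member template.
      // Without it, 'FitnessFunction::Evaluate < true' parses as a
      // comparison and gcc emits a confusing error.
      return FitnessFunction::template Evaluate<true>();
    }

    int main() { std::cout << Gain<Gini>() << std::endl; }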
< zoq> let's see if he checks the logs
< tejank10> Thanks @rcurtin. The following are some of the errors I am encountering while compiling ann_layer_test.cpp:
< tejank10> boost/variant/detail/make_variant_list.hpp:40:46: error: wrong number of template arguments (33, should be at least 0) typedef typename mpl::list< T... >::type type; ^
< tejank10> boost/variant/variant.hpp:2332:43: error: using invalid field ‘boost::variant<T0, TN>::storage_’ return internal_apply_visitor_impl( ^
< tejank10> boost/variant/variant.hpp:2334:13: error: return-statement with a value, in function returning 'void' [-fpermissive] );
< tejank10> I am running boost 1.58
< zoq> tejank10: And you used 'make' or 'make test' to build the tests?
< tejank10> no
< zoq> some g++ command line?
< tejank10> yes
< zoq> tejank10: I wouldn't say you can't build the test cases with g++, but it's somewhat involved. It's easier to use make: http://www.mlpack.org/docs/mlpack-2.1.1/doxygen.php?doc=build.html#build
< rcurtin> tejank10: I agree with zoq (sorry for the slow response, I had to drive to work)
< rcurtin> it's always easier to just 'make mlpack_test' to build the tests
< rcurtin> if you want to see what kind of command-line arguments to g++ are necessary, you can run 'VERBOSE=1 make' but even then be aware that the invocation of g++ there depends on lots of .o files that get built in different calls to g++
vinayakvivek has joined #mlpack
< tejank10> Thanks @zoq and @rcurtin! I shall try this now :)
thyrix has quit [Quit: Page closed]
ironstark has joined #mlpack
< ironstark> hi, I want to contribute to mlpack by developing essential deep learning modules; can someone give me pointers on where to start?
< ironstark> thanks in advance
< ironstark> I have already built mlpack and tried out some programs with it
< ironstark> are there any bugs/ideas I can work on to begin with?
< zoq> ironstark: Hello, have you searched through the list archives (http://lists.mlpack.org/pipermail/mlpack/) for other messages about the deep learning modules project? A bunch of information has been written about this project in the past.
< zoq> ironstark: Check the issues on GitHub; maybe you'll find something interesting. It's kinda difficult to keep enough issues open, so another idea is to dig around in the codebase and see if you can find something that can be improved. Also, we are always open to interesting new algorithms.
< ironstark> cool, I'll get back in some time
< zoq> ironstark: Sounds good :)
tejank10 has quit [Quit: Page closed]
shihao has joined #mlpack
deepanshu_ has joined #mlpack
Nax has joined #mlpack
Nax__ has joined #mlpack
Nax has quit [Ping timeout: 260 seconds]
Nax__ has quit [Ping timeout: 260 seconds]
< shihao> I have a question about issue#921: https://github.com/mlpack/mlpack/issues/921.
< rcurtin> shihao: sorry, I had not responded to that
< rcurtin> let me do that now...
< shihao> rcurtin: Hi!
< shihao> rcurtin: I think I figured that out
< rcurtin> oh? ok, I will still comment anyway, because the way to fix it is actually somewhat complex...
< rcurtin> or maybe there is a quick workaround you found out?
< shihao> rcurtin: I see that in the load_impl.hpp file there is only one function to load data
< shihao> But it can only load into a matrix.
< shihao> rcurtin: Is that right?
< rcurtin> I just merged another PR that Lakshya worked on that can load column and row vectors too
< rcurtin> make sure you're looking at the up-to-date git master branch
< shihao> That's great!
< shihao> I guess in the test programs there are a lot of situations like this.
< rcurtin> I added a comment, I hope it is helpful
< rcurtin> sorry if you read it and it feels like this is a much harder task than originally thought... :)
< shihao> rcurtin: No worries, it's a meaningful improvement and I can learn a lot of things :)
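For reference, a minimal sketch of what the merged overloads allow, assuming mlpack's usual data::Load call shape of (filename, object); the file names here are hypothetical:

    #include <mlpack/core.hpp>

    int main()
    {
      // The long-standing overload: load a file into a matrix.
      arma::mat dataset;
      mlpack::data::Load("dataset.csv", dataset);

      // With the newly merged overloads, column and row vectors can be
      // loaded directly.
      arma::colvec responses;
      mlpack::data::Load("responses.csv", responses);

      arma::rowvec labels;
      mlpack::data::Load("labels.csv", labels);

      return 0;
    }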
kris2 has joined #mlpack
< kris2> I'm trying to get the gradients of the last layer, or the loss function. I am doing this: arma::vec grad = model.Model()[model.Model().size() - 1].Gradient();
< kris2> but it shows an error saying the layer doesn't have the Gradient method
< zoq> kris2: Are you sure the last layer has a Gradient function? How does your model look like?
< kris2> The last layer is also model.Add<LogSoftMax<>>();
< zoq> kris2: There is no Gradient function for LogSoftMax; check src/ann/layer/log_softmax.hpp.
< kris2> FFN<MeanSquaredError<>, RandomInitialization> model; but the last layer should be the mean squared error
< kris2> or am I wrong
< kris2> check here
< rcurtin> ok, it looks like the time has come to move the build server masterblaster from its home in LA to a new home in Springfield, Oregon
< rcurtin> I think that I can minimize the downtime to be ~3 days max; I'll see if I can get it to be lower than that
chvsp has joined #mlpack
< kris2> zoq: can you have a look at the gist
< kris2> I can confirm that even Backward() gives the same error
< zoq> rcurtin: perfect timing :)
< zoq> kris2: MeanSquaredError isn't stored in std::vector<LayerTypes> network.
< zoq> kris2: Just the ones you added with Add(...).
< kris2> so if I wanted the gradients of the last layer, how should I go about it?
< kris2> I thought of using the BackwardVisitor
< zoq> That depends on the model, but arma::vec grad = model.Model()[model.Model().size() - 2].Gradient(); should work in your case.
< kris2> with input from apply_visitor(OutputParameterVisitor(), model.Model()[model.Model().size() - 1])
< kris2> I can confirm that gives the same error
< kris2> the 'has no member Gradient' error
shihao has quit [Quit: Page closed]
< zoq> kris2: Yeah, I see there is no visitor class, should I write one for you?
< kris2> zoq: basically I need the gradient of the loss layer. Also, we have the gradient_visitor implemented.
< kris2> I think we could use that for the gradients of any layer
< kris2> that implements the Gradient function
< kris2> we just have to provide the input. I am not sure what the inputs mean there.
< zoq> The gradient_visitor executes the Gradient function (Gradient(...)), but what you'd like is the gradient (Gradient()), right?
< kris2> Yes, but we could provide the input from the previous layer to the Gradient(...) visitor and essentially it would work the same, right?
< zoq> The problem is, even then the gradient visitor does not expose the gradient, because it's just used internally. I have an idea, let me write it down.
< zoq> see my comment
< zoq> I will update the gradient visitor, so that it also works, but maybe this works for you at the moment?
< kris2> Would that give the gradients for the output layer (mean squared error)?
< kris2> no, right?
< kris2> how could we get the gradients for them?
< zoq> kris2: it does
< kris2> ohh, but they don't have the Gradient() function?
< zoq> The gradient of layer x can be accessed via the previous layer.
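For reference, a minimal self-contained sketch of why the direct call fails, using toy layers rather than mlpack's actual LayerTypes: members of a boost::variant can only be reached through a visitor, and the visitor has to handle every alternative:

    #include <boost/variant.hpp>
    #include <armadillo>
    #include <stdexcept>
    #include <vector>

    // Toy layers standing in for mlpack's real ones: one has a Gradient()
    // member, the other (like LogSoftMax) does not.
    struct Linear     { arma::mat gradient; arma::mat& Gradient() { return gradient; } };
    struct LogSoftMax { };

    typedef boost::variant<Linear*, LogSoftMax*> LayerTypes;

    // Members of a boost::variant can only be reached through a visitor,
    // which is why network[i].Gradient() does not compile.
    struct GradientAccess : public boost::static_visitor<arma::mat&>
    {
      arma::mat& operator()(Linear* layer) const { return layer->Gradient(); }
      arma::mat& operator()(LogSoftMax*) const
      {
        // mlpack's visitors handle such layers more gracefully.
        throw std::runtime_error("layer has no Gradient()");
      }
    };

    int main()
    {
      Linear linear;
      LogSoftMax logSoftMax;
      std::vector<LayerTypes> network;
      network.push_back(&linear);
      network.push_back(&logSoftMax);

      // Fetch the gradient of the layer before the output layer.
      arma::mat& grad = boost::apply_visitor(GradientAccess(),
                                             network[network.size() - 2]);
      grad.ones(2, 2);
      return 0;
    }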
govg has quit [Ping timeout: 260 seconds]
govg has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#1992 (master - 07b3707 : Marcus Edel): The build is still failing.
travis-ci has left #mlpack []
chvsp has quit [Quit: Page closed]
< kris2> zoq: if we want to get the weights of the layer connecting the hidden layer to the output layer, can I do something like this: weights = boost::apply_visitor(OutputParameterVisitor(), model.Model()[model.Model().size - 2]);
< kris2> because the OutputParameterVisitor gives the trainable parameters, and here that would be the weights?
< zoq> kris2: OutputParameterVisitor returns the output of layer x (e.g. input*w -> OutputParameterVisitor()) and ParametersVisitor returns the weights/trainable parameter (e.g. w) of layer x.
< kris2> so I can use weights = boost::apply_visitor(ParameterVisitor(), model.Model()[model.Model().size - 2]);
< zoq> kris2: ParametersVisitor not ParameterVisitor and model.Model().size() not model.Model().size
< kris2> yes.
< kris2> thanks
< zoq> kris2: here to help :)
deepanshu_ has quit [Quit: Connection closed for inactivity]
< kris2> zoq: but I think we discussed earlier that ParametersVisitor does not always give a matrix for every layer type.
< zoq> kris2: If layer x does not implement the Parameters() function the return value of ParametersVisitor is an empty matrix.
< kris2> layer1->layer2->layer3: ParametersVisitor for layer2 would give the forward layer weights?
< kris2> also, is there any workaround if a layer does not implement the Parameters function?
< zoq> there is: ParametersVisitor returns an empty matrix if a layer does not implement the Parameters function
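For reference, a rough sketch of the behavior zoq describes, with hypothetical toy layers rather than mlpack's real ParametersVisitor: detect a Parameters() member at compile time and fall back to an empty matrix when it is missing:

    #include <boost/variant.hpp>
    #include <armadillo>
    #include <vector>

    // Toy layers: Linear has trainable weights, ReLU has none.
    struct Linear { arma::mat weights; arma::mat& Parameters() { return weights; } };
    struct ReLU   { };

    typedef boost::variant<Linear*, ReLU*> LayerTypes;

    struct ParametersAccess : public boost::static_visitor<arma::mat>
    {
      // Preferred overload, selected via SFINAE only when LayerType has a
      // Parameters() member.
      template<typename LayerType>
      auto operator()(LayerType* layer) const
          -> decltype(layer->Parameters(), arma::mat())
      {
        return layer->Parameters();
      }

      // Fallback: a layer without Parameters() yields an empty matrix.
      arma::mat operator()(...) const { return arma::mat(); }
    };

    int main()
    {
      Linear linear;
      ReLU relu;
      std::vector<LayerTypes> network;
      network.push_back(&linear);
      network.push_back(&relu);

      arma::mat w = boost::apply_visitor(ParametersAccess(), network[0]); // weights
      arma::mat e = boost::apply_visitor(ParametersAccess(), network[1]); // empty
      return e.is_empty() ? 0 : 1;
    }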
benchmark has joined #mlpack
benchmark has quit [Client Quit]
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#1993 (master - 847b5ac : Marcus Edel): The build is still failing.
travis-ci has left #mlpack []
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#1994 (master - e153611 : Marcus Edel): The build is still failing.
travis-ci has left #mlpack []
diehumblex has quit [Quit: Connection closed for inactivity]
< zoq> hm, just noticed that the RecurrentNetworkTest takes 20 minutes on masterblaster and 30 seconds on savannah; it has to be correlated with the number of cores.
< rcurtin> hm, it could be that masterblaster is very heavily loaded when it runs the test?
< rcurtin> I know the sun systems were swapping and I had to reduce the number of executors
< zoq> no, I checked the jenkins build history and also tested it manually
< zoq> 29sec
< rcurtin> I see what you mean, #2636 took 20 minutes and #2630 took 29 seconds, both on masterblaster, both when the load average on masterblaster should have been low
< rcurtin> I wonder if this is the result of high variance in, e.g., the number of iterations before the SGD convergence criterion is reached
vinayakvivek has quit [Quit: Connection closed for inactivity]
kris2 has left #mlpack []
< zoq> In this case the number of iterations is fixed.
< rcurtin> hmmm, very strange then
< zoq> Can we see the load at build times?
< rcurtin> if you like, add it to #922 :)
< rcurtin> hmm, maybe you can add to the build instructions, 'cat /proc/loadavg' right before mlpack_test is run
< zoq> Maybe at 2630 or 2631 the load was high
< zoq> We could test this out: can we stop the matrix build (it's stuck right now) and then start the matrix and commit builds at the same time?
< rcurtin> yeah, sure
< rcurtin> the sun builds are hanging for some reason, I haven't isolated the failure
< zoq> We just need some load at the same time we run the commit job
< rcurtin> yeah, easy to test, do you want to do that or should I?
< zoq> I can do it
< zoq> here we go :)
< rcurtin> hehe, load average 35
< rcurtin> seems like there are some issues still with dealgood where the build fails, I'll see if I can fix those
< zoq> Maybe someone will take up the "Build testing" idea; Docker could make things easier.
< rcurtin> yes, I sure hope so! (I hope someone reads these logs and sees that I am really interested in that project getting done too!)