verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#1428 (master - 87776e5 : Ryan Curtin): The build passed.
< nilay>
zoq: didn't need to modify the cnn class?
< zoq>
nilay: I don't think so
< zoq>
nilay: At least not for the weight size part; if you call UpdateGradients, of course that function has to be public.
< nilay>
zoq: yes, UpdateGradients is public now. But I am not sure if this is enough, as the outputLayers may be in networkA or networkB and we are not calculating the error
< nilay>
for networkA or networkB
< zoq>
I'm not sure what you mean, we calculate the error in the backward pass, maybe I missed something.
< nilay>
zoq: in the backward pass we pass the error to the functions, but in Evaluate() we calculate the error with OutputError()
< nilay>
maybe if I put that after the forward call it'll be enough.
< nilay>
so now the only thing I need to do is recognize the connect_layer and skip the mandatory tasks that we do for the other layers.
< nilay>
and then modify the Predict() function...
< zoq>
nilay: ah, right, you have to calculate the error in the forward function, so that you can use network.error in the backward pass.
< zoq>
nilay: yes, sounds right
< nilay>
ok, thanks.
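(A minimal sketch of the pass order being discussed above. The names Forward, OutputError, Backward, and UpdateGradients mirror the conversation, but the class and signatures here are illustrative assumptions, not the actual mlpack ann API.)

    #include <mlpack/core.hpp>

    // Toy network showing why the output error has to be computed right
    // after the forward pass: Backward() consumes `error`, and
    // UpdateGradients() must be publicly callable from outside the class.
    class ToyNetwork
    {
     public:
      double Evaluate(const arma::mat& input, const arma::mat& target)
      {
        Forward(input);                         // forward pass through all layers
        error = output - target;                // what OutputError() would set
        Backward();                             // needs `error` to already be set
        UpdateGradients();                      // public, as discussed above
        return arma::accu(arma::square(error));
      }

      void UpdateGradients() { /* accumulate layer gradients (stub) */ }

     private:
      void Forward(const arma::mat& input) { output = input; } // stub
      void Backward() { /* propagate `error` backwards (stub) */ }

      arma::mat output, error;
    };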
George__ has joined #mlpack
< George__>
Hi guys, would anyone care to help me with a few general(~ish) questions (preferably with some mlpack examples, but not necessarily)?
< zoq>
George__: Hello, sure, you can ask any questions you like; we'll get back to you once we have the time.
< George__>
Well, first I was wondering if you guys have any tips for improving computation times (e.g. flags to set when compiling to allow for better optimization of floating point operations)? Secondly, I was wondering if there is any way one could "force" a regression algorithm (currently I'm using ridge regression from dlib, with a 4-dimensional kernel) to "train" itself in such a way that it always gives predictions "above" the required v
< George__>
Well, I guess the second question is kind of vague and stupid... rather, I might ask if there are any papers dealing with the subject of predicting things in a kind of situation where a false prediction for a "bigger value
< rcurtin>
George__: I am on the bus right now but when I am at a desk I can help answer
< rcurtin>
phones are too bad to type a lot on :(
< George__>
oops... that cut out, I was going to say a false prediction for a bigger value is acceptable but a false prediction for a smaller one isn't
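(What George is describing is usually handled with an asymmetric loss, i.e. penalizing under-prediction more heavily than over-prediction. A minimal illustration of the idea in C++; this is not dlib or mlpack API, just a sketch.)

    #include <armadillo>

    // Asymmetric squared loss: residuals below the target are penalized
    // `alpha` times more than residuals above it, so a model trained on it
    // is pushed toward predicting at or above the required value.
    double AsymmetricLoss(const arma::vec& predictions,
                          const arma::vec& targets,
                          const double alpha = 10.0)
    {
      double loss = 0.0;
      for (size_t i = 0; i < predictions.n_elem; ++i)
      {
        const double r = predictions[i] - targets[i];
        loss += (r >= 0.0) ? (r * r) : (alpha * r * r);
      }
      return loss / predictions.n_elem;
    }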
< rcurtin>
the commute is long today but I should be there in like 90 minutes or so...
< George__>
There's literally no rush. I started working on a project today and I have ~2 months to finish it :p
< rcurtin>
ok, I will be at my desk within 2 months :)
govg has quit [Ping timeout: 260 seconds]
George__ has quit [Ping timeout: 264 seconds]
< marcosirc>
lozhnikov: Hi
< marcosirc>
I have been reviewing the code of different tree types.
< marcosirc>
I can see that for RectangleTree, the constructor that takes a reference to the dataset will create a copy of it.
< marcosirc>
As far as I know, Rectangle trees don't modify the dataset. Am I right?
< marcosirc>
So, maybe we could save a pointer to the dataset instead of copying it, as CoverTree does. What do you think?
govg has joined #mlpack
< lozhnikov>
marcosirc: Hi, I agree. I looked through the code; currently it seems there is no reason for copying the dataset. But R trees allow adding new points, and we decided that it is better to add points to the main dataset.
< lozhnikov>
On the other hand, we can add points to the dataset and then add them to the tree.
< rcurtin>
hm, I had forgotten about that concern; in that case, I think I agree, it is better to copy the dataset when a reference is given
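(A sketch of the two ownership strategies being compared: CoverTree-style, which only keeps a pointer to the caller's matrix, versus RectangleTree-style, which copies it so that point insertion can grow the tree's own dataset. The class below is hypothetical, not mlpack code.)

    #include <armadillo>

    class ToyTree
    {
     public:
      // Copying constructor: the tree owns its dataset, so inserting new
      // points can append columns to it later.
      ToyTree(const arma::mat& data) :
          dataset(new arma::mat(data)), ownsDataset(true) { }

      // Non-copying constructor: the tree only references the caller's
      // matrix; cheaper to build, but it cannot safely grow the dataset.
      ToyTree(arma::mat* data) :
          dataset(data), ownsDataset(false) { }

      // Copies of the tree itself are disallowed to keep ownership simple.
      ToyTree(const ToyTree&) = delete;
      ToyTree& operator=(const ToyTree&) = delete;

      ~ToyTree() { if (ownsDataset) delete dataset; }

     private:
      arma::mat* dataset;
      bool ownsDataset;
    };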
mentekid has joined #mlpack
Kirizaki has joined #mlpack
< Kirizaki>
Hello
< rcurtin>
Kirizaki: hello! I haven't had a chance to merge your PR yet. I know it is very simple but I just haven't gotten to it yet :)
< Kirizaki>
nu problemo ;)
Kirizaki has quit [Client Quit]
Kirizaki has joined #mlpack
pantsforbirds has joined #mlpack
pantsforbirds has quit [Ping timeout: 264 seconds]
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#1431 (master - 0a19d07 : Ryan Curtin): The build was broken.
< pantsforbirds>
so dumb question. How are functions defined for the optimizers?
< pantsforbirds>
I have a great optimization algorithm I was hoping to submit for approval, but I'm trying to get comfortable with how the mlpack ecosystem is defined
< rcurtin>
hi there! we would be glad to look at new optimization algorithms
< rcurtin>
the functions for the optimizers are given as template parameters
< rcurtin>
and they really only need to have two functions: double Evaluate(const arma::mat& coordinates), and void Gradient(const arma::mat& coordinates, arma::mat& gradient)
< rcurtin>
it might be useful to take a look at how these are used in the tests, like src/mlpack/tests/lbfgs_test.cpp and src/mlpack/tests/sgd_test.cpp and others
< pantsforbirds>
ah, I get how you guys are doing it now
< pantsforbirds>
when I was testing my implementation I was passing a function from the <functional> header instead.
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#1436 (master - 5310731 : Ryan Curtin): The build is still failing.
< rcurtin>
pantsforbirds: yeah, we generally assume a gradient and use armadillo objects to represent coordinates, so we have to define our own FunctionType
< rcurtin>
but it seems like there is no specific documentation for that... sorry about that
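(In the absence of that documentation, here is a minimal FunctionType along the lines described above: f(x) = ||x - 1||^2 together with its gradient. The Evaluate()/Gradient() signatures follow the conversation; the L_BFGS construction and Optimize() call are assumptions based on the mlpack API of that era and may differ between versions.)

    #include <iostream>
    #include <mlpack/core.hpp>
    #include <mlpack/core/optimizers/lbfgs/lbfgs.hpp>

    // Minimal FunctionType: only Evaluate() and Gradient() are needed.
    class ExampleFunction
    {
     public:
      double Evaluate(const arma::mat& coordinates)
      {
        return arma::accu(arma::square(coordinates - 1.0));
      }

      void Gradient(const arma::mat& coordinates, arma::mat& gradient)
      {
        gradient = 2.0 * (coordinates - 1.0);
      }
    };

    int main()
    {
      ExampleFunction f;
      // The function is a template parameter of the optimizer.
      mlpack::optimization::L_BFGS<ExampleFunction> lbfgs(f);

      arma::mat coordinates = arma::zeros<arma::mat>(3, 1);
      const double objective = lbfgs.Optimize(coordinates); // converges near 1

      std::cout << "objective: " << objective << std::endl;
      return 0;
    }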
pantsforbirds has quit [Ping timeout: 258 seconds]
< rcurtin>
a coworker told me about an interesting possibility that I will investigate at some point soon...
< rcurtin>
AWS now offers a service called Lambda, where essentially you can upload a function to be executed and it will run it for you and return the results
< rcurtin>
this has been used for scikit, but my understanding is that you pay for how long the computation takes
< rcurtin>
so, it might be hard to justify to a data scientist "make your algorithms run 10% faster with mlpack (or more)" but it is much easier to say "make your algorithms 10% cheaper (or more)"
< rcurtin>
so I think I will sit down, perform some proof-of-concepts to see if I can get it to work, and then do some comparisons...
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#1439 (master - 4271e89 : Ryan Curtin): The build is still failing.