ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
< ABHINAVANAND[m]> Hi, I have thought of a way to increase the speed of the pooling layer. Currently I am trying to see how much of a speed gain I get. What kernel sizes should I use for this experiment? I was thinking of using 2x2, 3x3, 4x4, and 5x5. I am not sure if kernel sizes greater than 5x5 are used. Up to what kernel size should I go?
KyleTaylor[m] has joined #mlpack
< AakashkaushikGit> Hey @zoq, as we discussed in our last mail, I can implement MobileNet and ResNet (all the variants that PyTorch offers for classification), but I think it would be better to implement them as customizable models in mlpack/models rather than as modules in mlpack itself. That way, segmentation models or other model types can be built on them in the future with only slight internal code
< AakashkaushikGit> modifications and no API or behavior changes for users. When users want to call resnet50 or mobilenetv3, they would just do that; and when we want to use them as a backbone, we would call them in the code for the model that uses them and just change the parameters for that specific model.
gunnxx has joined #mlpack
gunnxx has quit [Quit: Connection closed]
ImQ009 has joined #mlpack
gauravk25 has quit [Ping timeout: 240 seconds]
< rcurtin[m]1> ABHINAV ANAND: it looks like your implementation is faster for anything larger than 2x2; am I reading that right?
< ABHINAVANAND[m]> Yes.
< rcurtin[m]1> do you think there are additional accelerations that could be made for 2x2 so it's also faster than the current implementation? if so, then the question of which algorithm to use is very easy---we would always use yours 👍️
< ABHINAVANAND[m]> I will calculate a rough estimate of total computation required by both algorithms and let you know.
< rcurtin[m]1> 👍️
< zoq> ABHINAVANAND[m]: Very nice speedups!
< jonpsy[m]> hey rcurtin
< jonpsy[m]> I applied your suggestion about elaborating on how it fits into our codebase
< jonpsy[m]> I wanted to know if I've been thorough enough; would you mind having a brief look (you can skip straight to the design part)? let me know
< rcurtin[m]1> where can I see the proposal?
< jonpsy[m]> ping me when it asks for permission, I'll grant it ASAP
< rcurtin[m]1> ping
< jonpsy[m]> pong
< jonpsy[m]> done, see if you got it?
< zoq> AakashkaushikGit: Responded on the mail.
< rcurtin[m]1> yep, thank you
< jonpsy[m]> you can "Ctrl+F" for "Things to be done" and "Design implementation"
< jonpsy[m]> and example usage, yeah that'll interest you too :)
< rcurtin[m]1> it looks like you're using the exact same multi-objective function optimization API that the existing optimizers do, which seems great from my perspective 👍️
< jonpsy[m]> for SPEA-II, it's the same as NSGA-II
< jonpsy[m]> and for MOEA/D-DE, it's similar to PSO (two policies do our work)
< jonpsy[m]> if you have any comments, don't hesitate. In fact, be as harsh as possible :)
< rcurtin[m]1> sure, if I have a chance I'll try to take a deeper pass
< jonpsy[m]> cheers
< rcurtin[m]1> I might suggest using the type of the responses to determine which task to perform, but that may not be perfect. Basically, if the user passes an `arma::Row<size_t>`, it is classification, but if they pass an `arma::rowvec`, it is regression
< rcurtin[m]1> hm, yeah, maybe the better idea here is to have a template parameter like you suggested, like `ResponsesType`... if that's `size_t` or some other integer type, then training expects an `arma::Row<size_t>` and it will be classification; if `ResponsesType` is `double` or some other floating-point type, then a regression tree is produced
< rcurtin[m]1> we definitely don't want inheritance here, because when we predict, we'll be recursively calling functions of `DecisionTree`, and that extra virtual function overhead will make a difference
< rcurtin[m]1> hmm, maybe did you mean use overloading instead of inheritance?
< rcurtin[m]1> (sorry, I don't know how well I answered your questions! I gave like four different responses all at once :))
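(For context on the overhead mentioned above, a minimal standalone sketch; the struct names are illustrative, not mlpack code. Any `virtual` member forces a vtable pointer into every object, and each virtual call becomes an indirect call:)
```c++
#include <cstdio>

// Illustrative only: adding a virtual function changes object layout.
struct PlainNode   {                              int value; };
struct VirtualNode { virtual ~VirtualNode() {}    int value; };

int main()
{
  // On a typical 64-bit system this prints something like "4 vs 16":
  // the vtable pointer enlarges every node, and virtual calls in a
  // recursive tree traversal cannot easily be inlined.
  std::printf("%zu vs %zu\n", sizeof(PlainNode), sizeof(VirtualNode));
  return 0;
}
```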
< RishabhGargGitte> I am not exactly sure what it is called, but the concept that I have in my mind about OOP is to minimise code duplication. Whenever I see two similar classes sharing a few common methods, I tend to make an abstract class where those functions reside, and then I create child classes where I implement the specialised methods.
< RishabhGargGitte> Your responses are helpful. Here I am trying to brainstorm different ways in which regression trees can be implemented. So, it's good for me to see different perspectives all at once :)
< rcurtin[m]1> I don't think inheritance is the right way to do this for sure; a `virtual` function of any sort will have overhead. I think other designs are probably fine though (and plus, this would be the kind of thing that is easy to change at implementation time, if we find difficulties with whatever design was proposed)
< RishabhGargGitte> Yes. I am wondering: if we define the Train and Classify methods only inside the child class and not in the base class, then we will not need to maintain a vtable, because those functions do not exist for the base class. So, no runtime overhead. I might be horribly wrong here; I have very limited knowledge of OOP.
< rcurtin[m]1> At that point, though, you are basically just writing two different classes; so I might just suggest using a template parameter instead and controlling the behavior based on that---because that's how the existing decision tree implementation already works
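(A rough sketch of the template-parameter approach being suggested, using hypothetical names; mlpack's actual `DecisionTree` interface differs:)
```c++
#include <armadillo>
#include <type_traits>

// Hypothetical sketch: one class, with behavior chosen at compile time
// from ResponsesType, so no virtual dispatch is needed while predicting.
template<typename ResponsesType = size_t>
class DecisionTree
{
 public:
  // Integer responses => classification; floating-point => regression.
  static constexpr bool IsClassification =
      std::is_integral<ResponsesType>::value;

  void Train(const arma::mat& data,
             const arma::Row<ResponsesType>& responses)
  {
    if (IsClassification)
    {
      // ... build a classification tree on integer labels ...
    }
    else
    {
      // ... build a regression tree on real-valued responses ...
    }
  }
};

// Usage: DecisionTree<size_t> trains a classifier;
//        DecisionTree<double> trains a regression tree.
```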
< jonpsy[m]> it looks like arma::fmat stores fp16?
< rcurtin[m]1> why do you say that? it should just store `float`
< rcurtin[m]1> I guess if your system has `sizeof(float) == 2`, then I suppose it is fp16?
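(For reference, a quick standalone check of what `arma::fmat` actually stores, using standard Armadillo:)
```c++
#include <armadillo>
#include <iostream>
#include <type_traits>

int main()
{
  // arma::fmat is arma::Mat<float>; on virtually every platform
  // sizeof(float) == 4, i.e. fp32, not fp16.
  static_assert(std::is_same<arma::fmat::elem_type, float>::value,
                "arma::fmat stores float");
  std::cout << sizeof(arma::fmat::elem_type) << std::endl;  // prints 4
  return 0;
}
```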
< RishabhGargGitte> Ahh, yes. I think templates should be the choice here, then. Thanks, it was a nice discussion with you.
< rcurtin[m]1> 👍️
< jonpsy[m]> On Windows, tests are failing because of numerical errors
< jonpsy[m]> When I'm doing `as_scalar(solution)`, the returned float value is somehow different
< rcurtin[m]1> if this is an ensmallen test, are you sure that the algorithm is starting from the exact same starting point? different OSes will generally have different results if you are generating, e.g., a random starting position
< rcurtin[m]1> I might suggest running the test many times locally with different random seeds to see how robust it is
< jonpsy[m]> wait, let me get back with something concrete
< jonpsy[m]> > I might suggest running the test many times locally with different random seeds to see how robust it is
< jonpsy[m]> I'll try this.
< jonpsy[m]> I have one sanity-check question: does Catch produce the same output for `arma::randu()` for all tests in a single test suite?
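(A minimal sketch of re-running an optimization from fixed seeds locally, per the suggestion above; the loop and the tolerance check are illustrative, but `arma::arma_rng::set_seed()` is standard Armadillo:)
```c++
#include <armadillo>

int main()
{
  // arma::randu() draws from one global RNG stream, so results depend
  // on everything that ran before; pinning the seed makes each trial
  // reproducible.
  for (unsigned int seed = 0; seed < 100; ++seed)
  {
    arma::arma_rng::set_seed(seed);
    arma::mat start(10, 1, arma::fill::randu);  // reproducible start point
    // ... run the optimizer from `start` and check the tolerance ...
  }
  return 0;
}
```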
< RishabhGargGitte> In the `GiniGain` and `InformationGain` classes, we have a method called `EvaluatePtr` that takes a pointer to the vector containing the class counts as input. I suspect that this is done to prevent unnecessary copying when the function is called. But couldn't we pass it by reference there? Or is there some other specific purpose that I missed?
say4n has joined #mlpack
< rcurtin[m]1> Rishabh Garg (Gitter): take a look at how it's used; it's sometimes called on the first column of a matrix, sometimes on the second; passing a pointer avoids creating an Armadillo object wrapped around that memory (from which we'd just be accessing the raw memory anyway)
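(A rough sketch of the pointer-based pattern described above; the body and the gain convention here are illustrative, not mlpack's exact `EvaluatePtr` implementation:)
```c++
#include <armadillo>

// Taking a raw pointer lets the caller pass any column of an existing
// counts matrix directly, without building an Armadillo view around it.
double EvaluatePtr(const double* counts, const size_t countLength)
{
  double total = 0.0;
  for (size_t i = 0; i < countLength; ++i)
    total += counts[i];
  if (total == 0.0)
    return 0.0;

  // Gini impurity term (sign convention is illustrative).
  double sumSquares = 0.0;
  for (size_t i = 0; i < countLength; ++i)
  {
    const double p = counts[i] / total;
    sumSquares += p * p;
  }
  return sumSquares - 1.0;
}

// The caller can point at either column of the counts matrix:
//   arma::mat classCounts(numClasses, 2, arma::fill::zeros);
//   EvaluatePtr(classCounts.colptr(0), classCounts.n_rows);
//   EvaluatePtr(classCounts.colptr(1), classCounts.n_rows);
```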
michael6 has joined #mlpack
michael6 has quit [Client Quit]
ImQ009 has quit [Read error: Connection reset by peer]
< ABHINAVANAND[m]> zoq I calculated the total operations required. Here are my calculations. The new method requires *4N^2* operations and the current method requires *(N^2 k^2) / s^2*.
< ABHINAVANAND[m]> So, if *k > 2s*, then the new method will perform better, as we saw in the timings earlier. Now I am not sure whether this would be a good addition or not.
< ABHINAVANAND[m]> My calculations are in the attached document:
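(A quick check of the break-even condition implied by the quoted counts:)
```latex
% From the operation counts quoted above:
%   new method:     4N^2
%   current method: N^2 k^2 / s^2
\frac{N^2 k^2}{s^2} > 4N^2
  \iff k^2 > 4s^2
  \iff k > 2s.
% Example: k = 3, s = 1 gives 9N^2 vs. 4N^2, so the new method needs
% roughly 2.25x fewer operations; at k = 2, s = 1 the two counts are
% equal (4N^2 each), matching the timings reported earlier.
```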
< jonpsy[m]> ABHINAV ANAND You're amazing. Personally, sometimes when I do some quick maths I feel "I've cracked the code! I've got an optimized algorithm", only to be let down when I see I missed something 😛. But in your case it seems you have a solid mathematical proof + benchmarks. So congrats!
< jonpsy[m]> May I suggest you post this method on the tensorflow/pytorch mailing lists as well, so it can be more thoroughly validated.
< jonpsy[m]> Nice work again :)
< rcurtin[m]1> nice! I'll take a look when I have a chance 👍
< zoq> ABHINAVANAND[m]: Very nice, I don't think there is any question of whether we should add it or not; the speedups are really nice, so I would really like to see it integrated.