rcurtin_irc changed the topic of #mlpack to: mlpack: a scalable machine learning library (https://www.mlpack.org/) -- channel logs: https://libera.irclog.whitequark.org/mlpack -- NOTE: messages sent here might not be seen by bridged users on matrix, gitter, or slack
<jonpsy[m]> Shah Anwaar Khalid: How's your PR coming along?
<khalidanwaar[m]> Hey jonpsy ! Which one?
<jonpsy[m]> you had plenty :)
<jonpsy[m]> so I'm trying to get `FindTBB.cmake` to set its path correctly
<jonpsy[m]> This is the [link](https://github.com/Project-OSRM/osrm-backend/blob/master/cmake/FindTBB.cmake). It's not able to identify my path on macOS (the M1 chip is a disaster). Someone help? :)
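A possible starting point, not verified on an actual M1 machine: Homebrew on Apple Silicon installs under `/opt/homebrew` rather than `/usr/local`, a prefix that older `Find*.cmake` modules often do not search. `CMAKE_PREFIX_PATH` is CMake's generic search-path mechanism; whether the linked OSRM module consults it (or expects its own variable instead) would need to be checked in the module's header comments.

```shell
# Assumption: TBB was installed via Homebrew on an Apple Silicon Mac.
brew --prefix tbb            # typically prints /opt/homebrew/opt/tbb

# Add the Homebrew prefix to CMake's generic search path so find_package
# and find_path/find_library calls can locate TBB's headers and libraries:
cmake -DCMAKE_PREFIX_PATH="$(brew --prefix tbb)" ..
```

If the custom `FindTBB.cmake` ignores `CMAKE_PREFIX_PATH`, setting whatever TBB-specific variable it documents at the top of the file would be the fallback.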
<khalidanwaar[m]> <jonpsy[m]> "you had plenty :)" <- Well, presently, I'm waiting for the [Convolution issue](https://github.com/mlpack/mlpack/issues/2986) to be fixed on the ann-vtable branch so that I can test my [InceptionV3 PR](https://github.com/mlpack/mlpack/pull/2963) there. Apart from that, I closed the dual-optimizer PR, for now, until someone is able to figure out the [Gradient Penalty issue](https://github.com/mlpack/mlpack/issues/3063).
<khalidanwaar[m]> How's your job hunt going? jonpsy
<jonpsy[m]> khalidanwaar[m]: got an intern
<jonpsy[m]> @ shipsy
<khalidanwaar[m]> Oh, wait, I had one more. zoq, can we merge [this](https://github.com/mlpack/mlpack/pull/2900)?
<khalidanwaar[m]> jonpsy[m]: Nice! Congrats! When are you graduating?
<jonpsy[m]> 2022
aakashi2001 has joined #mlpack
aakashi2001 has quit [Client Quit]
<jonpsy[m]> how do I find IRC chatrooms via element?
<JeffinSam[m]> Copy paste the whole thing
<zoq[m]1> <khalidanwaar[m]> "Oh, wait, I had one more . zoq..." <- Hello, if you can rebase against the latest master branch, that would be awesome.
<khalidanwaar[m]> <zoq[m]1> "Hello, if you can rebase against..." <- Cool! I'll do that.
<jonpsy[m]> <JeffinSam[m]> "Copy paste the whole thing" <- irc://irc.oftc.net/#osrm, so I'm trying to enter here...
<jonpsy[m]> I tried copying this in "join public room"...in vain
<JeffinSam[m]> Oh I thought it was mlpack
<JeffinSam[m]> But here I guess copy pasting the below will work
<JeffinSam[m]> irc://irc.oftc.net/#osrm
<JeffinSam[m]> But not sure; for me the mlpack one worked
<JeffinSam[m]> Never tried other room šŸ˜…
<jonpsy[m]> not found ://
<NabanitaDash[m]> I think there is no feature like ``flatten()`` in mlpack. Should I try writing one? It is required for my PR.
<rcurtin[m]> Nabanita Dash: maybe `arma::vectorise()` is what you are looking for?
<NabanitaDash[m]> yes, thank you. I am not good with armadillo.
<rcurtin[m]> no worries šŸ˜„ their docs are great, but sometimes it can be hard to discover what all the functions do (unless you read the whole thing start to finish, but that doesn't sound like a very fun time...)
<NabanitaDash[m]> I guess writing code with armadillo will make me familiar with the base functions.
<NabanitaDash[m]> The ``model`` is of type ``FFN``, not a matrix, so calling ``arma::vectorise()`` on the ``model`` is not possible. I think introducing a ``vectorise`` class in ``methods/ann/layers`` which applies ``arma::vectorise()`` to its inputs could be a solution. This won't be a problem while using the ``add`` layers. Any thoughts?
<rcurtin[m]> I would be hesitant about adding too much new support to the neural network toolkit; we're in the process of refactoring it entirely anyway
<NabanitaDash[m]> hmm.. It is not possible to write an example of the ``adaptive pooling`` layers without using ``vectorise``.
<NabanitaDash[m]> Maybe I will just add move and copy constructors for ``mean`` and ``max`` pooling and leave aside ``adaptive`` pooling for now. We can redo it after refactoring?
<rcurtin[m]> yeah, I think that would be fine šŸ‘ļø after we merge the ann-vtable refactoring, there will probably be lots of layers that have to be adapted in follow-up PRs