ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
< RishabhGarg108Gi> > `zoq on Freenode` Rishabh Garg (Gitter): There could be; we just have to find someone who would like to mentor the project.
< RishabhGarg108Gi> Okay. Thanks!
rhyse[m] has joined #mlpack
sayantanu has joined #mlpack
sayantanu has quit [Quit: Connection closed]
sayantanu has joined #mlpack
sayantanu has quit [Quit: Connection closed]
< AnmolpreetSinghG> Hi all! I was implementing copy and move constructors for the "Select" layer, but while testing I am getting an error. The Select layer chooses a single column from the input matrix, i.e. one training example, so I passed one selected training label to CheckCopyFunction so that it would correspond to the selected column, but it causes the error shown below.
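For reference, a minimal sketch of what copy and move constructors for a Select-like layer could look like; the member names `index` and `elements` are assumptions modeled on mlpack's ann/layer/select.hpp, not the actual patch being tested:

    #include <cstddef>

    // Simplified stand-in for mlpack's Select layer; only the state
    // relevant to copying/moving is shown.
    class Select
    {
     public:
      Select(const std::size_t index, const std::size_t elements = 0) :
          index(index), elements(elements) { }

      // Copy constructor: duplicate the layer's configuration.
      Select(const Select& other) :
          index(other.index), elements(other.elements) { }

      // Move constructor: take the source's state and reset it.
      Select(Select&& other) noexcept :
          index(other.index), elements(other.elements)
      {
        other.index = 0;
        other.elements = 0;
      }

     private:
      std::size_t index;    // Column (training example) to select.
      std::size_t elements; // Number of elements to keep (0 = all).
    };

A copy test along the lines of CheckCopyFunction would then only need inputs and labels whose dimensions match that single selected column.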
ImQ009 has joined #mlpack
nanabaahgyan[m] has quit [Quit: Idle for 30+ days]
< AakashkaushikGit> This might be a bit too much work, or may not even be a project from our side, but I wanted to know if we could provide ONNX support for mlpack, so that people are able to convert models to and from any of the libraries ONNX supports. This idea branched off the GSoC 2021 idea "Ready to use Models in mlpack". I wanted to discuss it, because I feel this is something we have to work on,
< AakashkaushikGit> as PyTorch and TensorFlow ship their own utilities to convert models to ONNX.
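Since a converter in either direction ultimately manipulates ONNX's protobuf graph, a minimal C++ sketch of walking a model's operator nodes might look like the following; it assumes the protobuf headers generated from onnx.proto are available as <onnx/onnx_pb.h>, and is not code from the existing translator:

    #include <fstream>
    #include <iostream>
    #include <onnx/onnx_pb.h>  // protobuf classes generated from onnx.proto

    int main()
    {
      onnx::ModelProto model;
      std::ifstream in("model.onnx", std::ios::binary);
      if (!model.ParseFromIstream(&in))
      {
        std::cerr << "failed to parse model.onnx" << std::endl;
        return 1;
      }

      // Each node is one operator (Conv, Relu, Gemm, ...); an
      // ONNX -> mlpack converter maps these op types to mlpack layers,
      // and mlpack -> ONNX would build such NodeProtos from layers.
      for (const onnx::NodeProto& node : model.graph().node())
        std::cout << node.op_type() << std::endl;

      return 0;
    }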
< kartikdutt18Gitt> Hey @Aakash-kaushik, could you take a look at https://github.com/sreenikSS/mlpack-Tensorflow-Translator? Last year Anjishnu did some work on the converter.
< kaushal07[m]> <zoq "kaushal07: What os are you using"> I am using Ubuntu and I used sudo apt-get install libcereal-dev
< kaushal07[m]> <zoq "kaushal07: cmake -DCEREAL_INCLUD"> ok
abhi_22[m] has quit [Quit: Idle for 30+ days]
ImQ009 has quit [Read error: Connection reset by peer]
< AakashkaushikGit> Hey @kartikdutt18, I didn't go through the complete code, but from what I understood this converts ONNX to mlpack, right? What I am proposing is to maybe build a more robust solution on top of this and add support for converting models from mlpack to ONNX; do correct me if I am wrong.
< anjishnu[m]> I think you got it right, Aakash kaushik (Gitter). That repo is for converting from ONNX to mlpack, not vice versa, so the reverse direction is something you could possibly work on. Regarding the robustness of the existing code in that repo, I have an open PR that modifies a lot of stuff compared to the master branch. Maybe you could take a look at it before deciding what other components you might need for your
< anjishnu[m]> purposes.
< anjishnu[m]> Also, I just noticed one more thing: that repo can convert some particular types of models from mlpack to torch. You can find an example of that in my PR branch. Maybe you could build on that for your purposes.
< AakashkaushikGit> Hey @iamshnoo, @kartikdutt18, @zoq, @shrit, @rcurtin, and everyone else I missed: what I am thinking of proposing as a GSoC application is a deliverable that can convert models built in mlpack to ONNX, along with improving what @iamshnoo has already worked on, which is converting ONNX models to mlpack. I want to discuss whether this is something that is wanted or could be accepted as an idea, what
< AakashkaushikGit> I need to consider, what I need to show before it is acceptable as an application or idea, and whatever else comes to mind that should be demonstrated along with it.
< zoq> AakashkaushikGit: I like the idea. The ONNX converter we have right now is a functioning prototype that supports some of the existing layers. That said, I would probably extend the ONNX -> mlpack part first, rather than mlpack -> ONNX.
< zoq> AakashkaushikGit: A good start would be to get familiar with the existing code; ONNX can be tricky.
< zoq> AakashkaushikGit: So build the code, create a simple model in some other framework and see if you can load the model, and do inference.
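Roughly, that last step could look like the sketch below, assuming the converter serializes its output as an mlpack FFN; the filename, serialized object name, and input dimensionality are placeholders:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/ffn.hpp>

    int main()
    {
      // Deserialize a model produced by the converter.
      mlpack::ann::FFN<> model;
      mlpack::data::Load("converted_model.xml", "model", model);

      // Run inference on one random test point; 784 is a placeholder
      // input dimensionality (e.g. flattened 28x28 MNIST digits).
      arma::mat input(784, 1, arma::fill::randu);
      arma::mat output;
      model.Predict(input, output);

      output.print("prediction scores");
      return 0;
    }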