rcurtin_irc changed the topic of #mlpack to: mlpack: a scalable machine learning library (https://www.mlpack.org/) -- channel logs: https://libera.irclog.whitequark.org/mlpack -- NOTE: messages sent here might not be seen by bridged users on matrix, gitter, or slack
pbsds has quit [Quit: The Lounge - https://thelounge.chat]
pbsds has joined #mlpack
Unix_Man has joined #mlpack
Unix_Man has quit [Quit: WeeChat 3.6]
Guest7823 has joined #mlpack
<Guest7823> Hi team, I have an mlpack model file, and now I want to load this file into Python for inference.
rcurtin_matrixor has joined #mlpack
<Guest7823> Could you please help me with that?
Guest7823 has quit [Quit: Client closed]
Guest786 has joined #mlpack
<Guest786> Hi team, I have an mlpack model; how can I load it into Python for inference?
<Guest786> Could you please help me with that?
rcurtin_matrixor has quit [Quit: You have been kicked for being idle]
<EshaanAgarwal[m]> <zoq[m]> "Let’s move the meeting to..." <- Hi zoq! Would it be possible to move this meeting to Thursday or Friday instead? I am actually travelling 😬
<EshaanAgarwal[m]> <jonpsy[m]> "You could update us via your..." <- I have written most of the updates and I will sync that up with the latest progress.
Guest786 has quit [Quit: Client closed]
Guest781 has joined #mlpack
<rcurtin[m]> @Guest786: if you trained the model with an mlpack binding, you should be able to pickle and unpickle the model to load it. It's easiest to do this if you create the model originally with the Python bindings.
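For reference, the pickle round trip described above looks roughly like this. This is a minimal sketch: `DummyModel` is a stand-in for the model object an mlpack Python binding would return (e.g. the `output_model` entry of a binding's result dict), since the exact type depends on the method used.

```python
import pickle

# Stand-in for a model object returned by an mlpack Python binding;
# a real binding model carries the serialized C++ model state.
class DummyModel:
    def __init__(self, params):
        self.params = params

model = DummyModel({"num_trees": 10})

# Serialize the trained model to disk...
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# ...and later load it back for inference.
with open("model.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored.params["num_trees"])  # → 10
```

The point of rcurtin's advice is that this round trip is only guaranteed when the model object was created by the Python bindings in the first place.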
<Guest781> If I create the model originally with C++, how can I load it into the Python bindings for inference?
rcurtin_matrixor has joined #mlpack
Guest781 has quit [Quit: Client closed]
Guest7887 has joined #mlpack
<Guest7887> Hi team, I have an mlpack model which was originally written in C++; how can I load the model into Python for inference?
Guest7887 has quit [Client Quit]
Guest7860 has joined #mlpack
<Guest7860> Hi team, I have an mlpack model which was originally written in C++; how can I load it into Python for inference?
<Guest7860> Could you please help me with that?
<zoq[m]> <EshaanAgarwal[m]> "Hi zoq ! Would it be possible to..." <- Hm, okay, I guess you are not available otherwise?
<EshaanAgarwal[m]> zoq[m]: Sorry, I can't understand what you mean here.
<EshaanAgarwal[m]> <EshaanAgarwal[m]> "Sorry, I can't understand what..." <- I am available, but it would be difficult to do a Google Meet on a poor network. I would love to make some of the changes you talked about earlier, and to code.
Guest7860 has quit [Quit: Client closed]
Guest7895 has joined #mlpack
<Guest7895> Hi team, I have an mlpack model which was originally written in C++; how can I load it into Python for inference?
<Guest7895> Could anyone please help me with that?
<rcurtin[m]> Guest7895: I already answered the question but I don't think you saw the response:
<rcurtin[m]> > if you trained the model with an mlpack binding, you should be able to pickle and unpickle the model to load it. It's easiest to do this if you create the model originally with the Python bindings.
<rcurtin[m]> It could be tricky if the model was written in C++; the Python interface to mlpack goes entirely through the bindings, so if you have a way to retrain the model through the bindings, you will have a much easier time.
<Guest7895> The question then would be: if the model is trained with Python, can C++ load the model?
<rcurtin[m]> you would need to make sure that you know the exact C++ type produced by the binding, but yes, that should work
<rcurtin[m]> the `__getstate__()` function for a model should produce the exact serialized representation as a binary blob; you should be able to write that to a file, and then load it from C++ with `data::Load()`
<rcurtin[m]> there may be some tricky bits to get that right, but that should be the general idea; if you serialize with pickle, I think that will add some extra header information, so you need to get the exact serialized representation with `__getstate__()`
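To illustrate the distinction rcurtin draws between the raw `__getstate__()` blob and pickle's output, here is a sketch. `FakeModel` is a stand-in, since the real bytes would come from an mlpack binding model's serialized C++ state; the file written here is what `data::Load()` on the C++ side would be pointed at.

```python
import pickle

# Stand-in: a real mlpack binding model's __getstate__() would return
# the binary serialization of the underlying C++ object.
class FakeModel:
    def __getstate__(self):
        return b"\x00raw-serialized-model-bytes"

model = FakeModel()

raw = model.__getstate__()    # exact binary blob, no pickle framing
framed = pickle.dumps(model)  # pickle wraps the state in its own protocol

# Only the raw blob should be written for C++ to load; the pickle
# stream carries extra framing (protocol header, class reference).
with open("model.bin", "wb") as f:
    f.write(raw)
```

This is why dumping with `pickle` directly and pointing `data::Load()` at that file would fail: the pickle stream is not the bare serialized model.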
<rcurtin[m]> I think you can also use `_get_cpp_params()` to get the JSON representation of the model (which you can then write to file and load from C++), but JSON is bigger than the binary serialization, so that is a disadvantage of that approach
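The JSON route would look something like this sketch. The `_get_cpp_params()` call and the parameter structure are assumptions based on rcurtin's description, so a stand-in dict is used here; on the C++ side, passing a `.json` filename to `data::Load()` selects the JSON format.

```python
import json

# Stand-in for the JSON parameter representation a binding model might
# expose; the real structure is model-specific.
cpp_params = {"type": "LinearRegression", "parameters": {"lambda": 0.1}}

# Write it where a C++ program could read it back.
with open("model.json", "w") as f:
    json.dump(cpp_params, f)

# Sanity check: the file round-trips losslessly.
with open("model.json") as f:
    loaded = json.load(f)
```

As noted above, the trade-off is size: the JSON text is larger than the equivalent binary serialization.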
rcurtin_matrixor has quit [Quit: You have been kicked for being idle]
<Guest7895> I have an mlpack model which was written in C++; how can I retrain it using Databricks on AWS?
rcurtin_matrixor has joined #mlpack
<Guest7895> Any ideas or suggestions, please?
<zoq[m]> <Guest7895> "any idea suggestion pls" <- Have you seen the response above?
<Guest7895> no
<zoq[m]> The channel log has the response.
Guest7895 has quit [Quit: Client closed]