ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
xiaohong has joined #mlpack
< xiaohong> Hi, I'm facing a weird error when compiling mlpack; I didn't change much in ppo_test.hpp. The compilation succeeded last time, but it fails now.
< xiaohong> The error message is: error: no member named 'unit_test' in namespace 'boost'; did you mean simply 'unit_test'?
< xiaohong> I pasted the full error here: https://paste.ubuntu.com/p/xS8ndvMwsj/
< xiaohong> Any comments or suggestions would be appreciated.
xiaohong has quit [Ping timeout: 256 seconds]
Toshal has joined #mlpack
sakshamB has joined #mlpack
< rcurtin> xiaohong: do you maybe have an unclosed namespace in your code or an extra 'using namespace' directive or something?
< rcurtin> ohh, yeah, looking at the error I bet you have an unclosed namespace mlpack::rl
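(For context, a minimal sketch of the unclosed-namespace mistake described here; the file and class names are hypothetical, not taken from the actual PR:)

    // ppo_impl.hpp (hypothetical) -- the outer namespace is never closed:
    namespace mlpack {
    namespace rl {

    class PPO { /* ... */ };

    } // namespace rl
    // <-- a "} // namespace mlpack" is missing here

    // Every file that includes this header then has the rest of its
    // contents, including any Boost.Test headers included afterwards,
    // silently nested inside namespace mlpack; qualified names such as
    // boost::unit_test then resolve against the nested mlpack::boost and
    // produce errors like the one quoted above.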
sreenik has joined #mlpack
xiaohong has joined #mlpack
< akhandait> sreenik: I was reviewing the PR, can you post the "network3.json" file somewhere?
xiaohong_ has joined #mlpack
xiaohong_ has quit [Ping timeout: 256 seconds]
xiaohong has quit [Ping timeout: 248 seconds]
xiaohong has joined #mlpack
< xiaohong> rcurtin: Thanks, that was it. The unclosed namespace problem is solved.
xiaohong has quit [Ping timeout: 256 seconds]
< jenkins-mlpack2> Project docker mlpack nightly build build #351: STILL UNSTABLE in 3 hr 28 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/351/
KimSangYeon-DGU has joined #mlpack
KimSangYeon-DGU has quit [Quit: Page closed]
sreenik has quit [Quit: Page closed]
xiaohong has joined #mlpack
< akhandait> sreenik: You there?
< akhandait> Oh he isn't here
ImQ009 has joined #mlpack
vivekp has joined #mlpack
xiaohong has quit [Ping timeout: 268 seconds]
sreenik has joined #mlpack
< sreenik> akhandait: Sorry I wasn't there
< sreenik> But I'll be here for the next 8 hours, with probably an hour's break in between
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
< akhandait> sreenik: Whenever you are ready
< sreenik> Yup I'm here
< akhandait> Hey there
< sreenik> Hey
< akhandait> I went over the PR in detail today, you have done some really good work!
< sreenik> Thanks :)
< akhandait> I had some suggestions, let's see what you think of them
< sreenik> Yes, I'd be happy to hear them
< akhandait> Right now, I am not posting a full review on github.
< sreenik> Okay..
< akhandait> Can you open a new PR splitting the file into two, plus a separate file for testing?
< akhandait> Also, do this from a new branch this time
< sreenik> Okay
< akhandait> You have opened this one from sreenikSS:master, create a new branch and open a PR from there
< sreenik> Okay, got it
< akhandait> I did start a review, but then decided to leave the rest of the comments for the new PR
< akhandait> I will just post those comments on this one so that you can include those changes in the new PR
< sreenik> Sounds good. But I had one question from the beginning.
< akhandait> Sure
< sreenik> Doesn't the location of the file seem awkward?
< akhandait> Hmm yeah, I thought so too
< akhandait> mlpack/bindings can be another possible location
< sreenik> Like, there will eventually be a separate repo for the onnx converter, so why not create it now and put this there?
< akhandait> Oh, I do remember reading a discussion between you and Ryan regarding this I think.
< sreenik> mlpack/bindings also seems reasonable for now
< sreenik> Yes we had a discussion about that long ago
< akhandait> Had you finalized the idea for the separate repo then?
< akhandait> Or was it just an idea you discussed?
< sreenik> It was just an idea, and that was early April if I'm not wrong, so we might need to have a small discussion again
< akhandait> Hmm okay, I think for now let's decide a proper location in this repo itself for this model parser
< sreenik> Hmm
< zoq> I like the idea of having a separate module for the converter.
< akhandait> zoq: Do you mean a separate repository?
< zoq> ah, yeah
< zoq> but up to you
< akhandait> Oh, okay. Then I guess we should just start with it instead of leaving it for later
< akhandait> If that's okay with everybody
< akhandait> rcurtin:
< sreenik> It would be great for me. It would act as a nice playground where I can also upload the test models for everyone to check out, without confusing someone who isn't familiar with this particular project
< zoq> fine with me :)
jeffin143 has joined #mlpack
< akhandait> Okay then
< akhandait> sreenik: Had you decided how the repo would be structured?
< sreenik> No, not back then
< sreenik> That might need some discussion as well
< akhandait> Okay, how about you try to come up with a structure and an initial API for the new repo/module in a couple of days? We can then discuss with senior members and finalize.
< sreenik> Okay, sounds good :)
< akhandait> Great, let's come back to your PR.
< sreenik> Yes
< akhandait> Do keep in mind our style guidelines in the next PR; try to follow them as much as possible. If you miss something, no problem, we can rectify it during the review
< sreenik> Yes I will conform to the guidelines this time
< akhandait> Great
< akhandait> Right now, the `traverseModel` function calls the `getInitType` function, which calls `getLossType`, which calls `createModel`, which calls `trainModel`
< akhandait> Hmm, do you think we should separate the functions where we create the model and train the model?
< akhandait> I mean the `createModel` function will return a model which we then train separately
< akhandait> I think that would be useful later when someone would want to just get a model from the network parameters and train it themselves
< sreenik> Yes that seems nice
< sreenik> I'll change the return type of the createModel() function and call trainModel() from a separate program
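(A rough sketch of that split; the signatures below are hypothetical, following the function names used in the chat:)

    #include <mlpack/methods/ann/ffn.hpp>

    using namespace mlpack::ann;
    using ModelType = FFN<NegativeLogLikelihood<>, RandomInitialization>;

    // Hypothetical: build the network from the parsed description and
    // return it without training it.
    ModelType createModel(/* parsed network parameters */);

    // Training becomes a separate step that the caller can skip or
    // customize:
    void trainModel(ModelType& model,
                    const arma::mat& trainX, const arma::mat& trainY)
    {
      model.Train(trainX, trainY);
    }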
< akhandait> Also, was there a reason you designed it this way, calling one function inside the other?
< sreenik> Yes, because there isn't any superclass for the initializers, loss functions, or optimizers
< akhandait> Oh okay
< akhandait> Yeah, we discussed this on the phone that day I think; I was just not that familiar with the PR then
< sreenik> If you look at getNetworkReference(), you will see that it returns an object of type LayerTypes. A similar feature is not available for the inits and losses
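(Background: in mlpack's ANN code the layers do share one variant type, while initializers and loss functions are compile-time template parameters, roughly like this:)

    // LayerTypes is a boost::variant over all of mlpack's layer classes,
    // so a single function can return "any layer" at runtime:
    LayerTypes<> getNetworkReference(const std::string& layerType);

    // Initializers and losses have no common base class or variant; they
    // are template arguments of FFN, fixed at compile time:
    //   FFN<NegativeLogLikelihood<>, RandomInitialization> model;
    // so a runtime getInitType() cannot simply return "some initializer",
    // which is what forces the nested-call design discussed above.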
< akhandait> sreenik: That sounds good, separate programs
< akhandait> Hmm, yeah
< sreenik> akhandait: Yes we had discussed this over the phone
< akhandait> That's good then, don't worry about it right now.
< sreenik> Hmm
< akhandait> How are we doing with our timeline?
< rcurtin> akhandait: yeah, we can move around the final location later, but I would say don't put it in mlpack/bindings/ since it's not an automatically-generated binding... maybe mlpack/converters/onnx or something? or a separate repository is okay too
< rcurtin> ideally we want to be careful with how many dependencies we add, though, so it may be better to keep the ONNX converter in a separate repository if it depends on, e.g., TensorFlow headers and ONNX headers and other packages
< sreenik> akhandait: Two days behind because of the OS reinstallation, unfortunately
< rcurtin> so, I am not sure which is best
< akhandait> rcurtin: Yeah, that's right.
< akhandait> sreenik: What dependencies are we gonna add
< akhandait> onnx
< sreenik> Currently just onnx is what I have in mind
< sreenik> TF, PyTorch, etc. are converted by onnx itself
< akhandait> Yeah, okay
< akhandait> rcurtin: Marcus is in favor of a new repo, I think. So if you don't have any particular problem with that, I think we can go with it
< rcurtin> yeah, let's do that
< sreenik> But I'm not sure if we would need a caffe or some other backend (couldn't find out if onnx uses its own backend)
< rcurtin> if we think it can be merged into the mlpack/mlpack repository, we can try and do that later
< sreenik> rcurtin: Yes, I agree
< akhandait> sreenik: Hmm, let's see if we need anything
< akhandait> rcurtin: Yeah, I agree too
< rcurtin> sounds good :)
< akhandait> sreenik: Take a couple of days to design a structure and the API. Tell me when you think you are ready to discuss it.
< akhandait> Sounds good?
< sreenik> akhandait: Okay
< akhandait> sreenik: Awesome, do you have anything else you want to discuss?
< sreenik> akhandait: I might need a little help regarding how to manually initialize the weights of an mlpack model in the next couple of days
< akhandait> Okay, just post it here then
< sreenik> Okay
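(For reference, the usual pattern for this in mlpack, sketched for an FFN model; weightsFromOnnx is a hypothetical arma::mat holding the converted values:)

    FFN<NegativeLogLikelihood<>, RandomInitialization> model;
    model.Add<Linear<>>(inputSize, hiddenSize);
    model.Add<ReLULayer<>>();
    model.Add<Linear<>>(hiddenSize, outputSize);

    // ResetParameters() allocates the flat parameter matrix; after that
    // the weights can be overwritten in place:
    model.ResetParameters();
    model.Parameters() = weightsFromOnnx;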
< akhandait> sreenik: You can carry on with your work now, good night!
< sreenik> Good night :)
jeffin143 has quit [Ping timeout: 248 seconds]
vivekp has quit [Ping timeout: 258 seconds]
sakshamB has left #mlpack []
sakshamB has joined #mlpack
vivekp has joined #mlpack
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
vivekp has quit [Read error: Connection reset by peer]
< sreenik> What kind of encoding does this seem to be? Is it a byte stream? The bottom half contains the decoded "raw_data". https://i.imgur.com/jCexZfE.png
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 245 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 258 seconds]
< zoq> sreenik: What are you trying to do, couldn't you use the API to read the model file?
< sreenik> In Python there is a wrapper to read them, but there is nothing of that sort in C++
< sreenik> I mean, the values are stored in a field called "raw_data" instead of the "floats" or "ints" fields
vivekp has joined #mlpack
< sreenik> zoq: Yes, what's addressed there is extracting the network parameters (i.e., the architecture but not the weights). That part is done.
< zoq> So you already have the network structure and now you need the weights for each layer?
< sreenik> The weights are stored in a TensorProto array obtained by calling graph.initializer(), where graph is the corresponding GraphProto object. The network parameters are also obtained from the graph, but not by calling initializer()
< sreenik> zoq: Yes right
< sreenik> Looking into the workflow of the Python version, where the values can be successfully extracted (as shown in the image), it seems that a numpy function called frombuffer() does the trick
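(The C++ analogue of numpy's frombuffer() is a plain byte copy out of the protobuf field; a minimal sketch, assuming the classes generated from onnx.proto:)

    #include <cstring>
    #include <string>
    #include <vector>
    #include <onnx/onnx_pb.h>  // generated protobuf headers; path may vary

    // Extract float weights from a TensorProto whose values are packed
    // into the little-endian "raw_data" bytes field instead of the typed
    // repeated fields.
    std::vector<float> ExtractRawFloats(const onnx::TensorProto& tensor)
    {
      const std::string& raw = tensor.raw_data();
      std::vector<float> values(raw.size() / sizeof(float));
      std::memcpy(values.data(), raw.data(), raw.size());
      return values;
    }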
< zoq> Currently looking into: https://github.com/lcskrishna/onnx-parser
< sreenik> zoq: Looks promising at the first glance. Let's see
< sreenik> Looking into it. I regret not having found this earlier; I have recreated a lot of what he has done. The similarity is so striking that even the complex data structures I used are mostly the same
< zoq> At first glance I think we could adapt a lot of the things he has done for the mlpack converter
< sreenik> Yes, actually!
< zoq> We should load a model with the code and check the output for some lines.
< sreenik> Right, I am doing it now
< zoq> unfortunate that there is no onnx C++ parser
< zoq> or C
< sreenik> I am not sure I understand completely. That seems to be C++
< zoq> I was talking about some reference implementation or something that is mentioned on the onnx page. This is great, but it's not mentioned anywhere.
< sreenik> Ohh, yes. ONNX's support is, I should say, terrible, especially when it comes to C++
< sreenik> I have tried opening some issues and using their gitter channel, but they don't generally respond
< zoq> I see. So hopefully this will work; if it does, I guess we should put some time into documentation. I'm sure others will find this helpful.
< sreenik> Agreed. Just a query: his implementation seems great, and I plan to borrow a couple of ideas from there as well, but how do we give him credit? A mention in the documentation, maybe?
< zoq> We could mention the author in the code as well. I don't think we will use his functions directly, since we want to convert to an mlpack model, but the code is still helpful; so what about mentioning the author in the first paragraph?
< sreenik> Sounds good
sreenik has quit [Quit: Page closed]
ImQ009 has quit [Quit: Leaving]