ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
xiaohong has joined #mlpack
xiaohong has quit [Ping timeout: 256 seconds]
< jeffin143>
Sreenik
< jeffin143>
Didn't order one,
< jeffin143>
Will do so after I make some space on my laptop :) for mlpack stickers
sreenik has joined #mlpack
Suryo has joined #mlpack
< Suryo>
zoq: For some of the new test functions, I was trying to condense the functional forms of the gradients by using aliases for expressions that are repeated in the gradients of different variables. I also thought that the aliases could help prevent repeated computations. But for the GoldsteinPrice function, the number of such aliases became too large, so I resorted to writing out the entire expressions for the gradients. Hope that's okay.
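(As an editorial aside, the aliasing idea Suryo describes might look roughly like the sketch below. This is a minimal hypothetical C++/Armadillo example, not the actual GoldsteinPrice implementation; the objective and all names are made up for illustration.)

```cpp
#include <armadillo>

// Gradient of f(x1, x2) = (x1 + x2 + 1)^2, with the repeated subexpression
// (x1 + x2 + 1) named once and reused in both partial derivatives.
void Gradient(const arma::mat& coordinates, arma::mat& gradient)
{
  const double x1 = coordinates(0);
  const double x2 = coordinates(1);

  // Alias for the subexpression shared by both partials.
  const double s = x1 + x2 + 1.0;

  gradient.set_size(2, 1);
  gradient(0) = 2.0 * s;  // df/dx1
  gradient(1) = 2.0 * s;  // df/dx2
}
```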
< Suryo>
Also, I've seen your comments on using CMAES and CNE for testing the non-differentiable functions. I'll test them accordingly.
< Suryo>
Thanks
Suryo has quit [Client Quit]
< jeffin143>
rcurtin, zoq: strangely enough, I used grep and found out that we haven't used PARAM_VECTOR_IN and PARAM_VECTOR_OUT anywhere
< jeffin143>
So here are the results of the many trials and errors I did this morning
< jeffin143>
Saw the definition of PARAM_VECTOR_IN and found out that it's defined as vector<T> and hence may take any type, therefore I declared a test class with some data types and then tried passing that
< jeffin143>
It threw some 15 pages of errors; after going through some of those, I understood we have to overload the << and >> operators, did that, and it compiled successfully
sreenik has quit [Quit: Page closed]
< jeffin143>
But the only issue is taking input from the command line; -v = 1,2,3 throws an error saying (=) is an invalid argument
< jeffin143>
So I decided to make a vector of int and tried taking input from the command line, and that is also throwing an error
< jeffin143>
Used something like --vec = 1,2,3 after going through the documentation on the website, but I am sure there is some error in that
jeffin143 has quit [Read error: Connection reset by peer]
< toshal>
ShikharJ: No, I didn't get them. I sent the mail quite a while ago. Frankly speaking, I don't trust our postal mail service.
< zoq>
You have to repeat the key: --vec 1 --vec 2 --vec 3
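(For readers following along, the operator overloads jeffin143 mentions would look something like the sketch below. The Test class and its fields are hypothetical, not code from the PR; only the << / >> pattern is the point. On the command line the vector parameter is then passed by repeating the flag as zoq says, rather than with '=' and commas.)

```cpp
#include <istream>
#include <ostream>

// A hypothetical type to be stored in a vector parameter.
class Test
{
 public:
  int a;
  double b;
};

// Write a Test to a stream (needed so the type can be printed/serialized).
inline std::ostream& operator<<(std::ostream& os, const Test& t)
{
  os << t.a << " " << t.b;
  return os;
}

// Read a Test from a stream (needed so the type can be parsed from input).
inline std::istream& operator>>(std::istream& is, Test& t)
{
  is >> t.a >> t.b;
  return is;
}
```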
KimSangYeon-DGU has joined #mlpack
vivekp has joined #mlpack
< KimSangYeon-DGU>
sumedhghaisas_: Hey Sumedh, I figured out the QGMM Python code uses radians, not degrees, when calculating cos(phi), so I edited it accordingly.
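(Side note on the fix: cos() in most math libraries expects radians, so a value in degrees has to be scaled by pi/180 first. A minimal C++ sketch of the conversion, unrelated to the actual QGMM Python code:)

```cpp
#include <cmath>

// Convert phi from degrees to radians before taking the cosine.
double CosOfDegrees(const double phiDegrees)
{
  const double pi = std::acos(-1.0);
  return std::cos(phiDegrees * pi / 180.0);
}
```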
< favre49>
Unless we decide to make certain things hard-coded or the mutation probabilities the same, the constructors might get very large.
< favre49>
Of course I could assign default values and the user could use setter functions to change them; this is just a design question of what would be better
< favre49>
It seems more user friendly, but it hasn't been used anywhere else in mlpack and there's probably a reason for that.
< zoq>
favre49: Yeah, we could do that, but I guess for now I would just go with hardcoded settings.
< favre49>
zoq: Alright thanks :)
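(As an illustration of the setter-based alternative favre49 mentioned above: parameters can be given in-class defaults and exposed through setters instead of a long constructor. The class and member names below are hypothetical, purely a design sketch.)

```cpp
// Hypothetical sketch: defaulted mutation parameters plus setters, instead of
// a constructor that takes every probability explicitly.
class Evolver
{
 public:
  Evolver() { /* defaults below are used unless overridden */ }

  // Setters let the user override individual defaults after construction.
  void WeightMutationProb(const double p) { weightMutationProb = p; }
  void AddNodeMutationProb(const double p) { addNodeMutationProb = p; }

 private:
  double weightMutationProb = 0.8;
  double addNodeMutationProb = 0.03;
};
```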
< zoq>
We could use the serialization feature to achieve something like that, which would allow us to save/load the model as txt, xml, etc.
< favre49>
Yup, I was thinking that as well; I'm gonna work on getting it basically working first though.
< zoq>
right, I was thinking the same
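(A rough sketch of the save/load idea, assuming mlpack's data::Save / data::Load serialization helpers apply to the model type as they do to other mlpack models; the FFN placeholder and file names are illustrative only.)

```cpp
#include <mlpack/core.hpp>
#include <mlpack/methods/ann/ffn.hpp>
#include <mlpack/methods/ann/layer/layer.hpp>

using namespace mlpack;

int main()
{
  ann::FFN<> model;
  // ... build and train the model ...

  // The file extension (xml, txt, bin) selects the serialization format.
  data::Save("model.xml", "model", model, false);

  // Later, restore the model from disk.
  ann::FFN<> loadedModel;
  data::Load("model.xml", "model", loadedModel);

  return 0;
}
```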
vivekp has quit [Read error: Connection reset by peer]
favre49 has quit [Quit: Page closed]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 246 seconds]
vivekp has joined #mlpack
< akhandait>
sreenik: Hey, sorry I am quite late.
< sreenik>
akhandait: Hi
< akhandait>
Okay, so let's start.
< sreenik>
Yes
< akhandait>
Did you make any progress with the onnx C++ API after that problem was solved?
< sreenik>
Yes, I extracted the layers, attributes and values and am mapping them to the corresponding mlpack layer attributes now
< sreenik>
I tried extracting the weights but have still not been able to. It was easy with Python; perhaps some digging into protobuf is needed to do this in C++
< sreenik>
One thing that is a concern is that there are quite a few simple layer types not implemented in mlpack
< sreenik>
Like, say, logsoftmax is there but no softmax
< akhandait>
Hmm, okay
< sreenik>
Then, the LRN layer, which is present in the AlexNet model, is not there yet. These are not my top concerns, but they need to be addressed as well
< zoq>
We could add a Softmax layer, but I think LogSoftmax is what people often use anyway.
< akhandait>
zoq: Yeah, I am not sure it will be useful having just Softmax when we already have LogSoftmax
< sreenik>
zoq: You are right, but sadly the pretrained models in the onnx zoo have softmax all over. But adding one wouldn't be much of an issue since logsoftmax is already there
< zoq>
I see, I guess if we'd like to reuse some of those models, it makes sense to implement softmax?
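(For reference, the relationship is simply softmax = exp(log-softmax), and a standalone, numerically stable softmax over a vector takes only a few lines of Armadillo. This is just the math, not an mlpack layer implementation.)

```cpp
#include <armadillo>

// Numerically stable softmax over a column vector of logits.
arma::vec Softmax(const arma::vec& logits)
{
  const arma::vec shifted = logits - logits.max();  // guard against overflow
  const arma::vec expValues = arma::exp(shifted);
  return expValues / arma::accu(expValues);
}
```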
< akhandait>
sreenik: I think we should build a simple onnx trained model with basic layers like Linear which we have right now
< akhandait>
to at least test our framework
< sreenik>
atharva: Okay that is reasonable
< zoq>
That sounds like a good idea to me as well.
< sreenik>
Yes I will try linear, sigmoid, relu and then move on to convolutions then?
< akhandait>
Sounds good. Once the basic framework is set and tested for a simple model with linear, sigmoid, relu, etc., adding more of the layers which we have in mlpack should not be difficult
< sreenik>
Right
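(For a concrete picture of the mlpack side the converter would target, a tiny network with the layers mentioned could be assembled with the ANN API roughly as below; layer sizes are arbitrary and this is a sketch, not converter code.)

```cpp
#include <mlpack/core.hpp>
#include <mlpack/methods/ann/ffn.hpp>
#include <mlpack/methods/ann/layer/layer.hpp>

using namespace mlpack::ann;

int main()
{
  // Linear -> Sigmoid -> Linear -> LogSoftMax, e.g. for 784-d inputs and
  // 10 classes.
  FFN<NegativeLogLikelihood<>> model;
  model.Add<Linear<>>(784, 128);
  model.Add<SigmoidLayer<>>();
  model.Add<Linear<>>(128, 10);
  model.Add<LogSoftMax<>>();

  return 0;
}
```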
< akhandait>
What is the problem with protobuf you said you were facing?
< sreenik>
Can't really seem to extract the weights from a TensorProto object
< sreenik>
There are fields like "int", "float", etc. which do not contain the value
< akhandait>
Is it compiling? Or is it some extension of the same issue we were facing that day?
< akhandait>
sreenik: Oh, okay
< sreenik>
Along with another field called "raw_data" which actually has the value but in the form of a byte stream
< sreenik>
So it has to be converted
< akhandait>
Okay, what ways have you thought of to do that?
< sreenik>
The Python implementation has a function call that directly converts it. Will dig deeper into that.
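(One plausible way to decode raw_data on the C++ side, assuming the bytes are packed little-endian IEEE-754 floats; this is a guess at an approach, not the solution sreenik ended up with.)

```cpp
#include <cstring>
#include <string>
#include <vector>

// Reinterpret the tensor's raw byte string as a vector of floats.
std::vector<float> DecodeRawData(const std::string& rawData)
{
  std::vector<float> values(rawData.size() / sizeof(float));
  std::memcpy(values.data(), rawData.data(), values.size() * sizeof(float));
  return values;
}
```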
< akhandait>
Cool
< sreenik>
I am currently focusing on the layer conversion from onnx to mlpack which we were discussing earlier. Once that is done I will try to solve this
< akhandait>
Any other issues? If not we could move on to discuss the timeline
< sreenik>
No other issues right now
< akhandait>
sreenik: Okay
< akhandait>
sreenik: About that PR, I guess we will need that in a while
< akhandait>
I will try and review it in a couple days.
< sreenik>
That is done for now. Will modify it as and when needed (I don't see a need right now)
< akhandait>
Sure.
< akhandait>
Now coming to the timeline, I guess we are a little behind right now.
< sreenik>
Yes
< sreenik>
It needs a little modification
< akhandait>
sreenik: Yeah, can you make a Google Doc and just copy-paste this timeline?
< akhandait>
So that we can make changes as needed
< sreenik>
Ya sure that will be of help to us
< akhandait>
Let's set the tasks for the next week, till 9th that is.
< akhandait>
What do you think is a reasonable goal for next week?
< sreenik>
Completing the mapping along with..
< sreenik>
Weight extraction and transferring the weights to the corresponding mlpack model
< akhandait>
Okay, sounds good
< sreenik>
Basically create the onnx to mlpack converter and maybe test it with some simple models
< akhandait>
Yes
< sreenik>
If you look at the original timeline, this is actually somewhat in line with it (since I had kept an entire week to test it with small models)
< akhandait>
sreenik: I saw that just now; that will give us some breathing space.
< akhandait>
I think for now, some things which you had mentioned like the RNN functionality for the parser can be kept aside
< sreenik>
Yes that is not reasonable to invest time in right now
< akhandait>
So, I think that's it for now then.
< sreenik>
Yup, will communicate with you over anything else through hangouts or here if required
< akhandait>
Sure, good night!
< sreenik>
Good night :)
manik has joined #mlpack
manik has quit [Client Quit]
sreenik has quit [Ping timeout: 256 seconds]
vivekp has quit [Ping timeout: 246 seconds]
Suryo has joined #mlpack
< Suryo>
zoq: For my PR #117, the Travis build isn't running, IDK why.
< Suryo>
zoq, rcurtin: would it be okay to test all the test functions using CNE? As of now, I have only tested three of them that are non-differentiable. I had to select initial points that are 'appropriately close' to the global solutions; otherwise, the solution points were getting stuck in local minima, it seems :(
< Suryo>
Let me know. Based on that, I'll wrap up the test development. I'm almost done, except for style fixes, etc. Then I'll resume development of PSO :)
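(For reference, exercising a non-differentiable objective with CNE needs only an Evaluate() method; below is a minimal sketch against the ensmallen API, with a made-up objective and a hand-picked starting point close to the optimum, as discussed above. The constructor defaults are assumed to be reasonable.)

```cpp
#include <ensmallen.hpp>

// Hypothetical non-differentiable objective (sum of absolute values), standing
// in for the test functions discussed above; its minimum is at the origin.
struct AbsSumFunction
{
  double Evaluate(const arma::mat& x) { return arma::accu(arma::abs(x)); }
};

int main()
{
  AbsSumFunction f;

  // Start near the global optimum, as described above.
  arma::mat coordinates("0.5; -0.3");

  // CNE is gradient-free, so only Evaluate() is required.
  ens::CNE optimizer;
  optimizer.Optimize(f, coordinates);

  return 0;
}
```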