ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/
< Suryo>
On page No. 298 (10/70 in PDF), you will see a list of termination strategies. I was wondering if it would be wise to incorporate these within the existing framework. For instance, we already have termination strategies based on 'maxIter' and 'no improvement'
< Suryo>
Two additional strategies would be to observe improvement over a certain number of iterations and make a decision, or to see if the normalized radius of all the particles is below a tolerance level.
< Suryo>
We can do any of the following: (i) incorporate one of these into PSO (ii) incorporate both of these into PSO [because, why not?] (iii) templatize the termination strategies
< Suryo>
Let me know how you would like to proceed. If you like (iii) then we should spend a little time discussing the policy design.
< Suryo>
Thanks!
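Option (iii), templatized termination, might follow a policy design roughly like this. All names here (the policy class, its `Terminate` method, the driver loop) are hypothetical illustrations of the pattern, not mlpack's actual API:

```cpp
#include <cstddef>

// Hypothetical termination policy: stop after a fixed iteration count.
// Other policies (no improvement over k epochs, swarm radius below a
// tolerance) would expose the same Terminate() interface.
class MaxIterationTermination
{
 public:
  explicit MaxIterationTermination(std::size_t maxIter) : maxIter(maxIter) {}

  // Return true when optimization should stop.
  bool Terminate(std::size_t iteration, double /* bestObjective */) const
  {
    return iteration >= maxIter;
  }

 private:
  std::size_t maxIter;
};

// The optimizer would accept the policy as a template parameter.
template<typename TerminationPolicy>
std::size_t RunLoop(TerminationPolicy policy)
{
  std::size_t iteration = 0;
  double bestObjective = 1.0;
  while (!policy.Terminate(iteration, bestObjective))
    ++iteration;
  return iteration;
}
```

Swapping strategies then costs nothing at runtime, since the policy is resolved at compile time.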
Suryo has quit [Remote host closed the connection]
abernauer has joined #mlpack
abernauer has quit [Remote host closed the connection]
vivekp has quit [Ping timeout: 245 seconds]
Suryo has joined #mlpack
< Suryo>
zoq: I actually take that back. After some contemplation, I think that it would be best to stick to our original plan of having PSO's performance evaluated over the most recent k epochs.
< Suryo>
The reason is that in order to use the radius-normalization method, we would have to identify the point that is furthest away from the swarm leader, which requires iterating over all the particles.
< Suryo>
And although we can default this to an L2 norm, there is actually no particular reason to.
< Suryo>
Additionally, there's the curse of dimensionality.
< Suryo>
So I think that it would be best to stick to the original plan. Let me know what you think, and based on that I'll implement whatever is needed.
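For reference, the O(n) pass over the particles that Suryo describes, with the norm left as a caller-supplied choice since there is no particular reason to fix L2. This is an illustrative sketch, not mlpack code:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Swarm radius = largest distance from any particle to the swarm leader,
// computed with a caller-supplied norm. Requires one pass over all
// particles, which is the cost Suryo mentions.
template<typename NormFunc>
double SwarmRadius(const std::vector<std::vector<double>>& particles,
                   const std::vector<double>& leader,
                   NormFunc norm)
{
  double radius = 0.0;
  for (const auto& p : particles)
  {
    std::vector<double> diff(p.size());
    for (std::size_t i = 0; i < p.size(); ++i)
      diff[i] = p[i] - leader[i];
    radius = std::max(radius, norm(diff));
  }
  return radius;
}

// L2 norm as one possible (default) choice.
inline double L2Norm(const std::vector<double>& v)
{
  double s = 0.0;
  for (double x : v) s += x * x;
  return std::sqrt(s);
}
```

Termination would then compare this radius against a tolerance each epoch.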
Suryo has quit [Remote host closed the connection]
< favre49>
zoq: Good news, after implementing RK4 in the double pole cart test, NEAT is able to solve it, and lasts 10,000 timesteps (I only allowed it that long)
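For context, a generic fourth-order Runge-Kutta step of the kind favre49 mentions. This is a sketch, not mlpack's implementation; the environment would supply the derivative function `f`:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// One RK4 integration step: advance `state` by `dt` under dynamics
// dx/dt = f(x). The cart-pole environment's equations of motion would
// play the role of f.
std::vector<double> RK4Step(
    const std::function<std::vector<double>(const std::vector<double>&)>& f,
    const std::vector<double>& state, double dt)
{
  // Helper: r = x + a * y, elementwise.
  auto axpy = [](const std::vector<double>& x, const std::vector<double>& y,
                 double a)
  {
    std::vector<double> r(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) r[i] = x[i] + a * y[i];
    return r;
  };

  std::vector<double> k1 = f(state);
  std::vector<double> k2 = f(axpy(state, k1, dt / 2));
  std::vector<double> k3 = f(axpy(state, k2, dt / 2));
  std::vector<double> k4 = f(axpy(state, k3, dt));

  std::vector<double> next(state.size());
  for (std::size_t i = 0; i < state.size(); ++i)
    next[i] = state[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]);
  return next;
}
```

RK4's fourth-order accuracy is what makes the long 10,000-timestep rollouts stable compared to plain Euler integration.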
< favre49>
I think in the unit test, we should let the initial state be at 1 degree, instead of the 4.5 degrees in the paper, since it takes far longer to cross 500 time steps and may not perform well every time.
< favre49>
This is also documented in sharpneat, which is why they also use one degree as the starting angle in their testing, so I don't think it's a bad sign
< favre49>
I'll open a PR for the fix soon. Should I also include the learning tests for the environments, or should I focus on finishing up the NEAT PR first?
favre49 has quit [Remote host closed the connection]
ImQ009 has joined #mlpack
k3nz0 has quit [Quit: Leaving]
k3nz0 has joined #mlpack
KimSangYeon-DGU has quit [Ping timeout: 260 seconds]
abernauer has joined #mlpack
< abernauer>
rcurtin: So I'm making some progress, with a promising error message: unable to load shared object '/home/andrew/Documents/tests/source/pca_binding.so': /home/andrew/Documents/tests/source/pca_binding.so: undefined symbol: _ZNK5boost15program_options22error_with_option_name4whatEv
abernauer has quit [Remote host closed the connection]
jeffin143 has joined #mlpack
< jeffin143>
rcurtin, zoq: How does the blog get updated as soon as someone pushes? I mean, did you use any hook? How is the whole procedure done?
< akhandait>
sreenik[m]: Hey!
< sreenik[m]>
akhandait: Hey
< akhandait>
Can we start?
< sreenik[m]>
Yeah sure
< akhandait>
Okay, so good news that they replied.
< sreenik[m]>
Yes that's certainly some good news
< akhandait>
Okay, that's for the mlpack-onnx translator. What's the status on the onnx-mlpack translator?
< akhandait>
I think we should finish that first
< sreenik[m]>
Yup, I think we can go with a continuous improvement process on that, since the basic structure is fixed (I have updated the file on github)
< sreenik[m]>
That's because we cannot possibly test it on all possible models at once.
< akhandait>
Did you solve that issue with Add operators?
< sreenik[m]>
There are some onnx models in the onnx model zoo. I am going through them and trying to make them compatible
< sreenik[m]>
I think I have done something with it. What I have done is that I have merged some known combinations into a single layer
< akhandait>
Hmm, I think we should definitely start with mlpack-onnx within 10 days.
< sreenik[m]>
Currently there is just one known combination, which is matmul + Add. So I have created a map where we can store these and the program will take care of the rest
< akhandait>
I would suggest for now, don't try with the onnx model zoo models. Just simple support for Linear and Conv is fine
< sreenik[m]>
Yes that is done alright. Simple linear and conv are getting converted
< akhandait>
Hmm, I can think of only two important combinations: matmul and matmul + Add. Can you map matmul to the LinearNoBias layer (when there's no Add after it)?
< sreenik[m]>
Yes I have done that too
< akhandait>
Great
< sreenik[m]>
Actually, if you go through the code you can see a number of data structures being used for different purposes. As we find new combinations and such, we can just add a few elements to them and it will be fine
< sreenik[m]>
Especially the function called generateModel()
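The map-based merging sreenik[m] describes could look roughly like this. The structures and names below are illustrative guesses at the idea, not the translator's actual code:

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical lookup table: a sequence of ONNX op types that should be
// merged into a single mlpack layer. New combinations are supported by
// just adding an entry.
std::map<std::vector<std::string>, std::string> mergeableOps = {
  { { "MatMul", "Add" }, "Linear" },       // MatMul followed by Add -> Linear
  { { "MatMul" },        "LinearNoBias" }  // bare MatMul -> LinearNoBias
};

// Return the mlpack layer name for a run of ops, or "" if unknown.
std::string MapOps(const std::vector<std::string>& ops)
{
  auto it = mergeableOps.find(ops);
  return (it != mergeableOps.end()) ? it->second : "";
}
```

The graph walker would try the longest matching run first, so MatMul + Add is consumed as one Linear layer before a bare MatMul is considered.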
< akhandait>
Okay so one thing, I think for now, you should focus on testing and structuring the onnx-mlpack code properly and get it ready for merge. No need to add support for more layers right now. Just make sure that simple mnist conv model you created works. The models from the onnx zoo would probably be complex and we should start with mlpack-onnx first.
< sreenik[m]>
Yes I agree with it
< akhandait>
Okay, I will go through the changes tomorrow.
< sreenik[m]>
Sure
< akhandait>
Hmm, and did you figure out the reshaping in onnx?
< sreenik[m]>
Not quite. That is actually one reason why I am afraid to go for the mlpack-onnx translator. I think if we go for something like mlpack-onnx via libtorch we can get a solid API from torch and while converting from torch to onnx, onnx will figure out what to do
< akhandait>
Hmm, I see.
< sreenik[m]>
Need to go through that in detail before proceeding though. I think I will do that tonight so that we can take a decision as fast as possible
< akhandait>
In this case though(onnx-mlpack), we need to support reshaping for models with both conv and linear layers.
< akhandait>
I will try and see tomorrow what we can do.
< sreenik[m]>
Yeah sure. Just one doubt..
< akhandait>
Sure, go on
< sreenik[m]>
Can there generally be any occasion where the user explicitly decides to use a reshape layer?
< akhandait>
Let me think
< sreenik[m]>
If not, then we can treat it as an internal manipulation and ignore it (or rather find out what layers cause the reshape and add it to the mergeable layers)
< sreenik[m]>
Yea sure. Take your time and let me know
< akhandait>
Personally, I have only used Reshape for going from Linear to Conv and vice versa. mlpack does it on its own, so I think we could treat it as an internal thing and ignore it.
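Treating Reshape as internal could be as simple as filtering those nodes out while walking the ONNX graph. A minimal sketch, where `Node` is a hypothetical stand-in for the ONNX protobuf node type:

```cpp
#include <string>
#include <vector>

// Hypothetical simplified graph node; the real converter would read
// onnx::NodeProto and inspect its op_type().
struct Node { std::string opType; };

// Drop Reshape nodes, since mlpack handles the Linear<->Conv reshape on
// its own; everything else passes through unchanged.
std::vector<Node> DropReshapes(const std::vector<Node>& graph)
{
  std::vector<Node> out;
  for (const auto& n : graph)
    if (n.opType != "Reshape")
      out.push_back(n);
  return out;
}
```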
jeffin143 has quit [Ping timeout: 260 seconds]
< sreenik[m]>
Even I think so
< akhandait>
So, go forward with it then. We will see if some issue comes up
< akhandait>
About the conv layer, is it working now, except the groups parameter of course?
< sreenik[m]>
Yes. I'll proceed with that and keep an eye on the mlpack-onnx translator meanwhile
< akhandait>
Yeah, I guess we have until we finish this one to think what to do with mlpack-onnx. Even I will explore our options.
< sreenik[m]>
Yes, sounds good
< akhandait>
About the conv layer, is it working now, except the groups parameter of course? I think you missed this message. :)
< sreenik[m]>
Oh yes it is
< sreenik[m]>
I have a converted model too in the temp section of the translator repo
< akhandait>
Awesome!
< sreenik[m]>
:)
< akhandait>
That means if we treat the reshaping as internal in mlpack, we can convert the mnist_conv model you created. Am I right?
< sreenik[m]>
Yes right
< sreenik[m]>
But I am yet to run the mlpack model and see if the accuracy matches
< akhandait>
So, till when do you think you can do this?
< akhandait>
Oh, okay, no problem
< sreenik[m]>
Wouldn't take much time, I suppose. Testing the accuracy will hardly take an hour if everything is fine
< akhandait>
Just let me know when it's matching. I think we are close to merging the onnx-mlpack translator now.
< sreenik[m]>
In case there is a problem with the accuracy it will take an additional couple of hours (hopefully) to debug
< sreenik[m]>
Yup!
< akhandait>
Will just need some reviewing and testing
< sreenik[m]>
Yes
< akhandait>
You would need to come up with a unit testing file like the ones in mlpack/tests.
< akhandait>
That will take some time I think
< akhandait>
There would be a lot of tests
travis-ci has joined #mlpack
< travis-ci>
robertohueso/mlpack#34 (mc_kde_error_bounds - a677fa3 : Roberto Hueso Gomez): The build is still failing.
< sreenik[m]>
Oh yes. I've never done it so yes it will take some time
< akhandait>
Yeah, you will learn a very important programming skill next week then. :) Good luck!
xiaohong has quit [Read error: Connection timed out]
< sreenik[m]>
Haha
< akhandait>
Can you push the latest changes to the repo by tomorrow (including the reshaping) so that I can play around with it a little?
xiaohong has joined #mlpack
< sreenik[m]>
I think it's already there. But I'll check it once more and push it again by tonight
< akhandait>
Last thing, for now you are just using the default value for groups(=1), right?
< sreenik[m]>
Technically yes, I mean, since mlpack doesn't support the parameter yet, I didn't need to set anything.
< akhandait>
Yup, and what about dilations? Are you using the AtrousConvolution layer when they are not 1?
< sreenik[m]>
Ohh, thanks for reminding me. I had forgotten about that. A small change, probably. Anyway, can I just use atrous conv instead of conv, keeping dilations as 1, when converting a normal conv layer?
< akhandait>
Hmm, I see no reason why that won't work. Go ahead with it.
< sreenik[m]>
Okay that should make it a lot easier
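A quick sanity check on that substitution: with dilation 1, an atrous (dilated) convolution's effective kernel size equals the plain kernel size, so the swap is shape-preserving. A hypothetical helper, just to show the arithmetic:

```cpp
// Effective receptive field of a dilated convolution kernel along one
// dimension: dilation inserts (dilation - 1) gaps between kernel taps.
// With dilation == 1 this reduces to the plain kernel size, so using the
// atrous layer for an ordinary conv changes nothing about output shapes.
int EffectiveKernelSize(int kernel, int dilation)
{
  return kernel + (kernel - 1) * (dilation - 1);
}
```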
< akhandait>
Okay then, I gotta go now. I will get back in a couple of days after looking at the changes.
< akhandait>
Good night:)
< sreenik[m]>
Sure, good night :)
abernauer has joined #mlpack
< abernauer>
rcurtin: Update: the code was segfaulting and I ran it through gdb. The issue is Armadillo related: Program received signal SIGSEGV, Segmentation fault. arma::arma_warn<char [28]> (x=...) at /usr/include/armadillo_bits/debug.hpp:316 (line 316: arma_warn(const T1& x))
< zoq>
favre49: That is great news indeed. Using an already optimized initial state for the test sounds reasonable to me, as long as it's not too easy, since we still want to see whether the method works.
< zoq>
favre49: It might be a good idea to work on the learning tests for the environments as well, yes.
< zoq>
favre49: Btw, really enjoyed reading the latest update.
< zoq>
jeffin143: Yes, we use a webhook; each commit triggers a Jenkins build: http://ci.mlpack.org/job/blog/ which builds the static HTML pages using a bunch of bash and Python scripts: https://github.com/mlpack/blog/tree/master/script/doxygen-src. The basis is still the Doxygen build, similar to what we use (or rather used) for the documentation.