ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
vivekp has quit [Ping timeout: 246 seconds]
KimSangYeon-DGU has joined #mlpack
KimSangYeon-DGU has quit [Ping timeout: 256 seconds]
KimSangYeon-DGU has joined #mlpack
robb_ has joined #mlpack
< robb_> hey, has anyone done an implementation of YOLO using this library?
robb_ has quit [Ping timeout: 256 seconds]
conrad__ has joined #mlpack
< conrad__> new version of Armadillo is out: http://arma.sourceforge.net/download.html
< conrad__> Armadillo 9.500 contains fixes for the major Fortran / LAPACK snafu: https://developer.r-project.org/Blog/public/2019/05/15/gfortran-issues-with-lapack/
conrad__ has left #mlpack []
KimSangYeon-DGU has quit [Ping timeout: 256 seconds]
< rcurtin> conrad__: thanks for the info, I need to update our matrix build to add some new versions...
< rcurtin> robertohueso: I am playing with the KDE code and don't immediately see anything wrong yet; do you have a particular test script you were using?
< rcurtin> I need to go to bed, but I noticed while reviewing the paper that there are two *different* Rs in Eq. 2. The first is the calligraphic R, like in the denominator
< rcurtin> that's the number of descendant points of the node
< rcurtin> but then there is the regular R, used in the numerator... there, |R| is the total number of reference points in the reference set, I think
< rcurtin> I need to think about this a little bit more, but that may be the source of the issue
< rcurtin> I'm noticing that in my simulations mThresh ends up very large when I have MCProb = 0.05 on random uniform data
favre49 has joined #mlpack
< favre49> zoq: I have a question about the crossover methodology when we have feedforward networks and equal fitnesses.
< favre49> Since we will be mixing structures in this case, I have to prevent cycles from forming
< favre49> But when a cycle would form, I have multiple choices of which connection genes to remove to break it.
< favre49> What we could do in this case is first add only the matching genes to the connection gene list, since intuitively those seem the most important, and then randomly add the excess/disjoint genes, checking for cycles on every addition
< favre49> The way that SharpNEAT does it is a bit different, and seems more computationally efficient: from what I understand, it just randomly designates one genome as the fitter one and proceeds that way.
< favre49> But that way isn't technically right, since we aren't choosing each gene randomly like the paper says we should. I don't know how much of a difference it would make, though.
< favre49> Let me know what you think
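(Editor's note: a minimal sketch of the cycle check favre49 describes: before accepting a connection gene during crossover, test whether the new edge would close a cycle. The ConnectionGene struct and function names here are hypothetical illustrations, not mlpack or SharpNEAT API.)

```cpp
#include <cstddef>
#include <iostream>
#include <map>
#include <set>
#include <vector>

// Hypothetical connection gene: a directed edge in the network.
struct ConnectionGene
{
  size_t source;
  size_t target;
};

// Returns true if adding source -> target to the (currently acyclic) set of
// genes would create a cycle, i.e. if target can already reach source.
bool CreatesCycle(const std::vector<ConnectionGene>& genes,
                  const size_t source,
                  const size_t target)
{
  // Build an adjacency list of the current network.
  std::map<size_t, std::vector<size_t>> adj;
  for (const ConnectionGene& g : genes)
    adj[g.source].push_back(g.target);

  // DFS from target; if we reach source, the new edge would close a cycle.
  std::set<size_t> visited;
  std::vector<size_t> stack = { target };
  while (!stack.empty())
  {
    const size_t node = stack.back();
    stack.pop_back();
    if (node == source)
      return true;
    if (!visited.insert(node).second)
      continue; // already explored
    for (const size_t next : adj[node])
      stack.push_back(next);
  }
  return false;
}

int main()
{
  // A tiny acyclic network: 0 -> 1 -> 2.
  std::vector<ConnectionGene> genes = { { 0, 1 }, { 1, 2 } };

  std::cout << std::boolalpha
            << CreatesCycle(genes, 2, 0) << "\n"   // true: 2 -> 0 closes a loop
            << CreatesCycle(genes, 0, 2) << "\n";  // false: 0 -> 2 is safe
}
```

Under the scheme sketched above, the matching genes (present in both parents, each of which is acyclic) would be added first, and each excess/disjoint gene only if CreatesCycle(...) returns false for it.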
favre49 has quit [Quit: Page closed]
< jenkins-mlpack2> Project docker mlpack nightly build build #353: STILL UNSTABLE in 3 hr 37 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/353/
< zoq> favre49: Right, in the long run it shouldn't make much of a difference, so I would opt for the faster solution here.
< zoq> favre49: Btw. great music recommendation, I like Modal Soul as well
KimSangYeon-DGU has joined #mlpack
KimSangYeon-DGU has quit [Quit: Page closed]
< rcurtin> zoq: favre49: I'll add it to my list to check out :)
< rcurtin> (Modal Soul that is)
< rcurtin> I only see 5 blog post updates so far---GSoC students please don't forget to provide an update each week :)
< akhandait> sreenik: I see you haven't posted any updates on our GSoC blog. Please don't forget to post an update on the blog every Sunday night.
< akhandait> It will help everyone know what's going on in our project.
< akhandait> sreenik[m]: I am not sure if that [m] is a part of the nick.
favre49 has joined #mlpack
< favre49> zoq: rcurtin: Great :) If you liked this, you might like his other albums as well, Metaphorical Music and Spiritual State. I've listened to his entire discography now; he has quite a few beautiful songs
< favre49> I found him because he made the music for an anime I like, Samurai Champloo, which had a great OST
favre49 has quit [Quit: Page closed]
ImQ009 has joined #mlpack
abernauer has joined #mlpack
< abernauer> rcurtin: I checked the pca binding against the iris data set and it produces the expected output. The documentation looks solid too.
abernauer has left #mlpack []
< robertohueso> rcurtin: Thanks for the time you're taking to make this work :)
< robertohueso> This is the test script I've been using (as well as many variations of it) https://gist.github.com/robertohueso/066f864cc9c79dc61de40020a9b3669e
< sreenik[m]> akhandait: Oh, okay, I'll post an update by tomorrow
< robertohueso> I hadn't noticed the two different Rs in Eq. 2, so today I tried all possible combinations of R (I think it was all of them) with the number of descendant points of the node and the total number of reference points in the reference set. The results got even worse, i.e. more points were outside the relative error bound.
< robertohueso> I think all the Rs mean the number of descendants of the node; otherwise I'm not sure the math would make sense
< robertohueso> When you say MCProb = 0.05 do you mean line 70 of kde_impl.hpp?
< rcurtin> robertohueso: I set MCProb directly to 0.05 in KDERules
< rcurtin> abernauer: awesome, want to send an email to Dirk, me, and James showing the result? I am sure they would be interested to see it
< rcurtin> abernauer: actually I see I owe you an email, I'll respond to that shortly
< akhandait> sreenik[m]: Nice, do keep the title as "Week 2" to be consistent with the other posts; it's okay if you don't post anything for week 1.
< sreenik[m]> akhandait: Okay :)
< robertohueso> rcurtin: According to the paper, in that case you have set the Monte Carlo probability to 95% (same as the one I'm using)
vivekp has joined #mlpack
< robertohueso> maybe I should change the name of that variable to MCAlpha or something similar
< rcurtin> Maybe; in either case, what I believe I did was set beta = 0.05 at the top level of the tree
< robertohueso> exactly :)
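(Editor's note: a one-line check of the convention above, assuming beta denotes the allowed failure probability in the paper's notation:)

```latex
% beta = 0.05 is the allowed failure probability per Monte Carlo estimate,
% so each estimate must satisfy the relative error bound with probability
\[
  1 - \beta = 1 - 0.05 = 0.95,
\]
% which matches the "Monte Carlo probability at 95%" reading above.
```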
ImQ009 has quit [Quit: Leaving]
abernauer has joined #mlpack
< abernauer> rcurtin: I will set up a repo and push the code; my internet is getting upgraded, so there might be a delay.
< abernauer> Also, I compiled the code with R CMD SHLIB, which lets you actually load the shared object into an R session.
abernauer has quit [Quit: Page closed]
< rcurtin> abernauer: sounds great, looking forward to it. no hurry
< sreenik[m]> mlpack ANN models seem to store their weights as an Armadillo matrix that is always single-columned (which I understand is to keep things simple). Is there any exception where a multi-column parameter matrix is possible?
< zoq> sreenik[m]: The Concat and Merge layers do not use a vectorised representation.
< zoq> But you could pass a vectorised input; they will internally transform the input if necessary.
< sreenik[m]> I am thinking of directly copying the weights of the ONNX model one by one to the "parameter" variable obtained from the mlpack model
< zoq> yeah, that should work just fine.
< sreenik[m]> So will the internal transform work there?
< zoq> yes
< zoq> We might have to check the ordering for some layers, e.g. for the linear layer, whether the bias is at the end or at the front.
< sreenik[m]> Oh that's great then! Anyway, I don't think supporting concat and merge would be of much use
< sreenik[m]> Yes that's a valid point regarding the bias
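(Editor's note: a minimal sketch of the copying approach discussed above, using mlpack's FFN::ResetParameters() and FFN::Parameters() accessors; the network shape is made up, and GetOnnxWeight() is a hypothetical placeholder for the ONNX-side extraction.)

```cpp
#include <mlpack/core.hpp>
#include <mlpack/methods/ann/ffn.hpp>
#include <mlpack/methods/ann/layer/layer.hpp>

using namespace mlpack::ann;

// Hypothetical stand-in for reading the i-th weight out of a parsed ONNX
// model; the real extraction depends on the ONNX protobuf layout.
double GetOnnxWeight(const size_t i) { return 0.01 * i; }

int main()
{
  // A made-up network; the real one would mirror the ONNX graph.
  FFN<> model;
  model.Add<Linear<>>(4, 3);
  model.Add<SigmoidLayer<>>();
  model.Add<Linear<>>(3, 2);

  // Allocate the single-column parameter matrix before writing into it.
  model.ResetParameters();

  // Copy the ONNX weights element by element into the contiguous parameter
  // matrix; the per-layer bias ordering still needs to be checked.
  arma::mat& parameter = model.Parameters();
  for (size_t i = 0; i < parameter.n_elem; ++i)
    parameter(i) = GetOnnxWeight(i);
}
```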
< sreenik[m]> zoq: Do you remember https://github.com/mlpack/mlpack/issues/1906? The variable "network" is defined in ffn.hpp as type "std::vector<LayerTypes<CustomLayers...> >". So in this case should it have any variable called "parameter"?
< zoq> sreenik[m]: Ahh, thanks for the reminder, do you think the solution is good enough?
< sreenik[m]> I tried it out. Didn't work for me but I didn't dig deep enough to come up with something
< sreenik[m]> In any case, do the objects of type LayerTypes<> actually store the weights?
< sreenik[m]> As the private variable "network" which we are trying to access is a vector of LayerTypes<>
< zoq> Once initialized, each layer holds a reference to the actual parameter: https://github.com/mlpack/mlpack/blob/46a14f30efaebf33195b7aa0d18466c46133c46e/src/mlpack/methods/ann/ffn.hpp#L403.
< zoq> So parameter (contiguous memory) contains every model parameter, and each layer just uses a subset of that memory.
< zoq> In the end, if you change the weights inside a layer, it is the same as changing the parameter value of the FFN class.
< sreenik[m]> Yes, hence I actually don't need to access the weights of individual layers
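(Editor's note: an Armadillo-only sketch of the aliasing zoq describes; the sizes are invented. A matrix constructed over existing memory with copy_aux_mem = false shares that memory, which is how each layer's weights can be a view into the flat parameter column.)

```cpp
#include <armadillo>

int main()
{
  // The flat parameter column, as the FFN class stores it.
  arma::mat parameter(6, 1, arma::fill::zeros);

  // A "layer" view over the same 6 entries, reshaped to 2x3; copy_aux_mem =
  // false makes the matrix use the existing memory instead of copying it.
  arma::mat layerWeights(parameter.memptr(), 2, 3, false, false);

  // Writing through the view writes into the flat parameter column.
  layerWeights(1, 2) = 42.0;

  parameter.print("parameter after writing through the layer view:");
}
```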
ozan-cansel has joined #mlpack
ozan-cansel has quit [Ping timeout: 256 seconds]
robb_ has joined #mlpack
< robb_> hey, is there an implementation of non-max suppression in this library?
< rcurtin> I'm not sure what non-max suppression is... so if we do have it, it's probably under a different name :)
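(Editor's note: for anyone else reading: non-max(imum) suppression is the post-processing step in object detectors like YOLO that keeps the highest-scoring bounding box and discards overlapping lower-scoring ones. A minimal greedy sketch, with a hypothetical Box struct; not mlpack API.)

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// Hypothetical axis-aligned box with a detection score.
struct Box
{
  double x1, y1, x2, y2; // corners, with x1 < x2 and y1 < y2
  double score;
};

// Intersection-over-union of two boxes.
double IoU(const Box& a, const Box& b)
{
  const double ix = std::max(0.0, std::min(a.x2, b.x2) - std::max(a.x1, b.x1));
  const double iy = std::max(0.0, std::min(a.y2, b.y2) - std::max(a.y1, b.y1));
  const double inter = ix * iy;
  const double areaA = (a.x2 - a.x1) * (a.y2 - a.y1);
  const double areaB = (b.x2 - b.x1) * (b.y2 - b.y1);
  return inter / (areaA + areaB - inter);
}

// Greedy non-maximum suppression: keep the best-scoring box, discard any
// remaining box that overlaps it too much, repeat.
std::vector<Box> NonMaxSuppression(std::vector<Box> boxes,
                                   const double iouThreshold)
{
  std::sort(boxes.begin(), boxes.end(),
      [](const Box& a, const Box& b) { return a.score > b.score; });

  std::vector<Box> kept;
  for (const Box& candidate : boxes)
  {
    bool suppressed = false;
    for (const Box& k : kept)
    {
      if (IoU(candidate, k) > iouThreshold)
      {
        suppressed = true;
        break;
      }
    }
    if (!suppressed)
      kept.push_back(candidate);
  }
  return kept;
}

int main()
{
  std::vector<Box> boxes = {
    { 0, 0, 10, 10, 0.9 },
    { 1, 1, 11, 11, 0.8 },   // heavy overlap with the first box
    { 50, 50, 60, 60, 0.7 }
  };
  const std::vector<Box> kept = NonMaxSuppression(boxes, 0.5);
  std::cout << "kept " << kept.size() << " boxes\n"; // expect 2
}
```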
vivekp has quit [Ping timeout: 245 seconds]