ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
jeffin143 has joined #mlpack
< jeffin143> lozhnikov: can you take a look at PR 1814 today if you are free? Thanks :)
< jenkins-mlpack2> Project docker mlpack nightly build build #352: STILL UNSTABLE in 3 hr 24 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/352/
jeffin143 has quit [Quit: AndroIRC - Android IRC Client ( http://www.androirc.com )]
vivekp has quit [Ping timeout: 272 seconds]
KimSangYeon-DGU has joined #mlpack
ImQ009 has joined #mlpack
KimSangYeon-DGU has quit [Ping timeout: 256 seconds]
vivekp has joined #mlpack
favre49 has joined #mlpack
< favre49> zoq: I'm working on crossover for the genomes, and while reading the paper I saw that it says:
< favre49> "Matching genes are inherited randomly, whereas disjoint genes (those that do not match in the middle) and excess genes (those that do not match in the end) are inherited from the more fit parent"
< favre49> This should mean that when fitnesses are not equal, the resultant genome's structure is the same as the fittest genome, but the weights are different?
< favre49> So under that scheme, the structures would change only if the fitnesses are equal.
KimSangYeon-DGU has joined #mlpack
KimSangYeon-DGU has quit [Ping timeout: 256 seconds]
favre49 has quit [Ping timeout: 256 seconds]
Toshal has quit [Ping timeout: 252 seconds]
sakshamB has quit [Ping timeout: 258 seconds]
< zoq> favre49: I don't think that is correct; for the matching genes, you randomly inherit from either parent 1 or parent 2. For the disjoint/excess genes, if they are present in the more fit parent you inherit them from that parent; if they are not, you just skip them. What you could also do is take the genes from the less fit parent and disable them, so that they could be used for mutation. We have to test which one works better.
sreenik has joined #mlpack
< sreenik> Some bad (and good) news. I broke my OS again, but this time I could fix it. It turns out the issue is with the mlpack package: I use Arch Linux, so I used the default PKGBUILD from the AUR. It, however, attempts to create a directory called /usr/lib64, whereas /usr/lib64 is an existing symlink to /usr/lib that is vital for the OS to boot (and I gave it permission to do so). I fixed it by chroot-ing into the system and recreating the symlink.
< sreenik> It's back to normal for me, but I think we should contact the maintainers of the AUR package
< sreenik> Or leave a note for Arch Linux users in the mlpack installation guide (even though someone using Arch is unlikely to visit the installation page)
favre49 has joined #mlpack
< favre49> zoq: Yes, but matching genes are those which have the same innovation IDs, and thus the same source and target IDs, right? So even if we inherit those randomly, only the weights would differ in the matching genes?
< favre49> Or am I misunderstanding something?
< zoq> Right, the weights are different in this case.
< favre49> Okay good I've got it then
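A minimal sketch of the crossover rule discussed above (the Gene struct, its fields, and Crossover() are hypothetical illustrations, not mlpack's actual NEAT code). Since matching genes share an innovation ID, and therefore the same endpoints, the child's structure equals the fitter parent's structure and only the weights vary:

```cpp
#include <cstddef>
#include <map>
#include <random>
#include <vector>

// Hypothetical gene representation, for illustration only.
struct Gene
{
  size_t innovationID;    // Global historical marker.
  size_t source, target;  // Connection endpoints.
  double weight;
};

// Crossover, assuming parentA is the fitter parent.
std::vector<Gene> Crossover(const std::vector<Gene>& parentA,
                            const std::vector<Gene>& parentB)
{
  std::mt19937 rng(std::random_device{}());
  std::bernoulli_distribution coin(0.5);

  // Index parentB's genes by innovation ID so matching genes can be found.
  std::map<size_t, Gene> bGenes;
  for (const Gene& g : parentB)
    bGenes[g.innovationID] = g;

  std::vector<Gene> child;
  for (const Gene& g : parentA)
  {
    const auto it = bGenes.find(g.innovationID);
    if (it != bGenes.end() && coin(rng))
      child.push_back(it->second);  // Matching gene: take parentB's weight.
    else
      child.push_back(g);  // Matching (parentA's copy) or disjoint/excess.
  }

  // Disjoint/excess genes present only in the less fit parentB are skipped
  // here; alternatively they could be inherited disabled, as discussed above.
  return child;
}
```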
< favre49> How do we check for equality in doubles in mlpack?
favre49 has quit [Quit: Page closed]
< zoq> favre49: You could use: |a - b| < epsilon
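For example, a small helper in that spirit (the function name and the 1e-5 tolerance are arbitrary choices for illustration; for Armadillo matrices, arma::approx_equal() provides the same idea):

```cpp
#include <cmath>

// Approximate equality for doubles; choose eps to suit the use case.
inline bool ApproxEqual(const double a,
                        const double b,
                        const double eps = 1e-5)
{
  return std::abs(a - b) < eps;
}
```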
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
saksham189 has joined #mlpack
< ShikharJ> saksham189: toshal: Are you guys here?
< saksham189> yes I am here
< ShikharJ> saksham189: Let's begin then :)
< saksham189> alright. What do you want to discuss?
< ShikharJ> saksham189: I looked at the MiniBatchDiscrimination PR and read through the paper; if you don't mind, I'd like to take today to put together a final review for that?
< saksham189> okay no problem
< ShikharJ> saksham189: Also, what paper did you reference for the Inception scoring?
< ShikharJ> I wanted to be sure of that as well.
< saksham189> hmm.. I looked at the same paper (Improved Training Techniques) and verified against the numerous Python implementations I found online
vivekp has quit [Ping timeout: 268 seconds]
< ShikharJ> Hmm, could you link one of those implementations in a comment on the PR? Also, please cite the paper for that.
< saksham189> ShikharJ: also I am not sure about the alternative to boost::visitor. I tried working on boost::make_variant_over but was unsuccessful. Since we will be adding more layers in the future I think this could be quite important.
< saksham189> alright I will do that
< ShikharJ> saksham189: Oh, let me try that out then.
< ShikharJ> I assume you mean boost::variant in place of boost::visitor?
< saksham189> ShikharJ: yes
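For context, a minimal self-contained example of the boost::variant / boost::static_visitor pattern under discussion (generic alternative types for illustration; mlpack's layer variant is far larger):

```cpp
#include <boost/variant.hpp>
#include <iostream>
#include <string>

// A visitor with one overload per alternative the variant can hold.
struct PrintVisitor : public boost::static_visitor<void>
{
  void operator()(const int x) const
  { std::cout << "int: " << x << "\n"; }

  void operator()(const std::string& s) const
  { std::cout << "string: " << s << "\n"; }
};

int main()
{
  boost::variant<int, std::string> v = 42;
  boost::apply_visitor(PrintVisitor(), v);  // Prints "int: 42".

  v = std::string("layer");
  boost::apply_visitor(PrintVisitor(), v);  // Prints "string: layer".
}
```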
< saksham189> ShikharJ: also, what bouncer are you using? I am unable to connect through EliteBNC right now.
< ShikharJ> Cool, I'll try handling that. Have you decided on a topic to work on next?
Toshal has joined #mlpack
< ShikharJ> saksham189: I'm using EliteBNC only.
< Toshal> Hi
< Toshal> It looks like my free bouncer is gone
< saksham189> ShikharJ: yes VirtualBatchNormalization and regularizers
< ShikharJ> saksham189: Okay, so I guess that would be the end of our planned features right?
< saksham189> Yes
< ShikharJ> Toshal: Hey, again I'm not sure, my EliteBNC account is working fine IMO.
< ShikharJ> saksham189: Nice, then let's spend the rest of the time in this phase getting the work merged in.
< ShikharJ> saksham189: Or if you have other ideas for features, please let me know?
< saksham189> ShikharJ: alright, cool. So you want to merge in the PRs that are currently open, and also the VirtualBatchNormalization and regularizers?
< ShikharJ> saksham189: Yes.
< saksham189> Yes, alright sounds good.
< ShikharJ> saksham189: Other than that, feel free to implement anything you wish to have merged in mlpack as well.
< ShikharJ> I'll discuss with Toshal now, have a fun week!
< Toshal> ShikharJ: Hi once again.
< ShikharJ> Toshal: I wasn't sure I understood your concerns in https://github.com/mlpack/mlpack/pull/1770#issuecomment-485422302.
< Toshal> Ah Okay hold on
< Toshal> I am posting two lines; just refer to them one by one.
< Toshal> Wait, there is one more.
< Toshal> Okay. So if you look at the first line I posted, it calls the second line I posted. The second line calls the function at the third line, which executes the fourth line. Let me know if this is not correct.
< Toshal> If we look at the fourth line, the error which you substituted is getting replaced again. Let me know if this is not correct.
< ShikharJ> Toshal: Okay, that makes sense to me, do you mind if I take a deeper look later for that? I kinda need to go catch a bus right now.
< Toshal> The generator's responses are set during construction of the GAN; they default to zero, and the error is calculated with respect to those responses. That is not correct, so I have directly called the generator's backward function.
< ShikharJ> Toshal: Your arguments seem correct to me.
< Toshal> ShikharJ: Thanks
< Toshal> Okay
< Toshal> Are we concluding now?
< ShikharJ> Toshal: I guess, apart from the PRs you have already opened, you don't have any further PRs to open, right?
< Toshal> ShikharJ: Yes, most probably. If required, I can open one PR for FID.
< Toshal> It's a good metric
< ShikharJ> Toshal: I used to (wrongly) think FID and the Inception score are the same :) Sure, go ahead.
< ShikharJ> Toshal: I will be available later for discussion regarding the above issue, but for now I need to go.
< Toshal> ShikharJ: Thanks. I will need to get into the nitty-gritty of it, but yes, I will open a PR once I finish with the WeightNorm layer.
< ShikharJ> saksham189: Toshal: Thanks for your time, have a good week.
< Toshal> ShikharJ: Thanks for your time as well.
< akhandait> sreenik: Interesting, I guess we should leave a note in our docs. I also think it's a good idea to let AUR maintainers know.
< saksham189> Toshal: ShikharJ: if this error is real, then I am not sure how the GAN was producing quality images up until now
< saksham189> from my understanding, the output of the generator would then be all zeros after training
< Toshal> saksham189: I was getting black images in my testing before.
< saksham189> Toshal: if you look at some of the older PRs related to the GAN, then you will see the good quality images produced by the GAN there
< Toshal> Yes you are correct. I don't know. Let me know if there is any logical error in the issue I have pointed out.
Toshal has quit [Ping timeout: 256 seconds]
saksham189 has quit [Ping timeout: 256 seconds]
< sreenik> akhandait: Yup
vivekp has joined #mlpack
< robertohueso> rcurtin: Hi Ryan :) So basically the idea that seems to work best for KDE Monte Carlo estimations is this:
< robertohueso> Avoid using Monte Carlo estimations if referenceNode.NumDescendants() > X (e.g. X = 500). Or we can make this a proportion of the total number of points in the reference set (i.e. if the reference dataset has 300,000 points and we set it at 10%, don't use Monte Carlo for any node that has more than 30,000 points)
< robertohueso> but I think using an absolute value could be simpler and work better
< robertohueso> also this can be combined with your idea of recursing when mThresh > w * |R|
< robertohueso> what do you think about this?
vpal has joined #mlpack
vivekp has quit [Read error: Connection reset by peer]
vpal is now known as vivekp
< rcurtin> robertohueso: when you say "works best", do you mean runtime or approximation?
< robertohueso> approximation (although runtime is also better compared to brute force)
< ShikharJ> saksham189: Yeah, that's my suspicion as well. Hence I asked for more time regarding that.
< ShikharJ> I need some time to revisit the implementation and check for myself.
< rcurtin> robertohueso: sorry for the slow response---I'm at ICML this week so I was in a talk :)
< rcurtin> anyway, good to hear that recursing helps with the approximation
< rcurtin> I think that a static rule like referenceNode.NumDescendants() > X is something we should ideally avoid, though
< rcurtin> since there may exist some nodes for which we can get great approximations: consider a node where there are 1M descendant points but they are all the same point, for instance
< rcurtin> but there may also exist small nodes for which we can't get good approximations---imagine a node with only 20 descendant points but they are all very far apart
< rcurtin> so I think it would be better to try and recurse based on estimated values of mThresh instead
< rcurtin> if the initial m value is, e.g., 20, then it shouldn't take long to sample 20 points and compute the new mThresh, which will be very large if those 20 sampled points are very far apart
< rcurtin> have you tried the mThresh > w * |R| rule, out of curiosity?
< rcurtin> in any case, you're right that the two ideas can be combined
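A rough sketch of the combined rule being discussed, with hypothetical names (the actual KDE code is structured differently): after drawing the initial sample and estimating mThresh from it, Monte Carlo is only kept when the estimated sample size stays below the w * |R| threshold:

```cpp
#include <cstddef>

// Illustration only: decide between Monte Carlo estimation and exact
// recursion for a reference node R.
//   mThresh - estimated number of samples needed for the error guarantee,
//             computed from an initial sample of the node.
//   w       - user-set proportion (e.g. 0.1).
//   numDesc - referenceNode.NumDescendants(), i.e. |R|.
bool UseMonteCarlo(const double mThresh,
                   const double w,
                   const size_t numDesc)
{
  // If the estimated required sample size is a large fraction of the node
  // (e.g. because the sampled points are very far apart), sampling saves
  // little; recurse into the children instead.
  return mThresh <= w * numDesc;
}
```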
< rcurtin> I'll try and take a look at your computation of \Phi^l(q, R) today
< rcurtin> unfortunately the paper isn't really clear on *exactly* what that is
< robertohueso> rcurtin: Makes sense, referenceNode.NumDescendants() > X is not the best solution in those situations.
< robertohueso> yes, I have tried the mThresh > w * |R| rule, but the thing is, unless w is a very small value (e.g. 0.0001), the rule is never triggered for the nodes near the top of the tree, and therefore the whole tree is estimated.
vivekp has quit [Ping timeout: 268 seconds]
< rcurtin> oh, interesting, I see. what is the probability of success you are using?
< robertohueso> my main concern here is that mThresh should increase so that a big enough sample is used. The fact is that it either doesn't increase, or doesn't increase enough.
< robertohueso> probability of success = 95%
< robertohueso> and after many tests I get like 50% of my results wrong
< robertohueso> i.e. 50% of the query points are over the relative error bound (not by a lot, but still, it's too much to be right)
< rcurtin> right
< rcurtin> is the code currently checked in? I'd like to play with it a bit
< robertohueso> Yes, now it is
< rcurtin> sounds good, I'll find some time to play with it later today
< rcurtin> don't worry about this taking longer than expected, by the way. this always happens when implementing papers :)
< rcurtin> if you want, you could start working on other parts of the paper and we can come back to try to debug this issue later (or maybe after I have a chance to poke around with it)
vivekp has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> robertohueso/mlpack#11 (mc_kde_error_bounds - 476da87 : Roberto Hueso Gomez): The build was broken.
travis-ci has left #mlpack []
ImQ009 has quit [Read error: Connection reset by peer]
Toshal has joined #mlpack
sakshamB has joined #mlpack
sreenik has quit [Quit: Page closed]
sreenik[m] has joined #mlpack
< rcurtin> anybody in this channel also at ICML this year? I know Sumedh said he would be here
< rcurtin> I think I just saw Sumedh at the Deepmind booth :)
< zoq> :)