ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
travis-ci has joined #mlpack
< travis-ci>
robertohueso/mlpack#12 (mc_kde_error_bounds - bd964c6 : Roberto Hueso Gomez): The build is still failing.
< favre49>
Never mind, I was able to work around it by predefining the activation function for the XOR task instead of passing it as a template argument. Which way would be better, though?
< zoq>
Looks like the XORTask doesn't take a template argument, so XORTask<LogisticFunction> isn't valid
< zoq>
template<typename ActivationFunctionType> class XORTask { };
< zoq>
but I'm not sure you need the activation function inside the task?
favre49 has quit [Ping timeout: 256 seconds]
favre49 has joined #mlpack
< favre49>
Ah, I had modified the task to take the activation function as a template argument. I did so because, since Genome is templated, we have to specify the activation function in the Evaluate() function now.
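(As a rough illustration of that layout: a minimal sketch of a task templated on the activation function, assuming Genome exposes an Evaluate() that maps an input column to an output vector; the matrix shapes and member names here are placeholders, not the actual PR code.)

// Hypothetical sketch only: the Genome interface and shapes are assumptions.
template<typename ActivationFunctionType>
class XORTask
{
 public:
  // Score a genome by how closely it reproduces XOR on the four input pairs.
  double Evaluate(Genome<ActivationFunctionType>& genome)
  {
    arma::mat inputs = {{0, 0, 1, 1},
                        {0, 1, 0, 1}};
    arma::rowvec targets = {0, 1, 1, 0};

    double error = 0;
    for (size_t i = 0; i < inputs.n_cols; i++)
    {
      arma::vec output = genome.Evaluate(inputs.col(i));
      error += std::abs(output(0) - targets(i));
    }

    // Higher fitness for lower error.
    return 4.0 - error;
  }
};

With this shape, the activation function only has to be named once at instantiation, e.g. XORTask<LogisticFunction>, rather than being hard-coded inside Evaluate().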
< zoq>
I see, if you push the latest version, I can see if I can spot the issue.
< zoq>
unless it's already solved
xiaohong has joined #mlpack
< ShikharJ>
sakshamB: Toshal: Are you guys there?
< sakshamB>
yes
< ShikharJ>
sakshamB: Let's begin then? I have half an hour to spare right now.
< sakshamB>
I’ll open a PR for Virtual Batch normalization by tomorrow. Just stuck on the numerical gradient test right now.
< sakshamB>
Also I found a bug in the implementation of Backward() in BatchNorm Layer.
< sakshamB>
will create an issue about that.
< ShikharJ>
sakshamB: No worries there.
< ShikharJ>
sakshamB: Yeah, an issue is a better way to point out a bug, and if we find it's viable, feel free to go ahead.
< ShikharJ>
sakshamB: Though I think I also found a bug in its implementation a while ago; I'm not sure if it was in Backward() or not.
< ShikharJ>
Maybe I didn't check that thoroughly for further bugs.
< favre49>
zoq: I just removed the template argument for now, so I could fix the other issues. Now I just have one last compile issue, which I can't really understand: https://pastebin.com/NhdJLUXM
< ShikharJ>
sakshamB: Do you think it makes sense to merge MiniBatchDiscrimination and Inception Score into a single PR? Or would you prefer that they were merged separately?
< favre49>
I've pushed the current state of the code (it's a bit ugly at places though, I'll work on cleaning it now)
< ShikharJ>
If I remember right, you had mentioned that one makes use of the other to form a valid test?
< favre49>
When you get the time, please check out that compile issue
< sakshamB>
ShikharJ: I don’t mind either way. Unless we need to test them together which would be easier if they were in a single PR.
< Toshal>
ShikharJ: Hi, I am here.
< ShikharJ>
sakshamB: Let's do that then; since these PRs aren't super big, it shouldn't be too difficult to review either.
< sakshamB>
ShikharJ: alright yes I think it would be easier to test and review by having them in a single PR.
< zoq>
favre49: Can you add a default constructor to the ConnectionGene class?
< ShikharJ>
sakshamB: Can you also elaborate on your Backward()-in-BatchNorm bug?
< sakshamB>
Yes, I am in the process of creating an issue for it; I will share the link.
< ShikharJ>
sakshamB: Cool then.
< ShikharJ>
sakshamB: Let's try and have those two in a single PR, I'll review and merge within the coming week.
favre49 has quit [Ping timeout: 256 seconds]
< ShikharJ>
Toshal: I have to apologize again; I couldn't find the time to review your comment on the Generator Gradient routine.
< Toshal>
ShikharJ: You don't need to apologize. It's fine I know you are quite busy.
< ShikharJ>
Toshal: I'm pretty convinced of the GAN serialization, though; I'm just trying to test that out by creating a dummy network, saving it, and loading it back again on savannah. That would be the gold standard test for the PR.
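(A minimal sketch of what such a round-trip test could look like, using a plain FFN as the dummy network since the exact GAN constructor arguments depend on the PR; data::Save()/data::Load() work for any class with a serialize() method.)

#include <mlpack/core.hpp>
#include <mlpack/methods/ann/ffn.hpp>
#include <mlpack/methods/ann/layer/layer.hpp>

using namespace mlpack;
using namespace mlpack::ann;

int main()
{
  // Build a small dummy network and give it parameters to serialize.
  FFN<> model;
  model.Add<Linear<>>(10, 5);
  model.Add<SigmoidLayer<>>();
  model.Add<Linear<>>(5, 2);
  model.ResetParameters();

  // Save, then load into a fresh object.
  data::Save("dummy.xml", "model", model, false);
  FFN<> loaded;
  data::Load("dummy.xml", "model", loaded);

  // The gold-standard check: loaded.Parameters() should match
  // model.Parameters() element-wise.
  return 0;
}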
< Toshal>
Okay
< ShikharJ>
Toshal: And what's the progress on getting savannah to work for you? Are you stuck somewhere there?
< ShikharJ>
sakshamB: Thanks, I'll take a look shortly.
< ShikharJ>
Toshal: What's the progress on Weight Norm and Frechlet Distance?
< robertohueso>
Hey :) I'm working on adding new features to an mlpack method, and they require new parameters. Can I change the API of the method (change the order of parameters and add new ones)? Or am I supposed to avoid breaking any dependencies users might have on mlpack?
< Toshal>
ShikharJ: Weight Norm layer is completed.
< Toshal>
I have added the gradient test.
< ShikharJ>
Toshal: Okay, better to remove the [WIP] tag on the PR then :)
< Toshal>
I will add the serialization test soon.
< Toshal>
Okay, I will remove it tomorrow.
< Toshal>
Regarding FID, I will start working on it soon.
< ShikharJ>
Toshal: Okay cool, I'll approve the Serialization PR when I can get the networks running :)
< Toshal>
ShikharJ: Great.
< ShikharJ>
sakshamB: Toshal: You guys have got a lot of quality work done, and I wish to see that merged in as soon as possible. Let's designate the coming week for that.
< Toshal>
ShikharJ: Okay thanks.
< ShikharJ>
sakshamB: Hmm, are you sure that BatchNorm is the only layer that fails that way?
< ShikharJ>
By adding a Linear layer before the intended layer to be tested in the Gradient test?
< sakshamB>
Some of the layers already have a layer before them in the numerical gradient test, but I have not looked at the other tests so far.
< sakshamB>
ShikharJ: yes
< ShikharJ>
Toshal: Feel free to log off for now. I'll review the label smoothing as soon as I get a chance.
< ShikharJ>
sakshamB: Sorry? Is BatchNorm the only layer to fail? I didn't catch what you said yes to.
< sakshamB>
ShikharJ: I haven’t tried other tests yet.
< sakshamB>
I was working on VirtualBatchNorm and looking at the implementation of the BatchNorm layer.
< ShikharJ>
sakshamB: It might be in our interests to do that. If other gradient tests already have a layer before them, leave them be, but I guess a lot of them don't. If only a small subset of those tests fail, that would strengthen our hypothesis regarding BatchNorm; otherwise there's a flaw in our logic.
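(For reference, a condensed sketch of the gradient-check pattern being discussed, with a Linear layer placed ahead of the layer under test; the dimensions, batch size, and exact Evaluate()/Gradient() signatures here are from memory and may not match the test file exactly.)

// Sketch only: dimensions and signatures are assumptions.
struct GradientFunction
{
  GradientFunction()
  {
    input = arma::randn(10, 32);
    target = arma::ones(1, 32);

    model.Predictors() = input;
    model.Responses() = target;
    model.Add<Linear<>>(10, 10);   // Layer placed before the tested layer.
    model.Add<BatchNorm<>>(10);    // Layer under test.
    model.Add<Linear<>>(10, 2);
    model.Add<LogSoftMax<>>();
  }

  double Gradient(arma::mat& gradient)
  {
    // Analytic gradient; CheckGradient() compares it to a numerical one.
    const double error = model.Evaluate(model.Parameters(), 0, 32, false);
    model.Gradient(model.Parameters(), 0, gradient, 32);
    return error;
  }

  arma::mat& Parameters() { return model.Parameters(); }

  FFN<NegativeLogLikelihood<>, NguyenWidrowInitialization> model;
  arma::mat input, target;
};

// Then something like BOOST_REQUIRE_LE(CheckGradient(function), 1e-4);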
< sakshamB>
ShikharJ: hmm.. I am not sure what the flaw could be though
< ShikharJ>
robertohueso: I think we have done that in the past, though not very frequently.
< ShikharJ>
sakshamB: Yeah, better to run the tests and report what you find.
< sakshamB>
ShikharJ: Anyway, I think that for the BatchNorm layer I was able to pinpoint the bug in the implementation of Backward() (since the paper has the equations for the derivative) and fix it.
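(For reference, the backward-pass derivatives from the batch normalization paper that the fix can be checked against, with \mu_B, \sigma_B^2 the batch statistics and m the batch size:)

\frac{\partial L}{\partial \hat{x}_i} = \frac{\partial L}{\partial y_i}\,\gamma

\frac{\partial L}{\partial \sigma_B^2} = \sum_{i=1}^{m} \frac{\partial L}{\partial \hat{x}_i}\,(x_i - \mu_B)\cdot\left(-\frac{1}{2}\right)\left(\sigma_B^2 + \epsilon\right)^{-3/2}

\frac{\partial L}{\partial \mu_B} = \sum_{i=1}^{m} \frac{\partial L}{\partial \hat{x}_i}\cdot\frac{-1}{\sqrt{\sigma_B^2 + \epsilon}} + \frac{\partial L}{\partial \sigma_B^2}\cdot\frac{1}{m}\sum_{i=1}^{m} -2\,(x_i - \mu_B)

\frac{\partial L}{\partial x_i} = \frac{\partial L}{\partial \hat{x}_i}\cdot\frac{1}{\sqrt{\sigma_B^2 + \epsilon}} + \frac{\partial L}{\partial \sigma_B^2}\cdot\frac{2\,(x_i - \mu_B)}{m} + \frac{\partial L}{\partial \mu_B}\cdot\frac{1}{m}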
< ShikharJ>
robertohueso: What method are we talking about though?
< robertohueso>
KDE
< ShikharJ>
robertohueso: I don't see an issue with adding new parameters, though I'm not sure about re-arranging; probably better to ask Ryan about that :)
< ShikharJ>
sakshamB: Please mention about that on the issue as well.
< sakshamB>
ShikharJ: also I have a doubt about finding the derivative of the standard deviation with respect to the mean.
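(In case it helps: for the plain batch statistics, with \sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m}(x_i - \mu_B)^2, one gets

\frac{\partial \sigma_B^2}{\partial \mu_B} = -\frac{2}{m}\sum_{i=1}^{m}(x_i - \mu_B) = 0, \qquad \frac{\partial \sigma_B}{\partial \mu_B} = \frac{1}{2\,\sigma_B}\,\frac{\partial \sigma_B^2}{\partial \mu_B},

since the deviations from the batch mean sum to zero. Whether that cancellation still applies for VirtualBatchNorm's reference-batch statistics depends on how the statistics are combined, so this only covers the standard batch-norm case.)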
< robertohueso>
Maybe the best idea is to keep the old constructor and add a new one with all the new parameters :)
ImQ009 has joined #mlpack
ShikharJ has left #mlpack []
< robertohueso>
Thanks ShikharJ! :D
ShikharJ has joined #mlpack
< ShikharJ>
robertohueso: Will all of the new parameters have default values?
< ShikharJ>
sakshamB: Please mention your doubts on the issue. I have to leave and catch a bus to my workplace, and since it's going to be a 40-minute ride, I think I can answer your questions from my phone.
< sakshamB>
No, it's not regarding the BatchNorm layer
< sakshamB>
but for the VirtualBatchNorm.
< ShikharJ>
sakshamB: Okay, maybe push your local code and then we can have a chat?
< sakshamB>
ShikharJ: alright sounds good
< ShikharJ>
robertohueso: If some of the parameters cannot have default values, then adding any new parameters will eventually break the API. So it's better to do both the adding and re-arranging if that's the case; otherwise you could wait for Ryan to reply :)
< ShikharJ>
sakshamB: Okay, I'll be off for now. Please leave me a message if you have further doubts :)
< sakshamB>
ShikharJ: alright will do so
xiaohong has quit [Ping timeout: 256 seconds]
favre49 has joined #mlpack
< favre49>
zoq: Thanks, that was it. I fixed the compile and linking errors, I'll start testing and debugging now.
favre49 has quit [Client Quit]
vivekp has joined #mlpack
jeffin143 has joined #mlpack
< jeffin143>
lozhnikov: any suggestions for naming the CharSplit class? I can't seem to find one :)
jeffin has joined #mlpack
jeffin143 has quit [Ping timeout: 252 seconds]
< rcurtin>
robertohueso: re-arranging is fine, but let's keep the old constructor around with an mlpack_deprecated annotation and a note that it'll be removed in mlpack 4.0.0
< rcurtin>
but yeah, I agree with Shikhar on this one; if you can manage to just add new parameters to the end, then we can avoid breaking the API :) if not, though, the idea above works
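(A minimal sketch of that idea, with made-up parameter names rather than KDE's real signature: the old constructor stays around, tagged with mlpack_deprecated, and simply forwards to the new one.)

#include <mlpack/core.hpp>  // Assumed to provide the mlpack_deprecated macro.

class KDE
{
 public:
  // New constructor with the re-arranged / extended parameter list.
  KDE(const double bandwidth,
      const double relError,
      const double absError,
      const bool monteCarlo = false) :
      bandwidth(bandwidth),
      relError(relError),
      absError(absError),
      monteCarlo(monteCarlo)
  { }

  // Old constructor kept for backwards compatibility; it forwards to the new
  // one and can be removed in mlpack 4.0.0.
  mlpack_deprecated
  KDE(const double relError, const double absError, const double bandwidth) :
      KDE(bandwidth, relError, absError)
  { }

 private:
  double bandwidth, relError, absError;
  bool monteCarlo;
};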
travis-ci has joined #mlpack
< travis-ci>
robertohueso/mlpack#13 (mc_kde_error_bounds - 27a53c7 : Roberto Hueso Gomez): The build is still failing.
< favre49>
Sorry to bother you guys again, but I have a runtime error from k-means clustering that I don't know how to debug: unknown location(0): fatal error in "NEATXORTest": memory access violation at address: 0x00000008: no mapping at fault address
< jeffin>
That error is usually due to accessing an index which is out of bounds, or an element which is not present.
< rcurtin>
favre49: you can try compiling with debug symbols (so with cmake this is the option -DDEBUG=ON) and then using gdb to trace where the error is