ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
xiaohong has quit [Ping timeout: 245 seconds]
robb_ has joined #mlpack
< robb_>
hey, all. Is it just me or does the doxygen search not work for anybody else?
xiaohong has joined #mlpack
robb_ has quit [Ping timeout: 256 seconds]
xiaohong has quit [Ping timeout: 252 seconds]
sreenik has joined #mlpack
< sreenik>
robb: You are right. You aren't alone :)
Sergobot has quit [Remote host closed the connection]
johnsoncarl[m] has quit [Remote host closed the connection]
aleixrocks[m] has quit [Remote host closed the connection]
chandramouli_r has quit [Remote host closed the connection]
< rajs123>
Is there an alternate link to it, or does it need to be fixed?
< rajs123>
Also, would it be better to have a requirements file for prerequisites in https://github.com/mlpack/benchmarks? It would make it easier to track all requirements and versions.
< sumedhghaisas>
Also I think changing both alphas and theta together is fine, because we would be plotting the unnormalized distribution
< sumedhghaisas>
rather than normalized
< sumedhghaisas>
but the shape will be the same
sumedhghaisas has quit [Ping timeout: 256 seconds]
sumedhghaisas_ has joined #mlpack
KimSangYeon-DGU has joined #mlpack
< KimSangYeon-DGU>
sumedhghaisas_: Yeah, I'll check it. Thanks!
< KimSangYeon-DGU>
You mean psi, right?
< KimSangYeon-DGU>
the phase variable
< KimSangYeon-DGU>
between the two classes
< KimSangYeon-DGU>
After plotting with changed variables, I'll leave a link for reference :)
rajs123 has joined #mlpack
KimSangYeon-DGU has quit [Quit: Page closed]
< ShikharJ>
sakshamB: Toshal: Sorry for the late reply, I was travelling.
< ShikharJ>
sakshamB: I think zoq has helped you regarding the issue. Thanks zoq!
rajs123 has quit [Quit: Page closed]
Rj has joined #mlpack
Rj has quit [Client Quit]
< ShikharJ>
Toshal: I'll suggest again that you set up an IRC bouncer; it'll make it easier to keep track of messages.
Rj has joined #mlpack
Rj has quit [Client Quit]
rj has joined #mlpack
< ShikharJ>
Toshal: Whenever you're free, please let me know of your idea of implementation of the Label Smoothing technique. We can discuss that briefly here, and maybe you can open a template PR, and we'll take it on from there.
rj has quit [Remote host closed the connection]
rj has joined #mlpack
rj has quit [Remote host closed the connection]
rj has joined #mlpack
< rcurtin>
ShikharJ: hope your flight went well, how is California?
rj has quit [Remote host closed the connection]
jeffin143 has joined #mlpack
< jeffin143>
lozhnikov : can we define operator() in a class and use it inside the class itself?
< jeffin143>
I am trying to move the definition inside the class, just like you suggested.
< ShikharJ>
rcurtin: So far so good. I haven't been outside though, probably today I'll go out, fetch some groceries, and do some other mundane stuff :)
< jeffin143>
The issue is that the class is not yet defined by the time I want to use it in the member field, and thus it throws an error.
< rcurtin>
:) the temperature is always nice in California, I like when I get to travel out there
< jeffin143>
When I tried declaring the map inside the check function instead of keeping it as a private member, it works fine.
rj has joined #mlpack
rj has quit [Ping timeout: 248 seconds]
< sumedhghaisas_>
KimSangYeon-DGU: Cool
< sumedhghaisas_>
looking forward to it
< jeffin143>
lozhnikov : "// Whenever you use a class as a template parameter, the declaration of that class must be complete and not simply forward declared."
< ShikharJ>
rcurtin: Yeah, the weather is much better than what I usually face in India, especially during this time of the year.
< jeffin143>
so in the case of std::unordered_map<boost::string_view, size_t, Hasher> m, Hasher should have a complete definition, and hence I cannot move the definition inside the class.
< jeffin143>
Or maybe I misunderstood you. Maybe you were telling me to move the class definition inside DictionaryEncoding; if that is so, I guess it won't be a problem, since it would be an inner class of DictionaryEncoding.
jeffin143 has quit [Ping timeout: 256 seconds]
rj has joined #mlpack
rj has quit [Remote host closed the connection]
rj has joined #mlpack
Toshal has joined #mlpack
< Toshal>
ShikharJ: Hi.
rj has quit [Client Quit]
< Toshal>
Ah, regarding the bouncer setup. Actually I have a bouncer set up, but it looks like it is not configured properly. I will fix it soon.
< Toshal>
Regarding Label Smoothing: I am thinking of adding two parameters to our Train() function, namely `realLabels` and `fakeLabels`.
< Toshal>
Their default values would be 1 and 0 respectively. With these defaults we can also achieve one-sided label smoothing.
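(For reference, a minimal standalone sketch of the idea being discussed: real labels smoothed to a `realLabel` value below 1 while fake labels stay at `fakeLabel` = 0, giving one-sided label smoothing. The function name and signature here are hypothetical, not mlpack's actual API.)

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of one-sided label smoothing for a GAN discriminator:
// positive (real) targets are replaced by `realLabel` (e.g. 0.9), while
// negative (fake) targets keep `fakeLabel` (default 0), following the
// "smooth only the positive labels" recommendation mentioned above.
std::vector<double> SmoothLabels(const std::vector<bool>& isReal,
                                 const double realLabel = 0.9,
                                 const double fakeLabel = 0.0)
{
  std::vector<double> labels(isReal.size());
  for (std::size_t i = 0; i < isReal.size(); ++i)
    labels[i] = isReal[i] ? realLabel : fakeLabel;
  return labels;
}
```

Passing `realLabel`/`fakeLabel` at train time (rather than fixing them in the constructor) would let a user change them between two Train() calls, as suggested.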
< ShikharJ>
Toshal: Ah, I see. So you're not planning to include that as a layer?
< Toshal>
Yes. What are your views?
< ShikharJ>
Hmm, I haven't thought that through. But I think the approach with the added parameters should work equally fine.
< Toshal>
For testing I was thinking that we could use the FFN's Responses() parameter, since our discriminator is going to share the same memory and the responses are only for the discriminator network.
< Toshal>
A small test would check the smoothed responses.
< ShikharJ>
To be applicable across all GANs, it'll have to be developed in such a way that the API is consistent.
< ShikharJ>
Some of the GAN implementations are going to be independent from the regular gan_impl.hpp file, like the CycleGAN code.
< Toshal>
Okay. Let me take a look through it.
< ShikharJ>
Hence, I was thinking of it as a layer or a default class function, where we call that function if a certain passed parameter is set (just like we do with reset parameter) or so.
< ShikharJ>
Also, I just merged in the memory sharing PR, so feel free to rebase your serialization PR to the master and resolve the conflicts.
< Toshal>
Okay.
< ShikharJ>
Toshal: Alternatively, we can develop it as a templatised policy. But that would be overkill in my opinion. For now, just try pushing up a basic code snippet on your idea of the implementation and we'll discuss there.
< Toshal>
Just one thing: it looks like you have made some changes in the GAN, WGAN and other files. What are those changes for? I haven't looked into them deeply at the moment; just curiosity.
< sakshamB>
hmm.. with label smoothing my initial idea was to pass a parameter to the GAN constructor like the others, such as generatorUpdateStep, preTrainSize, etc. Also the paper mentions to “smooth only the positive labels to a[alpha]”, leaving negative labels set to 0.
< Toshal>
sakshamB : It's called one sided label smoothing.
< ShikharJ>
sakshamB: Yeah, that's the idea I had in mind.
< Toshal>
Actually we set the labels during training, so I was thinking of passing them there. A user may want to change the label parameters between two Train() calls. Based on the complexity?
< Toshal>
ShikharJ: Okay I will push the code snippet soon. Right after serialization gets merged.
< Toshal>
Sorry, right after serialization gets ready.
< ShikharJ>
Toshal: Yeah, that makes sense to pass onto the Train() function. I just realized, for independent implementations like CycleGAN, it'll have to be re-written all along. So feel free to go with your idea.
< ShikharJ>
I'll incorporate the changes into CycleGAN later.
< Toshal>
Thanks.
< ShikharJ>
That's because CycleGAN will use two discriminators.
< Toshal>
Okay
< Toshal>
If you don't mind can I give you a suggestion on cycleGAN PR?
< ShikharJ>
I can't think of any reason why I would mind someone giving me free advice :)
< ShikharJ>
It's up to me whether to take that advice or not :)
< Toshal>
It looks like you should also serialize the reset parameter in cycleGAN. :)
< Toshal>
Okay. I will resolve the merge conflicts soon. If possible, today itself.
< ShikharJ>
Ah, the CycleGAN PR isn't complete as of now. I'm currently only focussed on getting the GAN to work and produce good output. Rest will follow after that, but I hear you.
< sakshamB>
Toshal: I have only seen one sided label smoothing being used for GANs.
< Toshal>
sakshamB: Yes, you are correct. But I would need two parameters for LSGANs, so it would be useful there.
< sakshamB>
Toshal: thanks for the clarification.
< Toshal>
Also, this is quite vague, but I was thinking that a researcher may want to experiment with two-sided smoothing as well.
< Toshal>
to invent something new. :) Sorry if this is getting boring.
< ShikharJ>
Toshal: A researcher would also be designing their own GANs from scratch while doing that.
< Toshal>
Yeah, you are correct. I just forgot that.
< ShikharJ>
sakshamB: Any progress on mini-batch discrimination?
< sakshamB>
ShikharJ: will open a WIP PR by tomorrow
< ShikharJ>
Cool, I'll be off for now then. See you guys on Friday 9.30pm IST.
< lozhnikov>
jeffin143: Yes, sure. I meant that we should move the definition of Hasher inside DictionaryEncoder.
< lozhnikov>
jeffin143: I think a better way might be to define a specialization of boost::hasher for some Boost versions.
< lozhnikov>
jeffin143: I think it is a more general way than introducing the Hasher class.
vivekp has quit [Ping timeout: 258 seconds]
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#7108 (master - 89085da : Shikhar Jaiswal): The build has errored.