verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
Ankit has joined #mlpack
< Ankit>
Hello people
< Ankit>
I am a CS undergrad and a beginner in the field of machine learning. I am interested in contributing, so could you please guide me and tell me the prerequisites? It would be a great help and I would be grateful to you all.
kris1 has quit [Quit: kris1]
sumedhghaisas__ has joined #mlpack
mikeling has joined #mlpack
Ankit has quit [Ping timeout: 260 seconds]
sumedhghaisas__ has quit [Ping timeout: 240 seconds]
< partobs-mdp>
zoq: rcurtin: Implemented the Parameters() method as rcurtin described (making the individual function parameters memory pointers into one contiguous memory block), but there are still only zeros in the HAMUnit parameters. Can you take a look at the issue?
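For context, a minimal sketch of that aliasing approach, using Armadillo's advanced matrix constructor so that a parameter matrix points into one contiguous block; the names and offsets below are illustrative, not the actual HAMUnit code:

    #include <armadillo>

    int main()
    {
      // One contiguous block that owns all of the parameters.
      arma::mat parameters(100, 1, arma::fill::randu);

      // Alias a 10x5 weight matrix into the block at offset 0.
      // copy_aux_mem = false: no copy is made, this is a true alias.
      // strict = false: a later resize silently detaches the alias and
      // allocates fresh memory, so writes no longer reach the shared
      // block -- one common way aliased parameters end up stale or zero.
      arma::mat weights(parameters.memptr(), 10, 5, false, false);

      weights(0, 0) = 3.14;  // Also visible as parameters(0, 0).
      return 0;
    }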
kris1 has joined #mlpack
< kris1>
Hi, Lozhnikov
< kris1>
I will send you the files that I have for the GAN PR, and also the gan.cpp file.
< kris1>
I don’t get the results that you mentioned.
< kris1>
Also, I was able to create the ResizeLayer. There is some error regarding layer types that I still have to figure out.
< lozhnikov>
kris1: I'll look through the ResizeLayer PR today
< kris1>
Okay, sure… I am not quite sure about the error in ResizeLayer… "symbol not found" for the Linear.hpp layer?? Seems strange…
sumedhghaisas__ has joined #mlpack
< kris1>
Ahh, I think it might be working now (the GAN)… my learning rate was wrong… stupid mistake…
sumedhghaisas__ has quit [Ping timeout: 240 seconds]
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
kris1 has quit [Client Quit]
kris1 has joined #mlpack
kris1 has quit [Client Quit]
kris1 has joined #mlpack
< kris1>
I was able to get the results now for gan.cpp :)
< kris1>
Just curious: did you try it out with other parameters as well? Basically with a higher number of iterations…
< lozhnikov>
I spent two days obtaining those arguments and getting these results. That's the best result that I obtained.
< lozhnikov>
I think it is possible to improve that
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
sumedhghaisas__ has joined #mlpack
govg has joined #mlpack
govg has quit [Quit: changing clients]
govg has joined #mlpack
< kris1>
lozhnikov: I started testing the GAN implementation with the train4 dataset, using the architecture from the Keras adversarial example.
< kris1>
It's peculiar that we are getting all-zero gradients all the time.
< kris1>
I just wanted to get the GAN working before testing it out on the O'Reilly example.
< lozhnikov>
I didn't look at the keras example
< lozhnikov>
but many examples use the Adam optimizer (I replaced it with mini-batch SGD, since the mlpack version of Adam doesn't have mini-batch support)
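A rough sketch of that swap, assuming the optimizer interface of the mlpack development branch at the time; the header path, constructor argument order, and the `model` object are assumptions, so check the actual signatures:

    #include <mlpack/core.hpp>
    #include <mlpack/core/optimizers/minibatch_sgd/minibatch_sgd.hpp>

    using namespace mlpack::optimization;

    // Assumed order: batchSize, stepSize, maxIterations, tolerance, shuffle.
    MiniBatchSGD optimizer(50, 0.01, 100000, 1e-5, true);

    // Then train the network with it, e.g.:
    // model.Train(trainData, trainLabels, optimizer);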
< lozhnikov>
so, I guess our architecture differs from that example
< lozhnikov>
in that case it is reasonable to try to vary some arguments
< lozhnikov>
maybe it is reasonable to minimize -log(D(G(z))) instead of log(1-D(G(z)))
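For reference, this is the standard non-saturating generator objective from Goodfellow et al. (2014): the gradient of log(1 - D(G(z))) vanishes while the discriminator easily rejects generated samples, whereas minimizing -log(D(G(z))) gives much stronger gradients early in training:

    \min_G \; \mathbb{E}_{z \sim p_z}\left[\log\bigl(1 - D(G(z))\bigr)\right]
    \quad\longrightarrow\quad
    \min_G \; \mathbb{E}_{z \sim p_z}\left[-\log D(G(z))\right]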
< lozhnikov>
I mean to say that there is no quick answer
< kris1>
Hmmm… What I am trying to do is to see whether I can get the discriminator pre-trained first, without the GAN; that would give me an idea of the kind of parameters that would be required.
< kris1>
In mini-batch descent, if we have no learning at all, then overallObjective == lastObjective. That would actually cause the program to terminate, saying that we have converged, which would be incorrect.
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
< lozhnikov>
kris1: If I remember right, there was a trivial workaround: you could use a negative tolerance.
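A one-line sketch of that workaround, under the same assumed MiniBatchSGD interface as above: since |overallObjective - lastObjective| is zero when nothing is learned, it can never drop below a negative tolerance, so the optimizer runs the full maxIterations instead of reporting spurious convergence.

    // tolerance = -1: the convergence test |last - overall| < tolerance
    // can never pass, so training always runs for maxIterations.
    MiniBatchSGD optimizer(50, 0.01, 100000, -1.0, true);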
< kris1>
Even pre-training the discriminator is giving me zero gradients.
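One generic way to tell whether zero gradients are a bug or a genuinely flat objective is a central finite-difference check. The sketch below assumes a function object exposing Evaluate(params) and Gradient(params, grad), in the style of mlpack's differentiable-function interface; it is a debugging aid, not part of the GAN PR:

    #include <armadillo>
    #include <algorithm>

    // Compare an analytic gradient against central finite differences.
    template<typename FunctionType>
    double GradientCheck(FunctionType& f, arma::mat params,
                         const double eps = 1e-6)
    {
      arma::mat analytic;
      f.Gradient(params, analytic);  // Assumed signature.

      arma::mat numeric(arma::size(params));
      for (arma::uword i = 0; i < params.n_elem; ++i)
      {
        const double orig = params(i);
        params(i) = orig + eps;
        const double fPlus = f.Evaluate(params);
        params(i) = orig - eps;
        const double fMinus = f.Evaluate(params);
        params(i) = orig;
        numeric(i) = (fPlus - fMinus) / (2.0 * eps);
      }

      // Large relative error => the analytic Gradient() is broken;
      // if both norms are ~0, the objective really is flat there.
      return arma::norm(analytic - numeric, 2) /
          std::max(arma::norm(numeric, 2), 1e-12);
    }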