verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
deep-book-gk_ has joined #mlpack
deep-book-gk_ has left #mlpack []
govg has quit [Ping timeout: 240 seconds]
govg has joined #mlpack
govg has quit [Ping timeout: 260 seconds]
govg has joined #mlpack
sumedhghaisas has quit [Ping timeout: 268 seconds]
kris1 has joined #mlpack
< kris1> lozhnikov: hi, I have updated the GAN PR, could you have a look?
< kris1> With a low number of iterations I am getting very bad results, but I am testing with a high number of iterations now (it takes a lot of time); let's see how it performs.
< kris1> zoq: if you have the time, could you also have a look, particularly at the Evaluate() and Gradient() functions of the GAN? That would be very helpful.
< kris1> lozhnikov: I actually reset the parameters first and then do generator.Parameters() = arma::mat()
< kris1> but it works now....
< lozhnikov> Did you test that with valgrind?
< lozhnikov> I think this is incorrect, since the layers still use the previous pointer.
< kris1> You mean for the memory leak?
< lozhnikov> No, I mean an invalid pointer.
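A minimal sketch of the failure mode being described, assuming mlpack's FFN-style design where ResetParameters() makes each layer's weights alias into the network's flat parameter matrix (names are illustrative, not the exact PR code):

    // The network's layers alias into one flat parameter matrix.
    generator.ResetParameters();           // layers now point into the matrix

    // Replacing the matrix frees the memory the layers alias:
    generator.Parameters() = arma::mat();  // layers still hold the old pointer

    // Any later forward/backward pass through the layers then reads
    // freed memory: an invalid (dangling) pointer, even without a leak.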
< lozhnikov> And I've got a second comment: do you think it is possible to move the code of Train() into the Gradient() function?
< kris1> Can you please elaborate? I don't understand the comment.
< lozhnikov> The first or the second?
< kris1> The second.
< lozhnikov> I mean, is it possible to refactor the gradient function in such a way that Train() contains only "optimizer.Optimize()"?
< kris1> For 1, I think the way to check is to see where generator.Parameters() is pointing using memptr(); if it is the same as parameters.memptr(), that would be sufficient, I guess.
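A one-line sketch of that check, assuming `parameters` is the GAN's flat parameter matrix (illustrative only, not the PR code):

    #include <cassert>

    // If the generator's weights alias the shared parameter matrix,
    // the two matrices share the same underlying memory.
    assert(generator.Parameters().memptr() == parameters.memptr());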
< kris1> I would have to think about the refactoring. Initially I was thinking of creating two separate functions, trainGenerator() and trainDiscriminator(), and then calling them from the Train() function. But having just optimizer.Optimize() would require some thinking. I will see.
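A rough sketch of the shape being suggested, with hypothetical names: the alternation between the two networks moves into Evaluate()/Gradient(), which the optimizer calls internally, so Train() shrinks to a single call (shown with the optimizer API of that era, where the function object is bound at construction; details may differ):

    template<typename OptimizerType>
    void GAN::Train(OptimizerType& optimizer)
    {
      // All discriminator/generator alternation happens inside
      // Evaluate()/Gradient(); Train() only drives the optimizer.
      optimizer.Optimize(parameters);
    }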
< kris1> The results just came in for 1000 iterations of the GAN. I will have to see where I am going wrong.
< kris1> lozhnikov: could you comment on whether the Train() and Gradient() functions are correctly implemented?
< lozhnikov> Regarding the first comment: the pointers are different, since you use the following:
< kris1> How do you mean? For the gradient calculations of the discriminator I just call discriminator.Gradient().
< kris1> For the generator I am passing the error from discriminator.network.front() to generator.error(), and then I am basically calling the generator's Gradient().
< kris1> By the way, I have set fakeLabels to 0 and the real labels to 1 for the discriminator calculation, and fakeLabels = 1 for the generator calculation.
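A sketch of that label convention (the standard GAN setup; `batchSize` and the variable names are illustrative):

    #include <armadillo>

    int main()
    {
      const size_t batchSize = 64;  // illustrative

      // Discriminator step: real samples labeled 1, generated samples 0.
      arma::mat realLabels(1, batchSize, arma::fill::ones);
      arma::mat fakeLabels(1, batchSize, arma::fill::zeros);

      // Generator step: generated samples labeled 1, so the generator is
      // trained to make the discriminator call its output "real".
      arma::mat generatorLabels(1, batchSize, arma::fill::ones);
    }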
< lozhnikov> Okay, it seems I have understood that. Right now I haven't got any comments, except:
< lozhnikov> I think you should replace that by "Optimizer.MaxIterations() = k".
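For reference, the suggested form, assuming `k` is the number of inner steps (e.g., discriminator updates per generator update) that the loop previously hard-coded; mlpack's SGD-style optimizers expose a MaxIterations() accessor:

    // Instead of manually looping k times around the optimizer, cap the
    // optimizer itself at k iterations:
    optimizer.MaxIterations() = k;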
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
< ironstark> rcurtin: zoq: I wanted to discuss the new library we should benchmark. Can we install R and benchmark it? I think it would be good if we could have a benchmarking system that could compare the performance of R against Python. Please let me know what you think.
< zoq> ironstark: We can install R, yes; however, my understanding of it is limited, so I can't provide much help here. I think rcurtin suggested dlib-ml a while back; I guess that could also be an option. Let me know what you think. I'm sure we can find something everyone agrees on.
< ironstark> zoq: I can work on dlib-ml too, but I don't have much experience with it. I can work on dlib-ml first and then R. Please let me know your thoughts on this.
< zoq> ironstark: Let's wait for rcurtin's thoughts on this. I like both ideas.
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
shikhar has joined #mlpack
shikhar_ has joined #mlpack
shikhar has quit [Ping timeout: 260 seconds]
shikhar_ has quit [Ping timeout: 240 seconds]
shikhar_ has joined #mlpack
shikhar_ has quit [Ping timeout: 240 seconds]
shikhar_ has joined #mlpack
shikhar_ has quit [Ping timeout: 240 seconds]
shikhar has joined #mlpack
shikhar has quit [Quit: WeeChat 1.7]
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
kris1 has quit [Client Quit]
kris1 has joined #mlpack
< kris1> I fixed the Reset() function for the GAN and pushed the changes. Regarding the refactoring of training, I am still working on it.
< kris1> lozhnikov: though it seems the samples produced are still pretty random. I will work on that first and then refactor the Train() function.
< lozhnikov> kris1: I looked through the fix. I think there is an error.