verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
govg has quit [Ping timeout: 240 seconds]
govg has joined #mlpack
govg has quit [Ping timeout: 240 seconds]
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
kris__ has quit [Quit: Connection closed for inactivity]
govg has joined #mlpack
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
partobs-mdp has joined #mlpack
< partobs-mdp>
zoq: rcurtin: Got back to the computer and re-implemented Reset() as zoq suggested. Now it fails at compile time because, for some reason, the FFN<>::Model() method disappeared. How do I fix that?
< partobs-mdp>
(update) Weird stuff: merging the ffn*.hpp files from upstream into my branch resolved the problem.
< partobs-mdp>
Now brushing up the unit test and pushing it
vivekp has quit [Ping timeout: 240 seconds]
< lozhnikov>
kris__: I don't think that separate training of the generator and the discriminator networks could improve the results, since the optimizer updates the weights independently; the gradient of the generator doesn't depend on the gradient of the discriminator. Actually, our approach doesn't differ significantly: we calculate the gradient of the generator while the weights of the discriminator are not yet updated. That's the only difference. So I think our approach should give the same results.
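For illustration, a minimal self-contained sketch of the two update orderings being compared; none of this is mlpack API, and all names here are hypothetical stand-ins for real backpropagation and optimizer calls.

    #include <vector>

    // Hypothetical stand-ins: Params holds a network's weights; the
    // Gradient*() functions stand in for real backpropagation calls and
    // Apply() for an optimizer step.
    struct Params { std::vector<double> w; };
    Params GradientDisc(const Params&, const Params&) { return Params(); }
    Params GradientGen(const Params&, const Params&) { return Params(); }
    void Apply(Params&, const Params&, double) { }

    // Ordering described above: both gradients are computed against the
    // same, not-yet-updated discriminator weights, then both are applied.
    void JointStep(Params& disc, Params& gen, double lr)
    {
      const Params dGrad = GradientDisc(disc, gen);
      const Params gGrad = GradientGen(disc, gen);
      Apply(disc, dGrad, lr);
      Apply(gen, gGrad, lr);
    }

    // "Separate" training: the discriminator is updated first, so the
    // generator gradient sees the already-updated discriminator weights.
    void SeparateStep(Params& disc, Params& gen, double lr)
    {
      Apply(disc, GradientDisc(disc, gen), lr);
      Apply(gen, GradientGen(disc, gen), lr);
    }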
< lozhnikov>
I guess some parameters are incorrect
vivekp has joined #mlpack
< lozhnikov>
Did you try varying the batch size, the discriminator (or generator) hidden layer size, the noise size, or the network architecture?
vivekp has quit [Ping timeout: 246 seconds]
vivekp has joined #mlpack
cat_ has joined #mlpack
cat_ has quit [Client Quit]
kris1 has joined #mlpack
kris__ has joined #mlpack
kris1 has quit [Read error: Connection reset by peer]
kris1 has joined #mlpack
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
partobs-mdp has quit [Ping timeout: 255 seconds]
partobs-mdp has joined #mlpack
partobs-mdp has quit [Ping timeout: 246 seconds]
govg has quit [Ping timeout: 240 seconds]
bvr has quit [Remote host closed the connection]
govg has joined #mlpack
< kris__>
Hey zoq: I need a little help with the resize layer. When I add it to layer_types.hpp, I get a strange error; could you help?
< rcurtin>
partobs-mdp: glad you got it worked out; I have some meetings today, but I can help out with anything later in the day if needed
< lozhnikov>
kris__: Could you describe the issue?
< kris__>
Uploaded the code; the error seems to be in the layer_types.hpp file.
partobs-mdp has joined #mlpack
< kris__>
Lozhnikov: Were you able to reproduce the error....
< lozhnikov>
kris__: remove ',' from layer_types.hpp:120
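For context, the kind of slip meant here, sketched on an assumed fragment of layer_types.hpp (the surrounding layer list is illustrative, not the actual file contents): a stray comma after the last boost::variant argument makes the compiler expect another template argument and yields a confusing parse error.

    using LayerTypes = boost::variant<
        Linear<arma::mat, arma::mat>*,
        Resize<arma::mat, arma::mat>*,  // Newly added layer.
        SigmoidLayer<arma::mat, arma::mat>*,  // <- this trailing ',' must go.
    >;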
< kris__>
Ahhh, that gives another error for me.
< kris__>
The undefined symbols error that it was giving earlier.
< lozhnikov>
kris__: try to fix incorrect #ifndef at resize_impl.hpp:13
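For reference, the usual guard pattern in an mlpack *_impl.hpp header; the typical slip is an #ifndef macro (often copy-pasted from another layer) that doesn't match the #define below it. The exact macro name here is an assumption.

    #ifndef MLPACK_METHODS_ANN_LAYER_RESIZE_IMPL_HPP
    #define MLPACK_METHODS_ANN_LAYER_RESIZE_IMPL_HPP

    // In case it hasn't yet been included.
    #include "resize.hpp"

    // ... implementation ...

    #endif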
< kris__>
Ahhhh done....
< kris__>
Could you explain the test you were talking about? For the resize layer I wrote a different test and it passed, but I don't think it's of much use.
< lozhnikov>
Try resizing a linear function (for example, x + y). The result should be the same: linear interpolation is exact for linear functions.
< kris__>
Is bilinear interpolation distributive over addition? I.e., should resize(x) + resize(y) == x + y with scaling factor = 1?
< lozhnikov>
kris__: yeah, I think that's correct
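A sketch of the test being discussed, assuming a Resize layer with an (inRow, inCol, outRow, outCol)-style constructor and the usual mlpack Forward(input, output) signature; the API of the in-progress layer may differ. Bilinear interpolation reproduces any linear function f(x, y) = a*x + b*y + c exactly, and with a scaling factor of 1 the layer should act as the identity, so the output can be checked directly against the input.

    // Fill a 4x4 input with the linear function f(x, y) = x + y.
    arma::mat input(4, 4);
    for (size_t i = 0; i < 4; ++i)
      for (size_t j = 0; j < 4; ++j)
        input(i, j) = i + j;

    arma::mat in = arma::vectorise(input);
    arma::mat output;

    Resize<> layer(4, 4, 4, 4);  // Hypothetical constructor; scale = 1.
    layer.Forward(std::move(in), std::move(output));

    // With scale = 1, resizing a linear function must be the identity.
    BOOST_REQUIRE(arma::approx_equal(output,
        arma::mat(arma::vectorise(input)), "absdiff", 1e-10));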
< kris__>
I updated the PR. It took time because I was thinking of the same test for the backward pass, but found that a simple test would not work for the backward pass / un-zooming.
< kris__>
We have to treat scale = 1 as a special case for the backward pass.
crater_kamath has joined #mlpack
< crater_kamath>
Hey guys, I want to contribute to mlpack.
< crater_kamath>
I am a beginner and have submitted a patch to Mozilla.
< crater_kamath>
Can anyone help me get started?
< crater_kamath>
And please tell me which ML concepts I need to learn, because machine learning is new to me.
< zoq>
crater_kamath: The necessary knowledge highly depends on what you would like to do, but as long as you are willing to learn the necessary concepts you are good to go; some basic knowledge is often sufficient. Also, mlpack uses lots of different C++ paradigms, including a lot of template metaprogramming; the GSoC page includes some helpful references that you should check out when you dig into the code.
< zoq>
partobs-mdp: Sorry for the slow response, sounds like you fixed the issue?
shikhar has joined #mlpack
< zoq>
kris__: Build issue solved?
crater_kamath has quit [Ping timeout: 246 seconds]
< kris__>
Yes it's done now.
< zoq>
great
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
partobs-mdp has quit [Ping timeout: 246 seconds]
shikhar has quit [Quit: WeeChat 1.4]
< kris__>
lozhnikov: So after changing some of the parameters, I think the 1D Gaussian example is working now.
< kris__>
Here are the results; I did not set the y scale since the image gets cropped.
< kris__>
Here are some of the parameters I tried: ./gan_test.o -o gaussian1_output.txt -e 1 -b 8 -r 0.001 -g 2 -v
< kris__>
I changed the number of epochs from 1 to 20,
< kris__>
but I saw that learning stops taking place later on.
< lozhnikov>
sounds good
< lozhnikov>
were you able to obtain correct parameters (and the layer structure) for the Digit dataset?
< kris__>
No, the parameters are harder to tune for digits.
< kris__>
I would like to try the Keras example now,
< kris__>
since I have some guidance for the parameters there.
< kris__>
Otherwise, the O'Reilly example. I think the resize layer can be merged with a few changes; let me know what you think.
< lozhnikov>
I think we should implement the test for a linear function first. Actually, you can always merge that PR into the GAN PR.
< kris__>
I did implement the test for the linear function, for the forward pass alone, in the latest commit.
< kris__>
There is no similar test for the backward pass.
< lozhnikov>
ok, I'll think about that
< kris__>
Also I tried changing some parameters for the ssRBM test. Either the test fails or the time difference is not that big.
< lozhnikov>
I know. The test is slow since the ssRBM is ten times slower than the binary RBM. I haven't dug into that yet.
< rcurtin>
zoq: I think that FFN::Serialize() should also serialize the network type itself; what do you think? This would allow deserializing any FFN<>, and then the structure of the network would be loaded too.
< rcurtin>
I am working on a patch for it (because I at least need the support for other things, even if you'd like to leave FFN::Serialize() as-is)
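A rough sketch of the idea, not the actual patch: besides the parameter matrix, FFN::Serialize() would also archive the layer list itself, so that loading into a default-constructed FFN<> restores the network structure too. The member names and the machinery needed to archive a vector of boost::variant layer pointers are assumptions, following mlpack's data::CreateNVP convention.

    template<typename OutputLayerType, typename InitializationRuleType>
    template<typename Archive>
    void FFN<OutputLayerType, InitializationRuleType>::Serialize(
        Archive& ar, const unsigned int /* version */)
    {
      ar & data::CreateNVP(parameter, "parameter");

      // New: archive the network structure itself, so that the layer types
      // are rebuilt on load instead of being fixed at construction time.
      ar & data::CreateNVP(network, "network");
    }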
kris1 has quit [Quit: kris1]
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
kris1 has joined #mlpack
< zoq>
rcurtin: Sounds like a great idea.
< rcurtin>
looks like the guy installing the GPU in masterblaster is ready, so I'll go ahead and take it offline in the next few minutes I think
< rcurtin>
ah, he says actually that he'll do it in ~2-3 hours, so I'll wait to bring it down until then
< kris__>
I am not able to run the MNIST dataset on my machine, even with just the 4000 images we extracted and the given set of params, fast enough to do the parameter tuning.