verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
govg has joined #mlpack
sumedhghaisas__ has quit [Ping timeout: 240 seconds]
kris1 has joined #mlpack
partobs-mdp has joined #mlpack
shikhar has joined #mlpack
< partobs-mdp>
zoq: rcurtin: Finally got the non-zero parameters in HAM, but the unit test is still messed up. It seems like I've gotten some Armadillo magic wrong; could you take a look at the issue?
< partobs-mdp>
The code is in the latest commit
< partobs-mdp>
One of the problems is that the JOIN doesn't get valid parameters and emits some negative values (despite the weight matrix consisting of positive elements)
shikhar has quit [Ping timeout: 240 seconds]
sumedhghaisas__ has joined #mlpack
shikhar has joined #mlpack
rohit has joined #mlpack
rohit has quit [Quit: Page closed]
partobs-mdp has quit [Remote host closed the connection]
govg has quit [Ping timeout: 248 seconds]
govg has joined #mlpack
vivekp has joined #mlpack
sumedhghaisas__ has quit [Ping timeout: 240 seconds]
< zoq>
Guest75930: We are always open to new contributions; let us know if we should clarify anything.
< Guest75930>
thank you. will do
Guest75930 has quit [Ping timeout: 260 seconds]
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
< kris1>
lozhnikov: I think the resize layer is fixed now. You can have a look.
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
< kris1>
lozhnikov: I was thinking that we should add a parameter for the noise size in Gan.hpp.
< kris1>
Right now we have only one noise sample. On each batch we train the GAN on batchSize * trainData(0, batchSize) + noise * batchSize.
< kris1>
So we are training on very little noise. If the batchSize is small this isn't a big issue, but if batchSize = 100, the difference is pretty huge. I think this is the reason the gradients are not big in our case.
< kris1>
What do you think ?
vivekp has quit [Ping timeout: 246 seconds]
shikhar has joined #mlpack
< kris1>
Also, rather than having an extra parameter, I was thinking we always set noiseSize = batchSize, so now the predictor would be (trainData.n_rows + batchSize, trainData.n_cols)