verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< zoq>
uzipaz: Also, using the BinaryClassificationLayer as the output layer should decrease the runtime, but I guess I'll have to add the confidence feature first.
< uzipaz>
zoq: is there much of a difference in implementation? I took a look at both BinaryClassificationLayer and MulticlassClassificationLayer and the only difference was in OutputClass
< zoq>
uzipaz: ah, right, I thought about a different layer
uzipaz has quit [Quit: Page closed]
uzipaz has joined #mlpack
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#714 (master - ba826b1 : Ryan Curtin): The build was fixed.
< uzipaz>
zoq: If I use RMSprop as the optimizer, when I call predict on the FFN, the prediction matrix either outputs probability 1 on all cases or 0 on all cases
< uzipaz>
zoq: doesn't make any sense
< uzipaz>
zoq: calling the Train function on the FFN with MiniBatchSGD is giving me a segmentation fault
Nilabhra has joined #mlpack
Mathnerd314 has quit [Ping timeout: 276 seconds]
witness_ has joined #mlpack
ranjan123 has joined #mlpack
< ranjan123>
zoq: you there ?
ranjan123 has quit [Ping timeout: 250 seconds]
uzipaz has quit [Ping timeout: 250 seconds]
ranjan123 has joined #mlpack
< ranjan123>
I think I found a serious bug in the existing SGD while using it to optimize a function for my research work. Could you please confirm whether it is a bug or whether I am making some mistake using it?
govg has joined #mlpack
govg has quit [Ping timeout: 260 seconds]
witness_ has quit [Quit: Connection closed for inactivity]
govg has joined #mlpack
govg has quit [Quit: leaving]
tsathoggua has joined #mlpack
tsathoggua has quit [Client Quit]
Mathnerd314 has joined #mlpack
Mathnerd314 has quit [Ping timeout: 268 seconds]
umberto has joined #mlpack
umberto has quit [Ping timeout: 250 seconds]
Mathnerd314 has joined #mlpack
Mathnerd314 has quit [Ping timeout: 240 seconds]
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#715 (master - f4b3464 : Marcus Edel): The build passed.
< uzipaz>
zoq: when using SGD or MiniBatchSGD, I think we should move the tolerance check outside the currentFunction % numFunctions == 0 check, and also before calling arma::shuffle we need to set a seed for the rng
< zoq>
uzipaz: If we do that, we compare lastObjective with the current value of the overallObjective parameter and not with the overallObjective over all samples. We have to evaluate the same number of samples before we can check the tolerance. If we move the tolerance check outside the numFunctions check, we check the tolerance after each sample.
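(To make the structure under discussion concrete, here is a simplified, from-memory sketch of a decomposable SGD optimize loop; it is illustrative only and not the actual mlpack source, and the function/parameter names are assumptions. The point is that the tolerance check sits inside the currentFunction % numFunctions == 0 branch, so lastObjective is only ever compared against an objective accumulated over a full pass.)

// Simplified, from-memory sketch of a decomposable SGD optimize loop
// (illustrative only; not the actual mlpack source).
#include <armadillo>
#include <cfloat>
#include <cmath>

template<typename DecomposableFunctionType>
double OptimizeSketch(DecomposableFunctionType& function,
                      arma::mat& iterate,
                      const double stepSize = 0.01,
                      const size_t maxIterations = 100000,
                      const double tolerance = 1e-5,
                      const bool shuffle = true)
{
  const size_t numFunctions = function.NumFunctions();

  arma::Col<size_t> visitationOrder = arma::linspace<arma::Col<size_t>>(
      0, numFunctions - 1, numFunctions);
  if (shuffle)
    visitationOrder = arma::shuffle(visitationOrder);

  // Compute the objective over the full dataset once, so the first
  // tolerance comparison is meaningful.
  double overallObjective = 0.0;
  double lastObjective = DBL_MAX;
  for (size_t f = 0; f < numFunctions; ++f)
    overallObjective += function.Evaluate(iterate, f);

  arma::mat gradient(iterate.n_rows, iterate.n_cols);
  for (size_t i = 1, currentFunction = 0; i != maxIterations;
       ++i, ++currentFunction)
  {
    // Only at the start of a new pass is overallObjective an objective over
    // all samples, so the tolerance check lives inside this branch.
    if ((currentFunction % numFunctions) == 0)
    {
      if (std::abs(lastObjective - overallObjective) < tolerance)
        break;  // Converged.

      lastObjective = overallObjective;
      overallObjective = 0.0;
      currentFunction = 0;

      // Re-shuffle the visitation order for the next pass (the arma::shuffle
      // call discussed here).
      if (shuffle)
        visitationOrder = arma::shuffle(visitationOrder);
    }

    // One stochastic gradient step on the selected sample.
    function.Gradient(iterate, visitationOrder[currentFunction], gradient);
    iterate -= stepSize * gradient;
    overallObjective +=
        function.Evaluate(iterate, visitationOrder[currentFunction]);
  }

  return overallObjective;
}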
< zoq>
uzipaz: Why do you think we have to set a seed for the generator?
< uzipaz>
zoq: I tried arma::shuffle on a simple matrix/vector, and each time I ran the program I got the same result
< zoq>
uzipaz: So this is only a problem if you run the Train function more than once, right? In this case, you could use the RandomSeed function from core/math/random.hpp before calling Train again. Maybe it's a good idea to do that inside the optimizer; not sure right now.
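(A minimal sketch of that RandomSeed suggestion, assuming mlpack::math::RandomSeed from core/math/random.hpp also seeds Armadillo's generator; the network setup itself is omitted and the commented Train() call is hypothetical.)

// Sketch: re-seed the RNG so a second run (or a second call to Train())
// does not reproduce the same shuffled visitation order.
#include <mlpack/core.hpp>
#include <ctime>

int main()
{
  // Seeds mlpack's generator (and, I believe, Armadillo's as well); without
  // this, every run starts from the same default seed.
  mlpack::math::RandomSeed((size_t) std::time(NULL));

  // For example, the kind of visitation order an SGD-style optimizer builds:
  arma::Col<size_t> order =
      arma::shuffle(arma::linspace<arma::Col<size_t>>(0, 9, 10));
  order.print("visitation order:");

  // network.Train(trainData, trainLabels);  // ...then call Train() again.
  return 0;
}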
< uzipaz>
zoq: I was thinking about this in the loop inside the SGD Optimize function, where we call arma::shuffle on the visitationOrder after each rotation around the training samples
< zoq>
uzipaz: Not sure what you mean; you mean we should shuffle the visitationOrder inside the main for loop?
awhitesong has left #mlpack []
< uzipaz>
zoq: I meant that before we call arma::shuffle on the visitationOrder vector on line 83, we should change the seed of the RNG
< zoq>
uzipaz: ah, I see; if that returns the same order every time, this is definitely a bug. Maybe you can open an issue on GitHub?
< uzipaz>
zoq: sure, will do
< zoq>
uzipaz: Thanks!
Nilabhra has quit [Remote host closed the connection]
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#717 (master - 592fdcd : Marcus Edel): The build passed.
< uzipaz>
zoq: If I use MiniBatchSGD with batchSize = 1, is it safe to assume that it will behave the same as SGD, with the other parameters being identical?
< rcurtin>
uzipaz: yes, it will behave the same
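(A small sketch of the equivalence described here, written against the decomposable-function interface that these optimizers expect; the header paths and constructor argument order are from memory, so check them against the installed headers.)

// Sketch: MiniBatchSGD with batchSize = 1 compared against plain SGD on a
// toy decomposable function.
#include <mlpack/core.hpp>
#include <mlpack/core/optimizers/sgd/sgd.hpp>
#include <mlpack/core/optimizers/minibatch_sgd/minibatch_sgd.hpp>
#include <cmath>
#include <iostream>

// A tiny decomposable objective, f(x) = sum_i (x - i)^2 for i = 0, 1, 2,
// exposing the NumFunctions()/Evaluate()/Gradient() interface.
class ToyFunction
{
 public:
  size_t NumFunctions() const { return 3; }

  double Evaluate(const arma::mat& x, const size_t i) const
  {
    return std::pow(x(0) - (double) i, 2.0);
  }

  void Gradient(const arma::mat& x, const size_t i, arma::mat& g) const
  {
    g.set_size(1, 1);
    g(0) = 2.0 * (x(0) - (double) i);
  }
};

int main()
{
  ToyFunction f;

  // Plain SGD: (function, stepSize, maxIterations, tolerance, shuffle).
  mlpack::optimization::SGD<ToyFunction> sgd(f, 0.01, 50000, 1e-9, false);

  // Mini-batch SGD with a batch size of 1:
  // (function, batchSize, stepSize, maxIterations, tolerance, shuffle).
  mlpack::optimization::MiniBatchSGD<ToyFunction> mbsgd(
      f, 1, 0.01, 50000, 1e-9, false);

  arma::mat a = arma::zeros<arma::mat>(1, 1);
  arma::mat b = arma::zeros<arma::mat>(1, 1);
  sgd.Optimize(a);
  mbsgd.Optimize(b);

  // With identical parameters (and shuffling disabled so both visit samples
  // in the same order), the two should end at essentially the same point,
  // near the minimum x = 1.
  std::cout << "SGD: " << a(0) << ", MiniBatchSGD(batchSize = 1): " << b(0)
            << std::endl;
  return 0;
}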
< zoq>
uzipaz: Btw, I thought line 83 would always return the same order, but that's not the case, so I'm not sure there is a reason to set the random seed.
< uzipaz>
zoq: sorry if I caused confusion; you're right, it does not return the same order. Even with the same seed, the visitationOrder in each rotation will indeed be different. It's only that if you run SGD as the optimizer in the same experiment with identical parameters, we expect an identical result each time, because the seed is identical
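(A standalone illustration of that behavior, assuming the seed is controlled through Armadillo's arma_rng: the two shuffles within one run differ from each other, but every run of the program prints the same pair of orders because the seed is fixed.)

// Sketch: fixed seed, as with a default-seeded program run.
#include <armadillo>

int main()
{
  arma::arma_rng::set_seed(42);

  arma::Col<size_t> order = arma::linspace<arma::Col<size_t>>(0, 9, 10);
  arma::Col<size_t> firstPass = arma::shuffle(order);
  arma::Col<size_t> secondPass = arma::shuffle(order);

  // The two passes differ within this run, but rerunning the program
  // reproduces exactly the same pair of orders.
  firstPass.print("first pass: ");
  secondPass.print("second pass:");

  // To make separate runs differ, seed from entropy instead:
  // arma::arma_rng::set_seed_random();
  return 0;
}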
tsathoggua has joined #mlpack
tsathoggua has quit [Quit: Konversation terminated!]
< uzipaz>
zoq: I'm running an FFN with MiniBatchSGD with batchSize = 50 and maxIterations = 100,000, with one hidden layer. The dataset contains 773 features and 842 samples... it's been training for 1 hr and 20 mins now and still ongoing... is this normal??
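(Rough arithmetic on those numbers, assuming each MiniBatchSGD iteration processes one batch of 50 samples: 100,000 iterations × 50 samples per batch = 5,000,000 gradient evaluations, i.e. roughly 5,000,000 / 842 ≈ 5,900 full passes over the dataset, each through a 773-dimensional input layer, so a run of well over an hour is plausible unless the tolerance stops it early or maxIterations is lowered.)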