verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
Nilabhra has joined #mlpack
tsathoggua has joined #mlpack
tsathoggua has quit [Client Quit]
umberto has joined #mlpack
umberto has quit [Ping timeout: 250 seconds]
tsathoggua has joined #mlpack
tsathoggua has quit [Quit: Konversation terminated!]
mentekid_ has quit [Ping timeout: 244 seconds]
wasiq has quit [Ping timeout: 264 seconds]
mentekid_ has joined #mlpack
archangel4 has joined #mlpack
archangel4 has quit [Ping timeout: 260 seconds]
wasiq has joined #mlpack
Nilabhra has quit [Remote host closed the connection]
Rodya has quit [Ping timeout: 260 seconds]
Rodya has joined #mlpack
keonkim has quit [Ping timeout: 250 seconds]
keonkim has joined #mlpack
mentekid_ has quit [Ping timeout: 250 seconds]
Nilabhra has joined #mlpack
mentekid_ has joined #mlpack
zoq_ is now known as zoq
ranjan123 has quit [Ping timeout: 250 seconds]
uumberto has joined #mlpack
uumberto has quit [Client Quit]
ranjan123 has joined #mlpack
< ranjan123>
Hello everybody ! :D .
< ranjan123>
rcurtin: you there ?
< rcurtin>
mentekid_: any chance I can get a copy of the sift10k and gist10k datasets you were testing with?
< rcurtin>
ranjan123: hello! I am here
< ranjan123>
In psgd you have commented "I am concerned that this is a lot slower than it could be. It looks like you are checking for convergence of all of the threads at once, instead of letting each thread run its own SGD instance. This means there are lots of barriers and atomic sections when I don't think they need to be there. You might be able to simplify this significantly if you use the existing SGD class."
< ranjan123>
I don't get this line "You might be able to simplify this significantly if you use the existing SGD class"
< ranjan123>
I mean what to do with existing SGD class ?
< rcurtin>
something like
< rcurtin>
for (each thread)
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#778 (master - 0e6f351 : Ryan Curtin): The build passed.
< ranjan123>
That is very simple, but it is not written in any literature. There will not be any random number generator. If the number of functions is huge, say N, then we have to allocate an array of size N instead of selecting indices randomly.
< ranjan123>
I hope this is not a problem. right ?
< rcurtin>
no, that's not what I mean...
< rcurtin>
the parallel SGD algorithm you proposed splits the dataset up
< rcurtin>
and runs SGD on each subset
< ranjan123>
selecting a function randomly in each thread
skon46 has joined #mlpack
< rcurtin>
I don't know if this will paste correctly...
< rcurtin>
nope, guess not
< rcurtin>
okay, line 2 of Algorithm 2 (the one you are implementing):
< rcurtin>
so I have misunderstood, you are not selecting a subset, you're just running SGD with different random seeds on each thread
< ranjan123>
hmmm
< rcurtin>
so then all that needs to be done is make sure that the random number generator being used is threadsafe and then you can just run an SGD instance for each thread
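(For reference, a rough sketch of the suggestion above, assuming mlpack's existing SGD class and OpenMP; the ParallelSGDSketch name and the averaging of per-thread results are illustrative assumptions, not the final design.)

    // Illustrative sketch only: each thread runs its own SGD instance on a
    // private copy of the iterate, so there are no barriers or atomic
    // sections during the optimization itself.
    #include <mlpack/core.hpp>
    #include <mlpack/core/optimizers/sgd/sgd.hpp>
    #include <omp.h>

    template<typename DecomposableFunctionType>
    double ParallelSGDSketch(DecomposableFunctionType& function,
                             arma::mat& iterate)
    {
      const size_t threads = omp_get_max_threads();
      arma::mat sum(iterate.n_rows, iterate.n_cols, arma::fill::zeros);
      double objective = 0.0;

      #pragma omp parallel reduction(+ : objective)
      {
        // Each thread optimizes its own copy of the starting point; the RNG
        // used for shuffling must be threadsafe, as discussed above.
        arma::mat localIterate = iterate;
        mlpack::optimization::SGD<DecomposableFunctionType> sgd(function);
        objective = sgd.Optimize(localIterate);

        #pragma omp critical
        sum += localIterate;
      }

      // Average the per-thread results (one possible way to combine them).
      iterate = sum / (double) threads;
      return objective / (double) threads;
    }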
< ranjan123>
yes
< ranjan123>
I can replace algorithm 1 with the existing SGD
< rcurtin>
I don't know what you mean, Algorithm 1 already is the existing SGD class
< ranjan123>
yes! but not the SGD which is implemented in mlpack
< rcurtin>
why do you say that?
< ranjan123>
Draw j ∈ {1, ..., m} uniformly at random
< ranjan123>
as you said: running SGD with different random seeds on each thread
< ranjan123>
ok .
< rcurtin>
the SGD implementation in mlpack shuffles the points instead of sampling uniformly at random, but the real-life difference is going to be completely negligible so I don't even see that as effectively different
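(Sketching the two sampling schemes being compared; RandInt and the linspace/shuffle idiom are from mlpack/Armadillo, and numFunctions stands in for the paper's m.)

    // Paper-style step: draw an index uniformly at random each iteration
    // (0-based here, versus the paper's {1, ..., m}).
    const size_t j = mlpack::math::RandInt(0, numFunctions);

    // mlpack's SGD instead shuffles the indices once and visits them in order.
    arma::Col<size_t> visitationOrder = arma::shuffle(
        arma::linspace<arma::Col<size_t>>(0, numFunctions - 1, numFunctions));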
< ranjan123>
yes! exactly
< ranjan123>
one more thing
archangel4 has joined #mlpack
< ranjan123>
@stephentu said "provide support for sparse gradients "
< ranjan123>
Truly, I don't get the point. What does "providing support" mean? It would be good if you could explain the point!
archangel4 has quit [Read error: Connection reset by peer]
< ranjan123>
Several high level points: I think you should provide the option for a Hogwild style implementation as well. I think this is generally what people think of when they think of parallel SGD. However, to do this correctly, one should also provide support for sparse gradients-- in fact this is the case when you actually expect parallel SGD to win. When gradients are fully dense, I think the current approach you have is probably the way
< rcurtin>
okay, in the PR, thanks
< rcurtin>
supporting sparse gradient types will be a little more difficult, it requires a change to the DecomposableFunctionType policy
< rcurtin>
right now a DecomposableFunctionType must implement Evaluate(arma::mat& coordinates) and Gradient(const arma::mat& coordinates, arma::mat& gradient)
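(As a point of reference, a sketch of that policy; the signatures below are paraphrased from mlpack's SGD documentation, with the per-function index i and NumFunctions() spelled out even though they were elided above.)

    // Sketch of what a DecomposableFunctionType is expected to provide
    // (dense gradients only, which is the current state being discussed).
    class ExampleDecomposableFunction
    {
     public:
      // Number of separable functions f_i in the objective.
      size_t NumFunctions() const;

      // Evaluate the i'th function at the given coordinates.
      double Evaluate(const arma::mat& coordinates, const size_t i);

      // Write the gradient of the i'th function into `gradient` (dense).
      void Gradient(const arma::mat& coordinates,
                    const size_t i,
                    arma::mat& gradient) const;
    };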
< ranjan123>
hmm
< rcurtin>
but what stephen is saying is that in many situations, the gradient will be sparse (i.e. better represented as an arma::sp_mat)
< ranjan123>
ohk
< rcurtin>
so in order to support sparse gradients, the class should be refactored to handle cases where the DecomposableFunctionType returns a sparse gradient instead of a dense gradient
< ranjan123>
hmm
< rcurtin>
probably some template metaprogramming should be used here to figure out if a class has void Gradient(const arma::mat&, arma::mat&) or void Gradient(const arma::mat&, arma::sp_mat&)
< rcurtin>
but I think you should leave that for another time, it needs some more thought about the right way to do it
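(One possible shape for that metaprogramming, mirroring the two signatures mentioned above; the HasSparseGradient trait is hypothetical and not part of mlpack.)

    // Hypothetical trait: true if FunctionType provides
    // void Gradient(const arma::mat&, arma::sp_mat&), false otherwise.
    #include <armadillo>
    #include <type_traits>
    #include <utility>

    template<typename FunctionType, typename = void>
    struct HasSparseGradient : std::false_type { };

    template<typename FunctionType>
    struct HasSparseGradient<FunctionType,
        typename std::enable_if<std::is_void<
            decltype(std::declval<FunctionType&>().Gradient(
                std::declval<const arma::mat&>(),
                std::declval<arma::sp_mat&>()))>::value>::type>
      : std::true_type { };

    // The optimizer could then pick the gradient type at compile time:
    //   typedef typename std::conditional<
    //       HasSparseGradient<FunctionType>::value,
    //       arma::sp_mat, arma::mat>::type GradientType;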
< ranjan123>
hmmm .
mentekid_ has quit [Ping timeout: 244 seconds]
< ranjan123>
From your explanation, I guess it is not that hard to extend it. :P
< ranjan123>
Please make some comments on the psgd code whenever you get time. I will change the style at the end.
< ranjan123>
Thanks
< rcurtin>
sure, I will take a look when I have a chance
palashahuja has joined #mlpack
< palashahuja>
hello zoq
< rcurtin>
when you update the code, make sure to leave a comment on the PR too; github doesn't notify me when there are just new commits to a PR
< ranjan123>
ok
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#780 (master - 28ae007 : Ryan Curtin): The build has errored.