changed the topic of #mlpack to: -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs:
< rcurtin> zoq: no clue, I sent a follow-up email just to be sure (I don't want to get in trouble or anything for continuing to use their resources)
< rcurtin> but I have no idea when they will get back to me
< rcurtin> I don't think Julia is a DB language, but what the team seems to be building is a database written in Julia that can be queried with a language they are designing
< rcurtin> it will still be some weeks before I understand it fully...
< zoq> rcurtin: I guess at some point they'll get back with some good news :)
< rcurtin> yeah, hopefully---we'll see :)
< Samir> zoq: Hello zoq :), thanks for your reply, sorry I couldn't reach you yesterday. I intend to implement the Proximal Policy Optimization and Persistent Advantage Learning DQN algorithms. I plan to first study mlpack's reinforcement learning methods and get familiar with the source code, then read the Playing Atari with Deep Reinforcement Learning paper. Do you recommend anything else to do?
cjlcarvalho has quit [Quit: Konversation terminated!]
cjlcarvalho has joined #mlpack
< Atharva> zoq: rcurtin: What is the reason we don't return matrices and instead take them as modifiable arguments? Is it to have better performance?
< ShikharJ> zoq: Still working on that. I will push by Friday.
< ShikharJ> zoq: Julia is primarily used for scientific computing, computational algebra and high precision arithmetic sort of stuff.
< zoq> ShikharJ: Great, I guess we could say about the same about Haskell.
< zoq> Atharva: Are you talking about arma::mat& Matrix() { return matrix; } instead of arma::mat Matrix() { return matrix; }? Depending on the case the first one could avoid a copy; I guess the compiler might be able to produce the same result for both, at least in some situations. But the first one also allows us to modify the member, e.g. .Matrix()[0] = 1
< zoq> Samir: Sounds like a good plan to me; make sure to check out and run the existing rl tests; e.g. you can run the test suite with bin/mlpack_test -t QLearningTest or a single test with bin/mlpack_test -t QLearningTest/CartPoleWithDQN; that might also be helpful to get a
< zoq> first overview. If you have any questions please don't hesitate to ask.
ImQ009 has joined #mlpack
< Atharva> zoq: Yeah, also we have functions, for example void Forward(arma::mat input, arma::mat& results) instead of arma::mat Forward(arma::mat input). I was thinking about such cases.
< zoq> Atharva: I see, that would allow us to cascade/stack the Forward calls, but it would also involve some unnecessary copies, especially for layers that reuse the input/output value.
< Atharva> zoq: Oh, thanks for explaining. I don't have a problem with it, was just curious as to why this approach was chosen.
< Atharva> zoq: I had another doubt: using OpenBLAS led to no speed improvement, and I'm pretty sure I used it correctly. Why might that be happening?
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
< rcurtin> Atharva: right, when we originally designed the library it was a way to avoid unnecessary copies
< rcurtin> I suspect that there are C++11 features now that could allow us to definitely avoid those copies, but I don't think that it would be fun to refactor the entire library...
< rcurtin> also, returning multiple matrices from a single call is possible with passing references as parameters, which is nice
< rcurtin> as well as using a matrix as both input and output... for instance, the function Optimize(arma::mat& iterate) available in the optimizers will often use the given iterate as a starting point
< rcurtin> and then also write the output into the 'iterate' parameter
< Atharva> rcurtin: Oh I see, there are lots of advantages of that approach.
< rcurtin> right, maybe some newer C++ features could address these issues also, but it would be a lot of refactoring work for little gain, in my opinion
< Atharva> rcurtin: Oh sorry, I meant this* approach
< Atharva> rcurtin: I agree that it's not worth it
< rcurtin> yeah, I know what you meant, I was just saying that it could be there are new features that could help :)
< rcurtin> so if we restarted mlpack from scratch today it might be better to decide differently or something (maybe, I am not sure)
< Atharva> rcurtin: Okay, some confusion :p
< Atharva> rcurtin: Can you tell me why openBLAS isn't giving any speedup at all?
< rcurtin> hm, is it possible that the algorithm you are running is not bottlenecked by BLAS calls?
< rcurtin> if you're doing lots of big matrix multiplications, OpenBLAS can be helpful
< rcurtin> but, e.g., for something like k-nearest-neighbor search with trees, which doesn't use many BLAS/LAPACK calls, OpenBLAS doesn't help much
< rcurtin> you could watch 'htop' or something to ensure that all the processor cores are being used when you run your code
< rcurtin> (there are lots of other ways to watch CPU usage too, that one is just my personal favorite :))
< Atharva> rcurtin: I tried it while training ANN models, so I am sure there were lots of big matrix multiplications.
< Atharva> Yeah, I will try seeing with htop if all cores are being used
< rcurtin> the batch size is important for that, so make sure that you are using, e.g., a batch size of 64 or something like this
< rcurtin> if you use a batch size of 1, it becomes vector-matrix multiplications, which won't benefit as much
< Atharva> rcurtin: Oh! I will try it with batch size 64
< rcurtin> (plus, in general, you can see big speedups by increasing the batch size from 1 up to something more like 32, 64, etc.)
< zoq> just to be sure, did you check ldd, for a reference to OpenBLAS?
< Atharva> zoq: Sorry I didn't understand
< Atharva> Do you want to check if there is some reference for openBLAS in
< zoq> right, just to make sure, the setup is correct
< Atharva> I just checked, it is a binary file, where did you want to check exactly?
< rcurtin> Atharva: I think the idea was to run 'ldd' to ensure that it is linked against OpenBLAS
< Atharva> rcurtin: Okay, sorry for that.
< Atharva> rcurtin: It is linked against OpenBLAS.
< Atharva> I will try using batch size 64
< zoq> okay, at least we know that armadillo links against OpenBLAS
ImQ009 has quit [Quit: Leaving]