ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
togo has quit [Quit: Leaving]
< Saksham[m]>
ryan: I see that there is no current implementation of a denoising autoencoder; I would like to work on adding this!
k3nz0_ has quit [Ping timeout: 260 seconds]
< rcurtin>
Saksham[m]: sounds good to me, do we need any special layers for it or anything?
< Saksham[m]>
I don't think we do; if I require something while going through the implementation, I'll see to it then.
< rcurtin>
Saksham[m]: sounds good then; maybe it makes sense to add into the models/ repo? or perhaps into its own directory in mlpack/methods/? I'm not sure exactly what you're thinking, just tossing some ideas out there :)
< Param-29Gitter[m>
Hey @rcurtin, regarding #2169: do you want me to implement OpenMP along with the SIMD block? It works as you expect it to.
< rcurtin>
HimanshuPathakGi: hmm, maybe BOOST_ALL_DYN_LINK is needed? that code snippet you pasted, have you tried it? if it works I have no problem including it as a patch into mlpack
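For reference, a minimal sketch of what that macro does (assuming the usual Boost auto-linking setup): defining BOOST_ALL_DYN_LINK before the first Boost header is included, or passing -DBOOST_ALL_DYN_LINK on the compile line, tells Boost's auto-linking machinery to link against the shared (DLL) builds of the Boost libraries rather than the static ones.

```cpp
// Hedged sketch: BOOST_ALL_DYN_LINK must be visible before any Boost header
// is included; equivalently, pass -DBOOST_ALL_DYN_LINK to the compiler.
#define BOOST_ALL_DYN_LINK
#include <boost/serialization/serialization.hpp>
```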
< rcurtin>
Param-29Gitter[m: right, yes, the way to accelerate it will be a combination of OpenMP and SIMD like I said; if you can show a good speedup for large label arrays, and there is no significant slowdown when OMP_NUM_THREADS=1, then I think it would be nice to include
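A minimal sketch of that combination (illustrative only, not the actual #2169 patch; the function name and data layout are assumptions): a scalar reduction keeps the loop free of data races, so the compiler can both parallelize it across threads and vectorize it within each thread.

```cpp
#include <cstddef>

// Hypothetical example: count how many entries of a labels array equal a
// given class. Compile with -fopenmp; with OMP_NUM_THREADS=1 the loop should
// behave like the plain SIMD version, which is what rcurtin asks to verify.
std::size_t CountClass(const std::size_t* labels,
                       const std::size_t n,
                       const std::size_t c)
{
  std::size_t count = 0;
  #pragma omp parallel for simd reduction(+:count)
  for (std::size_t i = 0; i < n; ++i)
    count += (labels[i] == c);
  return count;
}
```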
< Param-29Gitter[m>
I have also added my views on the above. Please have a look once you are free.
UmarJ has quit [Ping timeout: 265 seconds]
UmarJ has joined #mlpack
< Saksham[m]>
Ryan Curtin> also, how about depth-gated RNN layers? I want to add this first; I was going through some literature for a research project and came across this as a recent improvement in the field. I've also referenced the paper: <https://arxiv.org/pdf/1508.03790v2.pdf>
AbhiSaphire has quit [Remote host closed the connection]
AbhinavvermaGitt has joined #mlpack
outmanipulateGit has joined #mlpack
volhard[m] has joined #mlpack
RohitKartikGitte has joined #mlpack
Keyur[m] has quit [Ping timeout: 252 seconds]
Keyur[m] has joined #mlpack
AbhiSaphire has joined #mlpack
< AbhiSaphire>
Hello everyone, my name is Abhishek and I am a pre-final-year CSE undergraduate from India. I am very interested in contributing to one of the ideas for GSoC 2020, "Application of ANN Algorithms Implemented in mlpack", as a student participant. Can anyone help me figure out where to start?
< LakshyaOjhaGitte>
Hi, can anyone tell me how I can write this code in accordance with Doxygen syntax?
< PrinceGuptaGitte>
Hi @zoq, I was looking through the GSoC idea list and I have a good idea for **Application of ANN Algorithms Implemented in mlpack**. Since mlpack already has convolution layers, implementing object detection using the YOLO algorithm seems like a nice idea to me. However, I'm unsure if it's enough.
< AbhiSaphire>
PrinceGuptaGitte: You can also add batch normalization to all of the convolutional layers in YOLO to get more improvement. Batch normalization will also help regularize the model and prevent overfitting. And thanks for your help O:3
< PrinceGuptaGitte>
Yes, batch norm has been useful many times.
< PrinceGuptaGitte>
And mlpack already has it implemented.
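For reference, the standard batch normalization transform being discussed (γ and β are learned parameters; μ_B and σ_B² are statistics of the current mini-batch; ε avoids division by zero):

```latex
y = \gamma \cdot \frac{x - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}} + \beta
```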
AbhiSaphire has quit [Remote host closed the connection]
< zoq>
PrinceGupta: I like the YOLO idea.
< PrinceGuptaGitte>
That's good to hear.
< PrinceGuptaGitte>
However, I had one doubt: does mlpack use the GPU? I couldn't find anything about it, and training the model on a CPU could take a lot of time.
< kartikdutt18Gitt>
Hi @prince776, currently mlpack doesn't support GPU.
< zoq>
PrinceGupta: You could use nvblas, or maybe we could make use of bandicoot.
< PrinceGuptaGitte>
@zoq I believe NVBLAS will act as a backend for Armadillo, right?
< zoq>
right
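To make the "backend" point concrete, a hedged sketch: NVBLAS intercepts BLAS-3 routines such as dgemm, so an ordinary Armadillo matrix product can be offloaded to the GPU without any code changes. The usual setup (an assumption here, not something verified in this channel) is to preload the library, e.g. LD_PRELOAD=libnvblas.so, with an nvblas.conf that names a CPU BLAS as fallback.

```cpp
#include <armadillo>

int main()
{
  // A large dense product lowers to a dgemm call, which is exactly the kind
  // of BLAS-3 routine NVBLAS can route to the GPU.
  arma::mat A(2048, 2048, arma::fill::randu);
  arma::mat B(2048, 2048, arma::fill::randu);
  arma::mat C = A * B;
  return (C.n_rows == 2048) ? 0 : 1;
}
```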
< PrinceGuptaGitte>
I looked through all the activation function implementations, and some actually loop manually and apply the function element by element, instead of using Armadillo functions.
< PrinceGuptaGitte>
So I think we'll need to fix them.
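A hedged illustration of the kind of change being proposed (softplus is just an example; the function names are made up): replacing an explicit element-wise loop with an equivalent Armadillo expression, which lets the backend (BLAS, or a GPU via NVBLAS/bandicoot) do the work.

```cpp
#include <armadillo>
#include <cmath>

// Manual element-wise loop, as some implementations reportedly do it.
void SoftplusLoop(const arma::mat& x, arma::mat& y)
{
  y.set_size(arma::size(x));
  for (arma::uword i = 0; i < x.n_elem; ++i)
    y[i] = std::log(1.0 + std::exp(x[i]));
}

// The same computation as a single vectorized Armadillo expression.
void SoftplusVec(const arma::mat& x, arma::mat& y)
{
  y = arma::log(1.0 + arma::exp(x));
}
```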
< zoq>
That would probably be a good idea.
< kartikdutt18Gitt>
Agreed, that's why I opened #2178 to benchmark the differences.
< PrinceGuptaGitte>
@kartikdutt18 that's what I was wondering: how was the GPU backend slower than manual for loops?
< kartikdutt18Gitt>
@zoq, If you get the chance, could you have a look at #2195 ( I wanted to know how I should proceed).
< kartikdutt18Gitt>
@prince776, exactly what I thought; matrix operations (with parallel computation) should be faster.
< KhizirSiddiquiGi>
@kartikdutt18, wouldn't GPU usage through Armadillo be better than using it in mlpack directly?
< KhizirSiddiquiGi>
I mean, matrix operations in armadillo.
< PrinceGuptaGitte>
@khizirsiddiqui yes, and to test that @kartikdutt18 ran some benchmarks in #2178, but normal for loops performed better.
< kartikdutt18Gitt>
Yes, they should be. I haven't tested them with a GPU yet; with BLAS I got some results contradicting what logic dictates, so I closed the above PR. Once I redo all the benchmarks and am certain that the changes I made are faster, I will reopen it.
< kartikdutt18Gitt>
@prince776, sorry about the misinformation regarding the GPU. I thought Armadillo could only be accelerated via BLAS/OpenBLAS.
< SriramSKGitter[m>
What benefits will bandicoot offer over NVBLAS?
< Param-29Gitter[m>
Hey @rcurtin, I have made the changes in #2169; please have a look once you are free. We get almost the same time with and without SIMD. Also, I would like to make the same changes to information_gain for better performance using OpenMP.
< sreenik[m]>
freenode_gitter_sriramsk1999[m]: When you use frameworks like TensorFlow or PyTorch, GPU operations are done using cuBLAS, which means the entire model is transferred onto the GPU and the operations are carried out there. Currently, Armadillo does not support cuBLAS; it only supports NVBLAS. With NVBLAS, on the other hand, the GPU is used to perform computations, but the entire model is not transferred to the GPU at once; it is done operation by operation, and varies from model to model. This means there is a significant overhead in transferring values to the GPU, which cuts into the advantage of fast GPU computation. This is what I remember finding when I had the same doubt; zoq, do confirm whether it's correct.
< sreenik[m]>
Bandicoot, I guess, uses cuBLAS or something similar.
< SriramSKGitter[m>
@sreenik : Isn't NVBLAS built on top of cuBLAS?
< rcurtin>
Saksham[m]: that sounds good to me
< rcurtin>
I dropped the ball on the video chat announcement for today, but anyway, 2200 UTC (7 hours from now)
< rcurtin>
last time we used this time slot, everyone unanimously agreed that it was a bad time, so if nobody says it's a good time this time, we can just switch permanently to Thursdays at 1800 UTC
< zoq>
Either is fine for me, a little bit late, but still works.
< Param-29Gitter[m>
@rcurtin how do I ensure my program is compiled using SIMD instructions?
< PrinceGuptaGitte>
It'll be 3:30 AM in my timezone. I think 1800 UTC is better. How do I access the video chat though?
< Saksham[m]>
Any time would work; how do we access it?
< rcurtin>
Param-29Gitter[m: that's a bit outside the scope of what I can write in a chat message; I'd suggest using a search engine to find more information about how to get the instruction-level output of a compiler
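For what it's worth, one common approach (an assumption on my part, using standard GCC/Clang options, not anything rcurtin prescribed) is to inspect the generated assembly or ask the compiler to report its vectorization decisions:

```cpp
// Emit assembly and look for vector registers/instructions:
//   g++ -O3 -march=native -S -o out.s file.cpp
//   grep -E 'xmm|ymm|zmm' out.s
// Or ask the compiler to report which loops it vectorized:
//   g++ -O3 -fopt-info-vec file.cpp             // GCC
//   clang++ -O3 -Rpass=loop-vectorize file.cpp  // Clang
```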
< GauravSinghGitte>
@rcurtin Yeah, Thursday at 1800 UTC will be fine.
zoso_floyd has joined #mlpack
< rcurtin>
GauravSinghGitte: right, the reason we do it in different time zones each time is to make sure that anyone in any time zone can attend at least one of them
zoso_floyd has quit [Client Quit]
< jeffin143[m]>
Probably will attend the Thursday one, not a morning person :)
ImQ009 has joined #mlpack
< Param-29Gitter[m>
@rcurtin Yes, Thursday 1800 UTC will be fine.
< KhizirSiddiquiGi>
@rcurtin Thursday 1800 UTC please.
AbhiSaphire has joined #mlpack
AbhiSaphire has quit [Remote host closed the connection]
tae has joined #mlpack
tae has quit [Remote host closed the connection]
k3nz0_ has quit [Remote host closed the connection]
< HimanshuPathakGi>
Hey rcurtin, I tried to add a patch, but I think it's not working.
< HimanshuPathakGi>
Maybe because we are not initializing git inside the source zip of mlpack 3.2.2.
< HimanshuPathakGi>
Any idea how I can do this?
ImQ009 has quit [Quit: Leaving]
< LakshyaOjhaGitte>
Actually the main problem is how to represent the piecewise function here.
< rcurtin>
LakshyaOjhaGitte: wrap it in a formula block (@f[ ... @f]) and then use LaTeX syntax
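A hypothetical illustration of that advice (the function and names are made up; this assumes amsmath/MathJax is available so \begin{cases} renders):

```cpp
/**
 * Leaky ReLU activation, documented with a Doxygen formula block:
 *
 * @f[
 * f(x) =
 * \begin{cases}
 *   x     & \text{if } x > 0, \\
 *   0.01x & \text{otherwise.}
 * \end{cases}
 * @f]
 */
double LeakyReLU(const double x)
{
  return (x > 0.0) ? x : 0.01 * x;
}
```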
< PrinceGuptaGitte>
Hi, I'm working on implementing a residual block for building ResNets. I opened issue #2225 describing the implementation. I wanted to make sure the method I chose is appropriate so I can proceed further with it.
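A minimal conceptual sketch of what a residual block computes (this is not mlpack's layer API and not necessarily the approach in #2225; the dense transforms and names are made up for illustration): the output is F(x) + x, with an identity shortcut around a small stack of layers.

```cpp
#include <armadillo>

// Hypothetical forward pass of a residual block: y = F(x) + x, where F is
// sketched here as dense -> ReLU -> dense. F(x) and x must have the same
// shape for the identity shortcut to apply.
arma::mat ResidualForward(const arma::mat& x,
                          const arma::mat& W1,
                          const arma::mat& W2)
{
  arma::mat h = arma::clamp(W1 * x, 0.0, arma::datum::inf); // ReLU(W1 * x)
  return W2 * h + x;                                        // F(x) + x
}
```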