rcurtin_irc changed the topic of #mlpack to: mlpack: a scalable machine learning library (https://www.mlpack.org/) -- channel logs: https://libera.irclog.whitequark.org/mlpack -- NOTE: messages sent here might not be seen by bridged users on matrix, gitter, or slack
<heisenbuugGopiMT> Yup...
<heisenbuugGopiMT> It's okay, if it's getting late, we can catch up tomorrow...
<shrit[m]> Okay, let me get back to you tomorrow with the answers
<heisenbuugGopiMT> Okay...
<jonpsy[m]> @zoq yo
<jonpsy[m]> for all our replay buffers, we need to extend them to use `arma::vec reward` rather than `double reward`
<jonpsy[m]> so do i re-write them in the `moq` folder or just modify the current implementation? prolly using a template parameter
<jonpsy[m]> or mayb there's a smarter way to go about it
<zoq[m]> Maybe we can just switch to `arma::vec` in general? And modify the existing methods.
<zoq[m]> But I think a template parameter might be the easiest solution.
<RishabhGarg108Ri> @rcurtin I just now realized. The reason that OMP_NUM_THREADS is not working is that our implementation is not actually parallelized.
<RishabhGarg108Ri> This is because we can't parallelize the boosting iterations; they are all dependent on the output of the previous one.
<rcurtin[m]> Ah, ok; I thought we had some `#pragma omp`s in various places
<rcurtin[m]> That's true, but we could parallelize the gain searching in each dimension maybe?
<RishabhGarg108Ri> We have one, but that's in prediction; not training
<rcurtin[m]> ah, ok
<RishabhGarg108Ri> Yeah. That's what I wanted to discuss.
<RishabhGarg108Ri> I am not sure what should be the best way to parallelize the decision tree
<RishabhGarg108Ri> I checked online and I see there are pros and cons of both... i.e. parallelizing over the gain computation for each feature of a particular node, or parallelizing over different nodes at the same level
<rcurtin[m]> yeah, agreed; I think it would be easiest to try parallelizing across features for a single node
<rcurtin[m]> you could make the loop over features OpenMP-parallelized in `decision_tree_impl.hpp` and see what kind of speedup that gives?
<rcurtin[m]> if you wanted to parallelize over nodes, you could make the recursion into the children an OpenMP task, but that might not be as easy to implement
<RishabhGarg108Ri> Yeah. Can you help me with the omp stuff here? I don't know how to write omp code. I usually copy it from the existing mlpack code 😅
<rcurtin[m]> sure! it is pretty simple, basically all we need to do is tell OpenMP to run each iteration of the for loop in parallel
<RishabhGarg108Ri> No, I mean, somewhere I read in the code we are using `reduction` and somewhere we don't.
<rcurtin[m]> yes, actually, and now that I look at it it is not entirely trivial
<rcurtin[m]> our iteration is not a simple for loop over all dimensions, but instead we iterate only over some dimensions
<rcurtin[m]> maybe the easier way, then, might be to use OpenMP tasks here?
<rcurtin[m]> do you want to do some reading about how OpenMP tasks work? then, perhaps, we can just enclose the call to `SplitIfBetter` in an OpenMP task, and see if that gives speedup?
<RishabhGarg108Ri> I am not familiar with the openmp terminology, so whatever you are talking about tasks is not making sense to me..
<rcurtin[m]> maybe start here? I think this is a pretty good resource: https://en.wikibooks.org/wiki/OpenMP
<rcurtin[m]> it refers to itself as a "book", but it must be the shortest book I have ever encountered :)
<RishabhGarg108Ri> Okay, can you give me a short roadmap of concepts that I need to learn about omp to tackle this thing?
<RishabhGarg108Ri> Perhaps in a topological order :)
<rcurtin[m]> I think that book should cover it? basically I think what would be really helpful to understand (and this will be useful outside of just mlpack too, OpenMP is a great tool), is the basics of what OpenMP is for, the concept of what OpenMP will do when `#pragma omp for` is used, and then the idea of task scheduling. I *think* that wikibook should mostly cover those things, but let me know if something is unclear afterwards
<rcurtin[m]> (and maybe it is worthwhile to write some simple example programs to play with it?)
<RishabhGarg108Ri> Sure! That sounds like a plan. I will read the shortest book :)
<RishabhGarg108Ri> I have a workaround for this. Can we have an array `dimensions` which stores the dimensions we have to iterate and then rather than doing `for (i = dimensionSelector.Begin()........)` we can do `for (auto i : dimensions)`? Will this work?
<rcurtin[m]> I think maybe it might be better to refactor something like this... `for (size_t i = 0; i < dimensionSelector.NumDimensions(); ++i)`, and then in the first line of the loop we do something like `const size_t dim = dimensionSelector.GetDimension(i);`
<rcurtin[m]> the reason I say that is that with the strategy I proposed, we can avoid allocating the memory to create a vector when the dimension selection strategy is very simple
<RishabhGarg108Ri> Do these `GetDimension()` & `NumDimensions()` already exist?
<rcurtin[m]> no, we'd have to add them, but they would be really simple functions
<rcurtin[m]> for `AllDimensionSelect`, we'd just need to return the number of dimensions in the input for `NumDimensions()` (so I guess maybe `NumDimensions()` needs to take the shape of the input data), and for `GetDimension(i)` we would just return `i`
<rcurtin[m]> for `MultipleRandomDimensionSelect`, `NumDimensions()` could return `numDimensions` and also fill `values`, and then `GetDimension(i)` just returns `values[i]`
<RishabhGarg108Ri> Yeah! It seems pretty straightforward.
<rcurtin[m]> 👍️
<RishabhGarg108Ri> Then after this, all I have to do is add `#pragma omp parallel for` right?
<rcurtin[m]> yeah, I think so, although the reduction might be a little bit tricky... we are trying to find the index of the maximum dimension (and its gain value)
<RishabhGarg108Ri> Got it. I will think about that :+1:
<RishabhGarg108Ri> Sure! Thanks for the stackoverflow link :)
<jonpsy[m]> so...is anybody coming?
<say4n[m]> zoom?
<jonpsy[m]> aye
<say4n[m]> 👍️
<zoq[m]> still two minutes :)
<jonpsy[m]> punctuality ++
<jonpsy[m]> shrit: coming?
<shrit[m]> yes
<shrit[m]> looking for the link
<jjb[m]> @shrit: <https://zoom.us/j/3820896170>
<jjb[m]> password is the name of the library in lowercase
<shrit[m]> Perfect, just a second
<say4n[m]> @jonpsy: https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=forward#torch.nn.Module.forward the note there explains why the class itself is called instead of the forward method of the class.
<say4n[m]> I mean calling an instance of the class :)
<say4n[m]> instead of the forward method
<heisenbuugGopiMT> @shrit:matrix.org, can we get on call today? I mean tomorrow, on 10th.
<heisenbuugGopiMT> Some indentation issues are a bit confusing as it looks correct on my system, is there a way to handle it?
<heisenbuugGopiMT> What I am planning to do is, bring everything to the left most, and then indent each line with spaces.
<shrit[m]> Yes, that is a good idea
<shrit[m]> you can count the spaces you are typing
<heisenbuugGopiMT> Okay, we also need to see about the sparse matrix. I think we can do that on the call if you're free. I will try some things till then...
<shrit[m]> Tomorrow at 14 UTC?
<heisenbuugGopiMT> Yup
<shrit[m]> Perfect
<heisenbuugGopiMT> Cool, I will ping you if I have any updates by then.