rcurtin_irc changed the topic of #mlpack to: mlpack: a scalable machine learning library (https://www.mlpack.org/) -- channel logs: https://libera.irclog.whitequark.org/mlpack -- NOTE: messages sent here might not be seen by bridged users on matrix, gitter, or slack
<ShubhamAgrawal[m> <rcurtin[m]> "Shubham Agrawal: you didn't..." <- I think it was about weight
<ShubhamAgrawal[m> You can see that we are using `weight.slice(outMap)`, which only iterates from 0 to maps - 1
<rcurtin[m]> Right, but there is only a 2D weight for each map; the same 2D weight is applied to all the higher dimensions of a particular input. But it seems that this might not be the behavior you expected? I don't have a problem with holding one weight map for each higher input dimension, if you want to change the code
<ShubhamAgrawal[m> But you initialized the weight dimensions as
<ShubhamAgrawal[m> `MakeAlias(weight, weightPtr, kernelWidth, kernelHeight, maps * inMaps);`
<ShubhamAgrawal[m> I think we are doing something wrong here
<ShubhamAgrawal[m> <rcurtin[m]> "Right, but there is only a 2D..." <- Actually, I am not talking about higher-dimensional input here
<ShubhamAgrawal[m> See this
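A minimal standalone sketch of that layout in plain Armadillo (not mlpack's actual layer internals; the dimension values below are invented for illustration) shows why a loop over `weight.slice(outMap)` touches only the first `maps` of the `maps * inMaps` kernel slices, while the kernel for a particular (outMap, inMap) pair lives at slice `outMap * inMaps + inMap`:

```cpp
// Standalone sketch of the weight layout under discussion; all dimension
// values are made up for illustration.
#include <armadillo>

int main()
{
  const arma::uword kernelWidth = 3, kernelHeight = 3;
  const arma::uword inMaps = 2;  // number of input maps (channels)
  const arma::uword maps = 4;    // number of output maps

  // Stand-in for the aliased weight cube: one 2D kernel per slice,
  // maps * inMaps slices in total.
  arma::cube weight(kernelWidth, kernelHeight, maps * inMaps,
      arma::fill::zeros);

  // A loop over weight.slice(outMap) for outMap in [0, maps) touches only
  // the first `maps` slices; the kernel for a particular (outMap, inMap)
  // pair is at slice outMap * inMaps + inMap.
  for (arma::uword outMap = 0; outMap < maps; ++outMap)
    for (arma::uword inMap = 0; inMap < inMaps; ++inMap)
      weight.slice(outMap * inMaps + inMap).fill(outMap * inMaps + inMap);

  weight.print("weight");
}
```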
<rcurtin[m]> how is that response different than what I suggested above? I am not following what you mean; can you please clarify the precise problem and what the fix would be?
<ShubhamAgrawal[m> > <@shubhamag:matrix.org> I think it was about weight... (full message at https://libera.ems.host/_matrix/media/r0/download/libera.chat/8b8c84fe7123028f4a8692b7d45e0a61d2992e98)
<ShubhamAgrawal[m> Modify `weight.slice(outMap)`
<ShubhamAgrawal[m> to `weight.slice(outMap * inMaps + inMap)`
<ShubhamAgrawal[m> Something along those lines
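As a hedged sketch of what that fix could look like in a forward pass (illustrative Armadillo code with `arma::conv2` in "same" mode standing in for mlpack's convolution rule, not the actual `Convolution::Forward()` implementation), each output map accumulates one convolution per input map, each with its own kernel slice:

```cpp
// Illustrative forward pass with the proposed fix: each output map
// accumulates one convolution per input map, using the kernel slice at
// outMap * inMaps + inMap. Not mlpack's actual implementation.
#include <armadillo>

int main()
{
  const arma::uword inMaps = 2, maps = 3, w = 5, h = 5, k = 3;
  arma::cube input(w, h, inMaps, arma::fill::randu);
  arma::cube weight(k, k, maps * inMaps, arma::fill::randu);
  arma::cube output(w, h, maps, arma::fill::zeros);

  for (arma::uword outMap = 0; outMap < maps; ++outMap)
  {
    for (arma::uword inMap = 0; inMap < inMaps; ++inMap)
    {
      // The proposed change: weight.slice(outMap * inMaps + inMap)
      // instead of weight.slice(outMap).
      output.slice(outMap) += arma::conv2(input.slice(inMap),
          weight.slice(outMap * inMaps + inMap), "same");
    }
  }

  output.print("output");
}
```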
<rcurtin[m]> yes, I see what you mean now; do you want to open a PR?
<ShubhamAgrawal[m> I actually want to implement groups in the convolution layer
<ShubhamAgrawal[m> What do I need to change in this case?
<rcurtin[m]> I'm not familiar with grouped convolutions; you should implement it as an entirely new layer, though
<ShubhamAgrawal[m> rcurtin[m]: why?
<rcurtin[m]> well, like I said, I am not familiar with grouped convolutions; but assuming it is a different operation than the regular convolution layer, it is better to keep the `Convolution` class as simple as possible
<ShubhamAgrawal[m> rcurtin[m]: I think it will just require adjusting the loops
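To make the "adjust the loops" idea concrete, here is an illustrative sketch of grouped convolution in plain Armadillo (not a proposed mlpack API): the input and output maps are partitioned into `groups` disjoint blocks, each output map convolves only the input maps in its own group, and `groups = 1` reduces to the ordinary loop above:

```cpp
// Illustrative grouped convolution: `groups` must divide both inMaps and
// maps; each output map sees only the input maps of its own group.
#include <armadillo>

int main()
{
  const arma::uword inMaps = 4, maps = 4, groups = 2, w = 5, h = 5, k = 3;
  const arma::uword inPerGroup = inMaps / groups;
  const arma::uword outPerGroup = maps / groups;

  arma::cube input(w, h, inMaps, arma::fill::randu);
  // One kernel per (output map, input map within its group) pair.
  arma::cube weight(k, k, maps * inPerGroup, arma::fill::randu);
  arma::cube output(w, h, maps, arma::fill::zeros);

  for (arma::uword g = 0; g < groups; ++g)
  {
    for (arma::uword o = 0; o < outPerGroup; ++o)
    {
      const arma::uword outMap = g * outPerGroup + o;
      for (arma::uword i = 0; i < inPerGroup; ++i)
      {
        const arma::uword inMap = g * inPerGroup + i;
        output.slice(outMap) += arma::conv2(input.slice(inMap),
            weight.slice(outMap * inPerGroup + i), "same");
      }
    }
  }

  output.print("output");
}
```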
<rcurtin[m]> you're welcome to make the change and open a PR and see what it looks like, but be aware I may request it to be an entirely separate layer. It's important to keep each layer simple: as one related example, it's better to have a separate layer for convolution and transposed convolution, instead of adding some flag to the convolution class that toggles transposed convolution mode. Handling such a flag adds a huge amount of code complexity,
<rcurtin[m]> and it also makes it harder for a new person to come along and quickly understand the code. If there are two separate classes, they can understand each class as performing one operation (convolution or transposed convolution), but if there is one class, they have to understand both
<ShubhamAgrawal[m> > <@shubhamag:matrix.org> Modify `weight.slice(outMap)`
<ShubhamAgrawal[m> > to `weight.slice(outMap * inMaps + inMap)`
<ShubhamAgrawal[m> And is this right?
<ShubhamAgrawal[m> Or do I need to write something else?
<rcurtin[m]> yes, I think you are correct about the change to `weight` there