rcurtin_irc changed the topic of #mlpack to: mlpack: a scalable machine learning library (https://www.mlpack.org/) -- channel logs: https://libera.irclog.whitequark.org/mlpack -- NOTE: messages sent here might not be seen by bridged users on matrix, gitter, or slack
CaCode_ has joined #mlpack
CaCode- has quit [Ping timeout: 252 seconds]
CaCode has joined #mlpack
CaCode- has joined #mlpack
CaCode_ has quit [Ping timeout: 256 seconds]
CaCode has quit [Ping timeout: 252 seconds]
CaCode- has quit [Quit: Leaving]
ShubhamAgrawal[7 has joined #mlpack
<ShubhamAgrawal[7> Hello everyone,
<ShubhamAgrawal[7> My name is Shubham Agrawal. I am currently doing my undergrad in Computer Science.
<ShubhamAgrawal[7> I think your Slack integration broke down, because I can't see any messages here
<ShubhamAgrawal[7> that I am sending from Slack.
<ShubhamAgrawal[7> So, I will ask my question again.
<ShubhamAgrawal[7> If I want to use weight decay with Adam, how should I proceed? Because I guess ensmallen does not support weight decay with Adam right now.
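Since ensmallen's Adam has no built-in weight decay option, one workaround is to fold an L2 penalty into the objective itself. Below is a minimal, untested sketch of that idea: a wrapper around any differentiable separable function that ens::Adam already accepts. The names L2DecayWrapper and lambda are hypothetical, not ensmallen API, and this gives classic (coupled) L2 regularization rather than decoupled AdamW-style decay, which would instead need a custom update policy.

```cpp
#include <ensmallen.hpp>

// Hypothetical wrapper (not part of ensmallen): adds an L2 penalty
// (lambda / 2) * ||w||^2 to any differentiable separable function, so
// ens::Adam effectively optimizes with classic L2 weight decay.
template<typename BaseFunction>
class L2DecayWrapper
{
 public:
  L2DecayWrapper(BaseFunction& base, const double lambda) :
      base(base), lambda(lambda) { }

  // Forward the separable-function interface that Adam expects.
  size_t NumFunctions() const { return base.NumFunctions(); }
  void Shuffle() { base.Shuffle(); }

  double EvaluateWithGradient(const arma::mat& coordinates,
                              const size_t begin,
                              const size_t batchSize,
                              arma::mat& gradient)
  {
    const double objective =
        base.EvaluateWithGradient(coordinates, begin, batchSize, gradient);

    // Spread the penalty over batches so that one full pass adds exactly
    // (lambda / 2) * ||w||^2 to the objective and lambda * w to the gradient.
    const double scale = (double) batchSize / (double) base.NumFunctions();
    gradient += scale * lambda * coordinates;
    return objective +
        0.5 * scale * lambda * arma::accu(arma::square(coordinates));
  }

 private:
  BaseFunction& base;
  const double lambda;
};

// Usage sketch, assuming MyFunction is your existing objective:
//   MyFunction f;
//   L2DecayWrapper<MyFunction> decayed(f, /* lambda */ 1e-4);
//   ens::Adam adam(0.001, 32);
//   adam.Optimize(decayed, coordinates);
```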
<ShubhamAgrawal[7> In ensmallen, for any learning-rate-based optimizer such as Adam, SGD, etc., how can we proceed with the group-wise learning rate approach?
<ShubhamAgrawal[7> Because I am unable to see how to proceed with it.
<ShubhamAgrawal[7> See this for reference
CaCode has joined #mlpack
CaCode has quit [Quit: Leaving]
<ShubhamAgrawal[7> > <@sirish07-5fc90c95d73408ce4ff59d85:gitter.im> Hey everyone,... (full message at https://libera.ems.host/_matrix/media/r0/download/libera.chat/a201578f39f7e29ff38d8ebb6451d5214e2f2a6f)
<ShubhamAgrawal[7> See lines 44, 50, and 55 in particular.
<Sirish07Sirish[m> Sure, thanks.
<ShubhamAgrawal[7> Sirish07Sirish[m: I have just opened a PR related to this issue.
<ShubhamAgrawal[7> If that PR works, then use that test pipeline for the time being, until an admin approves the PR.
<ShubhamAgrawal[7> > <@sirish07-5fc90c95d73408ce4ff59d85:gitter.im> Hey everyone,... (full message at https://libera.ems.host/_matrix/media/r0/download/libera.chat/16035af1a0908246da37a46f9df2bddd04918871)
<zoq[m]1> > <@shubhamag:matrix.org> In ensmallen, for any learning rate-based optimizer such as Adam, SGD, etc. How can we proceed with the group-wise learning rate approach?... (full message at https://libera.ems.host/_matrix/media/r0/download/libera.chat/f176d2284e1d52d6004f2349614803e2fffad4f3)
<zoq[m]1> zoq[m]1: I can provide some pseudocode showing how that could look, if you think that would be helpful.
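For reference, here is one rough shape such pseudocode might take: a custom SGD update policy, following ensmallen's UpdatePolicy interface, that scales the step element-wise so different parameter groups get different effective learning rates. GroupLRUpdate and scales are hypothetical names, the sketch assumes the iterate and gradient are arma::mat, and this is not an existing ensmallen feature.

```cpp
#include <ensmallen.hpp>

// Hypothetical update policy (not an ensmallen feature): each parameter
// carries a learning-rate multiplier, so groups of parameters can be
// trained at different effective rates with a single optimizer.
class GroupLRUpdate
{
 public:
  // `scales` has the same shape as the iterate; e.g. set the rows
  // belonging to one layer to 1.0 and those of another to 0.1.
  GroupLRUpdate(const arma::mat& scales) : scales(scales) { }

  const arma::mat& Scales() const { return scales; }

  template<typename MatType, typename GradType>
  class Policy
  {
   public:
    Policy(GroupLRUpdate& parent,
           const size_t /* rows */,
           const size_t /* cols */) : parent(parent) { }

    void Update(MatType& iterate,
                const double stepSize,
                const GradType& gradient)
    {
      // Element-wise scaling: each parameter steps by
      // stepSize * its multiplier * its gradient.
      iterate -= stepSize * (parent.Scales() % gradient);
    }

   private:
    GroupLRUpdate& parent;
  };

 private:
  arma::mat scales;
};

// Usage sketch:
//   arma::mat scales(coordinates.n_rows, coordinates.n_cols,
//       arma::fill::ones);
//   scales.rows(0, 9) *= 0.1;  // e.g. slow down one parameter group.
//   ens::SGD<GroupLRUpdate> sgd(0.01, 32, 100000, 1e-5, true,
//       GroupLRUpdate(scales));
//   sgd.Optimize(f, coordinates);
```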
CaCode has joined #mlpack