rcurtin_irc changed the topic of #mlpack to: mlpack: a scalable machine learning library (https://www.mlpack.org/) -- channel logs: https://libera.irclog.whitequark.org/mlpack -- NOTE: messages sent here might not be seen by bridged users on matrix, gitter, or slack
<ShubhamAgrawal[m]> <zoq[m]> "Maybe, I'll take a closer look..." <- Can you send the Dockerfile too? 😅
kristjansson has quit [Ping timeout: 240 seconds]
kristjansson has joined #mlpack
AnwaarKhalid[m] has quit [Quit: You have been kicked for being idle]
<ShubhamAgrawal[m]> rcurtin: Quick Question
<ShubhamAgrawal[m]> In FFN, the `Forward()` and `Evaluate()` methods set `Training` to false
<ShubhamAgrawal[m]> Why so?
<rcurtin[m]> I'm out right now but it should only do that when Predict() is called; when Train() is called it should be set to true. I'll double check when I get home
<ShubhamAgrawal[m]> I guess there is a bug in `CheckNetwork()`
<ShubhamAgrawal[m]> In it, we unintentionally set `Training` to false by calling `InitializeWeights()`
<rcurtin[m]> Shubham Agrawal: I see what you mean; I think the `SetNetworkMode(false)` call can be removed from `InitializeWeights()`. I think that's leftover from the refactoring. Nice catch! In any case, all calls to `Train()` and `Predict()` do set the network mode in the right way; the problem would only occur if you called `Forward()` directly and the weights needed to be reset (and you wanted the network in training mode)
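For context, a minimal sketch of the two paths being discussed, assuming the mlpack 4.x FFN API; the data shapes, layer sizes, and loss function below are made up for illustration:

    #include <mlpack.hpp>

    using namespace mlpack;

    int main()
    {
      arma::mat data(10, 100, arma::fill::randu);
      arma::mat responses(1, 100, arma::fill::randu);

      FFN<MeanSquaredError> network;
      network.Add<Linear>(5);
      network.Add<Linear>(1);

      // Train() and Predict() set the network mode themselves, so these
      // calls are unaffected by the leftover SetNetworkMode(false) in
      // InitializeWeights().
      network.Train(data, responses);

      arma::mat predictions;
      network.Predict(data, predictions);

      // Calling Forward() directly is the case discussed above: if the
      // weights still need to be initialized, the leftover call would
      // silently switch the network out of training mode.
      arma::mat output;
      network.Forward(data, output);
    }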
<rcurtin[m]> do you want to open a quick PR to fix that?
<ShubhamAgrawal[m]> Ok
<ShubhamAgrawal[m]> And one more thing
<ShubhamAgrawal[m]> How can we know which variables have incompatible dimensions?
<ShubhamAgrawal[m]> Is there a way to enable verbose logs in ctest?
<ShubhamAgrawal[m]> error: addition: incompatible matrix dimensions: 0x0 and 10x1
<ShubhamAgrawal[m]> Currently stuck on this in the BatchNorm serialization test
<rcurtin[m]> I think what you are looking for is to use gdb, and get a backtrace when the exception is thrown
<rcurtin[m]> you can just do `gdb bin/mlpack_test`, then when you get the gdb prompt, you can type `catch throw` so that the program breaks when an exception is thrown, then `run <name of test>` to run it
<rcurtin[m]> once you get the exception, you can hit `bt` to see the backtrace, then jump to the right frame with the `frame` command, and inspect local variables to see what is going wrong
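Put together, the debugging session described above looks roughly like this; the test name passed to `run` is an assumption, so substitute whatever the BatchNorm serialization test case is actually called:

    $ gdb bin/mlpack_test
    (gdb) catch throw                        # break whenever a C++ exception is thrown
    (gdb) run "BatchNormSerializationTest"   # assumed test name, passed to the test binary
    ...
    (gdb) bt                                 # backtrace from the throw site
    (gdb) frame 4                            # jump to the frame doing the failing addition
    (gdb) info locals                        # inspect locals, e.g. matrix dimensions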
NhtHongPhan[m] has quit [Quit: You have been kicked for being idle]
<zoq[m]> <ShubhamAgrawal[m]> "Can you send the Dockerfile too..." <- I can do that, yes.
<zoq[m]> Is the mlpack bot down? Or maybe the auto labeling doesn't work anymore.
<ShubhamAgrawal[m]> network.template Add<LayerType>(args...);
<ShubhamAgrawal[m]> Is this line correct?
<zoq[m]> yes
<ShubhamAgrawal[m]> ok
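A minimal sketch of why the `.template` disambiguator is needed in that line, assuming the mlpack 4.x `Add` member template; `AddLayer` is a made-up helper for illustration:

    #include <mlpack.hpp>

    using namespace mlpack;

    // Inside this function template, `network` has a dependent type, so the
    // compiler needs the `template` keyword to parse Add<LayerType> as a
    // member template call rather than a less-than comparison.
    template<typename LayerType, typename NetworkType, typename... Args>
    void AddLayer(NetworkType& network, Args... args)
    {
      network.template Add<LayerType>(args...);
    }

    int main()
    {
      FFN<MeanSquaredError> network;
      AddLayer<Linear>(network, 10);  // equivalent to network.Add<Linear>(10)
    }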
<rcurtin[m]> Huh, I'll check on the auto labeling, but at least the auto-approve is working
<rcurtin[m]> Shubham Agrawal: another thanks for https://github.com/mlpack/mlpack/pull/3198 a few weeks ago :) someone in the Fedora community found your patch and used it to fix the mlpack build: https://src.fedoraproject.org/rpms/mlpack/pull-request/6#request_diff