rcurtin_irc changed the topic of #mlpack to: mlpack: a scalable machine learning library (https://www.mlpack.org/) -- channel logs: https://libera.irclog.whitequark.org/mlpack -- NOTE: messages sent here might not be seen by bridged users on matrix, gitter, or slack
aakashi2001 has joined #mlpack
<jonpsy[m]> sure, thursday works
<Aakash-kaushikAa> Hi everyone, I was able to parse the documentation successfully this time and would need some guidance and help; head over to this [comment](https://github.com/mlpack/mlpack/pull/2990#issuecomment-903291032) for a clear look.
<heisenbuugGopiMT> Looks great, we needed this so much. Thank you...
aakashi2001 has quit [Remote host closed the connection]
aakashi2001 has joined #mlpack
<heisenbuugGopiMT> @shrit:matrix.org So when we changed `getline()` to `.good()` a bug got introduced.... (full message at https://libera.ems.host/_matrix/media/r0/download/libera.chat/317c5e15084f4f05683426b77cbf635d121aef97)
<heisenbuugGopiMT> Now the thing is somehow when we were using `while(getline(inFile, line))` it was ignoring the empty line.
<heisenbuugGopiMT> But with `.good()` we were getting an extra line and thus failing cases.
<heisenbuugGopiMT> I simply added a check for the length of the line...
<heisenbuugGopiMT> Passing the tests
<heisenbuugGopiMT> I will make the push.
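(A minimal sketch of the empty-line check described above; the stream and variable names are illustrative, not mlpack's actual parser code.)
```cpp
#include <fstream>
#include <iostream>
#include <string>

int main()
{
  std::ifstream inFile("data.csv");
  std::string line;
  size_t rows = 0;

  // Read line by line; a trailing newline at the end of the file can yield
  // an empty "extra" line, so skip zero-length lines before parsing them.
  while (std::getline(inFile, line))
  {
    if (line.empty())
      continue;

    ++rows;  // Tokenize / parse the non-empty line here.
  }

  std::cout << "parsed " << rows << " rows" << std::endl;
}
```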
<heisenbuugGopiMT> @shrit:matrix.org What is [this](https://github.com/mlpack/mlpack/tree/master/src/mlpack/core/boost_backport) directory exactly?
<heisenbuugGopiMT> To remove `boost` completely we have to replace everything here?
<heisenbuugGopiMT> There is a gamma function implementation in [`math.h`](https://www.cplusplus.com/reference/cmath/tgamma/)
<heisenbuugGopiMT> We can use that?
<heisenbuugGopiMT> To implement digamma?
aakashi2001 has quit [Ping timeout: 248 seconds]
aakashi2001 has joined #mlpack
aakashi2001 has quit [Changing host]
aakashi2001 has joined #mlpack
aakashi2001 has quit [Quit: Leaving]
<PranshuSrivastav> Hey @shrit:matrix.org, I was working on a file, but it encountered errors while building, so I fixed the errors. Now I am getting errors in files that I did not tweak. Should I fix those also, or should I leave them?
<PranshuSrivastav> errors like these
<heisenbuugGopiMT> Which file did you make changes to?
<heisenbuugGopiMT> How can I access `arma_config::_____`?
<heisenbuugGopiMT> Like `arma_config::mp_threads`
<heisenbuugGopiMT> I saw that we have a file named `arma_config.hpp`, which I think gets generated based on whether we are keeping `OpenMP` support on or off.
<heisenbuugGopiMT> This is for making parser faster by using `OpenMP` directives as suggested by Conrad [here](https://github.com/mlpack/mlpack/pull/2942/#issuecomment-878735632)
<heisenbuugGopiMT> Maybe we should add this in another PR? Not sure. I just wanted to check the speed-ups, so I thought I would implement it; it didn't take much time.
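(A minimal sketch of the kind of OpenMP directive being discussed, assuming the file has already been split into lines; the `_OPENMP` guard and the `ParseLine` helper are illustrative assumptions, not mlpack's actual parser.)
```cpp
#include <string>
#include <vector>

// Hypothetical per-line tokenizer; stands in for whatever the parser does.
void ParseLine(const std::string& line, std::vector<double>& row);

void ParseAllLines(const std::vector<std::string>& lines,
                   std::vector<std::vector<double>>& rows)
{
  rows.resize(lines.size());

  // Each line is independent, so the loop can be parallelized when OpenMP
  // support is compiled in; otherwise it falls back to a serial loop.
  #if defined(_OPENMP)
    #pragma omp parallel for
  #endif
  for (long long i = 0; i < (long long) lines.size(); ++i)
    ParseLine(lines[i], rows[i]);
}
```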
<shrit[m]> heisenbuug (Gopi M Tatiraju): I think it is best to add it in another PR
<shrit[m]> We need to wait on this one getting merged
<shrit[m]> Did you push the modification related to the one row bug?
<shrit[m]> I believe it is better to make mlpack header only and then accelerate the parser
<shrit[m]> I think the best thing to do now is to open an issue and list all the improvements we need to make in the `data/` directory; this includes: OpenMP support, CSV sparse matrix support, fusing the Load interface functions, etc...
<shrit[m]> For your boost question, this directory is for backward compatibility with old boost versions; once boost is removed, the directory will disappear automatically
<shrit[m]> For your tgamma question, if the function from the math library handles mlpack's usage, then that would be a great improvement šŸ‘ļø
<shrit[m]> PranshuSrivastava (Pranshu Srivastava): What changes did you apply to your code? I cannot fully understand the error message.
<heisenbuugGopiMT> Okay, I will open a new PR.
<heisenbuugGopiMT> We have a gamma function in `math.h`; digamma is basically the logarithmic derivative of gamma, so maybe we can implement a function to find derivatives (logarithmic in this case) and then pass `gamma(x)` to that function. Sounds good?
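(A minimal sketch of that idea, using the standard `std::lgamma` from `<cmath>` and a central finite difference; the step size and function name are illustrative assumptions, not mlpack's implementation.)
```cpp
#include <cmath>
#include <iostream>

// Digamma is the derivative of log-gamma: psi(x) = d/dx ln(gamma(x)).
// Approximate it numerically with a central difference on std::lgamma.
double DigammaApprox(const double x, const double h = 1e-6)
{
  return (std::lgamma(x + h) - std::lgamma(x - h)) / (2.0 * h);
}

int main()
{
  // psi(1) = -gamma (Euler-Mascheroni constant) ~ -0.577216.
  std::cout << DigammaApprox(1.0) << std::endl;
}
```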
<heisenbuugGopiMT> Yes, I have solved the bug related to one row and pushed the code as well.
<heisenbuugGopiMT> I don't understand why builds are failing, and each time it's a different build.
<jonpsy[m]> i've shown the code for `loss` and I want to find the equation for the `gradient function` w.r.t. `Q` and `HQ`
<jonpsy[m]> like for ex: for MSE the grad function is `2 (prediction - target)`
<jonpsy[m]> and im having a hard time figuring the grad fn here
<PranshuSrivastav> > @PranshuSrivastava: What changes did you apply to your code? I cannot fully understand the error message.
<PranshuSrivastav> I just created a new file for the extra trees algorithm, which had some syntactical errors that I had to fix
<zoq[m]> <jonpsy[m]> "and im having a hard time figuri" <- Wouldn't it be easier to calculate the derivative from the loss formula instead of from the PyTorch code? In this case you could e.g. use `deriv` on https://www.wolframalpha.com/ ?
<jonpsy[m]> but this is matrix multiplication
<shrit[m]> heisenbuug (Gopi M Tatiraju): Looks good to me šŸ‘ļø for digamma. Also, we use the trigamma function
<heisenbuugGopiMT> Yea, I am on to it as well...
<PranshuSrivastav> > but this is matrix multiplication
<jonpsy[m]> dang, i think i figured it out
<zoq[m]> Will be 5 minutes late
aakashi2001 has joined #mlpack
aakashi2001 has quit [Changing host]
aakashi2001 has joined #mlpack
<jonpsy[m]> i'll be in the zoom room
<jonpsy[m]> if A is (8192, 6)
<jonpsy[m]> and B is (8192, 6)
<jonpsy[m]> so, it'll reshape them to A (8192, 1, 6) and B (8192, 6, 1)
<jonpsy[m]> so now, `torch.bmm` will perform 8192 dot products of the (1,6) * (6,1) vectors, giving us (8192, 1, 1)
<jonpsy[m]> (8192, 1, 1).squeeze() => (8192, 1)
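(A rough Armadillo analogue of those batched dot products, sketched under the assumption that A and B are stored as plain 8192x6 matrices; this is just an illustration, not the code under discussion.)
```cpp
#include <armadillo>

int main()
{
  // A and B both have shape (8192, 6).
  arma::mat A(8192, 6, arma::fill::randu);
  arma::mat B(8192, 6, arma::fill::randu);

  // Row-wise dot products: elementwise product, then sum along each row.
  // This yields an (8192, 1) column vector, matching the squeezed bmm output.
  arma::vec dots = arma::sum(A % B, 1);

  dots.print("per-row dot products:");
}
```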
aakashi2001 has quit [Quit: Leaving]
<jonpsy[m]> <say4n[m]> "Hi Sai, are you planning to have..." <- could you check the doc once?
<say4n[m]> I'm out presently. I can have a look at it tomorrow for sure