<PranshuSrivastav>
Hey @shrit:matrix.org, I was working on a file that gave errors while building, so I fixed them, but now I am getting errors in files that I did not touch. Should I fix those as well, or leave them?
<heisenbuugGopiMT>
I saw that we do have a file named `arma_config.hpp`, which I think gets generated based on whether `OpenMP` support is on or off.
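For reference, the general pattern for such a generated header is just to bake the configure-time choice into a macro; a rough sketch of the idea (not the actual contents of `arma_config.hpp`, and the macro names below are hypothetical):

```cpp
// Illustrative sketch only -- the real arma_config.hpp is generated by the
// build system and the macro names here are made up.  The idea is to record
// whether OpenMP support was enabled at configure time, so that code
// including this header sees a consistent setting.
#ifndef EXAMPLE_ARMA_CONFIG_HPP
#define EXAMPLE_ARMA_CONFIG_HPP

#ifdef EXAMPLE_BUILD_WITH_OPENMP   // set by the build system (hypothetical)
  #define EXAMPLE_HAS_OPENMP 1
#else
  #define EXAMPLE_HAS_OPENMP 0
#endif

#endif
```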
<heisenbuugGopiMT>
Maybe we should add this in another PR? Not sure. I just wanted to check the speed-ups, so I thought I would implement it; it didn't take much time.
<shrit[m]>
heisenbuug (Gopi M Tatiraju): I think it is best to add it in another PR.
<shrit[m]>
We need to wait for this one to get merged.
<shrit[m]>
Did you push the modification related to the one-row bug?
<shrit[m]>
I believe it is better to make mlpack header-only first and then accelerate the parser.
<shrit[m]>
I think the best thing to do now is to open an issue and list all the improvements we need in the `data/` directory; this includes OpenMP support, CSV sparse matrix support, fusing the Load interface functions, etc.
<shrit[m]>
For your Boost question: this directory is for backward compatibility with old Boost versions; once Boost is removed, the directory will disappear automatically.
<shrit[m]>
For your tgamma question: if the function from the math library handles mlpack's usage, then that would be a great improvement.
<shrit[m]>
PranshuSrivastava (Pranshu Srivastava): What changes did you apply to your code? I cannot fully understand the error message.
<heisenbuugGopiMT>
Okay, I will open a new PR.
<heisenbuugGopiMT>
We have the gamma function in `math.h`; digamma is basically the logarithmic derivative of gamma, i.e. `d/dx log(gamma(x))`, so maybe we can implement a function to take that (logarithmic) derivative and pass `gamma(x)` to it. Sounds good?
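A quick numerical sketch of that idea, using only `std::lgamma` from `<cmath>` (the helper name and step size are illustrative; a real implementation would use a series or recurrence rather than finite differences):

```cpp
#include <cmath>
#include <iostream>

// Rough sketch: approximate digamma(x) = d/dx log(gamma(x)) with a central
// difference over std::lgamma.  Purely illustrative.
double DigammaApprox(const double x, const double h = 1e-5)
{
  return (std::lgamma(x + h) - std::lgamma(x - h)) / (2.0 * h);
}

int main()
{
  // digamma(1) = -EulerGamma, approximately -0.5772156649.
  std::cout << DigammaApprox(1.0) << std::endl;
}
```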
<heisenbuugGopiMT>
Yes, I have solved the one-row bug and pushed the code as well.
<heisenbuugGopiMT>
I don't understand why the builds are failing, and each time it's a different build.
<jonpsy[m]>
I've shown the code for the `loss`, and I want to find the equation for the `gradient function` w.r.t. `Q` and `HQ`.
<jonpsy[m]>
Like, for example: for MSE the gradient function is `2 * (prediction - target)`.
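For reference, a minimal Armadillo sketch of that loss/gradient pair (the function names are mine, not mlpack's actual MSE implementation):

```cpp
#include <armadillo>

// Sketch: mean squared error and its gradient w.r.t. the prediction.
// loss = mean((prediction - target)^2), grad = 2 * (prediction - target) / n.
double MSELoss(const arma::mat& prediction, const arma::mat& target)
{
  return arma::accu(arma::square(prediction - target)) / prediction.n_elem;
}

arma::mat MSEGradient(const arma::mat& prediction, const arma::mat& target)
{
  return 2.0 * (prediction - target) / prediction.n_elem;
}
```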
<jonpsy[m]>
and I'm having a hard time figuring out the gradient function here.
<PranshuSrivastav>
> @PranshuSrivastava: What changes did you apply to your code? I cannot fully understand the error message.
<PranshuSrivastav>
I just created a new file for the Extra Trees algorithm; it had some syntax errors that I had to fix.
<zoq[m]>
<jonpsy[m]> "and im having a hard time figuri" <- Wouldn't it be easier to calculate the derivative form the loss formula instead of from the PyTorch code? In this case you could use e.g use `deriv ` on https://www.wolframalpha.com/ ?
<jonpsy[m]>
But this is matrix multiplication.
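As a generic illustration of how the matrix product enters the gradient (this uses a plain Frobenius-norm loss `||H*Q - T||_F^2`, which is not necessarily the loss being discussed here), a small Armadillo sketch that checks the matrix-calculus formula against a finite difference:

```cpp
#include <armadillo>
#include <iostream>

// Generic example: for L(Q) = ||H*Q - T||_F^2, matrix calculus gives
// dL/dQ = 2 * H.t() * (H*Q - T).  The finite difference below is just a
// spot check of that formula on one entry of Q.
int main()
{
  arma::arma_rng::set_seed(42);
  arma::mat H(4, 3, arma::fill::randn), Q(3, 2, arma::fill::randn),
            T(4, 2, arma::fill::randn);

  const arma::mat analytic = 2.0 * H.t() * (H * Q - T);

  const double h = 1e-6;
  arma::mat Qp = Q, Qm = Q;
  Qp(0, 0) += h;
  Qm(0, 0) -= h;
  const double numeric =
      (arma::accu(arma::square(H * Qp - T)) -
       arma::accu(arma::square(H * Qm - T))) / (2.0 * h);

  std::cout << analytic(0, 0) << " vs " << numeric << std::endl;
}
```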
<shrit[m]>
heisenbuug (Gopi M Tatiraju): Looks good to me for digamma. Also, we use the trigamma function.
<heisenbuugGopiMT>
Yea, I am on to it as well...
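On the trigamma side, the same finite-difference trick could be applied one order higher, again only as a rough sketch on top of `std::lgamma` (trigamma is the second derivative of `log(gamma(x))`):

```cpp
#include <cmath>

// Illustrative only: approximate trigamma(x) = d^2/dx^2 log(gamma(x)) with a
// second-order central difference over std::lgamma.
double TrigammaApprox(const double x, const double h = 1e-4)
{
  return (std::lgamma(x + h) - 2.0 * std::lgamma(x) + std::lgamma(x - h))
      / (h * h);
}
```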
<PranshuSrivastav>
> But this is matrix multiplication.