<_slack_mlpack_U7>
I don't really understand what you are saying. Do you mean I should set `model.Parameters()` to something else?
<_slack_mlpack_U7>
I just tried to use the CIFAR CNN from the examples repo but always get the error `error: Mat::operator(): index out of bounds`
_slack_mlpack_U7 has joined #mlpack
<_slack_mlpack_U7>
Ok, thank you for the quick response. So if I want to try to write a DeepDream program using mlpack I would need to manually define a Gradient with respect to the input or try to use an Optimizer that doesn't need a Gradient, is that correct?
<_slack_mlpack_U7>
Yes, I would only need the Gradients with respect to the activations, often the mean of them
<_slack_mlpack_U7>
well, missing a `const` in the argument list for `Evaluate` was a stupid mistake on my part.
<_slack_mlpack_U7>
* of them, of the layer one chooses as the last. For a proof of concept, one could make this choosable only at compile time.
<_slack_mlpack_U7>
Hey there. I recently discovered this library and wondered if it is possible to optimize/compute gradients w.r.t. something other than the weights/parameters of an ANN.
<heisenbuugGopiMT>
`const long double num = 0.234512`
<rcurtin[m]>
it seems like that function wouldn't work unless `T` was a floating-point type?
<heisenbuugGopiMT>
Yup Yup
<rcurtin[m]>
you could force it to compile by doing, e.g., `const T num = T(0.23456512);`, but if `T` is an integral type like `int`, this will result in `num` having a value of `0`
<heisenbuugGopiMT>
Yes, then we would get the wrong answer.
<heisenbuugGopiMT>
It's for calculating `Digamma`, actually.
<rcurtin[m]>
or, if you want it to fail to compile unless `T` is a floating-point type, you could use SFINAE with `std::is_floating_point<T>` or similar
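(A minimal sketch of the SFINAE approach described above, assuming a hypothetical `Digamma`-style function template; the constant is just the one from this discussion.)
```
#include <type_traits>

// Only enabled when T is a floating-point type; instantiating with an
// integral type such as int fails to compile.
template<typename T,
         typename std::enable_if<std::is_floating_point<T>::value, int>::type = 0>
T Digamma(T x)
{
  const T num = T(0.23456512);  // stays exact for float/double
  // ... rest of the computation would go here ...
  return num * x;
}
```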
<heisenbuugGopiMT>
No, I don't want it to fail; it should still compile and treat the value as `1.0`.
<rcurtin[m]>
hmm, maybe you should do something like this?
<rcurtin[m]>
```
const T num = (std::is_floating_point<T>::value) ? T(0.23456512) : T(1.0);
```
<rcurtin[m]>
I included some extra paranoia with that cast to `T()`
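(For reference, a self-contained sketch of how that ternary behaves for integral vs. floating-point `T`; the `Scale` wrapper is purely illustrative.)
```
#include <iostream>
#include <type_traits>

template<typename T>
T Scale()
{
  // Floating-point types keep the real constant; integral types fall back to 1.
  const T num = (std::is_floating_point<T>::value) ? T(0.23456512) : T(1.0);
  return num;
}

int main()
{
  std::cout << Scale<double>() << std::endl;  // 0.234565
  std::cout << Scale<int>() << std::endl;     // 1
}
```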
<heisenbuugGopiMT>
`const T num = (std::is_floating_point<T>::value) ? T(num) : T(numf);`
<heisenbuugGopiMT>
Will this work?
<heisenbuugGopiMT>
I meant that if I get an integer, I want to convert it to a float.
<rcurtin[m]>
what is `numf`?
<heisenbuugGopiMT>
Using type literals...
<rcurtin[m]>
oh, ok
<heisenbuugGopiMT>
I think it should be `f.0`?
<heisenbuugGopiMT>
Not sure.
<rcurtin[m]>
you mean like, e.g., `1.0f`, right?
<heisenbuugGopiMT>
Yes.
<rcurtin[m]>
if `T` is `int` and you do `T(1.0f)`, you will get an `int` with value `1`, yeah
<rcurtin[m]>
but if you do, e.g., `T(0.5f)` and `T` is `int`, then what will happen is that it will be truncated to an integer, so you will end up with an `int` with value `0`
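(A quick illustration of that truncation:)
```
#include <iostream>

int main()
{
  // Converting a fractional float literal to int truncates toward zero.
  std::cout << int(1.0f) << std::endl;  // 1
  std::cout << int(0.5f) << std::endl;  // 0
}
```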
<heisenbuugGopiMT>
Oh, I think I should explain the whole situation.
<heisenbuugGopiMT>
So to calculate digamma we have some floating-point constants (some of which are also less than 1, e.g. `0.2341325`).
<heisenbuugGopiMT>
But in Boost, even when we pass an int, it works.
<heisenbuugGopiMT>
Now when I pass an int, I get a conversion error.
<rcurtin[m]>
maybe they are just casting the given `int` into a `float` or `double`?
<heisenbuugGopiMT>
They have their own type, in which they are doing something. Should I cast it? If yes, is there a way to do it at compile time?
<rcurtin[m]>
I don't think I understand the situation well enough to give good advice... but you can in general convert an `int` type to a floating-point type simply by casting. if the int is reasonably small (less than something like 2^13 or so) then the conversion will be exact
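(A small sketch of that cast; the 2^53 bound mentioned in the comment is a property of IEEE double, not anything specific to mlpack or Boost.)
```
#include <iostream>

int main()
{
  int n = 8192;                       // 2^13
  double x = static_cast<double>(n);  // exactly 8192.0
  std::cout << x << std::endl;

  // A double can represent every integer with magnitude up to 2^53 exactly,
  // so for typical 32-bit ints this conversion never loses precision.
}
```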
<heisenbuugGopiMT>
Okay, they are using `static_cast`
texasmusicinstru has quit [Remote host closed the connection]
<heisenbuugGopiMT>
Oh, it's okay. It's not a big deal. I already got an idea of what to do, so your help was enough. I pushed the code just in case you wanna see. I will most likely push the replacement for `boost::digamma()` in a few hours; hope it passes all the cases this time.
<heisenbuugGopiMT>
I tested some cases locally and we are getting exactly the same values as Boost.
texasmusicinstru has joined #mlpack
<rcurtin[m]>
awesome! it will definitely be great when we can replace that part of boost
<heisenbuugGopiMT>
On to it!!!