rcurtin_irc changed the topic of #mlpack to: mlpack: a scalable machine learning library (https://www.mlpack.org/) -- channel logs: https://libera.irclog.whitequark.org/mlpack -- NOTE: messages sent here might not be seen by bridged users on matrix, gitter, or slack
<Aakash-kaushikAa> I am trying to build an image with ubuntu 21.04. This is my first time with Docker; let's see if I am able to make it work.
<Aakash-kaushikAa> Btw, I can see that this container only solves the dependencies and adds jenkins as a user group. Do we then create containers out of those images, which Jenkins uses to build the actual docs?
<Aakash-kaushikAa> I thought that we could have an image that has all the dependencies, and when you actually use that image it copies the models repo and generates the documentation.
<Aakash-kaushikAa> https://gist.github.com/Aakash-kaushik/883f64cf5e148e542982add87ceaed64 here is the Dockerfile I wrote. I created an image with it, ran it, and generated docs inside that container, and it worked fine. So let me know what the next step should be?
<rcurtin[m]> I noticed today that `dealgood`, one of our build systems that's supposed to have 16 cores to build on, actually only has 1. This explains why the nightly docker build is taking forever when some of the jobs run on dealgood. :) I'm looking into it now, but it might be a BIOS setting, which means I have to negotiate with the Georgia Tech College of Computing support to set up an appointment to look at the BIOS (or get one of the support people to look through the BIOS)
<Aakash-kaushikAa> I guess getting the appointment would be the hard part?
<rcurtin[m]> maybe, we'll see how quickly they respond. last time dealgood went down, I couldn't get an appointment set up for a year 🙈
<rcurtin[m]> (admittedly, I kept forgetting about it, and so did they...)
<Aakash-kaushikAa> Maybe you can keep reminding them about this.
<Aakash-kaushikAa> I mean, the best reminders are the things that cause you problems every time you look at them, so you are motivated to solve them.
<rcurtin[m]> yeah :) I have to remind myself about it too
<rcurtin[m]> for now I'll just drop the number of executors on dealgood down to 1
<zoq[m]1> <Aakash-kaushikAa> "https://gist.github.com/Aakash-..." <- Yes, that is great; the reason we don't do this for the other docker image is that we use it for different workloads.
<zoq[m]1> <rcurtin[m]> "I noticed today that `dealgood`,..." <- Does it fully utilize the core?
<zoq[m]1> I thought we have enough nodes at this point now :) But maybe that is not true.
<rcurtin[m]> yeah, it uses one just fine, but with 16 executors it just thrashes
<heisenbuugGopiMT> For `digamma` we need to store some constants in order to evaluate the function. Constants like these... (full message at https://libera.ems.host/_matrix/media/r0/download/libera.chat/ac6fd6d9f0e8b16e0ac489e65c43ebe738cb4e4e)
<Aakash-kaushikAa> Makes sense; this particular file also doesn't do that, but I can make it do that.
<heisenbuugGopiMT> Can I store this number `0.9016312093258695918615325266959189453125e-19` in a `long double`?
_whitelogger has joined #mlpack
<zoq[m]1> <Aakash-kaushikAa> "makes sense this particular file..." <- The way you do it right now is good.
<zoq[m]1> <heisenbuugGopiMT> "[These](https://github.com/boost..." <- The question I have is why we reproduce the boost approximation; is it because that solution is faster?
<heisenbuugGopiMT> Yup, and more accurate as well. If we use the simple formula mentioned on the wiki, there is a difference in the values.
<zoq[m]1> Yeah, I have seen the comparison, but was wondering if boost outputs an approximation because it's faster or if the mlpack result is wrong because it's a corner case.
<zoq[m]1> Might be worth comparing this with R or MATLAB to see what the output is.
<heisenbuugGopiMT> We are only facing issues with small numbers, and small negative numbers.
<heisenbuugGopiMT> I can try running on matlab as well.
<heisenbuugGopiMT> Our implementation is different from boost.
<zoq[m]1> Maybe we are trying to replicate an approximation, but we don't actually need to.
<heisenbuugGopiMT> They are solving a series to find the answer, and we are using a single formula.
<zoq[m]1> If you use the single formula does it break the test suite?
<heisenbuugGopiMT> Yup.
<zoq[m]1> I would say let's compare it with another framework, and see what the output is.
<zoq[m]1> But to answer your question I would go with `long double` directly no need for a typedef.
<heisenbuugGopiMT> Okay, I will try matlab and add the comparisons to the table. I will also go through the implementation to make sure that I am not doing anything wrong.
<heisenbuugGopiMT> Oh, okay, no need for a typedef then, but first I will compare with the other frameworks.
<heisenbuugGopiMT> @marcusedel:matrix.org I tried in R, and it's giving the same values as boost's implementation.
<heisenbuugGopiMT> So we can't use the direct formula for negative numbers.
<heisenbuugGopiMT> We are using the reflection formula.
<heisenbuugGopiMT> [Euler's Reflection Formula](https://brilliant.org/wiki/digamma-function/)
<heisenbuugGopiMT> if x < 0 then
<heisenbuugGopiMT> `psi(x) = psi(1-x) - (pi*cot(pi*x))`
<heisenbuugGopiMT> Okay, I think I made a mistake in the implementation.... (full message at https://libera.ems.host/_matrix/media/r0/download/libera.chat/ddb2443cb1aa9641927c79d34e6df6891715e203)
<heisenbuugGopiMT> This is what I am getting now.
<Aakash-kaushikAa> So I will post this gist on the PR, and maybe you can take it forward from there? I am not really sure how we want to deploy it.
<zoq[m]1> Aakash-kaushikAa: Sounds good, I can deploy it and create a jenkins job as well.
<zoq[m]1> > <@heisenbuug-5a298898d73408ce4f8241d7:gitter.im> ```... (full message at https://libera.ems.host/_matrix/media/r0/download/libera.chat/5ce7e7e3bb9ee40f0f61407af0706fd128495cc4)
<heisenbuugGopiMT> Yup, but the difference was reduced.
<heisenbuugGopiMT> Shall I push this code and see if tests are passing?
<shrit[m]> Yes
<heisenbuugGopiMT> Okay...
<shrit[m]> yes, do not hesitate to push the code and see if the accuracy is enough
<heisenbuugGopiMT> If this doesn't work, we might have to go with the series expansion formula.
<Aakash-kaushikAa> Done, posted the link on that PR.