ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
< heisenbuugGopiMT>
Okay, I work it out accordingly and will let you know if I have any more queries.
mindlifter has quit [Remote host closed the connection]
mindlifter has joined #mlpack
mindlifter has quit [Remote host closed the connection]
mindlifter has joined #mlpack
mindlifter has quit [Quit: Leaving.]
< jonpsy[m]>
zoq, say4n: Hey, it looks like all our `callback.hpp` callbacks need to accept a tuple of objective functions? I'm not sure how `callbacks_test.cpp` is working. Can you shed some light?
UmarJ has quit [Ping timeout: 260 seconds]
ImQ009 has joined #mlpack
< zoq>
jonpsy[m]: You can already pass a tuple, because all callbacks use a template for the function type.
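[Editor's note: a minimal sketch of why this works, with hypothetical names rather than the actual ensmallen callback signatures — because the callback hook is templated on the function type, the same callback compiles whether it receives a single objective function or a std::tuple of objectives.]

```cpp
#include <cassert>
#include <tuple>

// Hypothetical callback sketch (names are illustrative, not the real
// ensmallen API).  The hook is templated on FunctionType, so passing a
// std::tuple of objective functions needs no changes to the callback.
struct CountEpochs
{
  int calls = 0;

  template<typename FunctionType, typename MatType>
  void EndEpoch(FunctionType& /* function */,
                const MatType& /* coordinates */,
                const double /* objective */)
  {
    ++calls;  // a real callback would log or inspect the objective here
  }
};
```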
< jonpsy[m]>
the problem is, paretoFront is only filled at the end of the optimization. So, no, you can't track the optimization process unless we loop per generation to fill the bestFront/bestSet.
< zoq>
But you could use Evaluate in combination with calculatedObjectives right?
< zoq>
To calculate the front inside the callback in each iteration.
< jonpsy[m]>
yeah.....
< jonpsy[m]>
i think i see what you're getting at but I'm not quite there
< zoq>
Unless you think you can calculate the front inside the callback class, by exposing the necessary parameters in the NSGAII class.
< zoq>
I guess another solution would be to calculate the front in each iteration and just use it the way you currently do.
< zoq>
Not sure how expensive that would be if we do that for each iteration.
< jonpsy[m]>
> I guess another solution would be to calculate the front in each iteration and just use it the way you currently do.
< jonpsy[m]>
I thought of this, but I was wondering if it would be costly
< jonpsy[m]>
> Not sure how expensive that would be if we do that for each iteration.
< jonpsy[m]>
YES, EXACTLY!
< zoq>
Yeah, that's why I thought let's do it in the callback, since then we only add that additional cost if the user is interested in that information.
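[Editor's note: a hedged sketch of what recomputing the front per iteration could look like — a plain non-dominated filter over the population's objective values, assuming minimization; the names are illustrative and not the NSGAII internals. The O(n²·m) scan is exactly the overhead being discussed, which is why making it opt-in via a callback is attractive.]

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// True if objective vector a dominates b (minimization): a is no worse in
// every objective and strictly better in at least one.
inline bool Dominates(const std::vector<double>& a, const std::vector<double>& b)
{
  bool strictlyBetter = false;
  for (std::size_t k = 0; k < a.size(); ++k)
  {
    if (a[k] > b[k]) return false;        // worse in some objective
    if (a[k] < b[k]) strictlyBetter = true;
  }
  return strictlyBetter;
}

// Indices of the non-dominated members of the current population;
// objectives[i] holds the objective vector of candidate i.  O(n^2 * m).
inline std::vector<std::size_t> ParetoFront(
    const std::vector<std::vector<double>>& objectives)
{
  std::vector<std::size_t> front;
  for (std::size_t i = 0; i < objectives.size(); ++i)
  {
    bool dominated = false;
    for (std::size_t j = 0; j < objectives.size(); ++j)
    {
      if (i != j && Dominates(objectives[j], objectives[i]))
      {
        dominated = true;
        break;
      }
    }
    if (!dominated) front.push_back(i);
  }
  return front;
}
```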
< jonpsy[m]>
> I guess in this case it makes sense to add another function to the callback pool that is meant to be used for multi-objective optimization.
< jonpsy[m]>
I think this is the best option of all. What are your thoughts on merging the current notebook as-is, then working on the callback, and then getting back to restructuring the notebook?
< zoq>
Yes, let's do that.
< jonpsy[m]>
Allrighty!
< shrit[m]>
Would someone remind me where the ensmallen-latest tarball is stored, and on which website?
< shrit[m]>
just to add it to the CI
< rcurtin[m]>
which CI? it would be better to use a package manager probably
< shrit[m]>
ah
< shrit[m]>
is there an ensmallen package on Ubuntu?
< rcurtin[m]>
yes, `libensmallen-dev`
< shrit[m]>
I see what you mean, I thought I needed to download it, like Armadillo
< rcurtin[m]>
it should also be possible to install Armadillo via apt
< shrit[m]>
Yeah, we are only keeping the 8.4 for compatibility
< rcurtin[m]>
ah, sure, that is a good point; I forgot that we use 8.400 specifically to ensure that code works with the oldest version of Armadillo we support
< shrit[m]>
But there is no minimum version requirement for ensmallen
< shrit[m]>
I mean if I use `libensmallen-dev` it should compile perfectly with mlpack
< rcurtin[m]>
yes, I think that is correct
< shrit[m]>
And on the Windows CI, can I use this command directly in PowerShell: `vcpkg install ensmallen:x64-windows`? Or do I have to use nuget?
< rcurtin[m]>
I don't know, maybe try it and find out?
< shrit[m]>
I will give it a try
< Aakash-kaushikAa>
Hey, so we have two models in our models repo, YOLO and DarkNet, and DarkNet has different versions; right now, to create the different versions, you have to pass the version as a template param. I have been thinking that we can create a typedef for every type of model that we have. Also, is there a way for the user to import only one file and have all the models available, but where only the model code that is actually used gets dragged into compilation, not the whole file?
< Aakash-kaushikAa>
something like the layers.hpp that mlpack has, but a bit simpler
< zoq>
Aakash-kaushikAa: The typedef idea sounds good to me, makes it easier to use.
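[Editor's note: a hedged sketch of the typedef idea with hypothetical names, not the actual models-repo API — if the architecture version is a template parameter, each supported variant can be exposed as a short alias.]

```cpp
#include <cassert>
#include <cstddef>

// Illustrative stand-in for the models-repo DarkNet class, where the
// version is a template parameter.
template<std::size_t Version>
struct DarkNet
{
  static constexpr std::size_t version = Version;
};

// One alias per supported variant; users write DarkNet19 instead of
// DarkNet<19>.
using DarkNet19 = DarkNet<19>;
using DarkNet53 = DarkNet<53>;
```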
< zoq>
Aakash-kaushikAa: Not sure I get the second question.
< zoq>
shrit[m]: ensmallen is header-only, so if it's not available, it's easy to just download it and point CMake at the correct path.
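[Editor's note: since ensmallen is header-only, the manual route is just a download plus a CMake hint. A sketch only — the variable name and path below are assumptions and may differ from what mlpack's build scripts actually check.]

```cmake
# Configure the build against a downloaded ensmallen copy (illustrative
# variable name and path):
cmake -DENSMALLEN_INCLUDE_DIR=/path/to/ensmallen/include ..
```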
< Aakash-kaushikAa>
so I can include something named `models.hpp` and then create any model I want, rather than having to include resnet to create resnet
< shrit[m]>
Yeah, I know I need to avoid the version number in the download and installation step on the CI
UmarJ has joined #mlpack
< shrit[m]>
so we do not touch it in the future
< shrit[m]>
zoq: would you guide me on how to enable the embedded build for all mlpack pull requests?
< zoq>
Aakash-kaushikAa: Yes, that would be possible; the issue I see is that in this case each cpp file would now include everything, which might have a negative effect on the build time.
< Aakash-kaushikAa>
Also, the structure is just odd: a folder for YOLO, a folder for DarkNet, and so on, a folder for every model. We could just have a single models folder in the repo where all the files reside.
< Aakash-kaushikAa>
@zoq Yes, exactly; because of that I asked if there is a solution so that the whole code is not dragged into compilation if that model is not used
< Aakash-kaushikAa>
Btw @zoq, doesn't this file: https://github.com/mlpack/mlpack/blob/master/src/mlpack/methods/ann/layer/layer.hpp include every layer and have a bad impact too?
< zoq>
Aakash-kaushikAa: yes and yes; that's one thing we'd like to solve with the vtable branch.
< Aakash-kaushikAa>
damn, the first yes was for all files in a single model folder, right?
< zoq>
Aakash-kaushikAa: yes layer.hpp includes every layer and yes it will impact the build time.
< zoq>
Aakash-kaushikAa: About the restructure, let's open an issue for that.
< rcurtin[m]>
thanks everyone for joining! it was great to meet all of you :)
< rcurtin[m]>
and if anyone has any music recommendations, I am all ears 😃 personally I have been listening to a lot of Wilco lately
< jonpsy[m]>
Nice meeting y'all :)!!
< jjb[m]>
Great meeting everyone as well 🙂
< heisenbuugGopiMT>
When we make a PR, GitHub runs checks on it, like memory checks, style checks, etc. How can I run those checks on my local machine, i.e. before making the PR?
< jonpsy[m]>
<heisenbuugGopiMT "I generally listen to Bach when "> If you're interested in classical music, I totally recommend Chopin's Nocturnes, they're so "classy". There's also a whole new genre of modern classical, for example La Valse d'Amélie or Comptine d'un autre été by Yann Tiersen; they perfectly capture the spirit of France (my French is not that good).
< heisenbuugGopiMT>
I made some changes to the code, but it looks like I am getting an error when running the `make` command; am I calling the functions wrong?
< shrit[m]>
For a music recommendation, I like Ludovico Einaudi's music
< shrit[m]>
I am also a fan of most neoclassical and minimalist music in general
< shrit[m]>
heisenbuug (Gopi M Tatiraju): what is the error you are getting when running make?
< jonpsy[m]>
<shrit[m] "For music recommendation I like "> OMG Same! I'm actually learning Nuvole Bianche maybe I should post someday 😄. You should definitely check out Yann Tiersen though
< heisenbuugGopiMT>
Function not found, but I solved it by adding the template parameter before the function call. Is it compulsory to add that? I don't see it in arma's implementation...
< shrit[m]>
sorry, you had put the error at the end of the gist and I did not notice
< shrit[m]>
let me check
< heisenbuugGopiMT>
Yes...
< shrit[m]>
That is normal; the template parameter is the element type of the Armadillo matrix
< shrit[m]>
so you need to tell the compiler explicitly what is the type
< shrit[m]>
Unless you have defined a default type in the template parameters itself.
< shrit[m]>
You do not have to use just `eT`; you can replace it with `double` and it will be fine
< shrit[m]>
have a look at line 63
< heisenbuugGopiMT>
So should I declare double as default type?
< heisenbuugGopiMT>
And then I won't need to add `<...>` at the time of function call?
< shrit[m]>
I do not think you need to change the template type in the code. However, when you use these functions, you need to define the type of the Armadillo matrix that is going to be used
< shrit[m]>
and then the compiler will deduce the type in this case
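[Editor's note: a self-contained sketch of the point above, using a std-only stand-in for arma::Mat; names are illustrative. When the template parameter appears in an argument's type, the compiler deduces it from the matrix you pass in; when it does not, the caller must spell it out, unless a default is given.]

```cpp
#include <cassert>

// Minimal stand-in for arma::Mat<eT>.
template<typename eT = double>
struct Mat { eT value; };

// eT appears in the argument type, so the compiler deduces it from the
// matrix that is passed in -- no explicit <...> needed at the call site.
template<typename eT>
eT Sum(const Mat<eT>& m) { return m.value; }

// eT appears only in the return type, so deduction is impossible: the
// caller writes Make<float>() explicitly, or relies on the default.
template<typename eT = double>
Mat<eT> Make() { return Mat<eT>{eT(0)}; }
```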
< heisenbuugGopiMT>
okay, got it...
< heisenbuugGopiMT>
I need to do more reading on templates, it never feels enough...
< shrit[m]>
You can also change the code style to follow the mlpack style 👍️
< shrit[m]>
it will be easier to read
< heisenbuugGopiMT>
Okay, I will make changes accordingly...
< shrit[m]>
jonpsy: Yeah, I know Yann Tiersen music
< shrit[m]>
heisenbuug (Gopi M Tatiraju): Also, you can change the name from `new_parser` to `csv_parser`, since that's what it does 😀
< heisenbuugGopiMT>
Yea, I will make that change as well.
< shrit[m]>
There are really only a few composers in France with the same musical style as Tiersen. However, there are many more in Germany.