rcurtin_irc changed the topic of #mlpack to: mlpack: a scalable machine learning library (https://www.mlpack.org/) -- channel logs: https://libera.irclog.whitequark.org/mlpack -- NOTE: messages sent here might not be seen by bridged users on matrix, gitter, or slack
ArchitBhonsle[m] has quit [Quit: Client limit exceeded: 20000]
chinglamchoiChin has quit [Quit: Client limit exceeded: 20000]
shubhsherlShubha has quit [Quit: Client limit exceeded: 20000]
Abhi-1001Abhi-10 has quit [Quit: Client limit exceeded: 20000]
krushia has joined #mlpack
<HRIDAYMEHTA[m]> Hello community! My name is Hriday and I am a second year student at National Institute of Technology Karnataka.
<HRIDAYMEHTA[m]> I am interested in working towards the project "Enhance CMA-ES". Could you please provide some guidance on how to approach this further? I am really looking forward in contributing to this community. Thanks!
lxi has quit [Ping timeout: 248 seconds]
OmSurase[m] has joined #mlpack
<OmSurase[m]> guys i think the mentors are busy rn and we'll have to work on our ideas on our own for now
M7Ain7Soph77Ain7 has quit [Quit: Client limit exceeded: 20000]
bmanasiManasi[m] has quit [Quit: Client limit exceeded: 20000]
madhavkumar1523m has quit [Quit: Client limit exceeded: 20000]
dsrt^ has joined #mlpack
<zoq[m]> Getting back to the messages, later today.
robobub has joined #mlpack
<vaibhavp[m]> Hello everyone! I am Vaibhav. I am very keen on creating the new DAG network for the ANN module, particularly because I was working on creating my own backpropagation library of sorts. I have a basic blueprint on how things will work and I am working on a demo at the moment. So, if you guys have any ideas and suggestions on what it should look like then please share. I am particularly interested in having an active discussion on how this
<vaibhavp[m]> project should come about. Thank you!
<vaibhavp[m]> rcurtin, Aakash-kaushik (Aakash kaushik)
<vaibhavp[m]> One of the things that I observed was that with the DAG network class we would also have to create new layers which could take input from multiple layers, maybe in the form of arms::cube, because as far as I know all pre-existing layers take only a single mat as input(through forward and backward functions). The new layers would act as a glue to join multiple layers.
<vaibhavp[m]> * One of the things that I observed was that with the DAG network class we would also have to create new layers which could take input from multiple layers, maybe in the form of arma::cube, because as far as I know all the pre-existing layers at the moment take only a single mat as input(through forward and backward functions). The new layers would act as a glue to join multiple layers.
<vaibhavp[m]> vaibhavp[m]: If you have any other or better ideas please go on.
dsrt^ has quit [Remote host closed the connection]
<zoq[m]> > <@cyber-machine-633add556da03739849d5fc7:gitter.im> Hi guys, I am Maaz Karim, a Junior pursuing Artificial Intelligence and Machine Learning at DSCE. I have deep learning experience and am interested in reinforcement learning. I am interested in participating in GSoC'23 in mlpack at Reinforcement learning implementing RL... (full message at <https://libera.ems.host/_matrix/media/v3/download/libera.chat/340496bab4de719236e42acd56342309ca6720b9>)
<zoq[m]> <IWNMWEIWNMWE[m]> "Hey can I know what the plan for..." <- It was part of lasst years GSoC, but dpending on what you are interested it, there might be room for extensions and improvements.
<IWNMWEIWNMWE[m]> Hey zoq I have question is there any weighted gini index trees or stumps in mlpack
<zoq[m]> > <@hridaym25:gitter.im> Hello community! My name is Hriday and I am a second year student at National Institute of Technology Karnataka.
<zoq[m]> > I am interested in working towards the project "Enhance CMA-ES". Could you please provide some guidance on how to approach this further? I am really looking forward in contributing to this community. Thanks!
<zoq[m]> Hello, for this particular project, my recommendation is to take a deeper look into ensmallen (https://github.com/mlpack/ensmallen) ran some examples. And also https://github.com/mlpack/ensmallen/pull/351
<zoq[m]> IWNMWEIWNMWE[m]: mlpack implements decission trees, and yes you can have weighted classes - https://github.com/mlpack/mlpack/blob/master/src/mlpack/methods/decision_tree/gini_gain.hpp#L30
<zoq[m]> > <@mrvasura:matrix.org> Hello everyone! I am Vaibhav. I am very keen on creating the new DAG network for the ANN module, particularly because I was working on creating my own backpropagation library of sorts. I have a basic blueprint on how things will work and I am working on a demo at the moment. So, if you guys have any ideas and suggestions on what it should look like then please share. I am particularly interested in having an active
<zoq[m]> > rcurtin, Aakash-kaushik (Aakash kaushik)
<zoq[m]> We have some layers that can take multiple inputs and merge them together (https://github.com/mlpack/mlpack/blob/master/src/mlpack/methods/ann/layer/add_merge_impl.hpp) without using `arma::cube`.
<zoq[m]> discussion on how this project should come about. Thank you!
<zoq[m]> zoq[m]: Coming up with a good DAG structure is difficult and I'm more than interested to see how this could look like for mlpack, do you have a link to your project?
<vaibhavp[m]> <zoq[m]> "Coming up with a good DAG..." <- Surely, but can you please wait for some time? I just need a little bit of time to solve some of the problems I have been having. 😀
<vaibhavp[m]> <zoq[m]> "> <@mrvasura:matrix.org> Hello..." <- Thanks for the info, I definitely missed that layer. But looking at the current implementation, it is very restrictive and certainly not any arbitrary layer could be plugged together, if I am not wrong. Suppose for example, two layer which are far apart in the architecture which need their outputs to be added for it to be passed to the next layer, sometime like a resnet.
<vaibhavp[m]> * Thanks for the info, I definitely missed that layer. But looking at the current implementation, it is very restrictive and certainly not any arbitrary layer could be plugged together, if I am not wrong. Suppose for example, two layer which are far apart in the architecture which need their outputs to be added for it to be passed to the next layer, something like a resnet.
<vaibhavp[m]> But the new layers could be created on the similar line as AddMerge, and store the pointer/unique_id to the inputs. So that any arbitrary layer could be glued together.
<IWNMWEIWNMWE[m]> <zoq[m]> "mlpack implements decission..." <- Hey zoq I was actually referring to weighted trees using Gini gain as I have seen in the libraries adaboost implementation I have observed that the original data set is passed on each iteration without the weights and the data set is also not changed hence I think There is no useful boosting happening in the multiclass classification problems hence leading to
<IWNMWEIWNMWE[m]> error:https://github.com/mlpack/mlpack/issues/3010
<IWNMWEIWNMWE[m]> IWNMWEIWNMWE[m]: If I am mistaken in the understanding then please correct me
krushia has quit [*.net *.split]
krushia has joined #mlpack
dsrt^ has joined #mlpack