ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
ImQ009 has joined #mlpack
tomsun has joined #mlpack
tomsun_ has quit [Ping timeout: 265 seconds]
HeikoS has joined #mlpack
ImQ009 has quit [Ping timeout: 272 seconds]
ImQ009 has joined #mlpack
HeikoS has quit [Quit: Leaving.]
HeikoS has joined #mlpack
HeikoS has quit [Client Quit]
tomsun has quit [Ping timeout: 240 seconds]
tomsun has joined #mlpack
< saksham189Gitter> hi @himanshupathak21061998 are you there?
< HimanshuPathakGi> Yup hello
< HimanshuPathakGi> @saksham189
< saksham189Gitter> hi ;)
< saksham189Gitter> so I think the work on the RBFN is complete. I have suggested one change. If you make that I would merge it in.
< HimanshuPathakGi> We will discuss the kernel SVM today
< HimanshuPathakGi> > so I think the work on the RBFN is complete. I have suggested one change. If you make that I would merge it in.
< HimanshuPathakGi> Yup:)
< HimanshuPathakGi> So, I am thinking of naming the RBF as GaussianFunction. What do you think?
< saksham189Gitter> do you know where you would be adding it?
< HimanshuPathakGi> > do you know where you would be adding it?
< HimanshuPathakGi> As I discussed, and as @zoq suggested, we should create a new directory kernel_svm for it
< saksham189Gitter> yeah that is what I was thinking as well.
< saksham189Gitter> I think the kernel would be a template parameter since we could let the user specify different kernels with the SVM, right?
< HimanshuPathakGi> That would be better, because if we want to add a polynomial kernel function later we can do that in kernel_svm
< saksham189Gitter> also have you shared your blog post for the week?
< HimanshuPathakGi> > also have you shared your blog post for the week?
< HimanshuPathakGi> I will share it today :)
< HimanshuPathakGi> I am always late with this
< saksham189Gitter> alright great. Is there anything we need to discuss?
< HimanshuPathakGi> > alright great. Is there anything we need to discuss?
< HimanshuPathakGi> Done from my side.
< HimanshuPathakGi> Do you want to ask anything?
< saksham189Gitter> Also we have an implementation of `linear_svm` that could be helpful
< saksham189Gitter> I think you could try to adapt that and add kernel as a template parameter and then add different kernels like linear, RBF etc.
< HimanshuPathakGi> > Also we have an implementation of `linear_svm` that could be helpful
< HimanshuPathakGi> Yes, it is helpful:) I was also thinking of doing this
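For context, a minimal sketch of what "kernel as a template parameter" could look like. All class and member names below are illustrative assumptions in the spirit of mlpack's kernel policy classes (which live in `mlpack/core/kernels/` and use Armadillo types), not the actual `linear_svm` or kernel SVM API:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical kernel policy classes: each one only needs an Evaluate()
// method, so adding a polynomial kernel later is just one more small struct.
struct LinearKernel
{
  double Evaluate(const std::vector<double>& a,
                  const std::vector<double>& b) const
  {
    double dot = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
      dot += a[i] * b[i];
    return dot;
  }
};

struct GaussianKernel
{
  double bandwidth = 1.0;
  double Evaluate(const std::vector<double>& a,
                  const std::vector<double>& b) const
  {
    double d2 = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
      d2 += (a[i] - b[i]) * (a[i] - b[i]);
    return std::exp(-d2 / (2.0 * bandwidth * bandwidth));
  }
};

// The SVM takes the kernel as a template parameter, so the user picks
// linear, Gaussian (RBF), polynomial, etc. with no runtime dispatch cost.
template<typename KernelType = GaussianKernel>
class KernelSVM
{
 public:
  explicit KernelSVM(KernelType kernel = KernelType()) : kernel(kernel) { }

  double KernelValue(const std::vector<double>& a,
                     const std::vector<double>& b) const
  {
    return kernel.Evaluate(a, b);
  }

 private:
  KernelType kernel;
};
```

Because the kernel is a compile-time policy, the training and prediction code can be written once against `kernel.Evaluate(...)` and reused for every kernel type.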
< saksham189Gitter> Let me know if you need any help or if there are any blockers we can discuss them here.
< HimanshuPathakGi> > Let me know if you need any help or if there are any blockers we can discuss them here.
< HimanshuPathakGi> Yup, if I get stuck while implementing I will ask for help :)
< saksham189Gitter> Alright, great! Bye. Hope you have a great day.
< HimanshuPathakGi> Have a great day bye :)
< rcurtin> shrit[m]1: sorry for the slow response
< rcurtin> I think it's ok to leave all the trees in mlpack_knn---after all, the bindings are meant to provide decent functionality to languages other than C++ (including the command line)
< rcurtin> so if the user wants a very specific and small KNN program that only uses one tree type, then they should use a custom C++ program that will be much smaller
< rcurtin> let me know what you think :)
< rcurtin> also, with cereal and CLI11, do you know the "new" smaller size of mlpack_knn? I'm curious how much those changes helped :)
< shrit[m]1> We have gained 1.3 MB; the final size for mlpack_knn is 3.4 MB
< shrit[m]1> The issue now is that the traverse functions of all the trees are included, even if the user specified only one tree
< shrit[m]1> That is the reason I thought of a template function that is called depending on what the user selects
< shrit[m]1> rcurtin knn_low_resource is 2.3 MB
< jeffin143[m]> rcurtin (@freenode_rcurtin:matrix.org): is the Python binding build off by default?
< jeffin143[m]> And do we have to specify a CMake flag to build the Python bindings?
< rcurtin> shrit[m]1: awesome, nice size improvement :)
< jeffin143[m]> Thanks shrit
< rcurtin> I don't know any way around having all of the traversals instantiated in the mlpack_knn program though, unless we reduce the number of trees supported (and ideally we should avoid reducing functionality of the bindings)
< rcurtin> it sounds like maybe we are getting close to the limit of how small we can make mlpack_knn with its current functionality?
< rcurtin> shrit[m]1: do you have an updated breakdown for the sizes of functions in mlpack_knn now?
favre49 has joined #mlpack
< shrit[m]1> Yes of course, I will send you one by mail
< shrit[m]1> I am still convinced we can gain up to 500 KB without losing any functionality
< rcurtin> shrit[m]1: sounds good---maybe let's think tomorrow about ways that we can improve further
< rcurtin> the templated traversers for each tree type actually do make a difference in terms of runtime; it's important to have each traverser compiled specifically for each tree type
< rcurtin> however, maybe there are still some tricks we can do to reduce the size of each individual compiled traverser
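To illustrate the instantiation point, here is a hypothetical sketch (not the actual binding code; names are made up) of why every tree's traversal ends up compiled into mlpack_knn — the tree type is a runtime option, so the binding must branch to every templated instantiation:

```cpp
#include <cassert>
#include <string>

// Stand-ins for mlpack's tree types; names are illustrative only.
struct KDTree    { static constexpr const char* name = "kd"; };
struct BallTree  { static constexpr const char* name = "ball"; };
struct CoverTree { static constexpr const char* name = "cover"; };

// Stand-in for the templated, tree-specific traversal.
template<typename TreeType>
std::string RunSearch()
{
  return std::string(TreeType::name) + "-traversal";
}

// Because the tree type is only known at runtime, every branch below forces
// the compiler to emit a separate RunSearch<> body -- all of them land in
// the binary even if the user only ever selects one.
std::string RunSearchFor(const std::string& treeType)
{
  if (treeType == "kd")
    return RunSearch<KDTree>();
  else if (treeType == "ball")
    return RunSearch<BallTree>();
  else
    return RunSearch<CoverTree>();
}
```

A custom C++ program that hard-codes a single tree type instantiates only that one traversal, which is why a special-purpose binary can be much smaller than the general binding.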
< jeffin143[m]> Shouldn't we make generic changes to reduce size? I mean, changes here will only reduce the knn size; the other bindings would still remain significant in size, and then we would have to reduce them as well by changing code, right?
< jeffin143[m]> GitHub is trying to remove racially charged words, such as renaming master to main, and whitelist to something else
< shrit[m]1> @rcurtin, Perfect. In the meantime I will try to understand mlpack_knn in detail.
< rcurtin> jeffin143[m]: yeah, a lot of the changes have been for generic parts of the codebase and would make a difference to everything (like the boost::serialization and boost::program_options changes)
< jeffin143[m]> rcurtin (@freenode_rcurtin:matrix.org): oh I see :)
favre49 has quit [Remote host closed the connection]
< abernauer[m]> Does anyone in the community have any tips for building a deep learning image data set from scratch?
< abernauer[m]> Yeah, I can just use wget; I totally blanked.
< HimanshuPathakGi> Hey, everyone, this is my weekly blog post https://medium.com/@hpathak336/week-2-gsoc2020-b2b8a8f6e745
< HimanshuPathakGi> :)
< kartikdutt18[m]> Hey everyone, here is the [link](https://medium.com/@kartikduttmd/gsoc-week-3-3rd-june-11-june-4f396c196315?source=friends_link&sk=b9b7cf604f344dd64b7c0f0907982744) for my weekly blog. Kindly let me know what you think.
ImQ009 has quit [Quit: Leaving]
< zoq> kartikdutt18[m]: Great update, thanks; I like INZO's - Overthinker, Alan Watts voice matches perfectly.
< zoq> kartikdutt18[m]: Also, not sure if you have seen -> Responding to the Controversy about YOLOv5 - https://blog.roboflow.ai/yolov4-versus-yolov5/
< zoq> shrit[m]1: Nice update as well, is the current plan to replace boost serialization with cereal independently of the other steps?
< shrit[m]1> @zoq In fact, boost serialization has been replaced; I have only raw pointers left
< shrit[m]1> Since cereal does not serialize raw pointers out of the box, we need to figure out a way to do it properly; otherwise the overall state is good, I think
< zoq> shrit[m]1: I think it's part of #2415?
< rcurtin> I figured it might make sense to cherry-pick some things out of #2415 into their own PRs as we go?
< zoq> Yes, the PR is already quite large.
< rcurtin> yeah I can't even load the diff automatically :-D
< zoq> Sounds like CLI and the serialization part are two separate things.
< rcurtin> agreed, it would be nice to split them out
< shrit[m]1> rcurtin, zoq: agreed; how do we do this?
< shrit[m]1> I am sure it would be easier to review
< rcurtin> shrit[m]1: I guess we could just cherry-pick the relevant commits into a different branch, and then review that branch?
< shrit[m]1> I am looking into cherry-pick; I have never used it before.
< shrit[m]1> I hope I did not mix modifications related to two different things in one commit; I usually do not do that
< rcurtin> it's ok, even if you did do that, if the commit was small, you could cherry-pick without committing, then modify locally to revert the unwanted changes, then commit
< rcurtin> alternately, you could even just make a new branch and not use git and copy over all the changes you wanted from the original branch, in the worst case :)
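A self-contained sketch of that cherry-pick workflow, using a throwaway repository with made-up branch and commit names (the real hashes would come from `git log` on the #2415 branch):

```shell
set -e
# Throwaway repository standing in for the mlpack checkout.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "demo"
git config user.email "demo@example.com"
git commit -q --allow-empty -m "base"

# A big branch mixing two unrelated changes, like #2415.
git checkout -q -b big-branch
echo "cereal change" > cereal.txt
git add cereal.txt
git commit -q -m "Replace boost::serialization with cereal"
echo "cli change" > cli.txt
git add cli.txt
git commit -q -m "Switch to CLI11"

# Find the cereal commit and replay only it onto a fresh branch from the base.
cereal_commit=$(git log --format=%H --grep=cereal -n 1)
git checkout -q -b cereal-only "$(git rev-list --max-parents=0 HEAD)"
git cherry-pick "$cereal_commit"

ls    # cereal.txt is here; cli.txt stayed behind on big-branch
```

If a commit mixes both kinds of change, `git cherry-pick -n <hash>` stages it without committing, so the unwanted hunks can be reverted before committing, as rcurtin suggests above.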
< shrit[m]1> Perfect, I will create two different pull requests, one for cereal and the other for CLI11
< shrit[m]1> rcurtin The idea is good, actually it is extremely easy to use cherry-pick
< shrit[m]1> in this case we will keep #2415 as a draft, and will extract all features as cherry picks in different pull requests
< rcurtin> shrit[m]1: that sounds good to me, I guess we can ask in our meeting tomorrow if Roberto has any ideas or comments too :)
< shrit[m]1> Agreed
< rcurtin> I am spending the evening setting up some new (old) build slaves for Jenkins... hopefully should have them online tonight, and then when the builds break I can learn which packages I forgot to install :)
< shrit[m]1> Great, that would require the addition of cereal, I think
< shrit[m]1> I do not know about CLI11; it will never build in Jenkins if the sources are not in mlpack