ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
< jeffin143[m]>
IRC needs some button to show reactions on messages, just like GitHub
ImQ009 has joined #mlpack
< AbishaiEbenezer4>
@rcurtin oh yes, I am definitely looking to contribute. I failed this year's gsoc re
< AbishaiEbenezer4>
*terribly
< AbishaiEbenezer4>
So I hope I can make meaningful contributions this time around to stand a chance for next year
< AbishaiEbenezer4>
I am currently undertaking research work with a professor in NLP for health
< AbishaiEbenezer4>
I was studying material on different classifiers and thought I'll have a look at what mlpack has got
< AbishaiEbenezer4>
Coming to the naive Bayes, I'll definitely take it up and make some contributions soon
< AbishaiEbenezer4>
I'll keep in constant touch here, for I think I'll have a few questions
< HimanshuPathakGi>
Hey @saksham189 are you there??
< saksham189Gitter>
hi @himanshupathak21061998 should we move the meet to tomorrow at 6pm?
< saksham189Gitter>
I have left a review on your PR and after that it should be good to merge.
< HimanshuPathakGi>
OK sure :)
< rcurtin>
go bindings are merged! thank you Yashwant and Yasmine for making those a reality :)
< RyanBirminghamGi>
Awesome!!
< saksham189Gitter>
amazing :smile:
< jeffin143[m]>
rcurtin (@freenode_rcurtin:matrix.org): when is the new release :)
< jeffin143[m]>
<saksham189Gitter "amazing :smile: "> rcurtin (@freenode_rcurtin:matrix.org): what is the library you used as an alternative to numpy in julia?
< rcurtin>
jeffin143[m]: for julia, we just used the native Matrix{Float64} type
< rcurtin>
:)
< jeffin143[m]>
rcurtin (@freenode_rcurtin:matrix.org): that means for other binding types we can use the native ones too?
< jeffin143[m]>
Right ?
< jeffin143[m]>
I wanted to start a javascript binding for mlpack, but I didn't find a good enough library similar to numpy, so I didn't know :)
< jeffin143[m]>
Same goes for rust; rust is too new a language to have a library similar to numpy, but I wanted to build a rust binding also
< jeffin143[m]>
I know this is too much work 😂
< jeffin143[m]>
But I hope I will contribute someday :(
< jeffin143[m]>
Hi Ryan Birmingham (Gitter)
< RyanBirminghamGi>
jeffin143: I don't know about the memory layout, which is almost certainly different, but I've used mathjs for some things - https://github.com/josdejong/mathjs
< RyanBirminghamGi>
Hi jeffin143!
< jeffin143[m]>
Ryan Birmingham (Gitter): this looks promising, I will take a detailed look
< jeffin143[m]>
Thanks for the direction
< jeffin143[m]>
Ryan Birmingham (Gitter): any clue about a similar rust one?
< jeffin143[m]>
Question open to anyone, since I don't know who might be working in rust
< RyanBirminghamGi>
I have never used rust...
< jeffin143[m]>
Ryan Birmingham (Gitter): ok :)
< RyanBirminghamGi>
Maybe I'll play with it some time. I have some strange gaps in my knowledge/experience.
< RyanBirminghamGi>
How are you doing this week, jeffin143?
< jeffin143[m]>
Ryan Birmingham (Gitter): I am good, I have updated the weekly blog
< jeffin143[m]>
Done with the first PR, and have set up some cmake for catch unit testing, but there is some issue with that
< jeffin143[m]>
Apart from that good :)
< jeffin143[m]>
Since we are all regularly in touch about the project, that rarely leaves any doubts or agenda to be discussed during the weekly meet :)
< zoq>
jeffin143[m]: Maybe wasm makes more sense.
< jeffin143[m]>
zoq (@freenode_zoq:matrix.org): yeah, maybe that's a good idea
< jeffin143[m]>
But I have never played around with wasm, so I have to dig deep
< walragatver[m]1>
If you are installing from a build you know its version
< walragatver[m]1>
Also protobuf is present in conda virtual environments.
< walragatver[m]1>
Pytorch and tf use it
< jeffin143[m]>
I don't think it's due to that
< jeffin143[m]>
I will try to play around a bit more
< jeffin143[m]>
Maybe we need to change the project structure
< jeffin143[m]>
I am not sure
< walragatver[m]1>
Basically I am of the view that using precompiled headers is slightly tricky
< jeffin143[m]>
Ok
< walragatver[m]1>
I would suggest we keep the task of using precompiled headers as a separate task
< jeffin143[m]>
walragatver: ok, yes, I will revert that then and open a separate PR :)
< jeffin143[m]>
For that supported
< jeffin143[m]>
Support*
< walragatver[m]1>
Is it necessary to use precompiled header for testing?
< jeffin143[m]>
<jeffin143[m] "Support*"> walragatver: no
< jeffin143[m]>
> Is it necessary to use precompiled header for testing?
< jeffin143[m]>
No I don't think so
< walragatver[m]1>
Okay, then it would be better to open a separate PR
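For that separate precompiled-header PR, a minimal sketch of what CMake 3.16+ offers via `target_precompile_headers` (the target name and header list here are placeholders, not mlpack's actual build targets):

```cmake
# Requires CMake >= 3.16 for target_precompile_headers.
cmake_minimum_required(VERSION 3.16)
project(pch_demo CXX)

# Placeholder target name for the catch-based test runner.
add_executable(mlpack_catch_tests main.cpp)

# Precompile heavy, rarely-changing headers once, instead of re-parsing
# them in every test translation unit.
target_precompile_headers(mlpack_catch_tests PRIVATE
  <vector>
  <string>
)
```

One known wrinkle (consistent with the "slightly tricky" comment above): the precompiled header is injected into every source file of the target, so compile flags and include order must be identical across the target or builds can fail in confusing ways.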
< jeffin143[m]>
Yes
< walragatver[m]1>
Okay, so that's it for now. We are quite well on track at the moment.
< walragatver[m]1>
Just one last thing I would like to say
< walragatver[m]1>
I saw you miss quite a lot of style issues. So, just as a suggestion, I am sharing my experience: I also used to miss quite a lot of style problems, so what I did was read the code line by line before pushing. That helps with self-correction, and eventually we stop making those errors.
< jeffin143[m]>
Yes, I do that quite a lot, I agree; I will be more careful from next time onwards
< jeffin143[m]>
I assure you :)
< walragatver[m]1>
No worries.
< walragatver[m]1>
jeffin143: Thank you for your time. Have a great week ahead.
< jeffin143[m]>
Thanks walragatver: you too
< walragatver[m]1>
I think birm and you already discussed today
< jeffin143[m]>
walragatver: yes, not about the project but a different topic :)
< walragatver[m]1>
birm: If you want to discuss anything feel free to do so.
< birm[m]1>
I'm ok for today!
< birm[m]1>
see ya on wed!
< jeffin143[m]>
Cool
< rcurtin>
zoq: hopefully, maybe one more thing was needed to test the logistic regression function
< rcurtin>
but that was my aim with the conv_to<>
< rcurtin>
jeffin143[m]: sorry for the slow response, I had to go right after my last message; in many other languages, if it's available, we can definitely use the native type
< rcurtin>
but at least Python doesn't have a good native type (hence numpy) and actually Go doesn't either (we had to use gonum)
ImQ009 has quit [Quit: Leaving]
< shrit[m]1>
rcurtin: I was studying the details of the size of the new binary after the removal of boost_serialization; I was also thinking of serializing the pointer, but I came to a different conclusion
< shrit[m]1>
It seems to me that knn is calling all these tree functions, and the greedy single-tree traversal is a very expensive function.
< shrit[m]1>
First, NeighborSearch<>::Search() alone is costing 100 KB
< shrit[m]1>
And the Traverse() function of each of the trees is costing from 50 KB down to 5 KB
< shrit[m]1>
I still do not know how to do it, but it might be a better idea to keep only one tree in mlpack_knn when compiled, while at the same time not modifying the source code heavily
< shrit[m]1>
Maybe we can create several Search classes with the search mode as a template parameter, such as Search<NAIVE_MODE>, and call that class only once in neighbor_search_impl.hpp, instead of using switch cases and calling Traverse() several times