ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
gotadachi has quit [Quit: Leaving...]
gotadachi has joined #mlpack
kyrre has quit [Read error: Connection reset by peer]
< shrit[m]>
Aakash kaushik (Gitter): Sounds cool, very good article; it would be great if we had a similar tutorial either in `mlpack/doc` or in the examples repository.
< AakashkaushikGi4>
> `shrit` Aakash kaushik (Gitter): Sounds cool, very good article; it would be great if we had a similar tutorial either in `mlpack/doc` or in the examples repository.
< AakashkaushikGi4>
Hey, thanks a lot. I think I will be able to help with that if we can have guidelines on how the examples should be written, because I was free to write things in the article, but I think we need some guidelines here so everyone can contribute.
< AakashkaushikGi4>
Also, I have completed the mean_shift_test and the loss_functions_test.cpp, so is it okay if I create the PRs on October 1st for Hacktoberfest?
< AakashkaushikGi4>
Please let me know if I am wrong about this.
< AakashkaushikGi4>
Oh wait, I think doubling it is wrong; shifting it two decimal places should work, e.g. if it was `1e-4` then `1e-6` may work?
< shrit[m]>
Aakash kaushik (Gitter): What guidelines are you thinking of? Do you have specific ideas? Regarding the article you have written, you could provide more details in a tutorial to explain more specifically what each line is doing. You can imagine the case of a user with only a theoretical background in machine learning who wants to make good use of this code.
ImQ009 has quit [Quit: Leaving]
< zoq>
AakashkaushikGi4: Yes, 1e-6 should work.
< zoq>
rcurtin: Working on another example for K-means, currently using the Pima Indians diabetes dataset; I plot the optimization steps - http://data.kurg.org/pima-indians-diabetes.gif. The clusters are almost fixed after 10 iterations; off the top of your head, do you know any dataset that might be more interesting?
< rcurtin>
ahh, unfortunately, in 2 dimensions, k-means usually converges really fast
< rcurtin>
also, usually it is only the first ~5-10 iterations (even in high dimensions) that see most of the cluster movement
< rcurtin>
personally I think that diabetes visualization is pretty cool---it's way cooler than, e.g., just k-means on two gaussians or something