ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
gotadachi has quit [Quit: Leaving...]
gotadachi has joined #mlpack
kyrre has quit [Read error: Connection reset by peer]
kyrre has joined #mlpack
< jjb[m]> rcurtin when you have a moment can you comment on the documentation site note: <https://github.com/mlpack/mlpack/pull/2633> ?
ImQ009 has joined #mlpack
PranavReddyP16Gi has joined #mlpack
ArijitRoyGitter[ has joined #mlpack
Saksham[m] has joined #mlpack
shrit[m] has joined #mlpack
AbhinavGudipati[ has joined #mlpack
birm[m] has joined #mlpack
_slack_mlpack_U0 has joined #mlpack
KhizirSiddiquiGi has joined #mlpack
outmanipulateGit has joined #mlpack
jeffin143[m] has joined #mlpack
kuhaku has joined #mlpack
sreenik[m] has joined #mlpack
< AakashkaushikGi4> Hey everyone, the article got published [here](https://medium.com/syntechx/convolutional-neural-network-cnn-in-c-52c9ed47a6ea), maybe give it a look.
sakshamb189[m] has left #mlpack []
vansika__ has quit [Read error: Connection reset by peer]
vansika__ has joined #mlpack
< say4n> rcurtin: yes, totally!
neteraxeGitter[m has left #mlpack []
< rcurtin> jjb[m]: sorry for the slow response, I hadn't managed to get to it---answering now
< zoq> AakashkaushikGi4: Cool :)
travis-ci has joined #mlpack
< travis-ci> shrit/examples#89 (rl_cpp - 9778dfb : Omar Shrit): The build passed.
< travis-ci> Change view : https://github.com/shrit/examples/compare/45e2027519f2^...9778dfb743ff
travis-ci has left #mlpack []
< shrit[m]> Aakash kaushik (Gitter): Sounds cool, very good article! It would be great if we had a similar tutorial either in `mlpack/doc` or in the examples repository.
< AakashkaushikGi4> Hey, thanks a lot! I think I will be able to help with that if we can have guidelines on how the examples should be written. I was free to write things however I liked in the article, but I think we need some guidelines here so everyone can contribute.
< AakashkaushikGi4> Also, I have completed the mean_shift_test and the loss_functions_test.cpp, so is it okay if I create the PRs on October 1st for Hacktoberfest?
< AakashkaushikGi4> For `BOOST_REQUIRE_CLOSE_FRACTION(x, y, z)`, what should the equivalent Catch2 expression be? According to [this](https://stackoverflow.com/questions/1093453/difference-between-boost-check-close-and-boost-check-close-fraction), we could go with a rule of thumb: if `z = 1e-4` in Boost, we convert the expression to `REQUIRE(x == Approx(y).epsilon(1e-8))`, basically doubling the decimal points.
< AakashkaushikGi4> Please let me know if I am wrong about this.
< AakashkaushikGi4> Oh wait, I think doubling it is wrong; adding two decimal places should work, e.g. if it was `1e-4` then `1e-6` may work?
< shrit[m]> Aakash kaushik (Gitter): What guidelines are you thinking of? Do you have specific ideas? Regarding the article you wrote, you could provide more detail in a tutorial, explaining more specifically what each line is doing. Imagine the case of a user who has only a theoretical background in machine learning and wants to make good use of this code.
ImQ009 has quit [Quit: Leaving]
< zoq> AakashkaushikGi4: Yes, 1e-6 should work.
< zoq> rcurtin: Working on another example for k-means, currently using the Pima Indians diabetes dataset; I plot the optimization steps - http://data.kurg.org/pima-indians-diabetes.gif. The clusters are almost fixed after 10 iterations; off the top of your head, do you know any dataset that might be more interesting?
< rcurtin> ahh, unfortunately, in 2 dimensions, k-means usually converges really fast
< rcurtin> also, usually it is only the first ~5-10 iterations (even in high dimensions) that see most of the cluster movement
< rcurtin> personally I think that diabetes visualization is pretty cool---it's way cooler than, e.g., just k-means on two gaussians or something