ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
< HimanshuPathakGi> Hey @saksham189 For testing RBFN with the MNIST dataset, should I add that in the test file of the feed forward network?
ImQ009 has joined #mlpack
ImQ009 has quit [Ping timeout: 272 seconds]
ImQ009 has joined #mlpack
< saksham189Gitter> Yes that should be fine
favre49 has joined #mlpack
< jeffin143[m]> <jeffin143[m] "Here I am with other id"> @rcurtin:matrix.org: I wanted to ask about the GSoC project I would be doing this summer: should I make a repo under my profile?
< jeffin143[m]> Since we don't want to introduce new dependencies, it would be good to use a new repo
< jeffin143[m]> If a user wants, they can download or build the repo and then move ahead
< himanshu_pathak[> Hey zoq saksham189 (Gitter)
< himanshu_pathak[> I want to add the MNIST dataset, but when I convert it to CSV it is 104 MB. Is there any other way I can do this and add the MNIST dataset?
< kartikdutt18[m]> Hey Himanshu, We have a smaller subset of MNIST in mlpack consisting only of 9s and 4s [here](https://github.com/mlpack/mlpack/blob/master/src/mlpack/tests/data/mnist_first250_training_4s_and_9s.tar.bz2). Is that useful?
< kartikdutt18[m]> This is used for training a CNN in convolutional_neural_network_test.
< himanshu_pathak[> Yeah, I discussed with saksham and the paper we are following used the whole MNIST dataset, so we decided to test the RBFN on that dataset.
< himanshu_pathak[> maybe zoq: can give us a better suggestion on this
< kartikdutt18[m]> Sure. Also, other than this we have a downloader in C++ that downloads the MNIST dataset from mlpack.org, and we can also download it using CMake.
< himanshu_pathak[> kartikdutt18: Thanks for helping :) Actually, I am also worried about the time the network will take to train on the whole dataset.
favre49 has quit [Remote host closed the connection]
ImQ009 has quit [Quit: Leaving]
< zoq> himanshu_pathak[: I would test it on a subset first, and if that works let's move to the full set; I could let it train on a separate machine.
< himanshu_pathak[> zoq: Yeah, I have tested with only the 9s and 4s; it gave a classification error of 0.304 in the online builds, while on my local computer it gives a classification error of 0.23. I have been waiting for the last 2 hours and it is still training on the dataset of 60,000 image samples :)
< zoq> himanshu_pathak[: Hm, 0.3 isn't that good for such a simple dataset; if I remember right, the dataset isn't equally split.
< himanshu_pathak[> zoq: Yeah, I should try to look at the paper again; maybe I am missing something
< HimanshuPathakGi> Hey @saksham189 I have gone through the paper. I think we are missing that we also have to calculate the beta parameter, as you can see in this excerpt from the paper: "One of the most popular choices for h is the Gaussian kernel, defined by h(‖x−z‖) = K(x,z) = exp(−‖x−z‖²/(2t))"
< HimanshuPathakGi> here 1/(2t) is the same as the beta parameter; I will try to implement this
< HimanshuPathakGi> I will have to change the implementations of the activation functions as well