ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
maa1388[m] has quit [Ping timeout: 246 seconds]
maa1388[m] has joined #mlpack
RyanBirminghamGi has quit [Ping timeout: 246 seconds]
robotcatorGitter has quit [Ping timeout: 246 seconds]
KimSangYeon-DGU[ has quit [Ping timeout: 246 seconds]
YashwantSinghPar has quit [Ping timeout: 246 seconds]
KumarArnav[m] has quit [Ping timeout: 246 seconds]
JatoJoseph[m] has quit [Ping timeout: 246 seconds]
ZanHuang[m] has quit [Ping timeout: 246 seconds]
aadarsh-asthanaG has quit [Ping timeout: 246 seconds]
RudraPatil[m] has quit [Ping timeout: 246 seconds]
jacob-earleGitte has quit [Ping timeout: 246 seconds]
SakshamRastogiGi has quit [Ping timeout: 246 seconds]
Manav-KumarGitte has quit [Ping timeout: 246 seconds]
SaraanshTandonGi has quit [Ping timeout: 246 seconds]
GitterIntegratio has quit [Ping timeout: 246 seconds]
SlackIntegration has quit [Ping timeout: 246 seconds]
JoelJosephGitter has quit [Ping timeout: 265 seconds]
RyanBirminghamGi has joined #mlpack
robotcatorGitter has joined #mlpack
KimSangYeon-DGU[ has joined #mlpack
YashwantSinghPar has joined #mlpack
JatoJoseph[m] has joined #mlpack
ZanHuang[m] has joined #mlpack
aadarsh-asthanaG has joined #mlpack
JoelJosephGitter has joined #mlpack
KumarArnav[m] has joined #mlpack
Manav-KumarGitte has joined #mlpack
SakshamRastogiGi has joined #mlpack
jacob-earleGitte has joined #mlpack
SaraanshTandonGi has joined #mlpack
RudraPatil[m] has joined #mlpack
< HimanshuPathakGi>
Hey, @saksham189, are you there? I want to discuss some problems with the RBFN
< saksham189Gitter>
yes I am here
ImQ009 has joined #mlpack
< HimanshuPathakGi>
So the problem with training an RBFN with k-means clustering is that
< HimanshuPathakGi>
We need to train our weights with least-squares regression
< HimanshuPathakGi>
The weights of the linear layer
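The setup being discussed (k-means picks the RBF centers, then a Gaussian activation layer feeds a linear output layer) can be sketched roughly like this in NumPy — a toy illustration, not mlpack's implementation; the function names and the fixed-sigma choice are assumptions:

```python
import numpy as np

def kmeans_centers(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means; returns k cluster centers.
    (Illustrative only -- mlpack has its own KMeans implementation.)"""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def rbf_activations(X, centers, sigma):
    """Gaussian RBF layer: phi_ij = exp(-||x_i - c_j||^2 / (2 sigma^2)).
    A single shared sigma is an assumption; per-center widths are also common."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))
```

With centers fixed by k-means, only the linear layer on top of these activations remains to be trained, which is where the least-squares-vs-gradient-descent question below comes in.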
< saksham189Gitter>
I think we could use gradient descent
< saksham189Gitter>
(edited) ... gradient descent => ... gradient descent shouldn't make much difference
< saksham189Gitter>
(edited) ... much difference => ... much difference or any other optimization method.
< HimanshuPathakGi>
Gradient descent is not working, I think, because
< HimanshuPathakGi>
It is not giving better predictions
< saksham189Gitter>
Hmm. I don't think I understand what you are trying to say.
< HimanshuPathakGi>
So we need to train our RBF layer also with gradient descent
< HimanshuPathakGi>
To get better prediction
< saksham189Gitter>
In the paper we are referencing, they are not using gradient descent, only k-means, for the RBF layer
< HimanshuPathakGi>
Yeah, also they are training their linear weights with least-squares regression
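The least-squares step mentioned here — fitting the linear output weights on top of fixed RBF activations in closed form — can be sketched as follows; the bias column and the use of `np.linalg.lstsq` are illustrative assumptions, not necessarily what the referenced paper or mlpack does:

```python
import numpy as np

def fit_linear_weights(Phi, Y):
    """Closed-form least-squares fit of the linear output layer:
    minimize ||[Phi 1] W - Y||^2 over W (bias column appended)."""
    A = np.hstack([Phi, np.ones((len(Phi), 1))])  # append a bias term
    W, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return W

def predict(Phi, W):
    """Linear read-out over the RBF activations plus bias."""
    A = np.hstack([Phi, np.ones((len(Phi), 1))])
    return A @ W
```

Because the RBF centers are frozen after k-means, the remaining problem is linear in W, which is why a single closed-form solve suffices here.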
< saksham189Gitter>
Also, they report very low classification error on the MNIST dataset.
< HimanshuPathakGi>
That's why I'm thinking we need to change our plan
< saksham189Gitter>
No, they mention using least squares for the SVM, not for the k-means RBF
< saksham189Gitter>
(edited) No, they ... => I think they ...
< saksham189Gitter>
I think we should check the implementation once
< HimanshuPathakGi>
Because @zoq was also saying that getting a classification error of 0.304 is not that good with a small dataset of 9s and 4s
< saksham189Gitter>
If you want to do least squares regression then you can use `mean_squared_error` with gradient descent
< saksham189Gitter>
(edited) ... `mean_squared_error` with ... => ... `mean_squared_error` loss function with ...
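The suggested alternative — minimizing a mean-squared-error loss with gradient descent over the same linear weights — might look like this toy NumPy sketch (the learning rate and iteration count are arbitrary; this is not mlpack's loss-function API):

```python
import numpy as np

def fit_weights_gd(Phi, Y, lr=0.1, iters=2000):
    """Gradient descent on the MSE loss L = mean((Phi W - Y)^2).
    The gradient with respect to W is (2/n) Phi^T (Phi W - Y)."""
    n, k = Phi.shape
    W = np.zeros((k, Y.shape[1]))
    for _ in range(iters):
        err = Phi @ W - Y
        W -= lr * (2.0 / n) * (Phi.T @ err)
    return W
```

Since the objective is the same convex least-squares problem, with a small enough learning rate this converges to the same weights as the closed-form solve — which is consistent with the comment above that the choice of optimizer shouldn't make much difference for the linear layer.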
< HimanshuPathakGi>
That I Think might help
< saksham189Gitter>
> Because @zoq was also saying that getting a classification error of 0.304 is not that good with a small dataset of 9s and 4s
< saksham189Gitter>
Yes, it's pretty bad. They report a classification error of only 3.3% with k-means RBF.
< saksham189Gitter>
I will try to take a look at your PR.
< HimanshuPathakGi>
Also they are using constant values of t
< HimanshuPathakGi>
I also tried that, but it was only increasing the classification error
< HimanshuPathakGi>
I think my PR is not in that good a shape; I was just experimenting with some things
< saksham189Gitter>
Alright, no worries. I think we should remove the gradient and stick to the k-means approach. I am pretty sure we can make it work.
< HimanshuPathakGi>
Yeah, I will try `mean_squared_error`
< rcurtin>
zoq: I think polar needs 'dot' installed :)
< rcurtin>
jeffin143[m]: RyanBirminghamGi: walragatver[m]: let me know if you want me to set up mlpack-bot on the mlboard repo; I figure I will add it to gitdub (so commits/issue comments/PRs go to the mlpack-git mailing list)
< jeffin143[m]>
Yes, that may be good, I guess, for those people who would be following the project
< zoq>
rcurtin: Is there a Debian package?
< rcurtin>
zoq: yeah, should be graphviz I think?
< zoq>
rcurtin: I guess graphviz
< zoq>
yes
< zoq>
rcurtin: Was that for the doxygen build?
< rcurtin>
looks like just the regular build---I think it builds the `doc` target too
< rcurtin>
(since doxygen is installed)
< rcurtin>
I suppose it would be no issue to disable that option
< zoq>
ahh the 'git commit' job
< zoq>
thought this only runs on master
< zoq>
anyway installed on both nodes
< rcurtin>
awesome, thanks :)
< rcurtin>
someday we will get our build troubles resolved :)
< rcurtin>
...and it will probably work for two months and then we'll have more... :-D
< zoq>
:D
< rcurtin>
thinking of builds, I got an update about dealgood (the one hosted by GT, not by Symantec)
< rcurtin>
it looks like the external RAID device that mounts /home is borked
< rcurtin>
so, I think we'll just remove it (it's not like disk space was that useful anyway)
< rcurtin>
however, it might be a few weeks until someone goes back to campus and can work on it
ImQ009 has quit [Quit: Leaving]
< shrit[m]>
rcurtin Do we need to keep compatibility for version 0 for mlpack?
< rcurtin>
shrit[m]: definitely not, that was 2010 and earlier :-D
< shrit[m]>
great, did you have a chance to see my email?
< rcurtin>
if you find things in the code that exist for such ancient compatibility, I would say it's ok to remove them
< rcurtin>
yeah, actually it is next on my list after responding to ensmallen#165, which I just did :-D
< rcurtin>
I don't mean to distract you from GSoC with ensmallen#165, so there is surely no priority on it
< shrit[m]>
Ok, no hurry, I am just checking delivery, since my domain name is not accepted by outlook and yahoo
< rcurtin>
I know the feeling, it takes a lot of DNS fighting to get everything right
< shrit[m]>
I thought of services like sendgrid and mailjet, they never worked for me.
< rcurtin>
shrit[m]: responded, hopefully I didn't write too much :)
< rcurtin>
I was compiled with the VERBOSE flag enabled, unfortunately
< rcurtin>
:)
< shrit[m]>
The issue is that outlook does not respect the RFCs and standards, and everyone is using them
< shrit[m]>
Thanks
< shrit[m]>
For the 21 MB knn file and the function names, these are just requirements for the profiler
< shrit[m]>
I need to profile with debug symbols; otherwise the profiler does not give much data
< shrit[m]>
I like verbose mode; it provides a lot of ideas
< rcurtin>
:)
< rcurtin>
yeah, I figured that the debugging symbols were necessary just for the trace
< shrit[m]>
Look what they wrote on the uclibc++ site at the end of the page https://cxx.uclibc.org/faq.html. These guys are truly suffering
< jeffin143[m]>
shrit: which time zone are you in?
< rcurtin>
shrit[m]: ha! :-D
< rcurtin>
it looks like maybe development has stopped?