verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
shubham_94 has quit [Quit: Connection closed for inactivity]
Awcrr has quit [Remote host closed the connection]
Awcrr has joined #mlpack
Awcrr_ has joined #mlpack
Awcrr has quit [Ping timeout: 240 seconds]
mtr_ has joined #mlpack
tmsimont_ has joined #mlpack
tmsimont has joined #mlpack
tmsimont__ has quit [Ping timeout: 252 seconds]
tmsimont_ has quit [Ping timeout: 240 seconds]
chrishenx has joined #mlpack
< chrishenx> Hi there, I'm compiling mlpack on OS X; lots of unused parameter warnings
< chrishenx> let's see if everything goes well
< chrishenx> after one hour compiling armadillo
< rcurtin> chrishenx: yeah, at least on my OS X setup a lot of those warnings are inside boost
< rcurtin> if you find any that can be fixed in mlpack, I'd be happy to merge in a PR
< chrishenx> rcurtin: step by step :) And yeah, all those warnings come from boost
< rcurtin> yeah, it was all in the serialization code I think
< rcurtin> maybe newer versions of boost have it fixed; not sure
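If the noise gets in the way, warnings like these can usually be silenced at configure time (assuming GCC or Clang as the compiler) by passing the relevant flag through CMAKE_CXX_FLAGS, for example:

    cmake -DCMAKE_CXX_FLAGS="-Wno-unused-parameter" ../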
skon46 has quit [Remote host closed the connection]
< chrishenx> success!
< rcurtin> :)
chrishenx has quit [Ping timeout: 250 seconds]
chrishenx has joined #mlpack
Awcrr has joined #mlpack
Awcrr_ has quit [Ping timeout: 246 seconds]
Awcrr has quit []
Nilabhra has joined #mlpack
omid has joined #mlpack
< omid> !
omid has quit [Client Quit]
praveench has joined #mlpack
Awcrr has joined #mlpack
vineet_ has joined #mlpack
chrishenx has quit [Ping timeout: 276 seconds]
vineet_ has quit [Ping timeout: 244 seconds]
vineet_ has joined #mlpack
chrishenx has joined #mlpack
chrishenx has quit [Ping timeout: 260 seconds]
praveench has quit [Ping timeout: 252 seconds]
ranjan123_ has joined #mlpack
Awcrr_ has joined #mlpack
Awcrr has quit [Read error: Connection reset by peer]
vineet_ has quit [Ping timeout: 244 seconds]
Awcrr_ has quit [Read error: Connection reset by peer]
Awcrr has joined #mlpack
praveench has joined #mlpack
chvsp has joined #mlpack
chvsp has quit [Client Quit]
chrishenx has joined #mlpack
chrishenx has quit [Ping timeout: 260 seconds]
< praveench> Hi Marcus, I am praveen (chvsp on github). As you might know, I am presently working on writing tests for the K. Subavathi initialisation.
< praveench> I have gone through the paper as well as the implementation and found that it fails to consider some of the finer points of the paper (for example, there is a different equation for computing the weights of the hidden-to-output layer). I think we are not getting the full benefit of KS initialisation. I just wanted to ask if this was done intentionally? Otherwise I would start work on it.
praveench_ has joined #mlpack
praveench_ has quit [Client Quit]
praveench has quit [Ping timeout: 252 seconds]
praveench has joined #mlpack
< praveench> Hi, I am praveen (chvsp on github). As you might know, I am presently working on writing tests for the K. Subavathi initialisation.
< praveench> I have gone through the paper as well as the implementation and found that it fails to consider some of the finer points of the paper (for example, there is a different equation for computing the weights of the hidden-to-output layer). I think we are not getting the full benefit of KS initialisation. I just wanted to ask if this was done intentionally? Otherwise I would start work on it.
< praveench> Really sorry if this message is repeated. Our network connection here is not so stable
chrishenx has joined #mlpack
chrishenx has quit [Ping timeout: 244 seconds]
ashutosh has joined #mlpack
Awcrr has quit []
chrishenx has joined #mlpack
chrishenx has quit [Ping timeout: 244 seconds]
agobin has joined #mlpack
vineet_ has joined #mlpack
vineet_ has quit [Ping timeout: 276 seconds]
chrishenx has joined #mlpack
chrishenx has quit [Ping timeout: 244 seconds]
praveench has quit [Quit: Page closed]
Awcrr has joined #mlpack
tsathoggua has quit [Quit: Konversation terminated!]
Nilabhra has quit [Ping timeout: 244 seconds]
Awcrr has quit [Remote host closed the connection]
chrishenx has joined #mlpack
chrishenx has quit [Ping timeout: 276 seconds]
Nilabhra has joined #mlpack
Awcrr has joined #mlpack
chrishenx has joined #mlpack
ashutosh has quit [Quit: Connection closed for inactivity]
chrishenx has quit [Ping timeout: 276 seconds]
Awcrr has quit [Remote host closed the connection]
Nilabhra has quit [Ping timeout: 264 seconds]
Nilabhra has joined #mlpack
Awcrr has joined #mlpack
Awcrr has quit [Ping timeout: 276 seconds]
mentekid has joined #mlpack
chrishenx has joined #mlpack
Nilabhra has quit [Remote host closed the connection]
chrishenx has quit [Ping timeout: 268 seconds]
vineet_ has joined #mlpack
chrishenx has joined #mlpack
chrishenx has quit [Ping timeout: 268 seconds]
vineet_ has quit [Ping timeout: 264 seconds]
praveench has joined #mlpack
praveench has quit [Client Quit]
praveench has joined #mlpack
< zoq> praveench: Hello, I tested the weight update process and found that RMSprop is the preferable method in my test cases. If you like, you can test the method and prove me wrong.
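For reference, the RMSprop update zoq mentions keeps a running average of the squared gradient and scales each coordinate's step by it. A minimal Armadillo sketch (the function name and default values here are illustrative, not mlpack's actual optimizer code):

    #include <armadillo>

    // One RMSprop step; meanSquaredGradient is carried between calls.
    // stepSize, alpha (decay), and eps are illustrative defaults.
    void RMSpropUpdate(arma::vec& weights,
                       const arma::vec& gradient,
                       arma::vec& meanSquaredGradient,
                       const double stepSize = 0.01,
                       const double alpha = 0.99,
                       const double eps = 1e-8)
    {
      // Exponential moving average of the element-wise squared gradient.
      meanSquaredGradient = alpha * meanSquaredGradient
          + (1.0 - alpha) * (gradient % gradient);

      // Scale each coordinate's step by the root of that average.
      weights -= stepSize * gradient / (arma::sqrt(meanSquaredGradient) + eps);
    }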
christie has joined #mlpack
chrishenx has joined #mlpack
chrishenx has quit [Ping timeout: 268 seconds]
jeshuren has joined #mlpack
praveench has quit [Quit: Page closed]
chrishenx has joined #mlpack
chrishenx has quit [Ping timeout: 260 seconds]
jeshuren has quit [Quit: Page closed]
spock_ has quit [Ping timeout: 240 seconds]
Cooler_ has quit [Ping timeout: 248 seconds]
chrishenx has joined #mlpack
ank_95_ has joined #mlpack
chrishenx has quit [Ping timeout: 260 seconds]
Awcrr has joined #mlpack
Gas has joined #mlpack
Gas is now known as Guest42198
Guest42198 has quit [Client Quit]
tmsimont_ has joined #mlpack
Awcrr has quit [Remote host closed the connection]
tmsimont has quit [Ping timeout: 251 seconds]
Cooler_ has joined #mlpack
spock_ has joined #mlpack
Awcrr has joined #mlpack
mentekid has quit [Ping timeout: 252 seconds]
ranjan123_ has quit [Ping timeout: 252 seconds]
Chinmaya has joined #mlpack
Chinmaya has quit [Quit: Ex-Chat]
spock_ has quit [Ping timeout: 244 seconds]
Cooler_ has quit [Ping timeout: 244 seconds]
ranjan123_ has joined #mlpack
acbull has joined #mlpack
mentekid has joined #mlpack
acbull has quit [Ping timeout: 252 seconds]
vineet has joined #mlpack
shubham_94 has joined #mlpack
tmsimont_ has quit [Quit: Leaving]
anveshi has joined #mlpack
Nilabhra has joined #mlpack
ff has joined #mlpack
mrbean has quit [Ping timeout: 268 seconds]
mrbean has joined #mlpack
ff has quit [Ping timeout: 240 seconds]
christie has quit [Quit: Page closed]
ff has joined #mlpack
mrbean has quit [Ping timeout: 240 seconds]
anveshi has quit [Quit: Page closed]
uzipaz has joined #mlpack
< uzipaz> rcurtin: for parallel SGD, is the Hogwild parallel algorithm a good candidate for this?
Awcrr has quit [Quit: zzzZZZZZ]
vineet has quit [Ping timeout: 250 seconds]
mrbean has joined #mlpack
ff has quit [Ping timeout: 248 seconds]
Rishabh_ has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
Rishabh has joined #mlpack
Rishabh is now known as Guest95780
ff has joined #mlpack
mrbean has quit [Ping timeout: 240 seconds]
toshad has joined #mlpack
tsathoggua has joined #mlpack
ff has quit [Ping timeout: 250 seconds]
mrbean has joined #mlpack
Guest95780 is now known as Rishabh_
Rishabh_ has quit [Remote host closed the connection]
Rishabh_ has joined #mlpack
tsathoggua has quit [Quit: Konversation terminated!]
yoshi2095 has joined #mlpack
yoshi2095 has left #mlpack []
jereliu has joined #mlpack
kirizaki has joined #mlpack
jereliu has quit [Client Quit]
praveench has joined #mlpack
< praveench> Hi, I am interested in developing RBM and DBN implementations for mlpack this GSOC 2016. I have read a lot of literature on the topics and am quite confident about the implementation part. I am also familiar with the mlpack codebase. But I have no prior experience in writing tests for these models.
< praveench> I am quite well versed with Boost, but I would like to ask how to get started on writing tests for these kinds of algorithms?
< praveench> I plan to find some research papers which give the implementation as well as results when the algorithm is run.
< praveench> I can compare the results of my implementation with those given in the paper.
< praveench> Am I going in the right direction or is there a better method?
< zoq> praveench: Hello, in the case of an RBM you can come up with some really nice tests; take a look at this page: http://deeplearning.net/tutorial/rbm.html#tracking-progress.
< zoq> praveench: So one test you could do is to check that the samples from the RBM look like your training data.
< zoq> praveench: Another test could check whether the latent values look like a smooth Gabor filter.
< zoq> out for dinner
< praveench> okay will have a look at it
< praveench> Thanks a lot
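One way to phrase the "samples look like the training data" check zoq describes as a Boost unit test is sketched below. RBMType, Train(), and Gibbs() are hypothetical placeholders (mlpack has no RBM class at this point), and the tolerance is arbitrary:

    #include <armadillo>
    #include <boost/test/unit_test.hpp>

    BOOST_AUTO_TEST_CASE(RBMSampleTest)
    {
      // Tiny binary training set with two easily recognisable patterns.
      arma::mat data = { { 1, 1, 0, 0 },
                         { 1, 1, 0, 0 },
                         { 0, 0, 1, 1 },
                         { 0, 0, 1, 1 } };
      data = data.t();  // one column per training point

      RBMType rbm(data.n_rows, 2 /* hidden units */);  // hypothetical class
      rbm.Train(data, 100 /* epochs */);                // hypothetical method

      // Draw samples by Gibbs sampling and require them to be much closer to
      // the training data than random noise would be.
      arma::mat samples = rbm.Gibbs(data, 1000 /* steps */);  // hypothetical
      const double error = arma::norm(samples - data, "fro") / data.n_elem;
      BOOST_REQUIRE_LT(error, 0.2);  // arbitrary tolerance
    }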
chrishenx has joined #mlpack
chrishenx has quit [Ping timeout: 248 seconds]
chrishenx has joined #mlpack
praveench has quit [Quit: Page closed]
dd_ has joined #mlpack
< dd_> mlpack_emst works great. Is there a method/class that will create a dendrogram?
ranjan123__ has joined #mlpack
ranjan123_ has quit [Ping timeout: 252 seconds]
chrishenx has quit [Ping timeout: 248 seconds]
chrishenx has joined #mlpack
chrishenx has quit [Ping timeout: 240 seconds]
chrishenx has joined #mlpack
< rcurtin> dd_: unfortunately there is no support for that right now
< rcurtin> you could reconstruct a dendrogram fairly easily though
< rcurtin> the top will be the edge with maximum length
< rcurtin> and as you go down the dendrogram the edges get shorter
chrishenx has quit [Ping timeout: 240 seconds]
< rcurtin> uzipaz: hogwild is a good candidate, yes, but it's also quite simple so I think that a good project would include more parallel sgd techniques
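For context, the core of the Hogwild scheme is simply several threads running SGD on one shared weight vector with no locking at all. A self-contained OpenMP sketch on synthetic least-squares data (purely illustrative, not mlpack code):

    #include <omp.h>
    #include <cstddef>
    #include <random>
    #include <vector>

    // Minimal Hogwild-style SGD: all threads update the shared weight vector
    // `w` without any locking; occasional lost updates are tolerated.
    int main()
    {
      const std::size_t n = 10000, d = 10;
      std::mt19937 gen(42);
      std::normal_distribution<double> dist(0.0, 1.0);

      // Synthetic data: y = x . trueW + small noise.
      std::vector<std::vector<double>> X(n, std::vector<double>(d));
      std::vector<double> y(n), trueW(d), w(d, 0.0);
      for (std::size_t j = 0; j < d; ++j)
        trueW[j] = dist(gen);
      for (std::size_t i = 0; i < n; ++i)
      {
        double dot = 0.0;
        for (std::size_t j = 0; j < d; ++j)
        {
          X[i][j] = dist(gen);
          dot += X[i][j] * trueW[j];
        }
        y[i] = dot + 0.01 * dist(gen);
      }

      const double stepSize = 0.01;
      for (std::size_t epoch = 0; epoch < 5; ++epoch)
      {
        // Each thread takes a slice of the points and applies plain SGD steps
        // directly to the shared w; no mutex, no atomics.
        #pragma omp parallel for
        for (long i = 0; i < (long) n; ++i)
        {
          double pred = 0.0;
          for (std::size_t j = 0; j < d; ++j)
            pred += X[i][j] * w[j];
          const double err = pred - y[i];
          for (std::size_t j = 0; j < d; ++j)
            w[j] -= stepSize * err * X[i][j];
        }
      }
      return 0;
    }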
< rcurtin> dd_: someone volunteered to write a minimum linkage clustering module, so maybe if that gets done the code would help you (since that will give you a dendrogram), but I can't guarantee that the code will ever get finished or merged (I haven't seen any code yet)
Nilabhra has quit [Read error: Connection reset by peer]
chrishenx has joined #mlpack
< toshad> rcurtin:
< toshad> Hi
< toshad> I am interested in the "Implement tree types" project for GSOC 2016 and I was looking at the tickets to work on
< toshad> But most of them are closed and the remaining are difficult ones
kirizaki has quit [Ping timeout: 252 seconds]
< dd_> understood. thinking out loud: the function signature would take the mlpack_emst output as input, then back out the parent/child relationships and sort. The output would be a sorted tree.
chrishenx has quit [Remote host closed the connection]
palashahuja has joined #mlpack
palashahuja has quit [Client Quit]
kirizaki has joined #mlpack
chrishenx has joined #mlpack
< kirizaki> rcurtin: I would like to change OS and I'm thinking of installing something to test mlpack::backtrace against
< kirizaki> to widen its functionality
ranjan123__ has quit [Ping timeout: 252 seconds]
< kirizaki> but I'm really, but really against windows ;f
< kirizaki> I really hate this OS :P
< kirizaki> but maybe I should install FreeBSD ?
< rcurtin> that could be fun :)
< rcurtin> I used to use freebsd years ago
< rcurtin> I liked it but always stuck with Linux
< rcurtin> I think zoq is more of a freebsd expert than me :)
< zoq> I really like FreeBSD, mainly for three reasons: 1. the awesome ports collection, 2. jails, 3. ZFS
< zoq> I guess you could use docker instead of jails on linux and maybe btrfs instead of ZFS but at least on debian apt-get isn't nearly as good as ports on FreeBSD.
< kirizaki> I don't understand anything of what you guys are saying :P
< kirizaki> port collection = some kind of repos for packages?
chrishenx has quit [Ping timeout: 246 seconds]
< kirizaki> jails = dunno; ZFS = dunno [wtf?]
< zoq> port collection = repo for packages
< zoq> ZFS = a filesystem, something like NTFS for Windows :)
< zoq> jails = environment separate from the rest of the system, something like a virtual machine
< kirizaki> but is there a big difference in use for a ''newbie'' user like me? FreeBSD vs Linux Mint/Arch?
< zoq> kirizaki: hm, I guess if you prefer to have a desktop environment like GNOME or KDE, I would go with Linux.
< kirizaki> yeah, I think I'm gonna stay with linux :P
< kirizaki> but maybe archlinux this time :P
< zoq> I think rcurtin uses archlinux, but I'm not sure.
< rcurtin> I'm a debian user, but I do have an arch box
< rcurtin> I used arch for a while, but after pacman uninstalled itself on multiple occasions I decided that it was time to move on :)
shubham_94 has quit [Quit: Connection closed for inactivity]
chrishenx has joined #mlpack
kirizaki has quit [Quit: Page closed]
kirizaki has joined #mlpack
kirizaki has quit [Quit: Page closed]
uzipaz has quit [Ping timeout: 252 seconds]
mentekid has quit [Ping timeout: 248 seconds]
< rcurtin> dd_: yeah, it would definitely not be too hard to write some standalone program that took the mlpack_emst output and did what you described
< rcurtin> another option would be to work at the C++ level and work with the DualTreeBoruvka class, but if you are looking for a quick fix then what you described is probably the best way to go
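A minimal sketch of the standalone program discussed above: read the mlpack_emst edge list, sort the edges by length, and merge components with a union-find so each merge prints one level of the single-linkage dendrogram. The input layout assumed here (one edge per line as index1,index2,length, read from stdin) may need adjusting to the actual mlpack_emst output format:

    #include <algorithm>
    #include <array>
    #include <cstdio>
    #include <iostream>
    #include <numeric>
    #include <vector>

    // Union-find with path halving.
    static int Find(std::vector<int>& parent, int i)
    {
      while (parent[i] != i)
      {
        parent[i] = parent[parent[i]];
        i = parent[i];
      }
      return i;
    }

    int main()
    {
      // Assumed input: one edge per line, "index1,index2,length".
      std::vector<std::array<double, 3>> edges;
      double a, b, len;
      while (std::scanf("%lf,%lf,%lf", &a, &b, &len) == 3)
        edges.push_back({ a, b, len });

      // Shortest edges merge first; the longest edge ends up at the top.
      std::sort(edges.begin(), edges.end(),
          [](const std::array<double, 3>& x, const std::array<double, 3>& y)
          { return x[2] < y[2]; });

      const int points = (int) edges.size() + 1;  // an MST on n points has n - 1 edges
      std::vector<int> parent(points);
      std::iota(parent.begin(), parent.end(), 0);

      // Each merge is one internal node of the single-linkage dendrogram.
      for (const std::array<double, 3>& e : edges)
      {
        const int u = Find(parent, (int) e[0]);
        const int v = Find(parent, (int) e[1]);
        std::cout << "merge cluster " << u << " and cluster " << v
                  << " at height " << e[2] << std::endl;
        parent[u] = v;
      }
      return 0;
    }

Usage would be along the lines of ./dendrogram < emst_edges.csv, with the program name and file name purely illustrative.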
< rcurtin> toshad: I don't have any other tickets that I can give, sorry. the best advice I can give is to take a look at the mailing list archive for more information on the project
< rcurtin> and if you can come up with any improvements to the existing trees, that could be a nice contribution
< rcurtin> either functionality improvements or runtime improvements
< toshad> ok, thanks
< toshad> I was looking at the cover tree issue #275; is that something that can be done?
< rcurtin> yes, if you are willing to do a lot of reading of the code and understanding of the cover tree algorithm
< rcurtin> I'd start by reading the cover tree paper
na1taneja2821 has joined #mlpack
chrishenx has quit [Ping timeout: 240 seconds]
chrishenx has joined #mlpack