verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
vinayakvivek has joined #mlpack
govg has quit [Ping timeout: 258 seconds]
clicksaswat has joined #mlpack
govg has joined #mlpack
govg has quit [Ping timeout: 260 seconds]
govg has joined #mlpack
govg has quit [Ping timeout: 260 seconds]
govg has joined #mlpack
Trion has joined #mlpack
govg has quit [Ping timeout: 240 seconds]
govg has joined #mlpack
vivekp has quit [Ping timeout: 240 seconds]
govg has quit [Ping timeout: 240 seconds]
govg has joined #mlpack
govg has quit [Ping timeout: 268 seconds]
< Trion> Made this Medium article explaining the evolution strategy algorithm: https://goo.gl/fGSu15. gym_tcp might get popular among my friends now :P
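(A generic sketch of the evolution strategy update an article like this typically covers; it is not code from the post. The idea: perturb the parameters with Gaussian noise, score each perturbation, and step along the reward-weighted average of the noise. The toy quadratic objective, population size, and step sizes below are all illustrative assumptions.)

    #include <armadillo>

    // Toy objective to maximize; the optimum is at theta = (3, -2).
    static double Reward(const arma::vec& theta)
    {
      const arma::vec target = {3.0, -2.0};
      return -arma::dot(theta - target, theta - target);
    }

    int main()
    {
      arma::arma_rng::set_seed(42);
      arma::vec theta(2, arma::fill::zeros);
      const size_t pop = 50;     // perturbations per iteration
      const double sigma = 0.1;  // noise scale
      const double alpha = 0.03; // step size

      for (size_t iter = 0; iter < 300; ++iter)
      {
        arma::vec step(2, arma::fill::zeros);
        for (size_t i = 0; i < pop; ++i)
        {
          const arma::vec eps(2, arma::fill::randn);
          step += Reward(theta + sigma * eps) * eps;
        }
        // The reward-weighted noise approximates the gradient of the
        // expected reward; subtracting a baseline would reduce variance.
        theta += alpha * step / (pop * sigma);
      }
      theta.print("theta after ES (should be near 3, -2):");
    }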
Trion has quit [Quit: Have to go, see ya!]
clicksaswat has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
vss has joined #mlpack
< vss> Trion: your post was pretty fun to read :P
vivekp has joined #mlpack
clicksaswat has joined #mlpack
clicksaswat has quit [Client Quit]
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
clicksaswat has joined #mlpack
< zoq> Trion: I agree with vss, really neat post.
sumedhghaisas has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
clicksaswat has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
mjscott has joined #mlpack
< mjscott> Hello, does anyone have experience with the mlpack SparseSVD factorizers?
< rcurtin> mjscott: yes but I am stepping out for a while so it might he a bit before ai can respond in full
< rcurtin> *before I
< rcurtin> *be a bit
< rcurtin> phone spelling is hard :)
< rcurtin> if you want to leave a request I can answer when I am back in a couple hours
< mjscott> Sure, yeah, I mainly just have a question. I'm using the SparseSVDCompleteIncrementalFactorizer for a huge dataset; the matrix is about 20000 x 500000 with a density of ~0.01 (1% of entries are filled). When I run the factorizer, it converges successfully, but produces factor matrices that are sparse :/
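(A minimal sketch, under assumptions, of the setup mjscott describes: it presumes the SparseSVDCompleteIncrementalFactorizer typedef from mlpack's AMF module with the usual Apply(V, rank, W, H) interface; the filename and rank 50 are illustrative, not from the conversation.)

    #include <mlpack/core.hpp>
    #include <mlpack/methods/amf/amf.hpp>
    #include <iostream>

    using namespace mlpack::amf;

    int main()
    {
      // ~20000 x 500000 rating matrix with ~1% of entries filled.
      arma::sp_mat V;
      V.load("ratings.txt", arma::coord_ascii); // illustrative input file

      // Rank-50 factorization V ~= W * H.
      arma::mat W, H;
      SparseSVDCompleteIncrementalFactorizer svd;
      const double residue = svd.Apply(V, 50, W, H);

      // W (n x 50) and H (50 x m) are dense arma::mat outputs, but
      // their entries can still converge to (near) zero.
      std::cout << "final residue: " << residue << std::endl;
    }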
vivekp has quit [Ping timeout: 260 seconds]
govg has joined #mlpack
vivekp has joined #mlpack
vivekp has quit [Changing host]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 252 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 260 seconds]
vss has quit [Quit: Page closed]
vivekp has joined #mlpack
mjscott has quit [Ping timeout: 260 seconds]
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
kaspian has joined #mlpack
< kaspian> asd
< kaspian> heya
kaspian has quit [Client Quit]
vinayakvivek has quit [Quit: Connection closed for inactivity]
< rcurtin> mjscott: are there at least a few entries in each column and row? it might converge to zero if there are no nonzero entries in some of the columns and rows
< rcurtin> took me more than a couple hours to get back, sorry about that...
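(One way to test rcurtin's hypothesis is to count the nonzero entries per row and per column of V. A minimal plain-Armadillo sketch; the filename is illustrative.)

    #include <armadillo>
    #include <iostream>

    int main()
    {
      arma::sp_mat V;
      V.load("ratings.txt", arma::coord_ascii); // illustrative input file

      arma::uvec rowCounts(V.n_rows, arma::fill::zeros);
      arma::uvec colCounts(V.n_cols, arma::fill::zeros);

      // Sparse iterators visit only the nonzero entries.
      for (arma::sp_mat::const_iterator it = V.begin(); it != V.end(); ++it)
      {
        ++rowCounts(it.row());
        ++colCounts(it.col());
      }

      std::cout << "empty rows: " << arma::accu(rowCounts == 0) << "\n"
                << "empty cols: " << arma::accu(colCounts == 0) << "\n";
    }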
sumedhghaisas has joined #mlpack
< sumedhghaisas> zoq: Hey Marcus, so I was going through the LSTM code and was wondering... all the gates are represented with a single linear layer whose output dimension is 4 times the dimension of each gate
< sumedhghaisas> that's technically not the same as training 4 different linear layers, right?
< sumedhghaisas> wait... it's just a linear layer... sorry, it is... they are the same...
< sumedhghaisas> stupid of me :P
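(The equivalence sumedhghaisas arrives at can be checked directly: stacking the four gate weight matrices into one (4h x d) matrix and doing a single product gives exactly the four separate gate pre-activations. A minimal plain-Armadillo sketch, not mlpack's actual LSTM code; biases and recurrent weights are omitted for brevity.)

    #include <armadillo>
    #include <cassert>

    int main()
    {
      const arma::uword d = 5, h = 3;        // input and gate dimensions
      arma::mat Wi(h, d, arma::fill::randn); // input gate weights
      arma::mat Wf(h, d, arma::fill::randn); // forget gate weights
      arma::mat Wo(h, d, arma::fill::randn); // output gate weights
      arma::mat Wg(h, d, arma::fill::randn); // cell candidate weights
      arma::vec x(d, arma::fill::randn);

      // One fused layer: stack the four weight matrices vertically and
      // compute all four gate pre-activations in a single (4h x d) product.
      const arma::mat W = arma::join_cols(arma::join_cols(Wi, Wf),
                                          arma::join_cols(Wo, Wg));
      const arma::vec fused = W * x;

      // The slices of the fused output match the four separate layers.
      assert(arma::approx_equal(fused.rows(0, h - 1),     Wi * x, "absdiff", 1e-12));
      assert(arma::approx_equal(fused.rows(h, 2*h - 1),   Wf * x, "absdiff", 1e-12));
      assert(arma::approx_equal(fused.rows(2*h, 3*h - 1), Wo * x, "absdiff", 1e-12));
      assert(arma::approx_equal(fused.rows(3*h, 4*h - 1), Wg * x, "absdiff", 1e-12));
    }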