verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
chenzhe has joined #mlpack
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
kris1 has quit [Quit: kris1]
chenzhe has quit [Ping timeout: 246 seconds]
kris1 has joined #mlpack
sumedhghaisas has joined #mlpack
< sumedhghaisas>
rcurtin, zoq: Sorry for the delay. Removing the prevOutput copy in GRU is creating some memory issues in my code. I am sure it is something small; I will definitely figure it out tomorrow and finish that task.
< sumedhghaisas>
For the shift to arma::cube... it does give slightly better results, but the resize cost is really high compared to that boost....
sumedhghaisas has quit [Quit: Ex-Chat]
sumedhghaisas_ has joined #mlpack
kris1 has left #mlpack []
sumedhghaisas_ has quit [Ping timeout: 260 seconds]
Trion has joined #mlpack
aashay has joined #mlpack
Trion has quit [Remote host closed the connection]
Trion has joined #mlpack
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
shikhar has joined #mlpack
govg has joined #mlpack
sgupta has quit [Read error: Connection reset by peer]
govg has quit [Quit: leaving]
mikeling has joined #mlpack
aashay has quit [Quit: Connection closed for inactivity]
govg has joined #mlpack
Trion1 has joined #mlpack
Trion has quit [Read error: Connection reset by peer]
Trion1 is now known as Trion
Trion has quit [Remote host closed the connection]
aashay has joined #mlpack
sgupta has joined #mlpack
Trion has joined #mlpack
sumedhghaisas_ has joined #mlpack
sgupta has quit [Ping timeout: 240 seconds]
sumedhghaisas_ has quit [Ping timeout: 260 seconds]
kris1 has joined #mlpack
< kris1>
if I make changes to some file and run make, it actually just reuses the old (cached) object files instead of recompiling my changes. This is happening with clang. I don't want to run make clean again and again, since a full rebuild takes a lot of time
< kris1>
any ideas on how to rebuild only the changed files?
< kris1>
okay, this happens only if I make changes to hpp files, not to the cpp files.
< kris1>
cpp files are built again; okay, I think I got it
kris1 has quit [Quit: kris1]
sgupta has joined #mlpack
sgupta has quit [Client Quit]
sgupta has joined #mlpack
kris1 has joined #mlpack
< sgupta>
rcurtin: I was trying to add the Docker Pipeline plugin (it is the most popular, with ~64k downloads), but it requires a restart and some plugin changes. Please have a look.
< rcurtin>
sgupta: hm, unfortunately we definitely can't restart Jenkins until the benchmark jobs for mlpack and shogun are done
< rcurtin>
I expect the mlpack benchmarking job to be done by June 16th (just guessing based on last time)
< rcurtin>
that's too long to wait, so why don't you just run the containers like you are for now, and we will add the plugin later?
< rcurtin>
it should be a pretty easy change
< sgupta>
rcurtin: okay, no issues. Meanwhile, could you let me know what to do next?
< rcurtin>
yes, let me send you an email
< sgupta>
rcurtin: sure thanks :)
< rcurtin>
kris1: unfortunately if you modify a header that is used by a lot of the library, then basically everything will need to be rebuilt
< rcurtin>
if you are trying to save some time, one thing you could do (in your local branch only) is to comment out all the test files that you don't care about in src/mlpack/tests/CMakeLists.txt, then the tests will be a lot quicker to build
< kris1>
Okay, thanks :) I will do that
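To illustrate rcurtin's suggestion, here is a rough sketch of the kind of local-only edit he means; the file names below are examples only and may not match the actual contents of src/mlpack/tests/CMakeLists.txt, which lists the test sources inside an add_executable() call:

    # src/mlpack/tests/CMakeLists.txt -- local change only, don't commit it.
    add_executable(mlpack_test
      mlpack_test.cpp
      # adaboost_test.cpp           # commented out: not needed right now
      # cf_test.cpp                 # commented out: not needed right now
      recurrent_network_test.cpp    # keep only the tests you care about
    )

After trimming the list, re-running make builds a much smaller mlpack_test target, so header changes trigger far less recompilation.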
kris1 has quit [Quit: kris1]
sumedhghaisas_ has joined #mlpack
kris1 has joined #mlpack
Trion has quit [Quit: Have to go, see ya!]
kris1 has quit [Quit: kris1]
sgupta has quit [Ping timeout: 260 seconds]
sgupta has joined #mlpack
kris1 has joined #mlpack
< sumedhghaisas_>
zoq: Hey Marcus...
< sumedhghaisas_>
I have a quick question, if you are free.
< sumedhghaisas_>
Also, I submitted the version which saves copies. It's a very slight boost.
< sumedhghaisas_>
but I suspect it will matter more for deeper networks and bigger outputs
< cult->
rcurtin: I have a rather beginner question about KDE vs. GMM: when using a 1-dimensional, single-Gaussian GMM, the estimated distribution is always cupola-like /\, but if I use KDE on the same data it might look like /\/\. Why?
< cult->
so how is it possible that a 1D GMM has one peak while KDE has multiple peaks on the same data?
< rcurtin>
cult-: with a 1D GMM that has one Gaussian, you are essentially just fitting a single Gaussian to the distribution
< rcurtin>
this means the free parameters for fitting are only the mean and covariance of that Gaussian, so the estimate will always have the shape of a Gaussian
< rcurtin>
KDE, on the other hand, doesn't assume any model at all---the density estimate at a given point is just the sum of the kernel contributions from all the other points
< rcurtin>
so KDE can take basically arbitrary shapes
< cult->
ahhhhh
< cult->
thanks!!!!
< rcurtin>
the bandwidth you use for KDE controls how "smooth" the resulting density will be
< rcurtin>
sure, glad to help :)
< cult->
so if I take the mean and cov of the distribution, I can just draw a distribution that would look like KDE with a high bandwidth?
< cult->
ok nvm
< rcurtin>
hm, I'm not sure I understood that question
< cult->
yes i asked it incorrectly, but i understand it now anyway
< rcurtin>
ah, ok
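A small self-contained sketch of the point rcurtin is making (not from the log; the bimodal sample data, the bandwidth value, and the helper names are invented for illustration). A one-component 1-D GMM effectively just fits one mean and one variance, so its density is a single bump, while a Gaussian-kernel KDE sums a kernel over every data point and can show several bumps:

    // Single-Gaussian fit vs. Gaussian-kernel KDE on the same 1-D data.
    #include <armadillo>
    #include <cmath>
    #include <iostream>

    constexpr double kPi = 3.14159265358979323846;

    // Density of N(mu, sigma^2) at x.
    double GaussianPdf(const double x, const double mu, const double sigma)
    {
      const double z = (x - mu) / sigma;
      return std::exp(-0.5 * z * z) / (sigma * std::sqrt(2.0 * kPi));
    }

    // KDE estimate at x: average of Gaussian kernels centered on each point;
    // the bandwidth h acts as each kernel's standard deviation.
    double KdeEstimate(const double x, const arma::vec& data, const double h)
    {
      double sum = 0.0;
      for (const double xi : data)
        sum += GaussianPdf(x, xi, h);
      return sum / data.n_elem;
    }

    int main()
    {
      // Bimodal 1-D data: one cluster around 0, another around 5.
      arma::vec data = arma::join_cols(arma::randn<arma::vec>(200),
                                       arma::randn<arma::vec>(200) + 5.0);

      // What a one-Gaussian 1-D GMM boils down to: one mean, one variance.
      const double mu = arma::mean(data);
      const double sigma = arma::stddev(data);

      const double h = 0.5;  // Smaller h -> bumpier KDE, larger h -> smoother.

      for (double x = -3.0; x <= 8.0; x += 0.5)
      {
        std::cout << "x = " << x
                  << "  single Gaussian: " << GaussianPdf(x, mu, sigma)
                  << "  KDE: " << KdeEstimate(x, data, h) << std::endl;
      }
      return 0;
    }

On data like this, the single-Gaussian density has one peak between the two clusters, while the KDE shows both; pushing the bandwidth h up toward the overall spread of the data smooths the KDE back toward a single peak, which is the bandwidth effect rcurtin mentions above.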
sumedhghaisas_ has quit [Ping timeout: 260 seconds]
< zoq>
sumedhghais: Sounds good, and I think you are right; the improvement might be more noticeable for deeper networks.
< zoq>
sumedhghais: Also, you can always just ask your questions, I'll answer everything once I get the chance.
aashay has quit [Quit: Connection closed for inactivity]
aashay has joined #mlpack
mikeling has quit [Quit: Connection closed for inactivity]
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
kris1 has quit [Client Quit]
sumedhghaisas_ has joined #mlpack
sumedhghaisas_ has quit [Read error: Connection reset by peer]