ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
k3nz0_ has quit [Read error: Connection reset by peer]
k3nz0__ has joined #mlpack
k3nz0_ has joined #mlpack
k3nz0__ has quit [Ping timeout: 268 seconds]
< Saksham[m]> Hi, if I need to run a specific test, do I need to build the whole test suite, or is there a way to build only the specific test file that I updated?
< KhizirSiddiquiGi> Saksham: you can directly run tests as `./bin/mlpack_test -t RBMNetworkTest/SpikeSlabRBMCIFARTest`
< KhizirSiddiquiGi> basically in the form `./bin/mlpack_test -t TestSuiteName/TestCaseName`
< KhizirSiddiquiGi> but to build it, you will have to build the whole test suite.
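(For reference, a minimal standalone sketch of the Boost.Test layout that `mlpack_test` uses, with hypothetical suite/case names; in mlpack itself the test module is defined centrally and every test file is compiled into the one `mlpack_test` binary, which is why the whole suite has to be built before a single case can be filtered with `-t`.)

```cpp
// Hypothetical, self-contained Boost.Test example mirroring the mlpack_test
// layout: one suite, one case.  After building, a single case is selected
// with the -t (--run_test) filter, e.g. ./example_test -t ExampleSuite/ExampleCase
#define BOOST_TEST_MODULE ExampleTest
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_SUITE(ExampleSuite);

BOOST_AUTO_TEST_CASE(ExampleCase)
{
  BOOST_REQUIRE_EQUAL(1 + 1, 2);
}

BOOST_AUTO_TEST_SUITE_END();
```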
< Saksham[m]> Yeah that was my doubt
< Saksham[m]> Thanks a lot
< sailor[m]> My build always gets stuck at 80% at a certain point in Visual Studio on Windows... but I see that in the Debug folder there are 47 .lib files (such as "mlpack_softmax_regression.lib"). Does this mean that I can use just these libraries and run tests on just them?
< GauravSinghGitte> In [c_relu_impl.hpp](https://github.com/mlpack/mlpack/blob/master/src/mlpack/methods/ann/layer/c_relu_impl.hpp), during backward propagation, why are the rows of the matrix 'temp' taken into consideration
< GauravSinghGitte> when the gradient 'g' is calculated?
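(For context, a small Armadillo sketch of the math behind that step, not the exact code from c_relu_impl.hpp: the forward pass concatenates [x; -x] before the ReLU, so the output has twice as many rows as the input, and the backward pass has to fold the incoming gradient back down to the input's row count, which is why `temp` is indexed by rows.)

```cpp
// Illustrative sketch only (names and shapes are made up, not mlpack's code).
#include <armadillo>

int main()
{
  arma::mat x(4, 1, arma::fill::randn);       // layer input (4 rows)
  arma::mat concat = arma::join_cols(x, -x);  // CReLU pre-activation (8 rows)
  arma::mat gy(8, 1, arma::fill::randn);      // gradient w.r.t. the CReLU output

  // Element-wise ReLU derivative on the concatenated pre-activation.
  arma::mat temp = gy % arma::conv_to<arma::mat>::from(concat > 0.0);

  // d[x; -x]/dx is [I; -I], so the gradient w.r.t. the input combines the
  // top and bottom halves of temp -- only input.n_rows rows survive.
  arma::mat g = temp.rows(0, x.n_rows - 1)
              - temp.rows(x.n_rows, 2 * x.n_rows - 1);

  g.print("g");
  return 0;
}
```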
anirudh has joined #mlpack
anirudh has quit [Remote host closed the connection]
< jenkins-mlpack2> Yippee, build fixed!
< jenkins-mlpack2> Project docker mlpack nightly build build #615: FIXED in 2 hr 58 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/615/
< PrinceGuptaGitte> Hi @zoq, thanks for reviewing my PR #2192; I've made the changes you suggested. Please have a look when you get time. Thanks.
ImQ009 has joined #mlpack
< PrinceGuptaGitte> I was training an FFN on the MNIST dataset; after training a whole epoch, the time printed out is 13s, while in reality it is over 1 min.
< PrinceGuptaGitte> I used `ens::ProgressBar()` and `ens::PrintLoss()` callbacks
k3nz0__ has joined #mlpack
k3nz0_ has quit [Ping timeout: 260 seconds]
k3nz0__ has quit [Ping timeout: 240 seconds]
k3nz0 has joined #mlpack
< rcurtin> PrinceGuptaGitte: then probably some part that isn't training is taking 47s or more; have you timed all parts of the program?
k3nz0 has quit [Remote host closed the connection]
k3nz0 has joined #mlpack
k3nz0 has quit [Ping timeout: 240 seconds]
k3nz0 has joined #mlpack
< PrinceGuptaGitte> It was only the training part that took 1 min. Loading the data happened in under 10 seconds
k3nz0 has quit [Ping timeout: 268 seconds]
k3nz0 has joined #mlpack
hrivu21 has joined #mlpack
hrivu21 has quit [Remote host closed the connection]
hrivu21 has joined #mlpack
hrivu21 has quit [Remote host closed the connection]
k3nz0 has quit [Remote host closed the connection]
k3nz0 has joined #mlpack
k3nz0_ has joined #mlpack
k3nz0 has quit [Ping timeout: 265 seconds]
saksham189Gitter has joined #mlpack
< saksham189Gitter> @zoq did you get a chance to look at the email I sent you?
< metahost> zoq: Are we good to go on the PR (ensmallen#149)?
< kartikdutt18Gitt> Hi @zoq, if you get the chance, could you have a look at #2195? I wanted to know how I should proceed with it. Thanks.
< rcurtin> PrinceGuptaGitte: okay, did you try to time and profile it to see where the runtime is actually being spent?
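(A minimal sketch of timing each stage with std::chrono, assuming a typical mlpack 3.x FFN setup; the file names, layer sizes, and label format are hypothetical.)

```cpp
// Hypothetical timing harness: print wall-clock time for loading and training
// separately so it is clear which stage dominates.
#include <chrono>
#include <iostream>

#include <mlpack/core.hpp>
#include <mlpack/methods/ann/ffn.hpp>
#include <mlpack/methods/ann/layer/layer.hpp>

int main()
{
  using Clock = std::chrono::steady_clock;
  using std::chrono::duration_cast;
  using std::chrono::seconds;

  const auto t0 = Clock::now();

  // Hypothetical pre-split MNIST files; adjust paths and label format as needed.
  arma::mat trainData, trainLabels;
  mlpack::data::Load("mnist_train_data.csv", trainData, true);
  mlpack::data::Load("mnist_train_labels.csv", trainLabels, true);

  const auto t1 = Clock::now();
  std::cout << "Loading took " << duration_cast<seconds>(t1 - t0).count()
            << "s" << std::endl;

  mlpack::ann::FFN<> model;
  model.Add<mlpack::ann::Linear<>>(trainData.n_rows, 128);
  model.Add<mlpack::ann::ReLULayer<>>();
  model.Add<mlpack::ann::Linear<>>(128, 10);
  model.Add<mlpack::ann::LogSoftMax<>>();

  model.Train(trainData, trainLabels, ens::ProgressBar(), ens::PrintLoss());

  const auto t2 = Clock::now();
  std::cout << "Training took " << duration_cast<seconds>(t2 - t1).count()
            << "s" << std::endl;

  return 0;
}
```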
< volhard[m]> <kartikdutt18Gitt "I think in frequency patterns be"> Thanks. I'm new to the whole ML thing.
< volhard[m]> <metahost "volhard: you may! Tasks like wak"> The network needs to preserve a latent representation for an arbitrary duration. Do CNNs work for this purpose?
< volhard[m]> Wait, is this off topic?
< kartikdutt18Gitt> Hi @volhard, people here are more than happy to help. For fixed-duration input such as 500ms chirps, I think RNNs / LSTMs etc. should be a good idea.
< volhard[m]> Sorry. The interval between the chirps is 500ms. The chirps last about 50ms (8kHz to 2kHz, exponential drop). I'm also feeding the inertial measurements (Ax, Ay, Az, Gx, Gy, Gz) of the motion of the microphone cluster. The inertial data is very noisy, so I'm not sure if I should pass it as such (I'll try smoothing). The network is to generate depth maps of the environment (8kHz implies a wavelength close to 5cm, so not
< volhard[m]> unreasonable). Thus I extract depth from video (640x480@30fps; 3 channels) for backprop.
togo has joined #mlpack
< volhard[m]> Does the fact that the FFT frames stay nearly static for about 0.5 seconds (after the ping; 30 samples a second) have anything to do with RNN performance? I'm using GRU for the encoding layers (actually ConvGRU, for weight sharing due to the spatial organization of the microphones).
< volhard[m]> I've tried training. 69000 frames in total. 300 samples a sequence (worth 10s). Augmented by flipping x/y axes (of the inertial measurements too); otherwise the network falls for the lower_part_of_image-means-closer bias. KL-Divergence weight scheduling from 0.001 to 0.2 over several iterations (MSE stays constant). So far nothing satisfactory.
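(Not a fix for the depth-map setup itself, but as a rough starting point for the RNN/GRU suggestion above, a sketch in mlpack's ann module; the sizes, the MeanSquaredError output layer, and the random cubes are placeholders, assuming an mlpack 3.x-style API.)

```cpp
// Rough illustration only: a tiny sequence-to-sequence GRU regressor.
// Real chirp/IMU features and depth targets would replace the random cubes.
#include <mlpack/core.hpp>
#include <mlpack/methods/ann/rnn.hpp>
#include <mlpack/methods/ann/layer/layer.hpp>
#include <mlpack/methods/ann/loss_functions/mean_squared_error.hpp>

using namespace mlpack::ann;

int main()
{
  const size_t rho = 30;         // time steps per sequence (placeholder)
  const size_t features = 6;     // e.g. Ax, Ay, Az, Gx, Gy, Gz
  const size_t sequences = 100;  // number of training sequences (placeholder)

  // Sequential data is stored as (features x sequences x time steps).
  arma::cube predictors(features, sequences, rho, arma::fill::randn);
  arma::cube responses(1, sequences, rho, arma::fill::randn);

  // One regression output per time step.
  RNN<MeanSquaredError<>> model(rho);
  model.Add<GRU<>>(features, 16, rho);
  model.Add<Linear<>>(16, 1);

  model.Train(predictors, responses);
  return 0;
}
```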
< Param-29Gitter[m> Hello @zoq and @rcurtin, please have a look at #2169 once you are free. I wanted to start working on another program and try to parallelize it, but I need your review on the current PR first so that I can understand which programs I should work on next.
ImQ009 has quit [Quit: Leaving]
abhi has joined #mlpack
abhi has quit [Ping timeout: 260 seconds]
< zoq> saksham189: Ohh, yeah, just responded.
< rcurtin> Param-29Gitter[m (and others), please be patient; maintainers will get to the reviews when we can, and asking us to do it won't make it happen any quicker
< rcurtin> I see tons of requests for review in this channel every day and honestly it's a bit overwhelming...
< rcurtin> I'd love to review everything, but I can't get to it all at one time
< rcurtin> however, I have just now gotten home from a two-week trip and so I should be able to pick up the pace of the reviews a good bit :)
< metahost> rcurtin: Ryan, I understand that only maintainers can approve and merge PRs, but can contributors help with the review workload too? I think that may help offload some of the work :)
k3nz0__ has joined #mlpack
k3nz0_ has quit [Ping timeout: 260 seconds]
UmarJ has joined #mlpack
togo has quit [Ping timeout: 246 seconds]
k3nz0__ has quit [Ping timeout: 272 seconds]
UmarGitter[m] has joined #mlpack