ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
petris_ is now known as petris
< tejasvi[m]> I'm hoping someone can review https://github.com/mlpack/mlpack/pull/2127; I've finished adding the tests.
< kartikdutt18Gitt> Hi @tejasvi, I have left some comments; have a look when you get a chance.
volhard[m] has joined #mlpack
< volhard[m]> Should I feed a recurrent net (GRU) audio in the time or frequency domain?
< kartikdutt18Gitt> Hi @volhard, you can work in the time domain too, but generally the data is converted into the frequency domain using techniques such as the STFT.
< kartikdutt18Gitt> I think patterns become more apparent in the frequency domain. If you have heard of the DTFT, it is a discrete version of the Fourier transform and is periodic.
< kartikdutt18Gitt> You can read more about stft here, https://www.dsprelated.com/freebooks/sasp/Short_Time_Fourier_Transform.html
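(For reference, a standard discrete STFT definition, roughly following the notation in the linked reference: the signal x[n] is multiplied by a window w[n] centred at hop position mH and Fourier-transformed, giving one spectrum per frame.)
X_m(\omega_k) = \sum_{n=-\infty}^{\infty} x[n]\, w[n - mH]\, e^{-j \omega_k n}, \qquad \omega_k = \frac{2\pi k}{N}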
Yihan has joined #mlpack
< metahost> volhard: you may! Tasks like wake-word detection (e.g. "Hey Siri") use CNNs (plus a sliding window) to predict when a phrase is detected. If you need contextual information, you should probably use RNNs.
< metahost> But yes, taking an FT makes the individual frequency components stand out.
< metahost> Here's another link: https://github.com/MycroftAI/mycroft-precise (Check the how it works section)
Yihan has quit [Ping timeout: 260 seconds]
< PrinceGuptaGitte> Hi @kartikdutt18, thanks for the review. I've made the fixes you suggested on PR #2208, and I have also cited the FaceNet paper.
zwasd has joined #mlpack
< jenkins-mlpack2> Project docker mlpack nightly build build #614: UNSTABLE in 3 hr 4 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/614/
zwasd has quit [Quit: Leaving]
< LakshyaOjhaGitte> Hi @zoq, can you please restart the AppVeyor build on my Softshrink PR: https://github.com/mlpack/mlpack/pull/2174
< LakshyaOjhaGitte> Thanks.
< kartikdutt18Gitt> Hi @prince776, I will take a look. Thanks.
ImQ009 has joined #mlpack
rohit has joined #mlpack
rohit has quit [Ping timeout: 260 seconds]
johnsoncarl[m] has joined #mlpack
< johnsoncarl[m]> Does anyone here use something like a virtual env to build and use mlpack?
< johnsoncarl[m]> So that I can delete the unneeded ones when I'm done!
< zoq> johnsoncarl: I used the conda env.
< johnsoncarl[m]> ah. okay.
< johnsoncarl[m]> Thanks
< johnsoncarl[m]> zoq:
< johnsoncarl[m]> looks like i am already using it! :)
< zoq> LakshyaOjha: Looks like it's already queued.
< chopper_inbound[> Hi zoq, can you review this? I am waiting for it to be merged 😁
< zoq> chopper_in: Will do later today.
< chopper_inbound[> Thanks zoq
< kartikdutt18Gitt> Hi @zoq, could you have a look at #2195? I wanted to know how I should proceed with it.
pranay2 has joined #mlpack
pranay2 has left #mlpack []
< PrinceGuptaGitte> Hi everyone, I was trying to get a better understanding of the ANN codebase, and I have a doubt.
< PrinceGuptaGitte> Why is everything templated instead of using an inheritance-based approach? For example, the BaseLayer class acts as a template for activation layers, where we can use any type of activation function. We could also have made BaseLayer take an ActivationFunction base class and had all activation functions inherit from it. Could it be because virtual functions are slow, or is there some other reason?
< PrinceGuptaGitte> I'm sorry if it's a silly doubt, but I don't understand why we would want to template everything.
pranay2 has joined #mlpack
pranay has joined #mlpack
< GauravSinghGitte> Hey @prince776, you can read [this](https://www.mlpack.org/papers/mlpack2011.pdf) mlpack paper; it addresses your question in detail.
< PrinceGuptaGitte> Thanks for the reference
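(An illustrative aside on the question above: a minimal sketch, not mlpack's actual layer code, contrasting the policy-based template design with a virtual-function design. Because the activation is a template parameter, the compiler resolves the call at compile time and can inline it; a virtual call would be resolved at run time. The names ReLU and BaseLayer here are simplified stand-ins.)

// Simplified activation "policy": a static function, no virtual dispatch.
struct ReLU
{
  static double fn(const double x) { return x > 0.0 ? x : 0.0; }
};

// The layer takes the activation as a template parameter, so the compiler
// sees the exact function being called and can inline it.
template<typename ActivationFunction>
class BaseLayer
{
 public:
  double Forward(const double input) const
  {
    return ActivationFunction::fn(input);
  }
};

int main()
{
  BaseLayer<ReLU> layer;
  return layer.Forward(-1.0) == 0.0 ? 0 : 1;
}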
pranay2 has quit [Remote host closed the connection]
pranay2 has joined #mlpack
< GauravSinghGitte> Hi @zoq, kindly have a look at #2191; I have incorporated the changes that you suggested. Thank you.
pranay has quit [Remote host closed the connection]
pranay2 has quit [Remote host closed the connection]
lozhnikov_ has quit [Ping timeout: 268 seconds]
lozhnikov has joined #mlpack
< tejasvi[m]> What should the ideal development workflow be? The way I debug is tragic: after making changes to a file, I build and run mlpack_test and use BOOST_TEST_MESSAGE as a cout substitute. Given the ~45-minute build time, this isn't helpful enough. I tried gdb, but it refuses to drill down beyond the code of the test file. Should I use gdb with hand-written code like the examples in https://www.mlpack.org/doc/mlpack-3.2.2/doxygen/sample.html? I'm a bit swamped here.
< SriramSKGitter[m> @tstomar[m] I've noticed much faster build times on subsequent builds; it's only the first build that takes ~45 min. Building with -jN and building only specific targets ought to bring compile time down to reasonable levels.
< PrinceGuptaGitte> @tstomar[m] Also, whenever you run the "make" command it only rebuilds the files that changed (and the files that depend on them).
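(A hedged aside on the build tips above: with a standard CMake build directory, something like `make mlpack_test -j4` rebuilds only the test target using four parallel jobs, and Boost.Test's `--run_test=SuiteName` option, e.g. `bin/mlpack_test --run_test=ConvolutionalNetworkTest`, runs a single test suite rather than all of them; the suite name here is only an example.)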
< PrinceGuptaGitte> @kartikdutt18 I've noticed mlpack doesn't have a regular softmax layer; it only has log softmax. Should I add a regular softmax layer?
< zoq> PrinceGupta: There is an open PR that implements the regular SoftMax layer.
< sreenik[m]> Yes, I remember having started it but didn't finish it
< sreenik[m]> If I remember correctly, there is probably some mistake in that PR, but it would be a lot easier to finish it up than to start the work from scratch.
< zoq> agreed
< sreenik[m]> In case anyone is interested, feel free to take it up
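(A minimal sketch, not the code in the open PR: a numerically stable column-wise softmax using Armadillo, which mlpack builds on. Subtracting each column's maximum before exponentiating avoids overflow; a log-softmax layer does the analogous computation in log space.)

#include <armadillo>
#include <iostream>

// Column-wise softmax: each column of the input is treated as one sample.
arma::mat Softmax(const arma::mat& input)
{
  // Shift each column by its maximum so exp() cannot overflow.
  arma::mat output = arma::exp(input.each_row() - arma::max(input, 0));
  // Normalize each column so it sums to one.
  output.each_row() /= arma::sum(output, 0);
  return output;
}

int main()
{
  arma::mat x = { { 1.0, 2.0 }, { 3.0, 100.0 } };
  std::cout << Softmax(x) << std::endl;  // Each column sums to 1.
  return 0;
}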
EL-SHREIFGitter[ has joined #mlpack
< EL-SHREIFGitter[> Why is there no mentor listed for the Visualization Tool project for GSoC 2020?
< zoq> EL-SHREIF: Nice catch, will update it later.
< PrinceGuptaGitte> I don't understand what the problem is here.
< zoq> PrinceGupta: My first guess is the input size isn't correct.
< PrinceGuptaGitte> The input size is (42000, 784), where 42000 is the no. of data samples,
< PrinceGuptaGitte> and the output is a one-hot encoded matrix of size (42000, 10).
< zoq> PrinceGupta: Note that Armadillo is column major, so the matrix size should be (784, 42000).
< PrinceGuptaGitte> ok I'll try to transpose them
< PrinceGuptaGitte> @zoq Apparently the program works when I only take 1 sample (from the 42000 available) and then feed that to the `.Train()` function.
< GauravSinghGitte> I don't know if it would be helpful, but you can have a look at [this](https://github.com/mlpack/models/blob/master/Kaggle/DigitRecognizer/src/DigitRecognizer.cpp); it performs the same task you are trying to implement.
< GauravSinghGitte> @prince776
< zoq> agreed, the NegativeLogLikelihood loss expects a scalar in [1, number of classes], not a one-hot encoded target.
< PrinceGuptaGitte> Thanks for the help.
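(A minimal sketch of the data massaging discussed above, assuming the CSV was loaded as one row per sample: Armadillo/mlpack expect column-major (features x samples) matrices, and NegativeLogLikelihood expects class labels in [1, number of classes] rather than one-hot targets. The variable names and placeholder sizes here are illustrative only.)

#include <armadillo>

int main()
{
  // Placeholders standing in for data loaded as (samples x features) and
  // one-hot targets loaded as (samples x classes); the sizes in the
  // discussion above were 42000 x 784 and 42000 x 10.
  arma::mat data(100, 784, arma::fill::randu);
  arma::mat oneHot(100, 10, arma::fill::zeros);
  oneHot.col(3).ones();  // pretend every sample belongs to class 4

  // Transpose so each column is one sample: (features x samples).
  data = data.t();
  oneHot = oneHot.t();

  // Convert one-hot columns to labels in [1, 10]: the row index of the
  // maximum entry in each column, plus one.
  arma::Row<size_t> labels =
      arma::conv_to<arma::Row<size_t>>::from(arma::index_max(oneHot, 0)) + 1;

  return labels.max() == 4 ? 0 : 1;
}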
ibtihaj has joined #mlpack
Vishwas254 has joined #mlpack
ibtihaj has quit [Quit: Ping timeout (120 seconds)]
travis-ci has joined #mlpack
< travis-ci> shrit/ensmallen#3 (citations - 0a66f6c : Omar Shrit): The build passed.
travis-ci has left #mlpack []
ImQ009 has quit [Quit: Leaving]
Vishwas254 has quit [Remote host closed the connection]
< zoq> ToshalAgrawal: Would you like to add yourself as a mentor for the Visualization Tool idea?