verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
manthan has quit [Ping timeout: 260 seconds]
< windows> Hey, I spent 3 days trying to build mlpack on Ubuntu 14.04; each time it failed to finish building because my PC ran out of RAM. I tried following the Windows build tutorial, but that didn't work either. So my alternative for mlpack on Windows is to use the Windows Subsystem for Linux. So far I have had more success with it, as now it has finished building and I got it installed.
< rcurtin> windows: are you building Ubuntu on a VM? you can increase the amount of RAM allocated to it
< rcurtin> I believe that mlpack needs 1.5GB+ of RAM to compile successfully; this is because of the templates in the C++
< rcurtin> also for ubuntu you can be quick and just do 'apt-get install libmlpack-dev', but that is less useful if you are planning on modifying the library and not just using it
< windows> I was building on a cloud VM, and 'apt-get install libmlpack-dev' does not include the ann code.
windows has quit [Ping timeout: 260 seconds]
< rcurtin> windows: oh, sorry, I see you said Ubuntu 14.04
< rcurtin> you would need, I think, 18.04 for the newest package... let me check
< rcurtin> ah, actually, sorry, mlpack 3 is only available in the repos for debian sid/unstable, not yet ubuntu
< rcurtin> probably the next ubuntu (18.10) will have it
< rcurtin> but that is a long time to wait :)
windows has joined #mlpack
csoni2 has quit [Read error: Connection reset by peer]
sulan_ has joined #mlpack
csoni has joined #mlpack
govg has quit [Ping timeout: 276 seconds]
sulan_ has quit [Quit: Leaving]
govg has joined #mlpack
govg has quit [Ping timeout: 256 seconds]
csoni has quit [Read error: Connection reset by peer]
csoni has joined #mlpack
windows has quit [Quit: Page closed]
govg has joined #mlpack
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
ImQ009 has joined #mlpack
luffy1996 has joined #mlpack
< luffy1996> @zoq, I went through the link you sent me. I think this might make my implementation more complex. What I want to do is add a softmax layer at the end of an ANN and then apply categorical cross-entropy to compute the error and gradients. I guess cross-entropy is already implemented in mlpack
< luffy1996> I can see that softmax regression has been implemented in mlpack at https://github.com/mlpack/mlpack/tree/4bd01bbc98889e1ade49302b79d791275854be37/src/mlpack/methods/softmax_regression. I do not know how one can integrate this with an ann layer. Please let me know how I can proceed with this. Thanks
< luffy1996> @manthan If you have any ideas, feel free to speak. Thanks
csoni has quit [Read error: Connection reset by peer]
csoni has joined #mlpack
csoni has quit [Read error: Connection reset by peer]
csoni has joined #mlpack
csoni has quit [Ping timeout: 268 seconds]
csoni has joined #mlpack
csoni has quit [Read error: Connection reset by peer]
vpal has joined #mlpack
vivekp has quit [Ping timeout: 260 seconds]
vpal is now known as vivekp
ricklly_ has joined #mlpack
csoni has joined #mlpack
csoni2 has joined #mlpack
csoni has quit [Read error: Connection reset by peer]
csoni2 has quit [Ping timeout: 256 seconds]
csoni has joined #mlpack
witness has joined #mlpack
csoni has quit [Read error: Connection reset by peer]
manthan has joined #mlpack
csoni has joined #mlpack
< manthan> luffy1996 : the softmax layer is P(y = j | x) = e^(x^T w_j) / Σ_k e^(x^T w_k). Since it has trainable parameters, you need to implement the Forward(), Backward() and Gradient() functions. Forward() will contain the normal forward pass of the layer. Backward() will contain the derivative of the error wrt the input. Gradient() will contain the derivative of the error with respect to the trainable parameters. error here means the error accumulated up to this layer in the backward pass.
< luffy1996> Does that mean that I have to go ahead and add the softmax layer to mlpack?
< manthan> https://eli.thegreenplace.net/2016/the-softmax-function-and-its-derivative/ this is a good source for the implementation.
< manthan> yes, adding a softmax layer to the ann/layer module would be better. You can then add the layer to your model with model.Add<Softmax<>>()
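For context, a minimal sketch of how such a layer might slot into mlpack's FFN API once added. Softmax<> is the hypothetical layer under discussion here; FFN, Linear<>, ReLULayer<>, and CrossEntropyError<> are existing mlpack ann types, though this is only an illustration of how the pieces could fit together, not the final design:

    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>

    using namespace mlpack::ann;

    const size_t inputSize = 10, hiddenSize = 16, numClasses = 3;

    // Feed-forward classifier: the final Linear<> produces raw scores, the
    // proposed Softmax<> layer (hypothetical at this point) turns them into
    // probabilities, and CrossEntropyError<> computes the loss.
    FFN<CrossEntropyError<>> model;
    model.Add<Linear<>>(inputSize, hiddenSize);
    model.Add<ReLULayer<>>();
    model.Add<Linear<>>(hiddenSize, numClasses);
    model.Add<Softmax<>>();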
< luffy1996> manthan: The link is very helpful. Thanks. Would you mind giving me an idea of how cross-entropy is used in mlpack without softmax? I guess cross-entropy is already implemented in mlpack.
< manthan> you can see ann_layer_test.cpp for the test on the cross-entropy layer.
< manthan> the input is considered to be a vector with 8 values = 0.5 and the target as 8 values = 1. The output is thus 8*log(2), since -8*log(0.5) = 8*log(2).
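As a quick sanity check of that number, assuming the usual cross-entropy form -sum(t % log(x) + (1 - t) % log(1 - x)), a few lines of plain Armadillo reproduce it:

    #include <armadillo>
    #include <iostream>

    int main()
    {
      arma::mat input = arma::ones(1, 8) * 0.5;  // 8 values = 0.5
      arma::mat target = arma::ones(1, 8);       // 8 values = 1
      // With target = 1 the second term vanishes, leaving -8 * log(0.5).
      const double error = -arma::accu(target % arma::log(input)
          + (1.0 - target) % arma::log(1.0 - input));
      std::cout << error << " vs " << 8.0 * std::log(2.0) << std::endl;  // ~5.5452
      return 0;
    }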
< luffy1996> I get how the cross-entropy error is used. I think I should go ahead and add a softmax layer myself. Thanks :)
< manthan> yes, that would be better; I would like to help you in case you want.
< luffy1996> Sure, I will ping you in case any help is needed. Thanks
< luffy1996> Hi manthan
< luffy1996> I am confused about the Backward() for softmax
< luffy1996> template<typename InputDataType, typename OutputDataType>
< luffy1996> template<typename InputType, typename OutputType>
< luffy1996> void Softmax<InputDataType, OutputDataType>::Forward(
< luffy1996> const InputType&& input, OutputType&& output)
< luffy1996> {
< luffy1996> arma::mat maxInput = arma::repmat(arma::max(input), input.n_rows, 1);
< luffy1996> // Subtract the maximum before exponentiating, for numerical stability.
< luffy1996> arma::mat expInput = arma::exp(input - maxInput);
< luffy1996> // We will normalize the values to get probabilities.
< luffy1996> double sumExpInput = arma::accu(expInput);
< luffy1996> output = expInput / sumExpInput;
< luffy1996> }
< luffy1996> template<typename InputDataType, typename OutputDataType>
< luffy1996> template<typename eT>
< luffy1996> void Softmax<InputDataType, OutputDataType>::Backward(
< luffy1996> Please refer to the snippet. The entire code got rejected while posting because of technical issues.
< luffy1996> In particular I would like to know how to proceed with the gradients for backward propagation
< luffy1996> Thanks
csoni2 has joined #mlpack
< manthan> luffy1996 : g will contain the derivative of the error with respect to the input for Backward()
< manthan> so the link that I gave you above calculates the derivative wrt the input for the softmax layer
sulan_ has joined #mlpack
< manthan> it uses the quotient rule of differentiation
csoni has quit [Ping timeout: 240 seconds]
csoni2 has quit [Ping timeout: 265 seconds]
csoni has joined #mlpack
< luffy1996> Do I need to construct the entire Jacobian matrix?
csoni has quit [Ping timeout: 240 seconds]
< manthan> luffy1996 : I think yes, you will have to construct it
< manthan> and it is S_i(δ_ij - S_j) for the (i, j) element, right?
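Spelled out, that is the derivative the linked article derives with the quotient rule; writing S_i = e^{x_i} / Σ_k e^{x_k}:

    \frac{\partial S_i}{\partial x_j}
      = \frac{\partial}{\partial x_j}\,\frac{e^{x_i}}{\sum_k e^{x_k}}
      = S_i\,(\delta_{ij} - S_j)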
csoni has joined #mlpack
csoni2 has joined #mlpack
< luffy1996> something like Jacob()*gy
< luffy1996> Am I correct here?
csoni has quit [Ping timeout: 240 seconds]
csoni2 has quit [Ping timeout: 265 seconds]
manthan has quit [Ping timeout: 260 seconds]
< luffy1996> @zoq, @rcurtin Any comments?
csoni has joined #mlpack
govg has quit [Ping timeout: 260 seconds]
govg has joined #mlpack
csoni has quit [Ping timeout: 240 seconds]
csoni has joined #mlpack
csoni has quit [Read error: Connection reset by peer]
csoni has joined #mlpack
robertohueso has joined #mlpack
witness has quit [Quit: Connection closed for inactivity]
< robertohueso> Should we allow CLI / Python binding users to select different metrics/kernels/trees for an algorithm at runtime?
govg has quit [Ping timeout: 256 seconds]
govg has joined #mlpack
csoni has quit [Read error: Connection reset by peer]
ricklly_ has quit [Ping timeout: 255 seconds]
dmatt has joined #mlpack
csoni has joined #mlpack
luffy1996 has quit [Quit: Connection closed for inactivity]
csoni has quit [Read error: Connection reset by peer]
Guest59383 has joined #mlpack
csoni has joined #mlpack
vivekp has quit [Ping timeout: 245 seconds]
vivekp has joined #mlpack
dmatt_ has joined #mlpack
dmatt has quit [Ping timeout: 265 seconds]
manthan has joined #mlpack
< manthan> luffy1996 : yes it is Jacob() * gy
< manthan> and Jacob() can be computed using the derivative of softmax wrt the input
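To make that concrete: because the Jacobian is J = diag(S) - S S^T, the product J * gy collapses to S % (gy - dot(S, gy)), so the full matrix never actually has to be materialized. A minimal sketch of Backward() using that identity, assuming a member `output` that stores the activations S from Forward() (and a single input column, matching the Forward() snippet above):

    template<typename InputDataType, typename OutputDataType>
    template<typename eT>
    void Softmax<InputDataType, OutputDataType>::Backward(
        const arma::Mat<eT>&& /* input */, arma::Mat<eT>&& gy, arma::Mat<eT>&& g)
    {
      // J * gy = diag(S) * gy - S * (S^T * gy) = S % (gy - dot(S, gy)).
      g = output % (gy - arma::dot(output, gy));
    }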
< manthan> let rcurtin and zoq clarify in case we missed something
csoni has quit [Read error: Connection reset by peer]
sulan_ has quit [Quit: Leaving]
sulan_ has joined #mlpack
< manthan> rcurtin : zoq : I have updated the decision tree pruning PR and have included pruning as a template parameter. No extra data members are added for the purpose. I have changed classProbabilities to hold classCounts. Can you have a look at the updated PR?
Guest59383 has quit [Ping timeout: 255 seconds]
manthan has quit [Ping timeout: 260 seconds]
dmatt has joined #mlpack
dmatt has quit [Remote host closed the connection]
csoni has joined #mlpack
dmatt_ has quit [Ping timeout: 255 seconds]
Guest59383 has joined #mlpack
Guest59383 has quit [Max SendQ exceeded]
Guest59383 has joined #mlpack
Guest59383 has quit [Max SendQ exceeded]
Guest59383 has joined #mlpack
sulan_ has quit [Quit: Leaving]
vivekp has quit [Read error: Connection reset by peer]
manthan has joined #mlpack
vivekp has joined #mlpack
csoni has quit [Read error: Connection reset by peer]
Guest59383 has quit [Read error: Connection reset by peer]
sourabhvarshney1 has joined #mlpack
sourabhvarshney1 has quit [Client Quit]
csoni has joined #mlpack
manthan has quit [Ping timeout: 260 seconds]
ImQ009 has quit [Quit: Leaving]
csoni has quit [Ping timeout: 240 seconds]