verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
ashutosh has joined #mlpack
ashutosh has quit [Quit: Connection closed for inactivity]
ashutosh has joined #mlpack
ashutosh has quit [Client Quit]
Mathnerd314 has quit [Ping timeout: 244 seconds]
mentekid has joined #mlpack
mentekid has quit [Ping timeout: 250 seconds]
mentekid has joined #mlpack
circ-user-JiJuj has joined #mlpack
circ-user-JiJuj has quit [Remote host closed the connection]
marcosirc has joined #mlpack
< nilay>
zoq: i will have to implement 1 x 1 convolution separately, right?
< nilay>
because we need to accumulate depth here unlike 3 x 3 or 5 x 5 convolutions (for which we can use NaiveConvolution)
< nilay>
and also then backprop would be different for 1x1.
< zoq>
yes, it's probably a good idea to implement 1x1 separately
< nilay>
yeah, so we should implement a 1x1 layer, right?
< nilay>
separate forward and backward pass
< zoq>
You can also implement the 1x1 conv inside the inception layer
< nilay>
but then if we want to use just the 1x1 conv separately, we would not be able to
< zoq>
Good point, I guess you are right, a 1x1 conv layer is the way to go.
< nilay>
how are fully-connected layers just 1x1 convolutions?
< nilay>
i'll be back in 10 minutes.
nilay has quit [Quit: Page closed]
nilay has joined #mlpack
< zoq>
You can express a fully-connected layer using a 1x1 convolution. Right now, I can't remember which paper proves this. I think it was connected with Yann's binary conv network.
< zoq>
But, I'm not sure they are the same.
< nilay>
ok can we do the reverse?
< nilay>
express 1x1 convolution as a fully-connected layer
< zoq>
you mean instead of implementing a 1x1 conv layer we could just use the fully-connected layer?
< nilay>
yes, i can think of how we could do the forward pass
< nilay>
i am thinking about the backward pass
< nilay>
ok, it would take much more time, it's better to implement it separately
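[To make the equivalence discussed above concrete: a 1x1 convolution over an H x W x C cube with K filters applies the same C x K linear map to every pixel's depth vector, which is exactly a fully-connected layer applied per pixel. A small Armadillo sketch of this idea; the helper name and shapes are illustrative only, not mlpack's API:]

    // Sketch only: a 1x1 convolution expressed as a per-pixel matrix product.
    #include <armadillo>

    arma::cube OneByOneConvAsFC(const arma::cube& input,   // H x W x C
                                const arma::mat& weights)  // C x K
    {
      const size_t h = input.n_rows, w = input.n_cols, c = input.n_slices;
      const size_t k = weights.n_cols;

      // Flatten so that each row holds one pixel's depth vector (H*W x C).
      arma::mat pixels(h * w, c);
      for (size_t s = 0; s < c; ++s)
        pixels.col(s) = arma::vectorise(input.slice(s));

      // The 1x1 "convolution" is a single matrix product, i.e. a
      // fully-connected layer applied to every pixel.
      arma::mat out = pixels * weights;  // H*W x K

      // Reshape back to an H x W x K cube.
      arma::cube result(h, w, k);
      for (size_t s = 0; s < k; ++s)
        result.slice(s) = arma::reshape(out.col(s), h, w);
      return result;
    }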
< zoq>
nilay: Hold on, why do you think we can't use the ConvLayer to do 1x1 convolution?
< nilay>
no, because i couldn't find a method that does (arma::cube input, arma::mat filter, arma::mat output)
< nilay>
sorry, i meant we can't use NaiveConvolution
< zoq>
I think we could just use the ConvLayer inside the inception layer.
< nilay>
how
< nilay>
can't even do the convolution operation
< zoq>
ConvLayer<> convLayer0(1, 8, 1, 1);
< zoq>
convLayer0.Forward(input, output)
< zoq>
does a 1x1 convolution on the input data.
< zoq>
ConvLayer<> convLayer0(10, 2, 1, 1); is probably a better example, because you want to reduce the input data
< zoq>
The last example takes a cube with 10 slices as input and outputs the results of 2 (1x1) convolutions.
< zoq>
so let's say the input is arma::cube(100, 100, 10) the output would be of size arma::cube(100, 100, 2)
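[Putting zoq's example together, a minimal sketch of the 1x1 reduction with ConvLayer might look like the following. The constructor arguments and the Forward() call are taken from the lines above; the include path and namespace are assumed from the mlpack 2.x ann module and may differ:]

    // Sketch only: assumes ConvLayer(inMaps, outMaps, filterWidth, filterHeight)
    // and the Forward(input, output) overload shown above.
    #include <mlpack/methods/ann/layer/conv_layer.hpp>
    #include <armadillo>

    using namespace mlpack::ann;

    arma::cube input(100, 100, 10, arma::fill::randu);  // 10 input maps
    arma::cube output;

    // 1x1 convolutions reducing 10 input maps to 2 output maps.
    ConvLayer<> convLayer0(10, 2, 1, 1);
    convLayer0.Forward(input, output);  // expected size: 100 x 100 x 2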
Mathnerd314 has joined #mlpack
richhiey1996 has joined #mlpack
richhiey1996 has quit [Quit: Page closed]
mentekid has quit [Ping timeout: 240 seconds]
mentekid has joined #mlpack
< rcurtin>
mentekid: are you done with the multiprobe LSH PR? I think it is ready to merge, but I dunno if you were waiting to do anything else
< mentekid>
No I think it's ready
< mentekid>
I've started working on parallel find() and unique() code, but things don't look good... especially for find(), there's too much lock contention
< mentekid>
so it ends up being way slower than arma::find()
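[For context on the lock contention mentioned above: a naive OpenMP find() that pushes matches into a shared vector has to serialize on every hit, which can easily end up slower than the sequential arma::find(). An illustrative sketch only, not the actual code from the PR:]

    // Sketch only: per-match locking is the kind of contention that makes a
    // parallel find() slower than the serial arma::find().
    #include <armadillo>
    #include <algorithm>
    #include <vector>
    #include <omp.h>

    arma::uvec ParallelFindNonzero(const arma::vec& v)
    {
      std::vector<arma::uword> indices;

      #pragma omp parallel for
      for (arma::uword i = 0; i < v.n_elem; ++i)
      {
        if (v[i] != 0.0)
        {
          // Every matching element serializes here -- heavy lock contention.
          #pragma omp critical
          indices.push_back(i);
        }
      }

      // Results arrive out of order, so sort before returning.
      std::sort(indices.begin(), indices.end());
      return arma::uvec(indices);
    }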
mentekid has quit [Ping timeout: 250 seconds]
mentekid has joined #mlpack
mentekid has quit [Ping timeout: 246 seconds]
mentekid has joined #mlpack
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#1135 (master - a50784d : Ryan Curtin): The build was fixed.