verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
marcosirc has quit [Quit: WeeChat 1.4]
Mathnerd314 has joined #mlpack
Cooler_ has joined #mlpack
Mathnerd314 has quit [Ping timeout: 240 seconds]
nilay has quit [Ping timeout: 250 seconds]
mentekid has joined #mlpack
mentekid has quit [Ping timeout: 250 seconds]
nilay has joined #mlpack
< nilay> zoq: hi, in the pooling layer, why do we always pool with downsampling? what if i want to do pooling and keep the width the same? we can't put so much padding? if say i have a 28 x 28 input, i will have to pad it to 54 x 54 if i want to have the same size after doing pooling with size 3.
mentekid has joined #mlpack
mentekid has quit [Ping timeout: 250 seconds]
mentekid has joined #mlpack
< zoq> nilay: We use the pooling layer for two reasons: 1. reduce the amount of parameters, 2. introduce invariance to translation, rotation, and shifting. To do that we have to downsample (use more than one pixel to represent the information). If we upsampled, we would have to interpolate to introduce new pixels, which is counterproductive.
< nilay> but here we want to pool, and keep the width same
< nilay> so what do we do here
< zoq> right, so we have to pad with zeros
< nilay> pad it to 84 x 84
< zoq> you have done the same with the MakeBorder function, if that's the size you need, yes
< nilay> or maybe write a pooling function which does not do downsampling, but does pooling with an input stride. for deep networks we don't want to downsample much
< nilay> if i do a pooling with size 3 and want to keep 28 x 28 size, i want to have input of 84 x 84
< nilay> if we pad with zeros we will have many zeros -------- then some data ------ then many zeros again
< zoq> yes, right ... in case of the inception layer the dimension of the output of the pooling layer depends on the convolution layer output
< nilay> so you're saying having zeros won't matter?
< zoq> I wouldn't say it doesn't matter, but I can't think of another way that's fast to scale up the output of the pooling layer.
< zoq> You could also test interpolation, but that's definitely not as fast as zero padding.
< nilay> what about pooling with stride=1?
< nilay> also, if the input is zero, applying any weight to it still leaves the output zero, so in that way convolution doesn't help
< zoq> Even with pooling stride=1 you don't end up with the same size.
< nilay> yes but i only need to pad 1 or 2 zeros
< nilay> small amount of zeros
< zoq> At the same time, you reduce the effect of the pooling layer (invariance).
< zoq> If you like to test it with a stride factor, I'm not against the idea.
< nilay> i don't get how we affect invariance; we only get redundant large input activations instead of only one large activation as before?
< zoq> if we use stride=1 we move the filter by 1 pixel; if we use stride=2 we move the filter by 2 pixels, so we leave out information.
< nilay> previously we were moving the filter by size pixels, if we are doing pooling(size)
< zoq> we moved the filter by 1 pixel
< zoq> here is an example that uses stride=2, which moves the filter by two pixels
< zoq> maybe that's a better example: http://cs231n.github.io/convolutional-networks/
< zoq> As I said, you can test it with a stride factor > 1.
< zoq> I wouldn't change the mechanism of the inception layer; you still have to pad with zeros. You can check the output of the pooling layer and resize it depending on the convolutional layer output.
< nilay> pooling with stride > 1 would reduce the width even further, why would we want to do that
marcosirc has joined #mlpack
< nilay> zoq: in the example you showed me ( http://cs231n.github.io/assets/cnn/maxpool.jpeg) they did pooling on 4x4 matrix with 2x2 size and stride = 2 to give output of size 2x2. the pooling function implemented in mlpack also does this only, gives output of size 2x2.
< zoq> nilay: To increase the confidence about e.g. rotations: if we use stride=1 we use more information for one pixel; if we use stride > 1 we use less information for one pixel.
< nilay> for (size_t j = 0; j < input.n_cols; j += cStep) moves by cStep, not by 1.
< nilay> i don't see how in the code we move by 1 pixel?
< zoq> rStep isn't the stride; span returns a range from 1 to ksize, and in the next step from 2 to ksize
< zoq> the current implementation only supports stride=1
< zoq> if you like to use stride > 1 you have to write (i + stride) and (j + stride)
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#1211 (master - 81c14d9 : Ryan Curtin): The build was fixed.
travis-ci has left #mlpack []
nilay has quit [Ping timeout: 250 seconds]
Mathnerd314 has joined #mlpack
sojournerc has joined #mlpack
Cooler_ has quit [Ping timeout: 240 seconds]
sojournerc has left #mlpack []
nilay has joined #mlpack
sojournerc has joined #mlpack
yan__ has joined #mlpack
yan__ has quit [Ping timeout: 250 seconds]
mentekid has quit [Ping timeout: 272 seconds]
mentekid has joined #mlpack
mentekid has quit [Ping timeout: 246 seconds]
mentekid has joined #mlpack
marcosirc has quit [Quit: WeeChat 1.4]
nilay has quit [Ping timeout: 250 seconds]
mentekid has quit [Ping timeout: 264 seconds]
sojournerc has quit [Quit: sojournerc]