ChanServ changed the topic of #mlpack to: Due to ongoing spam on freenode, we've muted unregistered users. See http://www.mlpack.org/ircspam.txt for more information, or also you could join #mlpack-temp and chat there.
cjlcarvalho has joined #mlpack
cjlcarvalho has quit [Ping timeout: 264 seconds]
cjlcarvalho has joined #mlpack
cjlcarvalho has quit [Ping timeout: 252 seconds]
cjlcarvalho has joined #mlpack
vivekp has joined #mlpack
cjlcarvalho has quit [Ping timeout: 240 seconds]
akshay has joined #mlpack
akshay has quit [Client Quit]
cjlcarvalho has joined #mlpack
rahul has joined #mlpack
rahul is now known as Guest80569
< Guest80569> hello
Guest80569 has quit [Ping timeout: 256 seconds]
< zoq> Guest80569: Hello there!
cjlcarvalho has quit [Ping timeout: 252 seconds]
< davida> For Convolution<> layers is it possible to apply asymmetric padding? If not, how would I apply a 4x4 filter with stride=1 to a 64x64 input and get "SAME" padding so the output is also a 64x64 matrix?
< davida> The typical formula ... 1+(Hin-f+2p)/s = Hout ... requires padding = 1.5, which is clearly not possible.
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#5570 (master - 922950f : Ryan Curtin): The build has errored.
travis-ci has left #mlpack []
< ShikharJ> davida: Note that the division in that formula uses a floor function.
< ShikharJ> davida: My suggestion would be to use an odd-length filter like 3x3 or 5x5. Asymmetric padding is not used that often as far as I'm aware, but maybe we could provide support for it. It would mean quite a few extra parameters for padding alone, though, and correspondingly one might also argue that we'd need to provide asymmetric strides as well.
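A minimal sketch of the output-size arithmetic being discussed (ConvOut is a hypothetical helper, not mlpack code), showing why a 4x4 filter at stride 1 only reaches SAME output with asymmetric padding, while an odd filter works with symmetric padding:

    #include <cstddef>
    #include <iostream>

    // out = floor((in - filter + padLo + padHi) / stride) + 1
    std::size_t ConvOut(std::size_t in, std::size_t filter,
                        std::size_t padLo, std::size_t padHi, std::size_t stride)
    {
      return (in - filter + padLo + padHi) / stride + 1; // integer division == floor
    }

    int main()
    {
      std::cout << ConvOut(64, 4, 2, 2, 1) << "\n"; // 65 -- symmetric pad 2 overshoots
      std::cout << ConvOut(64, 4, 1, 1, 1) << "\n"; // 63 -- symmetric pad 1 undershoots
      std::cout << ConvOut(64, 4, 1, 2, 1) << "\n"; // 64 -- asymmetric 1+2 gives SAME
      std::cout << ConvOut(64, 5, 2, 2, 1) << "\n"; // 64 -- odd filter, symmetric pad 2
      std::cout << ConvOut(64, 3, 1, 1, 1) << "\n"; // 64 -- odd filter, symmetric pad 1
      return 0;
    }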
< ShikharJ> rcurtin: The website for ensmallen looks good!
< davida> ShikharJ: Thx for the reply. I was trying to replicate the Deeplearning.ai course exercises, which use these even-sized conv filters. Anyway, I'm having much larger problems, as I cannot get the conv model they proposed to converge at all.
< ShikharJ> davida: I'm guessing they would be trimming the output to get to 64x64? Can you give a link to that?
< davida> Does anyone know if there are major differences between the application of TensorFlow's AdamOptimizer and the one provided in mlpack if I use that with SGD?
< davida> The exercise is in Python.
< davida> It is a Jupyter notebook.
< ShikharJ> Can you point me to the week of the course? I might have access to the specialization.
< davida> It is Course 4. Week 1. The second assignment.
< davida> Title is "Convolution Model - Application"
< davida> I printed a PDF of the notebook if it can help or you can go to https://www.coursera.org/learn/convolutional-neural-networks/notebook/0TkXB/convolutional-model-application
< davida> NOTE: this link may be to my own notebook, so you might not get access.
< ShikharJ> davida: Ah, I can't access the notebook as I have finished the course. I'll have to re-enroll for access.
< davida> Can I post the PDF somewhere you can access it?
< ShikharJ> Yeah, maybe post the code in a pastebin and give a link here?
cjlcarvalho has joined #mlpack
< ShikharJ> davida: Ah, using the conv2D function directly does make use of arbitrary padding, so my guess is it adds an odd pad column to the right and row to the bottom as required.
< ShikharJ> zoq: This might be something we should look into as well.
< davida> They have the option to set the padding to SAME or VALID rather than specify an actual padding amount.
< ShikharJ> rcurtin: I'm curious though, when you started your PhD in ML, what resources did you refer to back in the day, when there were hardly any resources for the field?
< davida> Anyway, I tried using 5x5 and 3x3 filters with padding of 2 and 1 respectively, instead of the 4x4 and 2x2 in the exercise, but I cannot get the mlpack version to converge beyond ~50% on training. It seems something is quite different in the optimizer, since the network setup is almost identical.
< ShikharJ> davida: Yes, they take care of the computation, but that in a way is a restriction as you only have two padding options, and you'll have to apply tf.pad() to the image to set the padding of your choice.
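For reference, a rough sketch of doing that padding by hand on the mlpack/Armadillo side (PadAsymmetric is a hypothetical helper, and the image is assumed to be held as an arma::cube of rows x cols x channels before being flattened for training):

    #include <armadillo>

    arma::cube PadAsymmetric(const arma::cube& input,
                             const arma::uword padTop, const arma::uword padBottom,
                             const arma::uword padLeft, const arma::uword padRight)
    {
      arma::cube output(input.n_rows + padTop + padBottom,
                        input.n_cols + padLeft + padRight,
                        input.n_slices, arma::fill::zeros);
      // Copy the original image into the interior of the zero-filled cube.
      output.subcube(padTop, padLeft, 0,
                     padTop + input.n_rows - 1,
                     padLeft + input.n_cols - 1,
                     input.n_slices - 1) = input;
      return output;
    }

    // e.g. PadAsymmetric(image, 1, 2, 1, 2) turns a 64x64x3 image into 67x67x3, so a
    // 4x4 filter at stride 1 with no further padding yields a 64x64 output; the padded
    // cube still has to be flattened back into a single column before training.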
< davida> I used AdamUpdate with SGD as an approximation for the Tensorflow AdamOptimizer.
cjlcarvalho has quit [Ping timeout: 272 seconds]
< ShikharJ> davida: I'm not sure about the optimizer framework, zoq would be a better person to ask.
< rcurtin> ShikharJ: the Bishop "Pattern Recognition and Machine Learning" book, CLRS for algorithms, and the AIMA book by Russell and Norvig
< rcurtin> that plus recent papers in the particular field of study (which for me was nearest neighbor search)
< rcurtin> so there was a lot of reading of labmates' papers, etc., as I came up to speed
< rcurtin> davida: what batch size were you using?
< davida> AdamUpdate adamUpdate(1e-8, 0.9, 0.999);
< davida> SGD<AdamUpdate> optimizer(0.009, 64, 100000, 1e-05, true, adamUpdate);
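For comparison, a sketch of the same setup via mlpack's Adam convenience class, which just wraps SGD<AdamUpdate>; the header path and parameter order here assume the pre-ensmallen layout of mlpack 3.x, so double-check against your version:

    #include <mlpack/core/optimizers/adam/adam.hpp>

    using namespace mlpack::optimization;

    // (stepSize, batchSize, beta1, beta2, eps, maxIterations, tolerance, shuffle)
    Adam optimizer(0.009, 64, 0.9, 0.999, 1e-8, 100000, 1e-5, true);
    // model.Train(trainX, trainY, optimizer);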
< rcurtin> try with a batch size of 1... yesterday I noticed some strange results for convolutional layers with larger batch sizes
< davida> The parameters were taken from the exercise
< rcurtin> (so change "64" to "1")
< davida> OK - will try now.
< davida> One more question. If I create a layer like this...
< davida> model.Add<Convolution<> >(3, 8, 4, 4, 1, 1, 2, 2, 64, 64);
< davida> ... what will the output size be? 65x65x8 ?
< davida> ... so applying a pooling layer with these parameters:
< davida> model.Add<MaxPooling<> >(8, 8, 8, 8, true);
< rcurtin> what's the input size for that convolution layer?
< rcurtin> oh sorry that is the last two parameters
< davida> That should reduce this to 8x8x8, since the 65th pixel row/column will be ignored, right?
< davida> 64*64*3 images
< davida> But I am getting memory access violation with this
< davida> I am worried that the flattening of the image is not correctly taken care of in MaxPooling, since there is no input for the width and height
< davida> The 1,080 images of size 64*64*3 are read into a matrix of 12288x1080, with (64x64)(64x64)(64x64) layout.
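Spelling out the shapes under discussion as comments on a sketch of the model (layer parameters taken from the snippets above; the FFN output-layer/initialization choice is just an assumption for illustration):

    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>

    using namespace mlpack::ann;

    FFN<NegativeLogLikelihood<>, RandomInitialization> model;

    // Input: one 64x64x3 image per data point, flattened to a 12288x1 column.
    model.Add<Convolution<> >(3, 8, 4, 4, 1, 1, 2, 2, 64, 64);
    // Output: (64 - 4 + 2*2)/1 + 1 = 65  ->  65x65x8.
    model.Add<MaxPooling<> >(8, 8, 8, 8, true);
    // With floor = true: floor((65 - 8)/8) + 1 = 8  ->  8x8x8.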
< rcurtin> so, the output size I would *expect* for the convolution layer you described is 65x65x8 exactly like you wrote
< davida> ... so then they become 8 layers of (65x65).....(65x65) after the Conv layer, how does MaxPool know that the image input is now 65x65x8?
< rcurtin> however, the size is being calculated by the function ConvOutSize() in convolution.hpp at line 185... and if I am reading it correctly, it will give a size of ... 32x32x8 ??
< rcurtin> let me read the rest of what you wrote, hang on...
< davida> ... because MaxPool should ignore the 65th col and row since they are not complete sets of 8x8 pixels?
< rcurtin> I see that the MaxPooling layer will use the size of the previous layer's output, but I am not too familiar with this part of the code (which is why I am kind of slow to respond)
< davida> Hmmm. Then I am not sure why I get a memory access violation when I run that.
< rcurtin> I'm really kind of hung up on the ConvOutSize() issue. It seems like the output size is being computed incorrectly
< rcurtin> I think that we should open a Github issue for this. To me it seems clear there is a problem of some sort
< rcurtin> would you like to do this, or would you like me to?
< davida> If it was not correct it would explain a lot of issues I am facing right now....
< rcurtin> I don't have the time this afternoon to dig in too deep to this
< rcurtin> but I think that I can find time to address it soon (or maybe someone will beat me to it)
< davida> Do you also suspect a problem with the batch management in the optimizer?
< rcurtin> I think overall the batch management is okay, but I suspect that the convolution layer is not using the memory for its output correctly
< rcurtin> basically each layer will compute some big output matrix, and pass (memory, rows, cols, slices) or something like this to the following layer
< rcurtin> (sometimes the passing of rows/cols/slices is implicit and done through a different mechanism)
< rcurtin> I personally think right now after a quick glance that the convolution layer is *saying* to other layers that rows/cols/slices is one thing, but then it's acting as though those values are *different* inside of the layer
< rcurtin> and this may also be the case for max pooling
< rcurtin> now it would seem odd for you to encounter this, because I know there are tests for this code
< rcurtin> but... it's software. anything can happen...
< ShikharJ> rcurtin: What is the issue with the batch sizes that you are observing?
< ShikharJ> rcurtin: Also, when you eventually started your PhD, did you take time to go back over Linear Algebra and Probability all over again, or did you do something else?
< davida> rcurtin: Setting batch size to 1 does not seem to complete even one iteration.
< ShikharJ> davida: You are correct regarding the output of the MaxPooling layer, the output should be 8x8x8
< davida> ShikharJ: then there is a problem in the code somewhere, as it is throwing an Access Violation error during Train(); most likely some matrix memory reads/writes are not taking care of the sizes correctly.
< davida> It works fine (no exception) if you match the layers perfectly by using a 5x5 filter with pad=2, giving an exact 64x64x8 output.
< davida> rcurtin: I tried: SGD<AdamUpdate> optimizer(0.009, 1, 1080, 1e-05, true, adamUpdate); and it fails to come out of the optimizer loop. Even set maxIterations to 1 and got the same problem.
< ShikharJ> davida: I suspect that would be because of an issue with the Pooling class; unfortunately, I'm not familiar with its workings as of yet. I'll dig deeper and let you know tomorrow?
< davida> rcurtin: I take that back. It stopped the optimizer loop after 5mins.
< ShikharJ> davida: For the pooling layer, my suspicion kind of grows, given that it is not tested, apart from being used in an example convolutional network.
< davida> rcurtin: Just reading the code on the ConvOutSize() at line 181 of convolution.hpp. Looks correct to me unless it is being called incorrectly.
< davida> looks like it is used correctly in the calls from convolution_impl.hpp at lines 123 & 124.
< davida> As for MaxPooling, the code calculating the outputWidth and outputHeight on lines 61 & 62 of max_pooling_impl.hpp is correct. In my case it correctly calculates the output 8x8x8 for an input dimension of 65x65x8.
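That is, the same arithmetic written out; PoolOut is just a hypothetical stand-in for what max_pooling_impl.hpp computes when floor == true:

    #include <cstddef>

    // out = floor((in - kernel) / stride) + 1
    std::size_t PoolOut(std::size_t in, std::size_t kernel, std::size_t stride)
    {
      return (in - kernel) / stride + 1; // integer division == floor
    }
    // PoolOut(65, 8, 8) == 8, so a 65x65x8 input pools down to 8x8x8 as expected.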
< davida> ... so referring to your suggestion to open a GitHub issue I am not sure what to say.
< zoq> davida: A simple example to reproduce the issue and the expected output would be enough here.
< davida> K
< zoq> davida: thanks
< rcurtin> davida: zoq: ShikharJ: it's always possible I got confused and there is no bug in ConvOutSize(), so take everything I have written with a grain of salt of course :)
< rcurtin> ShikharJ: I observed some strange behavior with the runtime and accuracy changing for batch size in this issue: https://github.com/mlpack/mlpack/pull/1554
< rcurtin> however, I am not actually sure that anything is necessarily *wrong* with the code there. It may just be the expected behavior for that dataset/network combination
< zoq> rcurtin: If there is an issue, I'm sure we can figure it out.
< rcurtin> oh! I see. I did read ConvOutSize() wrong
< rcurtin> the code is 'return std::floor(size + p * 2 - k) / s + 1;'
< rcurtin> but I understood this as 'return std::floor(size + p * 2 - k) / (s + 1);'
< rcurtin> which is entirely different
< rcurtin> I must not have gotten enough sleep last night...
< rcurtin> davida: so, my statement that the output of the convolution layer is 32x32x8 is totally wrong. The actual size will be 65x65x8 like you originally said
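Plugging davida's layer values into the two readings of that line makes the difference concrete:

    // size = 64, p = 2, k = 4, s = 1
    const int correctReading = (64 + 2 * 2 - 4) / 1 + 1;   // = 65  ->  65x65x8
    const int misreading     = (64 + 2 * 2 - 4) / (1 + 1); // = 32  ->  32x32x8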
< rcurtin> ShikharJ: I remembered linear algebra well enough to not take a class on it, but I did consult various linear algebra textbooks. Actually, one thing that was really useful to me was the matrix cookbook:
< davida> rcurtin: Thanks for confirming.