verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< zoq>
nilay: The problem here is that slices returns a subview, but the layer expects some matrix type (arma::Mat<eT> or arma::Cube<eT>).
< zoq>
nilay: You have to call the backward function of the base layer with some valid input; in this case it's: base1.Backward(base1.OutputParameter(), ..., ...)
< zoq>
nilay: I think you can use the OutputParameter in every case, but using a dummy parameter as you do right now is also a neat idea.
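A minimal sketch of the first option, assuming the ann layer API of the time where Backward() takes (input, gy, g); base1, error, and delta are illustrative names:

    arma::cube error;  // error coming from the layer above
    arma::cube delta;  // receives the error propagated through base1
    // Use the layer's own stored forward output as the required input argument.
    base1.Backward(base1.OutputParameter(), error, delta);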
nilay has joined #mlpack
Ritwik has quit [Quit: Page closed]
< nilay>
zoq: thanks i don't think i could have figured out that error.
Mathnerd314 has quit [Ping timeout: 240 seconds]
mentekid has joined #mlpack
mentekid has quit [Ping timeout: 276 seconds]
nilay has quit [Ping timeout: 250 seconds]
mentekid has joined #mlpack
marcosirc has joined #mlpack
Mathnerd314 has joined #mlpack
mentekid has quit [Ping timeout: 240 seconds]
nilay has joined #mlpack
Mathnerd314 has quit [Ping timeout: 246 seconds]
nilay has quit [Ping timeout: 250 seconds]
nilay has joined #mlpack
< nilay>
zoq: hi, for base_layer, would the backprop error always be a matrix type, whatever the input?
< nilay>
matrix meaning 2d matrix
< zoq>
if the input is a cube the error should also be a cube type
< nilay>
which it is
< nilay>
but what's happening is, it goes to this function (line 76, base_layer.hpp) and segfaults there
< nilay>
when the error is a cube type
< nilay>
i am providing a dummy input since the input is not required
< zoq>
can you update the code? I'm not sure I'm looking at the same line
< zoq>
or are we talking about line 76 in base_layer.hpp?
< nilay>
yes
< nilay>
so should the backward pass go here for cnns?
< zoq>
the input for the base layer is required: ActivationFunction::deriv(input, derivative);
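Roughly what the base layer's backward pass does internally (a simplified sketch, not the exact mlpack code), which is why an empty dummy input breaks it:

    template<typename eT>
    void Backward(const arma::Cube<eT>& input,
                  const arma::Cube<eT>& gy,
                  arma::Cube<eT>& g)
    {
      arma::Cube<eT> derivative;
      // Needs a valid input: the derivative is evaluated at the forward input.
      ActivationFunction::deriv(input, derivative);
      // Element-wise product with the incoming error; the sizes must match,
      // so an empty input/derivative crashes here.
      g = gy % derivative;
    }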
< nilay>
ok
< nilay>
thanks, that's the error then.
< nilay>
also i was thinking of implementing concat_layer. i have one doubt though: i should take as input the number of layers to concatenate and the collection of layers to be concatenated, so what should i use to represent the "collection of layers" input?
< nilay>
could i use a tuple just like in the network?
< zoq>
hm, yeah I guess that's probably the best solution here, good idea
< nilay>
ok
< zoq>
Maybe another solution is to take two conv nets as input ... maybe your idea is cleaner
< nilay>
how would that work?
< nilay>
there could be more than 2 layers being concatenated at a time, like in the inception layer
< zoq>
yeah, in this case you would have to use two concat layers, e.g. concat(concat(A, B), C); as I said, your idea is nice
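A rough sketch of the tuple-based idea (ConcatLayer, its members, and the sub-layer Forward(input, output) signature are assumptions for illustration, not existing mlpack code):

    #include <armadillo>
    #include <tuple>
    #include <utility>

    // Stores an arbitrary number of sub-layers in a std::tuple, mirroring how
    // the network itself stores its layers, and joins their outputs slice-wise.
    template<typename... Layers>
    class ConcatLayer
    {
     public:
      template<typename... Args>
      explicit ConcatLayer(Args&&... args) :
          layers(std::forward<Args>(args)...) { }

      // Forward pass: run every stored layer on the same input and append the
      // results along the slice dimension (C++17 std::apply + fold expression).
      void Forward(const arma::cube& input, arma::cube& output)
      {
        output.reset();
        std::apply([&](auto&... layer) {
          (AppendOutput(layer, input, output), ...);
        }, layers);
      }

     private:
      template<typename LayerType>
      void AppendOutput(LayerType& layer, const arma::cube& input,
                        arma::cube& output)
      {
        arma::cube layerOutput;
        layer.Forward(input, layerOutput);
        if (output.is_empty())
          output = layerOutput;
        else
          output = arma::join_slices(output, layerOutput);
      }

      std::tuple<Layers...> layers;
    };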
< nilay>
ok lets see if I can implement it nicely :)
< zoq>
let me know if you need help
< nilay>
yeah sure
Mathnerd314 has joined #mlpack
nilay has quit [Ping timeout: 250 seconds]
nilay has joined #mlpack
< nilay>
zoq: when performing the gradient update, if i have a pooling layer before a convLayer, would i have to do anything different?
nilay has quit [Ping timeout: 250 seconds]
nilay has joined #mlpack
< zoq>
nilay: You have to set the InputParameter to the OutputParameter() of the layer before the pooling layer.
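For instance (pool1 and prevLayer are illustrative names, and this assumes InputParameter()/OutputParameter() return assignable references):

    // Wire the pooling layer's input to the output of the layer in front of it
    // before running the gradient update.
    pool1.InputParameter() = prevLayer.OutputParameter();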
< nilay>
but the problem comes up for the layer after the pooling layer
< zoq>
Maybe because the error of the pooling layer isn't correct?
< nilay>
what do you mean by correct?
< zoq>
Since the backward pass of the layer after the pooling layer depends on the error of the pooling layer, it could be that the calculated error of the pooling layer is not correct, which in turn depends on the error passed to the pooling layer.
< zoq>
not sure, do you get some error message?
< nilay>
i get a segfault when trying to update the gradient of the layer after the pooling layer
< nilay>
the backward pass works correctly, by correctly i mean it proceeds and the gradient is called after that
< nilay>
also i wanted to ask, why do we use an rvalue reference and std::forward instead of an lvalue reference in the constructor for CNN?
< zoq>
To call the CNN constructor with temporary values. Can you push the current state of your code?
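A minimal, self-contained sketch of the idea (SmallNet is an illustrative stand-in, not the actual CNN class): the constructor's own template parameter makes the argument a forwarding reference, so a temporary tuple of layers is moved into the member instead of copied, while lvalues still work.

    #include <tuple>
    #include <utility>

    template<typename LayerTypes>
    class SmallNet
    {
     public:
      // Forwarding reference: binds to temporaries (moved) and lvalues (copied).
      template<typename Layers>
      explicit SmallNet(Layers&& network) :
          network(std::forward<Layers>(network)) { }

     private:
      LayerTypes network;
    };

    int main()
    {
      // The tuple is a temporary; it is forwarded (moved) straight into the net.
      SmallNet<std::tuple<int, double>> net(std::make_tuple(1, 2.0));
    }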
< zoq>
can you check that convPool.InputParameter() and biasPool.Delta() aren't empty?
< nilay>
ok
< nilay>
convPool.InputParameter() does not have the same spatial dimensions as biasPool.Delta()
< nilay>
after pooling, I pad convPool to make it 28 x 28
< nilay>
should i add padding in the pooling layer also?
< nilay>
then this issue would not come up? right now i am doing pooling and then padding in the following convLayer
< nilay>
would using an lvalue reference in the CNN constructor be a bad idea?
< zoq>
hm, I think what we should do is write a small test which first calls the forward pass with some input data that is way smaller than the input you are using right now, and check if the output looks right. Afterwards, we use some error, call the backward function, and make sure the output looks right.
< zoq>
Once we have checked the forward and backward functions, we test the gradient function. It's hard to track down an error in the gradient function if we can't be sure the other two functions are correct. Do you think that's reasonable?
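Something along these lines, as a rough sketch (the layer object, its Forward/Backward signatures, and the sizes are assumed for illustration):

    // Step 1: forward pass on a tiny input; eyeball the output dimensions/values.
    arma::cube input(4, 4, 1, arma::fill::randu);
    arma::cube output, delta;
    layer.Forward(input, output);

    // Step 2: feed a simple known error through the backward pass and check it.
    arma::cube error(output.n_rows, output.n_cols, output.n_slices,
                     arma::fill::ones);
    layer.Backward(input, error, delta);

    // Step 3: only once Forward() and Backward() look right, test Gradient().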
< nilay>
i think it is due to the incompatible dimensions only. adding padding in the pooling layer should solve the problem; the question is, do we want to add a padding argument in the pooling layer as well
< nilay>
right now convPool is 26 x 26 x 192
< nilay>
convPool.InputParameter(), that is. and biasPool.Delta() is 28 x 28 x 32
< zoq>
I don't think so, we could do any padding in the inception layer, right?
< nilay>
write a separate function to pad?
< nilay>
if I can make convPool.InputParameter 28 x 28 x ... then it could work
< zoq>
yes, if you pad the output of the pooling layer, you have to remove the padding when you call the backward and gradient functions.
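For instance, a pair of hypothetical helpers (the names and the fixed border size are illustrative): pad the pooling layer's output after the forward pass, and strip the same border again before the backward and gradient passes.

    #include <armadillo>

    // Zero-pad every slice of a cube by `border` pixels on each side.
    arma::cube Pad(const arma::cube& in, const size_t border)
    {
      arma::cube out(in.n_rows + 2 * border, in.n_cols + 2 * border,
                     in.n_slices, arma::fill::zeros);
      out.tube(border, border, border + in.n_rows - 1,
               border + in.n_cols - 1) = in;
      return out;
    }

    // Remove the same border again (e.g. before Backward()/Gradient()).
    arma::cube Unpad(const arma::cube& in, const size_t border)
    {
      return in.tube(border, border, in.n_rows - border - 1,
                     in.n_cols - border - 1);
    }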