verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
ImQ009 has joined #mlpack
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
< prakhar_code[m]> How do I get started with mlpack if I have a strong background in ML using Python and am well versed with C/C++ too?
< zoq> prakhar_code: Hello, a good start is to get familiar with the codebase, e.g. by going through the tutorials.
< zoq> In case you'd like to contribute, http://www.mlpack.org/involved.html should be helpful.
govg has quit [Ping timeout: 260 seconds]
govg has joined #mlpack
vivekp has quit [Ping timeout: 265 seconds]
vivekp has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#138 (AtrousConv - 777eab6 : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
< ShikharJ> zoq: Hey, I think I found an issue with the naive_convolution.hpp file?
< ShikharJ> output = arma::zeros<arma::Mat<eT> >((input.n_rows - filter.n_rows + 1) / dW, (input.n_cols - filter.n_cols + 1) / dH);
< ShikharJ> Shouldn't this be output = arma::zeros<arma::Mat<eT> >((input.n_rows - filter.n_rows) / dW + 1, (input.n_cols - filter.n_cols) / dH + 1); ?
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#139 (LayerNorm - 2f18c22 : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
< ShikharJ> Considering the input has already been padded?
< ShikharJ> Also, none of the test cases check for stride > 1, which would explain why this bug remained unseen for so long.
< ShikharJ> Except for the Transposed Convolution tests, where the corresponding strides get reduced to 1 in either direction anyway.
< zoq> ShikharJ: you are absolutely right; with padding it's (inputHeight + 2 * padH - kH) / dH + 1, and since we pass the already-padded input it's (inputHeight - kH) / dH + 1
haritha1313 has joined #mlpack
< haritha1313> zoq: Since the coding period is starting tomorrow, I thought of updating you with what I intend to begin with.
< haritha1313> I plan to implement the merge-multiply and embedding layers as the first step, since they are kind of independent and will become a prerequisite when working on ncf.
< haritha1313> I am also doing the necessary refactoring for PR #1355; since it affects the base CF class, I guess it would be better to finish it in parallel with the ANN layers, before starting on the NCF class.
< haritha1313> Hopefully I will push changes for it by tomorrow evening.
< zoq> haritha1313: Hello, sounds like a good plan to me, and would be great if we could work on the refactoring over the week.
< zoq> Also, if we don't get the refactoring done in parallel, we can work on the refactoring after the merge-multiply and embedding layers are implemented.
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
< haritha1313> zoq: Sure, I'll try my best to complete the refactoring too, as soon as possible.
haritha1313 has quit [Quit: Page closed]
< ShikharJ> zoq: How are gradients supposed to be calculated for the case of Dilated Convolutions? For example, let's say I have a 6x6 input matrix and a 3x3 kernel,
< ShikharJ> which I dilate by a factor of 2, so I get an effective kernel size of 5x5 and an output of 2x2 (unit stride).
< ShikharJ> But now, when I try to run the GradientConvolutionRule and convolve my input matrix with the output matrix, I'll get back a 5x5 matrix (for a 3x3 kernel).
< ShikharJ> Dilating the output matrix is probably the wrong way to go, so how can I reduce the output in this case?
ImQ009 has quit [Quit: Leaving]
witness_ has quit [Quit: Connection closed for inactivity]
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#140 (AtrousConv - 6c8a02e : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []
travis-ci has joined #mlpack
< travis-ci> ShikharJ/mlpack#141 (AtrousConv - 88f652a : Shikhar Jaiswal): The build has errored.
travis-ci has left #mlpack []