rcurtin_irc changed the topic of #mlpack to: mlpack: a scalable machine learning library (https://www.mlpack.org/) -- channel logs: https://libera.irclog.whitequark.org/mlpack -- NOTE: messages sent here might not be seen by bridged users on matrix, gitter, or slack
<AnwaarKhalid[m]> Hello! I was hoping to adapt the `concat` layer, but it looks like rcurtin has already made some effort there. In the previous implementation, we first flattened all the axes up to the axis of concatenation, i.e., the output of every layer contained in the concat object would first be reshaped as `out.reshape(out.n_rows / channels, out.n_cols * channels)`, and then we used Armadillo to join the columns. In #2777, you treat all the axes that
<AnwaarKhalid[m]> come before the axis of concatenation as "flattened slices" and all the axes that come after as "flattened rows", and you convert every layer output to a cube like `arma::Cube<typename OutputType::elem_type>(this->layerOutputs[i].memptr(), rows, this->network[i]->OutputDimensions()[axis], slices, false, true)` and then join the columns. There are two things about this that I don't understand:
<AnwaarKhalid[m]> 1. Was there a problem with the previous implementation? Why does converting to a cube make more sense?
<AnwaarKhalid[m]> 2. The way you calculate rows & slices seems wrong: `size_t slices = (axis == 0) ? input.n_cols : std::accumulate(this->outputDimensions.begin(), this->outputDimensions.begin() + axis, 0) + input.n_cols`. Shouldn't we multiply these dimensions instead of adding them?
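For context, a minimal sketch of the cube-view concatenation being described, assuming Armadillo's copying cube constructor and subcube column views; the function name `ConcatTwo`, the `dims1`/`dims2` shape vectors, and the precomputed `slices` argument are illustrative stand-ins, not mlpack's actual interface:

```cpp
#include <armadillo>
#include <vector>

// Sketch only, not the code from #2777. Each layer output is a
// (flattened dims) x (batch) matrix; it is viewed as a cube whose middle
// dimension is the size along the concatenation axis, and the cubes are
// joined column-wise. For simplicity this copies data into the cubes; the
// snippet quoted above instead builds non-copying views with
// (memptr(), rows, dims[axis], slices, false, true).
arma::mat ConcatTwo(const arma::mat& out1, const arma::mat& out2,
                    const std::vector<size_t>& dims1,
                    const std::vector<size_t>& dims2,
                    const size_t axis, const size_t slices)
{
  // The non-axis extents must agree, so both cubes share the same row count.
  const size_t rows = out1.n_elem / (dims1[axis] * slices);

  arma::cube c1(out1.memptr(), rows, dims1[axis], slices);
  arma::cube c2(out2.memptr(), rows, dims2[axis], slices);

  // Join along the middle (column) dimension of the cube views.
  arma::cube joined(rows, dims1[axis] + dims2[axis], slices);
  joined.cols(0, dims1[axis] - 1) = c1;
  joined.cols(dims1[axis], joined.n_cols - 1) = c2;

  // Flatten back to the (flattened dims) x (batch) layout.
  const size_t batch = out1.n_cols;
  return arma::mat(joined.memptr(), joined.n_elem / batch, batch);
}
```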
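And a sketch of the product-based computation that question 2 is suggesting; `NumSlices` and its parameters are hypothetical names, not from the PR:

```cpp
#include <functional>
#include <numeric>
#include <vector>

// Fold the dimensions before `axis` with multiplication (std::multiplies)
// instead of std::accumulate's default addition, starting from the batch
// size. For axis == 0 the range is empty and std::accumulate returns nCols,
// so the ternary in the quoted snippet also becomes unnecessary.
size_t NumSlices(const std::vector<size_t>& outputDimensions,
                 const size_t axis, const size_t nCols)
{
  return std::accumulate(outputDimensions.begin(),
                         outputDimensions.begin() + axis,
                         nCols, std::multiplies<size_t>());
}
```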
_slack_mlpack_U7 has quit [Quit: You have been kicked for being idle]
_slack_mlpack_13 has quit [Quit: You have been kicked for being idle]