23:23 UTC

October 2018


ChanServ changed the topic of #mlpack to: "Due to ongoing spam on freenode, we've muted unregistered users. See http://www.mlpack.org/ircspam.txt for more information, or also you could join #mlpack-temp and chat there."

< davida>
mlpack::optimization::SGD<mlpack::optimization::AdamUpdate> optimizer(0.01, 32, 10000, 1e-05, true, mlpack::optimization::AdamUpdate(1e-8, 0.9, 0.999));

< davida>
'Optimize': is not a member of 'mlpack::optimization::GradientClipping<mlpack::optimization::SGD<mlpack::optimization::AdamUpdate,mlpack::optimization::NoDecay>>'

< davida>
zoq: Yes. That works and I now realise that I have to apply clipping to the update policy and not the optimizer.

< davida>
I am building an RNN with my ClippedGradients. Now I am at the point of training my network with X inputs and Y labels configured in arma::mat variables. My compiler is giving me an error stating that in the Train function it cannot convert from arma::mat to arma::cube.

< davida>
Why would my RNN expect cubes? Is that a default template that needs to be overridden somewhere?

< davida>
The full error reads: 'void mlpack::ann::RNN<mlpack::ann::NegativeLogLikelihood<arma::mat,arma::mat>,mlpack::ann::HeInitialization>::Train<mlpack::optimization::SGD<mlpack::optimization::GradientClipping<mlpack::optimization::AdamUpdate>,mlpack::optimization::NoDecay>>(arma::cube,arma::cube,OptimizerType &)': cannot convert argument 1 from 'arma::mat' to 'arma::cube'

< davida>
Hmmm. OK - reading the documentation, it mentions the need to use cubes, with (i, j, k) representing the i'th dimension of the j'th data point at time slice k.

< davida>
I was following the example in the tutorials (http://www.mlpack.org/docs/mlpack-3.0.3/doxygen/anntutorial.html) which uses arma::mat for inputs and labels. Perhaps that page needs updating.

< davida>
I do have a question regarding this cube for inputs. If each slice is a step in time and this can vary by datapoint, then the cube's 'k' dimension needs to be the length of the longest datapoint. With this in mind, how do you instruct the RNN to stop processing those datapoints that are shorter than the longest datapoint?

< davida>
Hi, I am struggling a little with creating an RNN. I have a cube of inputs X(27, 1500, 25) and my labels match as Y(27, 1500, 25). The input is a one-hot vector and the output should be another one-hot vector of the same dimension (prediction). I want to have 50 nodes in my RNN. I have built it like this:

< davida>
mlpack::ann::Add<> add(nbrNodes);
mlpack::ann::Linear<> lookup(27, nbrNodes);
mlpack::ann::TanHLayer<> tanHLayer;
mlpack::ann::Linear<> linear(50, 50);
mlpack::ann::Recurrent<>* recurrent = new mlpack::ann::Recurrent<>(add, lookup, linear, tanHLayer, rho);
mlpack::ann::RNN<mlpack::ann::MeanSquaredError<>, mlpack::ann::HeInitialization> model(rho);
model.Add<mlpack::ann::IdentityLayer<> >();
model.Add(recurrent);
model.Add<m

< davida>
This is basically trying to follow the example provided in the tutorial on the mlpack website with a few modifications. I am not sure at all if what I am doing here is correct since I cannot find many examples of RNNs with MLPACK.

< davida>
Could someone have a look at my model and see if it makes sense? I am getting an error when I run the code that tells me I have a matrix dimension error: "addition: incompatible matrix dimensions: 50x32 and 50x5"

< davida>
Some more info on the above error: it is being thrown in Backward() of the optimizer.

< davida>
I have tracked the error down to the optimizer by implementing a very simple optimizer instead, which works but will not converge; hence the reason I need GradientClipping. The simple optimizer I tried was:

< davida>
The optimizer I need to implement but is failing with the matrix addition error I mentioned before is:

< davida>
mlpack::optimization::GradientClipping<mlpack::optimization::AdamUpdate> clipping(-5, 5, adamUpdate);

< davida>
mlpack::optimization::SGD<mlpack::optimization::GradientClipping<mlpack::optimization::AdamUpdate> > optimizer(0.01, 32, 10000, 1e-05, true, clipping);

< davida>
Some additional input on the above error after more debugging. I removed the GradientClipping and the error is still there, which means it is within the AdamUpdate portion of the optimizer. Here is the simplified optimizer code:

< davida>
mlpack::optimization::SGD<mlpack::optimization::AdamUpdate> optimizer(0.01, 32, 10000, 1e-05, true, adamUpdate);

< zoq>
davida: Unfortunately we had to strip out the dynamic sequence size support for now, but it should be possible to reintegrate the support again. For now, you might like to pad the input/output.

< zoq>
davida: About the model, you might want to take a look at the example here: https://github.com/mlpack/mlpack/blob/master/src/mlpack/tests/recurrent_network_test.cpp