ChanServ changed the topic of #mlpack to: Due to ongoing spam on freenode, we've muted unregistered users. See http://www.mlpack.org/ircspam.txt for more information, or you can also join #mlpack-temp and chat there.
cjlcarvalho has joined #mlpack
< davida> zoq: Done. Uploaded to the data directory on my GitHub. Note that the trainSetX.bin is in a compressed file as it was too large for a GitHub upload.
< davida> zoq: Note that the data is saved raw, i.e. pre-normalisation.
< zoq> davida: Thanks!
davida has quit [Ping timeout: 244 seconds]
davida has joined #mlpack
davida has quit [Ping timeout: 252 seconds]
davida has joined #mlpack
dhull has joined #mlpack
dhull_ has joined #mlpack
dhull_ has quit [Client Quit]
cjlcarvalho has quit [Read error: No route to host]
cjlcarvalho has joined #mlpack
dhull has quit [Ping timeout: 256 seconds]
pd09041999 has joined #mlpack
cjlcarvalho has quit [Ping timeout: 240 seconds]
dhull has joined #mlpack
dhull_ has joined #mlpack
dhull has quit [Quit: Page closed]
cjlcarvalho has joined #mlpack
< zoq> davida: Looks like there is some issue with how the layer handles a batch; if I use a batch size of 1 I get https://gist.github.com/zoq/fbca2d7f51e7745b76164d7294e1f475
< zoq> davida: which I think is close to the tf result.
cjlcarvalho has quit [Ping timeout: 268 seconds]
davida has joined #mlpack
dhull_ has quit [Ping timeout: 268 seconds]
< davida> zoq: I got disconnected for a while so just catching up with your message in the mlpack irc log. Can you tell me how you set the MaxIterations when you set the BatchSize = 1 ?
< davida> Could you also share any other changes you might have made?
dhull_ has joined #mlpack
< davida> zoq: The reason I ask is that setting the batch size to 1 and leaving MaxIter=10000 seems to take forever to run on my PC. I have had it running for about 20 minutes and have not even had 5 epochs complete.
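As an aside on why batch size 1 is so slow here: if I read the mlpack SGD optimizer correctly, maxIterations is counted in individual data points visited, not in epochs or minibatches, so shrinking the batch size multiplies the number of parameter updates (and their per-update overhead) without visiting any more data. A rough back-of-the-envelope sketch using the numbers from the discussion:

    #include <cstddef>

    // Illustrative arithmetic only; assumes maxIterations counts data points, not epochs.
    constexpr std::size_t maxIterations    = 10000;
    constexpr std::size_t updatesAtBatch64 = maxIterations / 64; // ~156 gradient updates per Train() call
    constexpr std::size_t updatesAtBatch1  = maxIterations / 1;  // 10000 gradient updates per Train() call
    // Same number of points visited either way, but 64x more updates (shuffling, gradient
    // evaluations, BLAS calls on tiny matrices) when the batch size is 1.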
< davida> zoq: First 5 Epochs result for me with BatchSize=1 and MaxItr=1080:
< davida> Epoch: 0 Training Accuracy = 16.6667% Test Accuracy = 16.6667%
< davida> Epoch: 5 Training Accuracy = 16.6667% Test Accuracy = 16.6667%
< davida> compared to your result:
< davida> Epoch: 0 Training Accuracy = 16.6667% Test Accuracy = 16.6667%
< davida> Epoch: 5 Training Accuracy = 23.9815% Test Accuracy = 24.1667%
< davida> ... so either we have different settings or I have an issue with my installation of MLPACK.
< davida> zoq: Epoch: 10 Training Accuracy = 16.6667% Test Accuracy = 16.6667% vs Epoch: 10 Training Accuracy = 47.3148% Test Accuracy = 40%
< zoq> davida: Haven't really changed anything, still use LeakyRELU: https://gist.github.com/zoq/5f11b7c6a4942523c2d4e67556c9ab17
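For readers without access to the gist, here is a minimal sketch of what a small convolutional FFN with a LeakyReLU layer looks like in the mlpack 3.x ANN API. The layer sizes below are made up for illustration and are not the ones from the gist:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>
    #include <mlpack/methods/ann/ffn.hpp>

    using namespace mlpack::ann;

    void BuildModel()
    {
      // NegativeLogLikelihood loss with the default RandomInitialization rule.
      FFN<NegativeLogLikelihood<> > model;
      model.Add<Convolution<> >(3, 8, 5, 5, 1, 1, 0, 0, 64, 64); // 3 input maps, 8 output maps, 5x5 kernel, 64x64 input
      model.Add<LeakyReLU<> >();                                 // LeakyReLU activation (alpha = 0.03 by default)
      model.Add<MaxPooling<> >(2, 2, 2, 2);                      // 2x2 pooling, stride 2
      model.Add<Linear<> >(8 * 30 * 30, 6);                      // flattened conv output -> 6 classes
      model.Add<LogSoftMax<> >();
    }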
< zoq> davida: Did you build mlpack with -DDEBUG=ON or -DRELEASE=ON (default), RELEASE should be faster.
< davida> zoq: In the code I uploaded the optimizer was set to:
< davida> SGD<AdamUpdate> optimizer(0.009, 64, 10000, 1e-05, true, adamUpdate);
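For reference, here is the same call annotated argument by argument, going by the mlpack SGD constructor signature (stepSize, batchSize, maxIterations, tolerance, shuffle, updatePolicy); the AdamUpdate construction is included only to make the snippet self-contained and uses its default hyperparameters:

    #include <mlpack/core/optimizers/sgd/sgd.hpp>
    #include <mlpack/core/optimizers/adam/adam_update.hpp>

    using namespace mlpack::optimization;

    AdamUpdate adamUpdate(1e-8, 0.9, 0.999); // epsilon, beta1, beta2 (defaults; illustrative)
    SGD<AdamUpdate> optimizer(
        0.009,       // step size (learning rate)
        64,          // batch size -- the value later changed to 1
        10000,       // maximum number of iterations per Train() call
        1e-05,       // tolerance for terminating early
        true,        // shuffle the data each pass
        adamUpdate); // Adam update policy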
pd09041999 has quit [Ping timeout: 245 seconds]
< davida> ... so all you did was change the 64 -> 1 ?
< zoq> I think so, I used Week1Main.cpp and not the latest ConvolutionModelApplication.cpp.
< davida> I made changes to that since I uploaded it. Could you please paste the optimizer line for me?
< davida> OK - figured out how to get the deleted file back from GitHub. I see that was set to BatchSize=1 and MaxIters=1080. I tried that on my computer and it failed to get better than 16% for 100 epochs. It looks like I may have a problem
< davida> in my libraries.
< davida> I am first trying with CUDA NVBLAS turned off to see if that is affecting it at all.
< zoq> davida: You can find the file I used here: https://gist.github.com/zoq/5f11b7c6a4942523c2d4e67556c9ab17
< davida> That one has the BatchSize=64
< davida> and MaxIter=1000
< davida> zoq: so did you change that 64 to 1 ?
< zoq> strange, let me rerun the example
pd09041999 has joined #mlpack
pd09041999 has quit [Max SendQ exceeded]
pd09041999 has joined #mlpack
pd09041999 has quit [Max SendQ exceeded]
dhull_ has quit [Read error: Connection reset by peer]
< davida> zoq: ok - I disabled CUDA NVBLAS and it didn't change anything on my end in terms of results, although it did slow it down a lot.
< zoq> davida: Okay, it turns out I haven't changed the batch size, strange. Did you build mlpack with -DDEBUG=ON?
< davida> I think I did.
< davida> zoq: yes - I am using debug versions of both Armadillo and MLPACK
< davida> zoq: will that cause me a problem? I can switch to the 'release' versions of the libraries
< davida> zoq: .... but not sure why that would impact anything other than slowing it down.
< zoq> It shouldn't change anything, right? I will rerun the example on another system as well.
< davida> zoq: I realised I did not actually compile the RELEASE version of MLPACK. However, as I am compiling a release version I am getting a few errors. The first is "Unknown binding type". It is coming from mlpack_main.hpp, which it seems should only be included in command-line versions.
< davida> zoq: I am also getting a lot of DLLIMPORT errors.
< davida> zoq: Is there a compile switch for BINDING_TYPE that needs to be set for building the libraries?
< zoq> davida: If you don't need the Python bindings, I would just set -DBUILD_PYTHON_BINDINGS=OFF
< zoq> davida: Also, by latest version do you mean the master branch?
< davida> I built 3.0.3 a while ago. That is what I have on my computer
< zoq> davida: Do you mind testing against the master branch?
< davida> zoq: where do I pull the master?
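For completeness, the master branch lives in the main GitHub repository, so pulling it looks something like the following (standard git workflow; the build directory name mirrors the one already used in this log):

    git clone https://github.com/mlpack/mlpack.git
    cd mlpack && mkdir build && cd build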
< davida> OK - I will rebuild everything from there and try again.
< zoq> okay, great
< davida> will let you know how it goes with that
< davida> zoq: getting a CMAKE error on MLPACK configuration. ERROR: Could NOT find PythonInterp (missing: PYTHON_EXECUTABLE)
< davida> zoq: but the logfile shows no errors
< davida> but configure output shows: -- Configuring incomplete, errors occurred!
< davida> See also "D:/sdk/mlpack/mlpack/build/CMakeFiles/CMakeOutput.log".
< davida> strange.
< zoq> davida: Did you build with -DBUILD_PYTHON_BINDINGS=OFF?
< davida> I will try to build anyway.
< davida> I am running CMAKE right now.
< davida> zoq: In the cmake output it says this:
< davida> CMake Warning at CMakeLists.txt:31 (message):
< davida> By default Python bindings are not compiled for Windows because they are
< davida> not known to work. Set BUILD_PYTHON_BINDINGS to ON if you want them built.
< davida> zoq: I think this means they are off
< rcurtin> davida: if Python bindings are off, there shouldn't be any error "Could NOT find PythonInterp"---that part of the code shouldn't even be getting called
< zoq> maybe we missed something
< rcurtin> davida: you might need to remove CMakeCache.txt or something like this and try reconfiguring
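If it helps, clearing the cached configuration just means deleting CMakeCache.txt from the build directory and rerunning the cmake command; on the Windows command prompt used here that would be something like:

    del CMakeCache.txt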
< rcurtin> I will say, I am a little confused to see that error; I can't reproduce anything like it on Linux
< davida> zoq: rcurtin: I added the -DBUILD_PYTHON_BINDINGS=OFF to the cmake line and it worked this time.
< davida> "D:\Program Files\CMake\bin\cmake" -G "Visual Studio 15 2017 Win64" -DBLAS_LIBRARY:FILEPATH="d:/sdk/mlpack/mlpack/packages/OpenBLAS.0.2.14.1/lib/native/lib/x64/libopenblas.dll.a" -DLAPACK_LIBRARY:FILEPATH="d:/sdk/mlpack/mlpack/packages/OpenBLAS.0.2.14.1/lib/native/lib/x64/libopenblas.dll.a" -DARMADILLO_INCLUDE_DIR="d:/sdk/mlpack/armadillo-9.200.4/include" -DARMADILLO_LIBRARY:FILEPATH="d:/sdk/mlpack/armadillo-9.200.4/build/Release/armadillo.lib"
< davida> -DBOOST_INCLUDEDIR:PATH="d:/sdk/boost/boost_1_68_0/" -DBOOST_LIBRARYDIR:PATH="d:/sdk/boost/boost_1_68_0/lib64-msvc-14.1" -DDEBUG=OFF -DPROFILE=OFF -DBUILD_PYTHON_BINDINGS=OFF ..
< davida> That particular option was not specified on the Windows build page
< rcurtin> I see---I think I might know what this is about. hang on, let me test something
< zoq> hm, I think there is some option I missed, so that the Travis build is triggered
< rcurtin> hehe, I think maybe I see it
< rcurtin> .travis.yaml -> .travis.yml :)
< rcurtin> seems like it is building now
travis-ci has joined #mlpack
< travis-ci> mlpack/ensmallen#1 (master - 3dadea5 : Ryan Curtin): The build has errored.
travis-ci has left #mlpack []
travis-ci has joined #mlpack
< travis-ci> mlpack/ensmallen#2 (master - c934628 : Ryan Curtin): The build has errored.
travis-ci has left #mlpack []
davida has quit [Ping timeout: 240 seconds]
davida has joined #mlpack
< zoq> ahh, I see
travis-ci has joined #mlpack
< travis-ci> mlpack/ensmallen#3 (master - b9500e9 : Ryan Curtin): The build has errored.
travis-ci has left #mlpack []
< davida> zoq: rcurtin: I successfully built both Release and Debug versions of MLPACK with the master pull. Now trying my code again.
< zoq> davida: Hopefully it works out.
< davida> zoq: unfortunately not so far. I am still at 16.67% after 10 Epochs where you were already at 47%.
travis-ci has joined #mlpack
< travis-ci> mlpack/ensmallen#4 (master - 0beff34 : Marcus Edel): The build has errored.
travis-ci has left #mlpack []
< davida> zoq: Epoch: 30 Training Accuracy = 16.6667% Test Accuracy = 16.6667% :(
< zoq> davida: Okay, I think it's safe to say that this isn't the effect we aimed for.
< davida> zoq: it seems that something might be broken on my build
< zoq> I wonder if this is some sort of Windows-related issue; I'll see if I can test this out on a Windows system.
< davida> I could also try to build this in the Ubuntu window on my platform to see if I still have the problem there.
< zoq> that might work as well
< davida> zoq: for Ubuntu, should I use "$ sudo apt-get install libmlpack-dev"?
< zoq> davida: Actually, I would build the master branch as well.
< davida> ok
travis-ci has joined #mlpack
< travis-ci> mlpack/ensmallen#5 (master - fa250a1 : Marcus Edel): The build passed.
travis-ci has left #mlpack []
< rcurtin> zoq: thanks, I guess it does need to be on one line :)