verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< jenkins-mlpack2> Project mlpack - git commit test build #19: ABORTED in 2 min 6 sec: http://ci.mlpack.org/job/mlpack%20-%20git%20commit%20test/19/
< rcurtin> ok, removed all the spam from the logs
< rcurtin> hopefully they are gone now
gmanlan has joined #mlpack
< gmanlan> does anybody know if the current mlpack version number is passed to AppVeyor as an env var at build time?
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#5405 (master - 53aff80 : Ryan Curtin): The build has errored.
travis-ci has left #mlpack []
< rcurtin> gmanlan: I don't think it is, you could check src/mlpack/core/util/version.hpp, or if it's a git release, it'll be in src/mlpack/core/util/gitversion.cpp (after configuration)
< rcurtin> er, "git release" -> "git checkout"
< gmanlan> ok great, will add some magic in appveyor.yml then - we need to pass that to the installer
< gmanlan> also, I got rid of the .sln, now trying a raw heat/candle/light approach... let's see if it works
< rcurtin> it might be easier to replicate what CMake is already doing, which is using 'git rev-parse HEAD' and seeing if it returns anything, and if so getting the version from that, otherwise you could grab it from the Doxyfile easily
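A rough shell sketch of that approach (assuming a bash-style build step; the PROJECT_NUMBER field in the Doxyfile and the variable name are illustrative, not taken from the actual appveyor.yml):
    # prefer the git revision if this is a git checkout, otherwise fall back to the Doxyfile
    if git rev-parse HEAD > /dev/null 2>&1; then
      MLPACK_VERSION="git-$(git rev-parse --short HEAD)"
    else
      MLPACK_VERSION="$(grep '^PROJECT_NUMBER' Doxyfile | sed 's/.*= *//')"
    fi
    echo "mlpack version: $MLPACK_VERSION"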
< gmanlan> (Y)
< rcurtin> sure, if you have trouble with it, I'm fine with a .sln, I guess it will just need to be occasionally updated
< rcurtin> ha! MSN messenger user? :)
< gmanlan> :) oh, now you know how old I am
< rcurtin> it took me a long time to break my habit and write :+1: on GitHub
< gmanlan> haha
mxt111 has joined #mlpack
< jenkins-mlpack2> Project docker mlpack weekly build build #4: STILL UNSTABLE in 5 hr 13 min: http://ci.mlpack.org/job/docker%20mlpack%20weekly%20build/4/
< mxt111> Hi everyone, this is my first time using mlpack. I'm trying to compile a C++ code example, but I'm not able to compile it.
< gmanlan> are you using windows?
< mxt111> no, I'm using Ubuntu
< gmanlan> what error do you get?
< mxt111> mlpack_knn: error while loading shared libraries: libmlpack.so.3: cannot open shared object file: No such file or directory
< jenkins-mlpack2> Project mlpack - git commit test build #20: STILL FAILING in 2 hr 0 min: http://ci.mlpack.org/job/mlpack%20-%20git%20commit%20test/20/
< jenkins-mlpack2> Ryan Curtin: Handle paths with spaces correctly.
< gmanlan> make sure you follow this guide in order to build mlpack first: http://www.mlpack.org/docs/mlpack-3.0.3/doxygen/build.html
< mxt111> Ok good. thank you.
witness has joined #mlpack
< rcurtin> mxt111: you will need to set LD_LIBRARY_PATH to the directory containing libmlpack.so
< rcurtin> so, perhaps, 'export LD_LIBRARY_PATH=/usr/local/lib/' (assuming that you have installed libmlpack.so to /usr/local/lib/; if not, it would be the path of your build directory plus lib/, or whatever directory contains libmlpack.so)
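A minimal sketch of those two steps together (assuming libmlpack.so was installed to /usr/local/lib/; adjust the path to wherever your build put it):
    export LD_LIBRARY_PATH=/usr/local/lib/   # or /path/to/mlpack/build/lib/
    mlpack_knn --help                        # should now find libmlpack.so.3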
< gmanlan> rcurtin: do you know what happened with the build systems?
< gmanlan> it's been running/frozen for a while
< rcurtin> do you mean on ci.mlpack.org?
< rcurtin> I added a build step today for the git commit build to build the doxygen documentation, I'm still debugging that a bit
< rcurtin> but that should only affect the git commit test job; the next possibility is that there is some test that is hanging, but if so I'll probably have to debug that tomorrow (it is getting late)
< gmanlan> ah ok - I'm just checking the continuous integration steps in #1485
< gmanlan> yeah sure
< rcurtin> ah, the PRs don't run the git commit test job on ci.mlpack.org, so ignore what I just said :)
< rcurtin> in this case it looks like #1485 is hung up on the AppVeyor queue
< rcurtin> let's see if I can cancel some unnecessary builds there
< gmanlan> ok
< rcurtin> oh, actually, looks like there is nothing to cancel---it is currently building the master branch, and your build is next:
< gmanlan> ah great
< rcurtin> looks like these builds usually take 1h20m - 1h40m, and the current build is 1h in, so I guess it'll run in roughly 20 to 40 minutes for your PR
< gmanlan> awesome, thanks!
< gmanlan> have a great night
< rcurtin> you too, talk to you later :)
< gmanlan> (Y) :+1:
< rcurtin> haha :)
Samir_ has joined #mlpack
gmanlan has quit [Quit: Page closed]
Samir has quit [Ping timeout: 260 seconds]
< jenkins-mlpack2> Project mlpack - git commit test build #21: STILL FAILING in 2 hr 0 min: http://ci.mlpack.org/job/mlpack%20-%20git%20commit%20test/21/
mxt111 has quit [Ping timeout: 252 seconds]
tallguy23 has joined #mlpack
tallguy23 has quit [Remote host closed the connection]
ululate has joined #mlpack
ululate has quit [Killed (Sigyn (Spam is off topic on freenode.))]
Faylite23 has joined #mlpack
Faylite23 has quit [Remote host closed the connection]
wenhao has joined #mlpack
wenhao has quit [Ping timeout: 252 seconds]
sujeet20 has joined #mlpack
sujeet20 has quit [Killed (Unit193 (Spam is not permitted on freenode.))]
witness has quit [Quit: Connection closed for inactivity]
< Atharva> zoq: rcurtin: I tried building armadillo from source. It was clearly mentioned that OpenBLAS was being used. Still, I don't see any improvement in speed. On monitoring with htop, I only see one core being used at 100% and the 7 other cores at about 10% (same is the case without OpenBLAS). I am trying it on a convolutional net, so I guess the code shouldn't be the problem.
< zoq> Atharva: Can you test another method e.g. kernel PCA?
< Atharva> zoq: Okay, I will give it a try.
vivekp has joined #mlpack
< Atharva> zoq: Is there some sample code anywhere for the method?
< zoq> Atharva: I think the test suite will work.
< Atharva> Okay
vivekp has quit [Ping timeout: 260 seconds]
< Atharva> zoq: Well, I noticed something weird: the kernel PCA test suite executed too quickly to see anything. The feedforward network tests seemed to use all the cores at 100% (on htop some of them were red), but the tests took too long to execute.
< Atharva> I think it could be because mlpack was also compiled with OpenMP this time. I am currently compiling mlpack without OpenMP to see what's happening
< zoq> okay, I'll see if I can reproduce the results you see later today.
< Atharva> zoq: Sure!
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 256 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 265 seconds]
vpal has joined #mlpack
vpal is now known as vivekp
< Atharva> After building armadillo from source and then building mlpack without OpenMP, I get this:
< Atharva> However, this doesn't happen when I keep OpenMP on
< Atharva> I am guessing, since DSO means dynamic shared object, that it has something to do with the shared libraries option
< zoq> With a clean build? So 'make clean' before make/cmake or removed the build folder?
< zoq> it should link against pthread
< Atharva> zoq: Oh, okay, I will do a clean build
mxt111 has joined #mlpack
< rcurtin> ok, I see that LMNNLowRankAccuracyBBSGDTest is often hanging, I am investigating it now and should have a fix soon
< zoq> rcurtin: great
ImQ009 has joined #mlpack
vpal has joined #mlpack
vivekp has quit [Ping timeout: 264 seconds]
vpal is now known as vivekp
< Atharva> zoq: I did a clean build, still it throws that error
< zoq> Atharva: can you rebuild mlpack with: -DBUILD_PYTHON_BINDINGS=OFF
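That rebuild would look roughly like this, from an empty build directory (the USE_OPENMP flag mirrors the no-OpenMP configuration being discussed; omit it if it doesn't apply to your setup):
    cmake -DBUILD_PYTHON_BINDINGS=OFF -DUSE_OPENMP=OFF ../
    make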
< Atharva> zoq: I will give it a try, let's see what happens.
< Atharva> zoq: I got the same error :(
< Atharva> Ahh this is frustrating, I can't even get mlpack running now.
< Atharva> I could just use the armadillo from apt instead of the one built from source, but I need hdf5 support.
< jenkins-mlpack2> Project mlpack - git commit test build #22: STILL FAILING in 1 hr 2 min: http://ci.mlpack.org/job/mlpack%20-%20git%20commit%20test/22/
< zoq> Atharva: This is strange, if you use the armadillo version from apt I guess, it works?
< zoq> Atharva: perhaps we could find another way to load the dataset for now?
< zoq> Atharva: Also, if you install hdf5 via apt and armadillo mlpack should have hdf5 support
< Atharva> Yeah, mlpack was working fine before I built armadillo from source. Also, to be clear, it builds fine even now when I don't switch OpenMP off, but then every time I compile a program it warns me that something will break because I am not linking OpenMP
< Atharva> zoq: Oh, if I can get hdf5 to work that way, then maybe I will ignore this issue for now and do that.
< zoq> Atharva: I think I was wrong, you have to build armadillo with hdf5
< zoq> If it builds fine with OPENMP=ON, why switch it off?
< Atharva> zoq: Oh, yeah the error did say that it had to be compiled with hdf5
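A sketch of that armadillo rebuild (assuming Ubuntu package names, that armadillo's CMake picks up HDF5 automatically once the headers are installed, and an illustrative source directory name):
    sudo apt-get install libhdf5-dev libopenblas-dev
    cd armadillo-9.x.y/
    cmake .
    make
    sudo make install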
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#5407 (master - 6256d43 : Ryan Curtin): The build has errored.
travis-ci has left #mlpack []
< Atharva> zoq: because it's causing FeedForwardNetworkTest to take extremely long to run.
< Atharva> zoq: Although I am not sure that OpenMP is the reason for that, I don't have anything else to suspect.
< zoq> what if you specify OMP_NUM_THREADS=1, does this solve the performance issue?
< Atharva> zoq: FeedForwardNetworkTest is just an example, I don't know what else has changed
< zoq> I guess, to suppress the warning at the build step we just have to link against -fopenmp
< Atharva> zoq: that does suppress the warning
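For reference, a hypothetical compile line for a standalone program against this setup (the file name and library list are only an example):
    g++ my_program.cpp -o my_program -fopenmp -lmlpack -larmadillo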
< zoq> if OMP_NUM_THREADS=1 works, perhaps that's the best way to go for now
< Atharva> Do I specify OMP_NUM_THREADS=1 when executing the test?
< zoq> OMP_NUM_THREADS=1 bin/mlpack_test -t ... should work
< zoq> You could also export OMP_NUM_THREADS
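So either form works (sketch; the test name is just the one discussed above):
    OMP_NUM_THREADS=1 bin/mlpack_test -t FeedForwardNetworkTest
    # or, for the whole shell session:
    export OMP_NUM_THREADS=1
    bin/mlpack_test -t FeedForwardNetworkTest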
< zoq> So, you just build armadillo with openblas + hdf5 right?
< Atharva> zoq: I just built the default configuration of armadillo; OpenBLAS and hdf5 were turned on in it.
< zoq> and the mlpack build failed using the manually built version, if you turn OpenMP off?
< Atharva> Yes
< zoq> If I remember right, you can build OpenBLAS with OpenMP, so you have to link against OpenMP; that's probably the issue. So, I guess you could build armadillo against BLAS instead of OpenBLAS or use OMP_NUM_THREADS=1, but not sure that solves the performance issue.
< Atharva> zoq: If this issue continues, I might have to turn openBLAS off then and just use BLAS. I was hoping I could get some speedup from openBLAS as the celebA dataset is huge. But I guess priority is to get it to work.
< zoq> Atharva: it's strange that it's slower with OpenBLAS/OpenMP
< zoq> Atharva: I guess if you see similar results with OMP_NUM_THREADS=1, you could see if increasing OMP_NUM_THREADS does result in a speedup
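A quick, unscientific way to sweep thread counts (assuming a bash shell and the test binary discussed above):
    for n in 1 2 4 8; do
      echo "=== OMP_NUM_THREADS=$n ==="
      time OMP_NUM_THREADS=$n bin/mlpack_test -t FeedForwardNetworkTest
    done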
< Atharva> zoq: Stranger things is that htop showed that all the cores were getting used close to 100%
< Atharva> thing*
< zoq> agreed
< Atharva> zoq: I tried to see which test was taking so long; it turned out to be ForwardBackwardTest. The rest seemed to take the time they usually take.
< zoq> Atharva: I see, this one runs the test 100 times if it doesn't converge.
< zoq> If that's the only test, I would give the celeb dataset a shot.
< Atharva> OMP_NUM_THREADS=1 bin/mlpack_test -t FeedForwardNetworkTest
< Atharva> this worked fast
< Atharva> so, I guess it's openblas/openmp that's causing the problem
< zoq> what about OMP_NUM_THREADS=4 bin/mlpack_test -t FeedForwardNetworkTest ?
< Atharva> okayy, till 4 it runs fine, from 5 it really starts slowing down
< Atharva> 6 is super slow, so is 7
< Atharva> How do I link openmp in CMakeLists.txt for the models repo?
< Atharva> zoq: Thanks!
< Atharva> zoq: Let's see how the cores affect the cnn network
< zoq> good idea
< jenkins-mlpack2> Project mlpack - git commit test build #23: STILL FAILING in 1 hr 1 min: http://ci.mlpack.org/job/mlpack%20-%20git%20commit%20test/23/
ImQ009_ has joined #mlpack
ImQ009 has quit [Ping timeout: 248 seconds]
mxt111 has quit [Ping timeout: 252 seconds]
ImQ009_ has quit [Read error: Connection reset by peer]
< jenkins-mlpack2> Project mlpack - git commit test build #24: NOW UNSTABLE in 1 hr 7 min: http://ci.mlpack.org/job/mlpack%20-%20git%20commit%20test/24/
< jenkins-mlpack2> noreply: Merge pull request #1430 from xtelinco/master
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#5408 (master - 45e6673 : Ryan Curtin): The build has errored.
travis-ci has left #mlpack []
berndj22 has joined #mlpack
berndj22 has quit [Killed (Sigyn (Spam is off topic on freenode.))]