verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< rcurtin>
gmanlan: I don't think it is, you could check src/mlpack/core/util/version.hpp, or if it's a git release, it'll be in src/mlpack/core/util/gitversion.cpp (after configuration)
< rcurtin>
er, "git release" -> "git checkout"
< gmanlan>
ok great, will add some magic in appveyor.yml then - we need to pass that to the installer
< gmanlan>
also, I got rid of the .sln, now trying a raw heat/candle/light approach... let's see if it works
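For reference, a raw WiX pipeline along those lines might look roughly like the following; the staging directory, component group, and .wxs files are placeholders rather than the actual installer sources:

    # harvest the staged install tree into a fragment, then compile and link the MSI
    heat dir staging -cg MlpackFiles -dr INSTALLDIR -srd -gg -sfrag -out files.wxs
    candle product.wxs files.wxs
    light -out mlpack.msi product.wixobj files.wixobj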
< rcurtin>
it might be easier to replicate what CMake is already doing, which is running 'git rev-parse HEAD' and seeing if it returns anything; if so, get the version from that, otherwise you could grab it from the Doxyfile easily
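A rough bash sketch of that fallback logic; the Doxyfile location and the PROJECT_NUMBER field it greps are assumptions:

    # use the git revision if this is a git checkout, otherwise fall back to the Doxyfile
    if REV=$(git rev-parse HEAD 2>/dev/null); then
      VERSION="git-${REV:0:8}"
    else
      VERSION=$(grep '^PROJECT_NUMBER' Doxyfile | sed 's/.*= *//')
    fi
    echo "building installer for mlpack ${VERSION}"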
< gmanlan>
(Y)
< rcurtin>
sure, if you have trouble with it, I'm fine with a .sln, I guess it will just need to be occasionally updated
< rcurtin>
ha! MSN messenger user? :)
< gmanlan>
:) oh, now you know how old I am
< rcurtin>
it took me a long time to break my habit and write :+1: on GitHub
< rcurtin>
mxt111: you will need to set LD_LIBRARY_PATH to the directory containing libmlpack.so
< rcurtin>
so, perhaps, 'export LD_LIBRARY_PATH=/usr/local/lib/' (assuming that you have installed libmlpack.so to /usr/local/lib/; if not, it would be the path of your build directory plus lib/, or whatever directory contains libmlpack.so)
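For example (paths are illustrative; point LD_LIBRARY_PATH at whichever directory actually contains libmlpack.so):

    # installed location...
    export LD_LIBRARY_PATH=/usr/local/lib/
    # ...or, if running straight out of the build directory
    export LD_LIBRARY_PATH=/path/to/mlpack/build/lib/
    ./my_program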
< gmanlan>
rcurtin: do you know what happened with the build systems?
< gmanlan>
it's been running/frozen for a while
< rcurtin>
do you mean on ci.mlpack.org?
< rcurtin>
I added a build step today for the git commit build to build the doxygen documentation, I'm still debugging that a bit
< rcurtin>
but that should only affect the git commit test job; the next possibility is that there is some test that is hanging, but if so I'll probably have to debug that tomorrow (it is getting late)
< gmanlan>
ah ok - I'm just checking the continuous integration steps in #1485
< gmanlan>
yeah sure
< rcurtin>
ah, the PRs don't run the git commit test job on ci.mlpack.org, so ignore what I just said :)
< rcurtin>
in this case it looks like #1485 is hung up on the AppVeyor queue
< rcurtin>
let's see if I can cancel some unnecessary builds there
< gmanlan>
ok
< rcurtin>
oh, actually, looks like there is nothing to cancel---it is currently building the master branch, and your build is next:
< rcurtin>
looks like these builds usually take 1h20m - 1h40m, and the current build is 1h in, so I guess it'll run in roughly 20 to 40 minutes for your PR
< Atharva>
zoq: rcurtin: I tried building armadillo from source. It was clearly mentioned that openblas was being used. Still, I don't see any improvement in speed. On monitoring with htop, I only see one core being used at 100% and the 7 other cores being used at about 10% (same is the case without openblas). I am trying it on a convolutional net, so I guess the code shouldn't be the problem.
< zoq>
Atharva: Can you test another method, e.g. kernel PCA?
< Atharva>
zoq: Okay, I will give it a try.
< Atharva>
zoq: Is there some sample code anywhere for the method?
< zoq>
Atharva: I think the test suite will work.
< Atharva>
Okay
< Atharva>
zoq: Well, I noticed something weird: the kernel PCA test suite executed too quickly to see anything. The feedforward network tests seemed to use all the cores at 100% (on htop some of them were red), but the tests took too long to execute.
< Atharva>
I think it could be due to the fact that mlpack was also compiled with OpenMP this time. I am currently compiling mlpack without OpenMP to see what's happening
< zoq>
okay, I'll see if I can reproduce the results you see later today.
< Atharva>
zoq: Sure!
< Atharva>
After building armadillo from source and then building mlpack without OpenMP, I get this:
< zoq>
Atharva: This is strange; if you use the armadillo version from apt, I guess it works?
< zoq>
Atharva: perhaps we could find another way to load the dataset for now?
< zoq>
Atharva: Also, if you install hdf5 via apt and armadillo, mlpack should have hdf5 support
< Atharva>
Yeah, mlpack was working fine before I built armadillo from source. Also, to be clear, it builds fine even now when I don't switch OpenMP off, but then every time I compile a program, it warns me that something will break because I am not linking OpenMP
< Atharva>
zoq: Oh, if I can get hdf5 to work that way, then maybe I will ignore this issue for now and do that.
< zoq>
Atharva: I think I was wrong, you have to build armadillo with hdf5
< zoq>
If it builds fine with OPENMP=ON, why switch it off?
< Atharva>
zoq: Oh, yeah the error did say that it had to be compiled with hdf5
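A rough sequence for that, assuming an apt-based system and that armadillo's CMake picks up HDF5 automatically once the development headers are present:

    # install the HDF5 development package, then rebuild armadillo so its CMake detects it
    sudo apt-get install libhdf5-dev
    cd armadillo-*/
    cmake . && make && sudo make install
    # afterwards, reconfigure and rebuild mlpack against the new armadillo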
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#5407 (master - 6256d43 : Ryan Curtin): The build has errored.
< Atharva>
zoq: because it's causing FeedForwardNetworkTest to take extremely long to run.
< Atharva>
zoq: Although, I am not sure that OpenMP is the reason for that, but I don't have anything else to suspect.
< zoq>
what if you specify OMP_NUM_THREADS=1 -- does this solve the performance issue?
< Atharva>
zoq: FeedForwardNetworkTest is just an example, I don't know what else has changed
< zoq>
I guess, to suppress the warning at the build step we just have to link against -fopenmp
< Atharva>
zoq: that does suppress the warning
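For example, a compile line along these lines (the program name is just a placeholder):

    # link OpenMP explicitly so the OpenMP symbols pulled in via mlpack/OpenBLAS resolve
    g++ -O2 my_program.cpp -o my_program -lmlpack -larmadillo -fopenmp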
< zoq>
if OMP_NUM_THREADS=1 works, perhaps that's the best way to go for now
< Atharva>
Do I specify OMP_NUM_THREADS=1 when executing the test?
< zoq>
OMP_NUM_THREADS=1 bin/mlpack_test -t ... should work
< zoq>
You could also export OMP_NUM_THREADS
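Both forms side by side, using the FeedForwardNetworkTest suite mentioned earlier:

    # one-off: limit OpenMP to a single thread for this run only
    OMP_NUM_THREADS=1 bin/mlpack_test -t FeedForwardNetworkTest
    # or export it for the whole shell session
    export OMP_NUM_THREADS=1
    bin/mlpack_test -t FeedForwardNetworkTest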
< zoq>
So, you just built armadillo with openblas + hdf5, right?
< Atharva>
zoq: I just built the default configuration of armadillo; openblas and hdf5 were turned on in it.
< zoq>
and the mlpack build failed using the manually built version, if you turn OpenMP off?
< Atharva>
Yes
< zoq>
If I remember right, you can build OpenBLAS with OpenMP, so you have to link against OpenMP; that's probably the issue. So, I guess you could build armadillo against BLAS instead of OpenBLAS or use OMP_NUM_THREADS=1, but not sure that solves the performance issue.
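One way to check whether the installed OpenBLAS was itself built with OpenMP (the library path is a guess; adjust for your system):

    # if libgomp (GCC's OpenMP runtime) shows up, OpenBLAS itself was built with OpenMP
    ldd /usr/local/lib/libopenblas.so | grep -i gomp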
< Atharva>
zoq: If this issue continues, I might have to turn OpenBLAS off then and just use BLAS. I was hoping I could get some speedup from OpenBLAS as the celebA dataset is huge. But I guess the priority is to get it to work.
< zoq>
Atharva: it's strange that it's slower with OpenBLAS/OpenMP
< zoq>
Atharva: I guess if you see similar results with OMP_NUM_THREADS=1, you could see if increasing OMP_NUM_THREADS results in a speedup
< Atharva>
zoq: Stranger things is that htop showed that all the cores were getting used close to 100%
< Atharva>
thing*
< zoq>
agreed
< Atharva>
zoq: I tried to see which test was taking so long; it turned out to be ForwardBackwardTest. The rest seemed to take the time they usually take.
< zoq>
Atharva: I see, this one runs the test 100 times if it doesn't converge.
< zoq>
If that's the only test, I would give the celeb dataset a shot.