verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< zoq> Yeah, I think you could also run tests on the pendigits dataset, just wanted to point out running benchmarks against the thyroid dataset is probably not the best idea.
< kris1> Okay sure
< kris1> zoq: Just a little confused by this https://gist.github.com/kris-singh/b130d2073a0ea96a6a5b1100bd316492
jenkins-mlpack has quit [Ping timeout: 255 seconds]
jenkins-mlpack has joined #mlpack
< zoq> kris1: Not sure I get the confusion :)
< kris1> Well the accuracy for the thyroid dataset even with a single feature is 93%, which is surely wrong, but I did not find the reason for it
< zoq> kris1: As I said, the thyroid dataset is unbalanced; 93% of it is class 1, so a classifier that always predicts class 1 already gets 93% accuracy.
< kris1> ohhhh i see
< kris1> you were talking about the pendigit dataset i did not find it in the data folder
< kris1> is it named something else
< zoq> Basically all datasets in tests/ are small, to keep the test time low.
< sumedhghaisas> zoq: Hey Marcus...
< sumedhghaisas> thats weird...
< sumedhghaisas> for me the tests are running fine...
< sumedhghaisas> just failing with BOOST error
kris1 has left #mlpack []
sumedhghaisas has quit [Ping timeout: 260 seconds]
sumedhghaisas has joined #mlpack
mikeling has joined #mlpack
chenzhe has joined #mlpack
chenzhe has quit [Ping timeout: 255 seconds]
sumedhghaisas has quit [Ping timeout: 260 seconds]
kris1 has joined #mlpack
kris1 has quit [Quit: kris1]
chenzhe has joined #mlpack
chenzhe has quit [Ping timeout: 240 seconds]
mentekid has joined #mlpack
mentekid has quit [Quit: Leaving.]
kris1_ has joined #mlpack
shikhar has joined #mlpack
kris1_ has quit [Quit: kris1_]
kris1_ has joined #mlpack
kris1_ has quit [Client Quit]
kris1_ has joined #mlpack
kris1_ has quit [Quit: kris1_]
shikhar_ has joined #mlpack
shikhar has quit [Ping timeout: 240 seconds]
kris1_ has joined #mlpack
< wiking> zoq, ping
< zoq> wiking: Hello, still running; looks like a bunch of methods timed out, we have to take a look at the issue: http://masterblaster.mlpack.org/job/benchmark%20-%20shogun/47/consoleFull
< wiking> hahah
< wiking> the timeout we have actually seen
< wiking> or kind of have an idea about
< wiking> why it could happen
< zoq> If you could provide some insights, great :)
< wiking> zoq, ok so yesterday a student of ours realised
< wiking> that if he is testing with your framework our LDA
< wiking> that uses Eigen
< wiking> that uses OpenMP
< wiking> there's a problem.... i.e. the process hangs
< wiking> because you do a fork in the python script
< wiking> and posix fork causes some troubles with openmp
< wiking> coz it has some state info :)
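For context, the hazard wiking describes is general: once the OpenMP runtime has been initialized in a process, fork() leaves the child with only the forking thread but with the parent's OpenMP state, and entering another parallel region there can deadlock with some OpenMP implementations. A minimal, self-contained C++ sketch of that pattern, not specific to shogun or the benchmark scripts:

    #include <unistd.h>
    #include <sys/wait.h>
    #include <omp.h>
    #include <cstdio>

    int main()
    {
      // Warm up the OpenMP runtime in the parent process.
      #pragma omp parallel
      { }

      const pid_t pid = fork();
      if (pid == 0)
      {
        // Child: the parent's OpenMP worker threads do not exist here, but
        // the runtime still holds their state; this region may hang.
        #pragma omp parallel
        { std::printf("child thread %d\n", omp_get_thread_num()); }
        _exit(0);
      }

      waitpid(pid, nullptr, 0);
      return 0;
    }

Running each benchmarked method in a separate process, as suggested below, avoids this as long as the new process starts with a clean runtime (e.g. fork followed by exec, or a freshly spawned interpreter).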
< zoq> hm, okay, but it should timeout after 9000 seconds and continue with the next method. So is there already a fix in place? I guess I could set OMP_NUM_THREADS=1 for LDA as a workaround?
< wiking> zoq, basically "<micmn> all shogun's methods should be run in a different process"
< wiking> zoq, do you guys have the resources for this to be ported?
< wiking> zoq, or do you need help from us?
< zoq> wiking: I can probably do this in the next days, but any help is much appreciated :)
< wiking> zoq, mmm i can ask a student of ours to help but only next week
< wiking> anyhow let's see how we proceed and then next week we'll try
< wiking> i reckon you better kill this one
< wiking> as it's gonna mainly timeout
< wiking> due to this bug
< zoq> wiking: okay sounds good, thanks for the information. Also maybe micmn can just open a PR for the LDA method; I'd like to give him credit for the fix, and if anyone has time we can point them to micmn's fix.
< zoq> sumedhghais: I guess you tested with Release=ON which is the default setting for the master branch, can you test with DEBUG=ON?
vivekp has quit [Ping timeout: 246 seconds]
vivekp has joined #mlpack
sgupta has joined #mlpack
sumedhghaisas has joined #mlpack
< rcurtin> sgupta: I am not sure what happened, but everything in /home/jenkins/src/armadillo is now owned by root and not world readable
< rcurtin> this type of thing happens when you're not careful about docker permissions
< rcurtin> and in this case it causes every matrix build to break
< sgupta> rcurtin: I once just copied the tarballs there and did nothing else.
< rcurtin> I am certain that no system process caused this to happen, please be careful in the future
< rcurtin> if you mount something inside of a container, then operate on that mount, the permissions may be modified in bad ways if one doesn't take precautions
< sgupta> I'll double check things from now onwards if something went wrong because of this. Sorry for the trouble.
< rcurtin> sure, it's not a huge problem, not too hard to fix (I am fixing it now)
< rcurtin> but the key to remember is that when you operate inside of a Docker container, if you are root and you are modifying or copying files, then root will own the new files
vivekp has quit [Read error: Connection reset by peer]
< sgupta> rcurtin: just to make sure this never happens again: I'm copying files from /home/sgupta, i.e. my user, so how did the permissions of the Jenkins user's files get changed?
vivekp has joined #mlpack
< sgupta> Is it because of registry mapping and we tried to pull the image on dealgood?
< rcurtin> sgupta: I don't know exactly what caused it, and I doubt that it has to do with the registry
< rcurtin> I suspect that /home/jenkins/src/armadillo/ was mounted in a docker container and then some command that changed all the ownership and permissions was run
< sgupta> rcurtin: okay. I'll ask about the steps before doing anything related to this.
kris1_ has quit [Quit: kris1_]
shikhar_ has quit [Ping timeout: 260 seconds]
sumedhghaisas has quit [Quit: Ex-Chat]
sumedhghaisas_ has joined #mlpack
shikhar_ has joined #mlpack
< zoq> Looking at L_BFGS: if I do 'Log::Debug << function.Evaluate(iterate);' with DEBUG=OFF, it's going to be evaluated anyway. Thinking about an elegant way to abstract that away.
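A minimal sketch of one way to abstract that away, assuming a preprocessor guard; the macro name here is made up and this is not mlpack's actual solution:

    // Only evaluate the (possibly expensive) expression when debug support
    // is compiled in; with DEBUG=OFF the argument is never executed.
    #ifdef DEBUG
      #define DEBUG_ONLY_LOG(expr) (Log::Debug << (expr) << std::endl)
    #else
      #define DEBUG_ONLY_LOG(expr) ((void) 0)
    #endif

    // Inside L_BFGS: the Evaluate() call disappears entirely in release builds.
    DEBUG_ONLY_LOG(function.Evaluate(iterate));

An alternative is to pass a lambda and only invoke it when debug logging is enabled, which avoids macros at the cost of a slightly noisier call site.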
< zoq> sumedhghaisas_: Did you see my message?
< sumedhghaisas_> zoq: Hey Marcus... sorry I didn't get any. can you please copy-paste them again?
< sumedhghaisas_> on IRC? can't find it on the IRC log...
< zoq> sumedhghais: I guess you tested with Release=ON which is the default setting for the master branch, can you test with DEBUG=ON? You can always check the logs: http://www.mlpack.org/irc/
< sumedhghaisas_> ahh okay. I will test that...
shikhar_ has quit [Ping timeout: 240 seconds]
kris1_ has joined #mlpack
< sumedhghaisas_> zoq: Hy marcus...
< sumedhghaisas_> I got the error...
< sumedhghaisas_> although I am not sure whether it's in my code or not yet
< sumedhghaisas_> so for me the as_scalar function is failing
< sumedhghaisas_> I checked and found out that you have used data::binarize for converting the softmax output... it should be max right?
< sumedhghaisas_> I found 1 part where all the outputs are less than 0.5
< sumedhghaisas_> I am still not getting that out of bounds error
< zoq> sumedhghais: It should be max, right; binarize is more restrictive: if I expect [1 0] as output, using binarize on [0.6 0.6] I get [1 1], which is wrong, while using max I get [1 0], which is correct.
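A small standalone sketch of that difference, assuming mlpack's data::Binarize() and Armadillo's index_max(); the values mirror zoq's example:

    #include <mlpack/core.hpp>
    #include <mlpack/core/data/binarize.hpp>
    #include <iostream>

    int main()
    {
      // A softmax output where both entries exceed the 0.5 threshold.
      arma::mat prob("0.6; 0.6");

      // Binarize marks every entry above the threshold as 1, so the result
      // is [1; 1] -- not a valid one-hot prediction.
      arma::mat binarized;
      mlpack::data::Binarize(prob, binarized, 0.5);
      binarized.print("binarize(0.5):");

      // Taking the arg-max instead always picks a single winning class
      // (the first one on ties), which is what the test expects.
      std::cout << "argmax class: " << prob.index_max() << std::endl;
      return 0;
    }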
< zoq> sumedhghais: About the bounds error, I can look into it later, if you haven't figured it out; just looked over the code, but couldn't see anything obvious.
shikhar has joined #mlpack
sgupta has quit [Ping timeout: 268 seconds]
< sumedhghaisas_> zoq: I also noticed that we are using Sigmoid as the last layer and MeanSquaredError as the output layer
< sumedhghaisas_> I am confused about this architecture a bit
< sumedhghaisas_> its framed as a classification task right?
< zoq> sumedhghais: Yeah, it's just a simple classification task; you could use another architecture or output layer. I think I used the same architecture as Alex Graves for his experiments.
sumedhghaisas_ has quit [Quit: Ex-Chat]
sumedhghaisas__ has joined #mlpack
chenzhe has joined #mlpack
mentekid has joined #mlpack
kris1_ has quit [Quit: kris1_]
kris1_ has joined #mlpack
sgupta has joined #mlpack
shikhar has quit [Quit: WeeChat 1.7]
kris1_ has quit [Quit: kris1_]
wyatt has joined #mlpack
< wyatt> does anyone know of example code i could look at that implements ann?
< wyatt> If anyone could help i need to build a neural network to approximate a function like z=sin(x)+sin(y) and I have had a lot of trouble trying to understand the documentation
< wyatt> the code in this link has been useful but it seems that there has been a lot of changes since this code was written https://github.com/mlpack/mlpack/issues/531
kris1 has joined #mlpack
< rcurtin> wyatt: your best bet for now will be to look in src/mlpack/tests/feedforward_network_test.cpp and other related test files
< wyatt> thanks ill check that out
< wyatt> if anyone else has any recommendations I would greatly appreciate it
< zoq> sumedhghais: Any progress? Looks like the GRUDistractedSequenceRecallTest also fails because of a matrix multiplication failure.
< sgupta> rcurtin: resolved that issue with the boost library
< sgupta> Will update the pr soon
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
chenzhe has quit [Quit: chenzhe]
< wyatt> i am not able to install mlpack. it keeps breaking when it comes time to [ 24%] Linking CXX shared library ../../lib/libmlpack.dylib
< wyatt> any ideas?
mikeling has quit [Quit: Connection closed for inactivity]
< zoq> wyatt: Maybe you can post the complete error on pastebin or something like that?
< wyatt> it seems to only happen when im trying to use current gnu gcc-7 and g++-7 which are the compilers i want to use. it finished running make when i used clang
< wyatt> that is what happens when i compile using clang
< wyatt> when i run code that just has #include <mlpack/methods/ann/layer/layer.hpp> i get this error https://pastebin.com/wjBiApDQ
< wyatt> this is what happens when i try to compile with gnu compilers
< wyatt> thank you for your help im sure im missing something simple
< wyatt> ok after running make install the clang version is working. though i would still like to know if you have any ideas why the gnu version isnt
< zoq> You should link against mlpack, maybe I missed it? You said it works with clang?
< zoq> Also g++ is sometimes picky about the "right" order:
< zoq> g++ test.cpp -o test -std=c++11 -lmlpack -larmadillo -lboost_serialization -lboost_program_options
< zoq> I could try to reproduce your setup if you can't solve the issue.
< wyatt> ive now added those flags. Is there any place that i missed in the documentation that said i needed to add those?
< zoq> Actually, you don't need to link against "-lboost_serialization" or "-lboost_program_options" as long as you don't use the boost functionality.
< zoq> But lmlpack is important, which is mentioned here: http://www.mlpack.org/docs/mlpack-git/doxygen.php?doc=build.html#build probably not as prominent as it should be.
< wyatt> this is the error when i try and run what you said for the ffn example https://pastebin.com/yfDEhCW7
< zoq> hm, do you mind sharing the code?
< zoq> it looks like you are trying to build the boost test case
< wyatt> right now im trying to just use homebrew with brew install mlpack --with-test
< wyatt> i was; it is part of the code for the ffn example
< wyatt> g++ feedforward_network_test.cpp -o test -std=c++11 -lmlpack -larmadillo -lboost_serialization -lboost_program_options
< zoq> note, if you'd like to use the ann/neural network code, it's not part of the latest release; the plan is to release it with mlpack 3.0.0
< zoq> ah, I see, the code should look like: https://gist.github.com/zoq/845bfe5b72a2646ec7a0b2db263ae153
< kris1> Do we have support for multiclass logistic regression
< wyatt> can you help me out with what else needs to be commented out of the file?
< kris1> logistic regression predict says that the classification would be either 0 or 1
< wyatt> multinomial logit
< rcurtin> kris1: softmax regression is what you are looking for I think
< kris1> the multinomial logit i think is called the softmax function if i am not mistaken
< rcurtin> see the mlpack_softmax_regression program or src/mlpack/methods/softmax_regression/softmax_regression.hpp
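For reference, usage looks roughly like the sketch below; this assumes the SoftmaxRegression constructor takes (data, labels, numClasses) and that Predict() fills a row of class indices, so the exact signature (and any template parameters) may differ between mlpack versions:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/softmax_regression/softmax_regression.hpp>

    int main()
    {
      // Toy data: columns are points, rows are features; labels are 0 or 1.
      arma::mat trainData = { { 0.0, 0.1, 0.9, 1.0, 0.95 },
                              { 0.0, 0.2, 0.8, 1.0, 0.90 } };
      arma::Row<size_t> trainLabels = { 0, 0, 1, 1, 1 };

      // Train a multiclass (multinomial logit / softmax) classifier.
      mlpack::regression::SoftmaxRegression sr(trainData, trainLabels, 2);

      // Predict class indices, here for the training points themselves.
      arma::Row<size_t> predictions;
      sr.Predict(trainData, predictions);
      predictions.print("predicted classes:");
      return 0;
    }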
< kris1> thanks i found it
< zoq> wyatt: you can basically just copy the code inside of the BOOST_AUTO_TEST_CASE(...) into the main function of https://gist.github.com/zoq/845bfe5b72a2646ec7a0b2db263ae153
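For wyatt's use case (fitting something like z = sin(x) + sin(y)), a rough, untested sketch along the lines of that gist could look as follows; it assumes the refactored ann API (FFN<>, Add<Linear<>>, MeanSquaredError<>) on the master branch at the time, so layer names and headers may differ in other versions:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>
    #include <mlpack/methods/ann/ffn.hpp>

    using namespace mlpack::ann;

    int main()
    {
      // Toy regression set: columns are (x, y) points, target is sin(x) + sin(y).
      arma::mat inputs(2, 1000, arma::fill::randu);
      inputs *= 6.28;
      arma::mat targets = arma::sin(inputs.row(0)) + arma::sin(inputs.row(1));

      // Small feedforward network, 2 -> 8 -> 1, with a mean squared error
      // output layer (a regression setup rather than classification).
      FFN<MeanSquaredError<>, RandomInitialization> model;
      model.Add<Linear<> >(2, 8);
      model.Add<SigmoidLayer<> >();
      model.Add<Linear<> >(8, 1);

      model.Train(inputs, targets);

      arma::mat predictions;
      model.Predict(inputs, predictions);
      return 0;
    }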
< wyatt> oh yeah i remember why i cant use homebrew because it doesnt have the ann part yet
< wyatt> ok thank you!
< zoq> wyatt: No problem, here to help :)
< wyatt> I ran this code https://pastebin.com/myGL2vLW and got this error https://pastebin.com/CLGKjYhu
< wyatt> if i comment out BOOST_REQUIRE_LE(classificationError, classificationErrorThreshold); i get another error
< zoq> hm, works fine for me, what's the error if you comment BOOST_REQUIRE_LE?
< wyatt> are you using clang or gnu g++
< wyatt> i got it working now if i use clang but i need gnu for the rest of my project
< wyatt> nevermind i just now get runtime errors...
< wyatt> thank you for your help but it doesnt seem to be working for me. should i give up or do you think I can get it working
< wyatt> this was the runtime error i got https://pastebin.com/xbhwyxJd
< wyatt> also http://www.mlpack.org/docs/mlpack-git/doxygen.php?doc=build.html#build this link says headers are installed in /usr/include/ but they are installed in /usr/local/include/
< zoq> 'No such file or directory' I guess if you specify the full path it will work. Also I tested it with clang. Did you rebuild mlpack with g++? and did you specify the gcc/g++ e.g. export CC=gcc && export CXX=g++
< zoq> yeah, macOS uses /usr/local/include
< zoq> Do you think we should adjust the text?
< zoq> I think you could also do: cmake -DCMAKE_CXX_COMPILER=/path/to/g++ ..
< wyatt> i did not rebuild with that. I will do that in a bit i need to catch a bus but will when i get home and ill probably try again tomorrow
< zoq> okay, sounds good
wyatt has quit [Ping timeout: 260 seconds]
aashay has quit [Quit: Connection closed for inactivity]
< zoq> kris1: Do you mind if I integrate the images from the latest blog post?
< kris1> Yes sure no problem
< zoq> kris1: Okay, great :)
mentekid has quit [Quit: Leaving.]
sumedhghaisas__ has quit [Ping timeout: 260 seconds]
< zoq> kris1: Okay, let me know if I messed something up.
< zoq> kris1: Btw. nice update :)
sumedhghaisas__ has joined #mlpack
< kris1> Thanks. But Mikhail and I are still not sure if the present implementation is correct, meaning that the results should have been better
< kris1> also today i was checking the test on classification accuracy; from sklearn we get very poor performance
< zoq> compared with sklearn?
< kris1> That is why i am checking each step again just to find the mistake
< zoq> I see, have you seen my comment here: https://github.com/mlpack/mlpack/pull/1027#discussion_r123231670
< zoq> I haven't looked over the complete code yet.
< kris1> Yup i updated it in the latest PR. That was a good catch btw :)
< zoq> ah, okay, you fixed that, I hadn't noticed