verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< zoq> okay, was worth a test ...
< rcurtin> looks like that time it took ~18 minutes
kris2 has joined #mlpack
< kris2> zoq: I am getting a seg fault. Examining the stack trace, there is an issue with the destructor of arma::col, but I am not able to figure it out fully.
< kris2> can you help
< kris2> in function get action
< kris2> GetAction
< rcurtin> kris2: try compiling with debugging symbols and see if Armadillo throws an out-of-bounds exception or something
< rcurtin> and if that doesn't bring anything up, try running it with valgrind to see if there is an invalid memory access or something like this
< rcurtin> that would be my first guess for what's wrong there
< rcurtin> I know you weren't asking me but I thought maybe I could try to help anyway :)
< kris2> rcurtin: ya, I am trying that right now... any help is appreciated :)
< rcurtin> I'm not seeing anything obvious just by looking at it
< rcurtin> either of the ideas I said should give you some good input towards what is actually happening
< kris2> debugging symbols... which ones exactly do you mean?
< rcurtin> compile with -g -DDEBUG
sicko has quit [Ping timeout: 260 seconds]
< kris2> valgrind says a conditional jump depends on an uninitialised value... so a partially initialized variable, I think... have to figure out which one
< zoq> kris2: That is a good tip, take a look at the return value of GetAction.
< kris2> yes, it works now. I should have compiled with -Wall; I think it would have given me a warning for that. Noted
< zoq> okay, I restarted the commit job and now the RecurrentNetworkTest takes 1 min 14 sec. Maybe this is a wild guess, but could it be an entropy problem?
< zoq> If we use /dev/random which I'm not sure we do, we could end up waiting for entropy.
< zoq> Let's restart the job again and see what happens; "cat /proc/sys/kernel/random/entropy_avail" returns 27
< zoq> Looks like armadillo uses /dev/urandom which will never block.
< zoq> at least on Linux
< zoq> RecurrentNetworkTest 29 sec, I think I'll restart the job once again tomorrow, if the machine is still up.
< rcurtin> yeah it looks like the move will not happen tomorrow
< rcurtin> I don't have much more input yet on when it will happen
< rcurtin> but I talked to some folks to help set the process in motion
shoeb has joined #mlpack
< shoeb> Hello,
< shoeb> I subscribed to the mailing list thrice but I never got a confirmation email. Please help!
< rcurtin> shoeb: make sure it didn't end up in your spam folder or something?
< rcurtin> let me check postfix logs to see that it went out... what email did you use?
< shoeb> iamafficionado@gmail.com
< rcurtin> Mar 9 11:59:43 knife postfix/qmgr[31331]: 2366F700035: from=<iamafficionado@gmail.com>, size=4064, nrcpt=1 (queue active)
< rcurtin> Mar 9 11:59:45 knife postfix/smtp[15829]: D4565700035: to=<iamafficionado@gmail.com>, relay=gmail-smtp-in.l.google.com[108.177.11.26]:25, delay=0.86, delays=0.01/0/0.16/0.7, dsn=2.0.0, status=sent (250 2.0.0 OK 1489078785 h129si1217279vkd.253 - gsmtp)
< rcurtin> doesn't look like you did it three times, but I see that it sent you a message about 12 hours ago
< rcurtin> or actually I see now that was a posting attempt. what interface are you trying to use to subscribe? you should be able to just do it at http://lists.mlpack.org/mailman/listinfo/mlpack
< shoeb> Thanks for the help. Hope it works. That is the interface I used before, too.
< rcurtin> sure, let me know if there are more problems
shihao has joined #mlpack
< shoeb> How long will it take for the confirmation email to arrive?
< shihao> rcurtin: I tried to add a new macro in 'param.hpp': #define PARAM_ARMA_VECTOR_IN(ID, DESC, ALIAS) \ PARAM_VECTOR(ID, DESC, ALIAS, false, false, true)
< shihao> #define PARAM_VECTOR(ID, DESC, ALIAS, REQ, TRANS, IN) \ static mlpack::util::Option<arma::vec> \ JOIN(cli_option_dummy_vector_, __COUNTER__) \ (arma::vec(), ID, DESC, ALIAS, REQ, IN, !TRANS);
< shihao> Can you give me a clue about how to solve this problem?
< rcurtin> ah I think there is already a PARAM_VECTOR macro for a std::vector, maybe that is the issue; let me look at the gist
< rcurtin> ah sorry nevermind it seems that isn't the issue
< rcurtin> I should look at the gist first before guessing...
< rcurtin> take a look at param_data.hpp line 62
< rcurtin> there is a template struct here that says the passed type of an arma::Mat<eT> is a std::string
< shihao> This is the complete gist, sorry
< rcurtin> I think you will simply need to add one of those for arma::Col<eT>
< rcurtin> sorry that I did not point that out in the issue description
< rcurtin> I have to step out for a little while, I will be back later
< shihao> Sure, thanks!
< rcurtin> before I go though, to clarify
< rcurtin> the basic functionality of the code that's giving you a problem is that when we use boost::program_options,
< rcurtin> we have to tell it which type is used on the command-line for each option
< rcurtin> but these types can only be double, int, size_t, std::string, char, etc...
< rcurtin> there's no reasonable way to pass an entire data matrix on the command line
< rcurtin> therefore, there are these ParameterType template metaprogramming structs that are used to set the boost::program_options type of matrices (and serializable models) to std::string instead
< rcurtin> so the user passes the filename, not the actual data
< rcurtin> and then in HandleParameter() we take that filename the user passed and load the data
< rcurtin> I hope it all makes sense, I think this is some of the more convoluted code in mlpack
< rcurtin> a refactoring is coming with the automatic bindings but it will still be complicated
< shihao> Type is stored in tname in param_data, right?
< shihao> The command line system is way more complicated than I thought...
kris2 has quit [Ping timeout: 256 seconds]
shoeb has quit [Ping timeout: 260 seconds]
< shihao> It's like type conversion. convert mat type to a string type.
Tash has joined #mlpack
vinayakvivek has joined #mlpack
< rcurtin> shihao: kind of, yes, just the type conversion involves file I/O :)
< rcurtin> part of the reason it is so complex is because soon, the *_main.cpp files will be able to be built not only into command-line programs but also python bindings, and later, bindings for other languages also
< shihao> Got it!
< shihao> rcurtin: Currently I use PARAM_ARMA_COL_VECTOR_IN and PARAM_ARMA_ROW_VECTOR_IN since PARAM_VECTOR is already used; does that make sense?
< rcurtin> shihao: maybe simpler is just PARAM_COL_IN and PARAM_ROW_IN ?
< shihao> rcurtin: Looks better :)
< shihao> rcurtin: I noticed that some command-line programs load labels as a row vector and some load labels as a column vector.
< shihao> rcurtin: I think it would be better if we could restrict label files to be loaded as a column vector.
dhawalht has joined #mlpack
dhawalht has quit [Client Quit]
shihao has quit [Ping timeout: 260 seconds]
deepanshu_ has joined #mlpack
Tash has quit [Ping timeout: 260 seconds]
thyrix has joined #mlpack
ironstark has quit [Quit: Leaving]
thyrix has quit [Ping timeout: 260 seconds]
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
vinayakvivek has quit [Quit: Connection closed for inactivity]
deepanshu_ has quit [Quit: Connection closed for inactivity]
adi_ has joined #mlpack
adi_ has quit [Quit: Page closed]
mikeling has joined #mlpack
govg has quit [Ping timeout: 256 seconds]
govg has joined #mlpack
vinayakvivek has joined #mlpack
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
deepanshu_ has joined #mlpack
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
sicko has joined #mlpack
ob_ has joined #mlpack
ob_ has quit [Client Quit]
< sicko> rcurtin, Hello, I just wanted to clarify: when we talk about deterministic, we mean programming every event that may occur and how to handle it, right?
clicksaswat has joined #mlpack
clicksaswat has quit [Ping timeout: 260 seconds]
< zoq> mikeling: Have you seen my message and have you solved the issues?
biswajitsc has quit [Quit: Connection closed for inactivity]
thyrix has joined #mlpack
mikeling has quit [Quit: Connection closed for inactivity]
< rcurtin> sicko: I'm not sure what context you mean that in
< rcurtin> shihao: I think basically every method that needs labels uses them as a row vector, so I think that arma::Row<size_t> would be the right type to use
< sicko> rcurtin, ummm remember the critical sensors machine learning problem which i told you about a few days back?
shihao has joined #mlpack
< rcurtin> sicko: hang on let me cache it back in
< rcurtin> if I remember right, the threshold approach was being referred to as "deterministic"
< rcurtin> I had thought, based on my understanding, that with the three sensors you had the approach would be something like
< rcurtin> if (sensor1 > threshold1 && sensor2 > threshold2 && sensor3 > threshold3) { detection! }
< rcurtin> (you could use || instead of && too)
arunreddy has joined #mlpack
thyrix has quit [Quit: Page closed]
arunreddy has quit [Quit: WeeChat 1.4]
arunreddy has joined #mlpack
< arunreddy> rcurtin: Thanks for your comments. I will work on them and will get back to you.
tejank10 has joined #mlpack
vivekp has quit [Ping timeout: 258 seconds]
vivekp has joined #mlpack
shihao has quit [Ping timeout: 260 seconds]
tejank10 has quit [Ping timeout: 260 seconds]
Krish has joined #mlpack
Krish has quit [Quit: Page closed]
kris1 has joined #mlpack
< kris1> hi, I wanted to add a new cpp file to the gym_tcp_api cmake file
< kris1> zoq: I had seen your gist and followed it.
< kris1> do I have to create a folder gym_tcp_q_learning_source and then add q_learning.cpp to it
< kris1> or how?
aashay has quit [Quit: Connection closed for inactivity]
emcenrue has joined #mlpack
< zoq> kris1: Hey, just put the q_learning.cpp into the current cpp folder. If you like you can put the file into another folder in that case you have to use /path/to/file/q_learning.cpp in 'set(gym_tcp_q_learning_source ..'.
< kris1> zoq: thanks, i resolved the problem.
< emcenrue> ayy, hello everyone, just wanted to say thanks for the GSOC 17 opportunity!
< kris1> zoq: when you have time, can you review my PR on NAG (Nesterov Accelerated Gradient)? That way I would have sufficient time to make the changes.
< zoq> emcenrue: It's going to be an awesome summer :)
shihao has joined #mlpack
< zoq> kris1: I'll check the PR, probably in the next two hours.
< kris1> zoq: Thanks.....
shihao has quit [Quit: Page closed]
< emcenrue> rcurtin, do you know if parallelized sgd has been discussed in prior archives from: http://knife.lugatgt.org/pipermail/mlpack/
kris1 has quit [Ping timeout: 258 seconds]
< arunreddy> zoq:, rcurtin: I have a question regarding writing a test case for MomentumSGD
< arunreddy> The advantage with the momentum update is the speed-up; it converges faster.
< arunreddy> The momentum update converges in 3M iterations, whereas SGD with the vanilla update doesn't. Can speedup be considered a valid test case?
deepanshu_ has quit [Quit: Connection closed for inactivity]
< rcurtin> arunreddy: I know that typically momentum may converge faster, but do we have any guarantee of that?
kris1 has joined #mlpack
< arunreddy> rcurtin: Theoretical or Numerical guarantee?
< rcurtin> I guess, it is a bad question to ask; I am pretty certain there is no theoretical guarantee
< rcurtin> a test like this might be somewhat risky, because it may be possible that without momentum SGD could converge faster for some problems
< rcurtin> however, I think it *might* be true that you could come up with a very specific and easy problem to minimize where momentum could help
< rcurtin> like a parabola or something like this, where with momentum it will certainly take fewer steps to converge
< arunreddy> That's true, that's happening with the GeneralizedRosenbrockTest.
< arunreddy> In the simple case it does. I see vanilla SGD always failing within 3M iterations, while momentum SGD passes the test.
< rcurtin> right, that is what happens empirically with the implementations we have now, but remember we are not guaranteed that those implementations are correct :)
< rcurtin> now, maybe you can come up with some simple test case where you have SGD and momentum SGD both take *two* iterations maximum
< rcurtin> and if the function being optimized is sufficiently smooth (I think that would be the condition) then momentum SGD would be closer to the solution
vinayakvivek has quit [Quit: Connection closed for inactivity]
< kris1> undefined reference to mlpack::Log::Assert(....). I don't understand why this error crops up
< rcurtin> emcenrue: sorry, I did not see your message, I know it has been talked about, probably last february or march
< arunreddy> rcurtin: I agree about a sufficiently smooth function, but I doubt looking at just two iterations would make any difference.
< rcurtin> yeah, I guess, I had thought two because the momentum should provide an additive effect to allow the second iteration to take a larger step
< rcurtin> I guess, maybe the rosenbrock test is ok, but be sure to test it many times to make sure the failure rate is low
< rcurtin> the random element in the test will be the order of points encountered by SGD / momentum SGD, so we want to make sure that the test will not fail for some orderings of points
< arunreddy> That's true, but finding such a function is so tricky. :)
< rcurtin> yes, I agree, it is not easy
< rcurtin> but definitely we want to avoid including any tests that can fail sometimes... I am sure you have already seen the havoc it wreaks, because it takes a lot of time to debug them
< rcurtin> and they can confuse users a lot
< rcurtin> I guess, a thing to keep in mind, is that it's basically impossible in most cases to test if an algorithm is completely correct
< rcurtin> instead, it's more realistic to simply have some number of tests that test to see that it is not completely wrong :)
shihao has joined #mlpack
< arunreddy> Yeah. And the test works with SimpleTestCase, not for the Rosenbrock test.
< arunreddy> For Rosenbrock I see that the max number of iterations is set to 0. Why is that?
< rcurtin> that means no limit on the number of iterations
< arunreddy> sorry, my bad. It is documented.
trapz has joined #mlpack
< emcenrue> rcurtin, did anything end up happening off of this: http://knife.lugatgt.org/pipermail/mlpack/2016-March/002660.html
< emcenrue> hogwild seems legit :)
< emcenrue> this is in reference to the parallelized sgd btw
< emcenrue> for GSOC that is
trapz has quit [Client Quit]
< rcurtin> emcenrue: no, there was never any PR
< emcenrue> :(
< rcurtin> I have heard from other people who are interested in implementing hogwild too but I am not sure if or when I'll see a contribution there
< emcenrue> (but good for me lol)
< emcenrue> thank you
< zoq> kris1: You have to link against mlpack; You can modify the CMAKE_CXX_FLAGS in the CMake file or adapt the CMakeLists.txt file from the nes repo which provides a simple cmake file for mlpack. I can send you the modification once I get back.
trapz has joined #mlpack
trapz has quit [Client Quit]
< kris1> okay, I am looking online at how to set up CMAKE_CXX_FLAGS for linking an external library
< kris1> zoq: oh i got it now
< kris1> even though I add -lmlpack to CMAKE_CXX_FLAGS with the -L and -I options, the same error still exists.....
< kris1> I don't know why, but it still gets the mlpack from /usr/local/.....
aashay has joined #mlpack
< vivekp> I'm working on restructuring the implementation of SMORMS3 based on wrapper class idea but it looks like it will be fruitful only after PR #925 is merged
< vivekp> otherwise I'll just break travis build on SMORMS3 PR :)
< vivekp> Meanwhile, getting it into shape locally.
< arunreddy> vivekp: It is almost there. :)
< vivekp> great :)
kris1 has quit [Quit: Leaving.]
AndChat493044 has joined #mlpack
< AndChat493044> Hi zoq
< AndChat493044> Cmaes is working
< AndChat493044> And now I'm writing code for Super Mario
< AndChat493044> I used Bang's genome library
< AndChat493044> I have 13*13+1 inputs with 10 hidden and 5 outputs
< AndChat493044> So a 1750-dimensional weight covariance matrix
< AndChat493044> Hope the implementation will be fast enough ..
< AndChat493044> Because fetching the values and then setting the weights in the neural net, with the propagation calculation in floating point, would be a problem
< AndChat493044> I think it requires some matrix multiplication solution
< zoq> AndChat4930: Awesome, another reason to figure out what went wrong as I tested the NEAT code.
< zoq> AndChat4930: I see, I already did some performance improvements, but another idea is to use the ann code instead of Bang's code for fixed network architectures; it should be faster since we don't have to figure out the innovation number after we change some values. If you like you can open a PR and we can discuss over there?
< zoq> kris1: Probably because /usr/local/ comes before .. in your library search path.