verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< zoq>
okay, was worth a test ...
< rcurtin>
looks like that time it took ~18 minutes
kris2 has joined #mlpack
< kris2>
zoq: i am getting a seg fault. examining the stack trace, there is an issue with the destructor of arma::Col but i am not able to figure it out fully.
< rcurtin>
kris2: try compiling with debugging symbols and see if Armadillo throws an out-of-bounds exception or something
< rcurtin>
and if that doesn't bring anything up, try running it with valgrind to see if there is an invalid memory access or something like this
< rcurtin>
that would be my first guess for what's wrong there
< rcurtin>
I know you weren't asking me but I thought maybe I could try to help anyway :)
< kris2>
rcurtin: ya i am trying that right now....... any help is appreciated.....:)
< rcurtin>
I'm not seeing anything obvious just by looking at it
< rcurtin>
either of the ideas I said should give you some good input towards what is actually happening
< kris2>
debugging symbols which ones exactly do you mean
< rcurtin>
compile with -g -DDEBUG
sicko has quit [Ping timeout: 260 seconds]
< kris2>
valgrind says conditional jump depends on an uninitialised value .... so a partially initialized variable i think.....have to figure out which one
< zoq>
kris2: That is a good tip, take a look at the return value of GetAction.
< kris2>
yes it works now. I should have compiled with -Wall; i think it would have given me a warning for that. Noted
< zoq>
okay, I restarted the commit job and now the RecurrentNetworkTest takes 1 min 14 sec. Maybe that is a wild guess, but could this be an entropy problem?
< zoq>
If we use /dev/random which I'm not sure we do, we could end up waiting for entropy.
< zoq>
Let's restart the job again and see what happens; "cat /proc/sys/kernel/random/entropy_avail" returns 27
< zoq>
Looks like armadillo uses /dev/urandom, which will never block.
< zoq>
at least on Linux
< zoq>
RecurrentNetworkTest 29 sec, I think I'll restart the job once again tomorrow, if the machine is still up.
< rcurtin>
yeah it looks like the move will not happen tomorrow
< rcurtin>
I don't have much more input yet on when it will happen
< rcurtin>
but I talked to some folks to help set the process in motion
shoeb has joined #mlpack
< shoeb>
Hello,
< shoeb>
I subscribed to the mailing list thrice but i never got a confirmation email. Please help!
< rcurtin>
shoeb: make sure it didn't end up in your spam folder or something?
< rcurtin>
let me check postfix logs to see that it went out... what email did you use?
< rcurtin>
doesn't look like you did it three times, but I see that it sent you a message about 12 hours ago
< rcurtin>
or actually I see now that was a posting attempt. what interface are you trying to use to subscribe? you should be able to just do it at http://lists.mlpack.org/mailman/listinfo/mlpack
< shoeb>
Thanks for the help. Hope it works. I used this interface before too.
< rcurtin>
sure, let me know if there are more problems
shihao has joined #mlpack
< shoeb>
How much time will it take to send the confirmation email?
< shihao>
rcurtin: I tried to add a new macro in 'param.hpp': #define PARAM_ARMA_VECTOR_IN(ID, DESC, ALIAS) \ PARAM_VECTOR(ID, DESC, ALIAS, false, false, true)
< rcurtin>
I think you will simply need to add one of those for arma::Col<eT>
< rcurtin>
sorry that I did not point that out in the issue description
< rcurtin>
I have to step out for a little while, I will be back later
< shihao>
Sure, thanks!
< rcurtin>
before I go though, to clarify
< rcurtin>
the basic functionality of the code that's giving you a problem is that when we use boost::program_options,
< rcurtin>
we have to tell it which type is used on the command-line for each option
< rcurtin>
but these types can only be double, int, size_t, std::string, char, etc...
< rcurtin>
there's no reasonable way to pass an entire data matrix on the command line
< rcurtin>
therefore, there are these ParameterType template metaprogramming structs that are used to set the boost::program_options type of matrices (and serializable models) to std::string instead
< rcurtin>
so the user passes the filename, not the actual data
< rcurtin>
and then in HandleParameter() we take that filename the user passed and load the data
< rcurtin>
I hope it all makes sense, I think this is some of the more convoluted code in mlpack
< rcurtin>
a refactoring is coming with the automatic bindings but it will still be complicated
< shihao>
Type is stored in tname in param_data, right?
< shihao>
The command line system is way more complicated than I thought...
kris2 has quit [Ping timeout: 256 seconds]
shoeb has quit [Ping timeout: 260 seconds]
< shihao>
It's like a type conversion: convert the mat type to a string type.
Tash has joined #mlpack
vinayakvivek has joined #mlpack
< rcurtin>
shihao: kind of, yes, just the type conversion involves file I/O :)
< rcurtin>
part of the reason it is so complex is because soon, the *_main.cpp files will be able to be built not only into command-line programs but also python bindings, and later, bindings for other languages also
< shihao>
Got it!
< shihao>
rcurtin: Currently I use PARAM_ARMA_COL_VECTOR_IN or PARAM_ARMA_ROW_VECTOR_IN since PARAM_VECTOR is already used; does that make sense?
< rcurtin>
shihao: maybe simpler is just PARAM_COL_IN and PARAM_ROW_IN ?
< shihao>
rcurtin: Looks better :)
< shihao>
rcurtin: I noticed that some command-line programs load labels as a row vector and some load labels as a col vector.
< shihao>
rcurtin: I think it would be better if we could restrict label files to be loaded as a col vector.
dhawalht has joined #mlpack
dhawalht has quit [Client Quit]
shihao has quit [Ping timeout: 260 seconds]
deepanshu_ has joined #mlpack
Tash has quit [Ping timeout: 260 seconds]
thyrix has joined #mlpack
ironstark has quit [Quit: Leaving]
thyrix has quit [Ping timeout: 260 seconds]
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
vinayakvivek has quit [Quit: Connection closed for inactivity]
deepanshu_ has quit [Quit: Connection closed for inactivity]
adi_ has joined #mlpack
adi_ has quit [Quit: Page closed]
mikeling has joined #mlpack
govg has quit [Ping timeout: 256 seconds]
govg has joined #mlpack
vinayakvivek has joined #mlpack
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
deepanshu_ has joined #mlpack
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
sicko has joined #mlpack
ob_ has joined #mlpack
ob_ has quit [Client Quit]
< sicko>
rcurtin, Hello, I just wanted to clarify that when we talk about deterministic we mean programming every event that may occur and how to handle it, right?
clicksaswat has joined #mlpack
clicksaswat has quit [Ping timeout: 260 seconds]
< zoq>
mikeling: Have you seen my message and have you solved the issues?
biswajitsc has quit [Quit: Connection closed for inactivity]
thyrix has joined #mlpack
mikeling has quit [Quit: Connection closed for inactivity]
< rcurtin>
sicko: I'm not sure what context you mean that in
< rcurtin>
shihao: I think basically every method that needs labels uses them as a row vector, so I think that arma::Row<size_t> would be the right type to use
< sicko>
rcurtin, ummm remember the critical sensors machine learning problem which i told you about a few days back?
shihao has joined #mlpack
< rcurtin>
sicko: hang on let me cache it back in
< rcurtin>
if I remember right, the threshold approach was being referred to as "deterministic"
< rcurtin>
I had thought, based on my understanding, that with the three sensors you had the approach would be something like
< arunreddy>
rcurtin: Thanks for your comments. I will work on them and will get back to you.
tejank10 has joined #mlpack
vivekp has quit [Ping timeout: 258 seconds]
vivekp has joined #mlpack
shihao has quit [Ping timeout: 260 seconds]
tejank10 has quit [Ping timeout: 260 seconds]
Krish has joined #mlpack
Krish has quit [Quit: Page closed]
kris1 has joined #mlpack
< kris1>
hi, i wanted to add a new cpp file to the gym_tcp_api cmake file
< kris1>
zoq: i had seen your gist and followed it.
< kris1>
do i have to create a folder gym_tcp_q_learning_source and then add q_learning.cpp to it
< kris1>
or how ??
aashay has quit [Quit: Connection closed for inactivity]
emcenrue has joined #mlpack
< zoq>
kris1: Hey, just put q_learning.cpp into the current cpp folder. If you like, you can put the file into another folder; in that case you have to use /path/to/file/q_learning.cpp in 'set(gym_tcp_q_learning_source ..'.
< kris1>
zoq: thanks, i resolved the problem.
< emcenrue>
ayy, hello everyone, just wanted to say thanks for the GSOC 17 opportunity!
< kris1>
zoq: when you have time, can you review my PR on NAG (Nesterov Accelerated Gradient)? So i would have sufficient time to make the changes.
< zoq>
emcenrue: It's going to be an awesome summer :)
shihao has joined #mlpack
< zoq>
kris1: I'll check the PR, probably in the next two hours.
< arunreddy>
The momentum update converges in 3M iterations, whereas SGD with the vanilla update doesn't... can speedup be considered a valid test case?
deepanshu_ has quit [Quit: Connection closed for inactivity]
< rcurtin>
arunreddy: I know that typically momentum may converge faster, but do we have any guarantee of that?
kris1 has joined #mlpack
< arunreddy>
rcurtin: Theoretical or Numerical guarantee?
< rcurtin>
I guess, it is a bad question to ask; I am pretty certain there is no theoretical guarantee
< rcurtin>
a test like this might be somewhat risky, because it may be possible that without momentum SGD could converge faster for some problems
< rcurtin>
however, I think it *might* be true that you could come up with a very specific and easy problem to minimize where momentum could help
< rcurtin>
like a parabola or something like this, where with momentum it will certainly take fewer steps to converge
< arunreddy>
That's true, that's happening with the GeneralizedRosenbrockTest.
< arunreddy>
In the simple case it does... I see vanilla SGD always failing even with 3M iterations, and momentum SGD passes the test
< rcurtin>
right, that is what happens empirically with the implementations we have now, but remember we are not guaranteed that those implementations are correct :)
< rcurtin>
now, maybe you can come up with some simple test case where you have SGD and momentum SGD both take *two* iterations maximum
< rcurtin>
and if the function being optimized is sufficiently smooth (I think that would be the condition) then momentum SGD would be closer to the solution
vinayakvivek has quit [Quit: Connection closed for inactivity]
< kris1>
undefined reference to mlpack::Log::Assert(....). I don't understand why this error crops up
< rcurtin>
emcenrue: sorry, I did not see your message; I know it has been talked about, probably last February or March
< arunreddy>
rcurtin: I agree about the sufficiently smooth function, but i doubt looking at just two iterations would make any difference.
< rcurtin>
yeah, I guess, I had thought two because the momentum should provide an additive effect to allow the second iteration to take a larger step
< rcurtin>
I guess, maybe the rosenbrock test is ok, but be sure to test it many times to make sure the failure rate is low
< rcurtin>
the random element in the test will be the order of points encountered by SGD / momentum SGD, so we want to make sure that the test will not fail for some orderings of points
< arunreddy>
That's true, but finding such a function is so tricky. :)
< rcurtin>
yes, I agree, it is not easy
< rcurtin>
but definitely we want to avoid including any tests that can fail sometimes... I am sure you have already seen the havoc it wreaks, because it takes a lot of time to debug them
< rcurtin>
and they can confuse users a lot
< rcurtin>
I guess, a thing to keep in mind, is that it's basically impossible in most cases to test if an algorithm is completely correct
< rcurtin>
instead, it's more realistic to simply have some number of tests that test to see that it is not completely wrong :)
shihao has joined #mlpack
< arunreddy>
Yeah. And the test works with SimpleTestCase, but not for the Rosenbrock test...
< arunreddy>
For Rosenbrock i see that the max number of iterations is set to 0. Why is that?
< emcenrue>
this is in reference to the parallelized sgd btw
< emcenrue>
for GSOC that is
trapz has quit [Client Quit]
< rcurtin>
emcenrue: no, there was never any PR
< emcenrue>
:(
< rcurtin>
I have heard from other people who are interested in implementing hogwild too but I am not sure if or when I'll see a contribution there
< emcenrue>
(but good for me lol)
< emcenrue>
thank you
< zoq>
kris1: You have to link against mlpack; You can modify the CMAKE_CXX_FLAGS in the CMake file or adapt the CMakeLists.txt file from the nes repo which provides a simple cmake file for mlpack. I can send you the modification once I get back.
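A minimal sketch of what that CMake setup could look like (paths and target names here are hypothetical, and it assumes mlpack and its dependencies are installed somewhere findable). Linking via target_link_libraries is usually more robust than splicing -lmlpack into CMAKE_CXX_FLAGS, since compile flags are not guaranteed to land in the right position on the link line.

```cmake
# Hypothetical sketch; adjust paths to your mlpack installation.
include_directories(/path/to/mlpack/include)
link_directories(/path/to/mlpack/lib)

add_executable(q_learning q_learning.cpp)
# mlpack's dependencies (Armadillo, Boost serialization) must be linked too.
target_link_libraries(q_learning mlpack armadillo boost_serialization)
```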
trapz has joined #mlpack
trapz has quit [Client Quit]
< kris1>
okay, i'm looking online for how to set up the CMAKE_CXX_FLAGS for linking an external library
< kris1>
zoq: oh i got it now
< kris1>
even though i add -lmlpack to the CMAKE_CXX_FLAGS with the -L and -I options, still the same error exists.....
< kris1>
i don't know why but it still gets the mlpack from /usr/local/.....
aashay has joined #mlpack
< vivekp>
I'm working on restructuring the implementation of SMORMS3 based on the wrapper class idea, but it looks like it will be fruitful only after PR #925 is merged
< vivekp>
otherwise I'll just break travis build on SMORMS3 PR :)
< vivekp>
Meanwhile, getting it into shape locally.
< arunreddy>
vivekp: It is almost there. :)
< vivekp>
great :)
kris1 has quit [Quit: Leaving.]
AndChat493044 has joined #mlpack
< AndChat493044>
Hi zoq
< AndChat493044>
CMA-ES is working
< AndChat493044>
And now I'm writing code for Super Mario
< AndChat493044>
Used Bang's genome library
< AndChat493044>
I have 13*13+1 inputs with 10 hidden and 5 outputs
< AndChat493044>
So a 1750-dimensional weight covariance matrix
< AndChat493044>
Not sure that the implementation would be fast enough ..
< AndChat493044>
Because fetching the values and then setting the weights in the neural net, with the propagation calculation in floating point, would be a problem
< AndChat493044>
I think it requires some matrix multiplication solution..
< zoq>
AndChat4930: Awesome; another reason to figure out what went wrong when I tested the NEAT code.
< zoq>
AndChat4930: I see, I already did some performance improvements, but another idea is to use the ann code instead of Bang's code for fixed network architectures; it should be faster since we don't have to figure out the innovation number after we change some values. If you like you can open a PR and we can discuss over there?
< zoq>
kris1: Probably because /usr/local/ comes before .. in your library search path.