verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
marcosirc has quit [Quit: WeeChat 1.4]
theeviltwin has joined #mlpack
< theeviltwin>
hey, any veterans there? newbie here
nilay has joined #mlpack
< nilay>
zoq: can you run that test once more? Actually, what's happening there is that the test completes and only after that does it give a memory error, which is very weird considering there isn't anything left to do once the test completes (the last line is printed).
nilay has quit [Ping timeout: 250 seconds]
govg has quit [Ping timeout: 276 seconds]
govg has joined #mlpack
theeviltwin has quit [Ping timeout: 250 seconds]
Mathnerd314 has quit [Ping timeout: 240 seconds]
nilay has joined #mlpack
mentekid has joined #mlpack
< zoq>
nilay: sure, no problem, I'll take a look at it later today.
nilay has quit [Ping timeout: 250 seconds]
marcosirc has joined #mlpack
< zoq>
nilay: Can you post the complete test? I used the test from the gist and it runs without any errors on my machine.
Mathnerd314 has joined #mlpack
< rcurtin>
mentekid: I'm not sure where lines 82-86 in the program you sent came from... I think that should be from Equation 10 in the Minka paper?
< mentekid>
rcurtin: I was actually correcting precisely that part
< rcurtin>
I came up with this:
< mentekid>
I tried to simplify the equation but probably messed something up
< rcurtin>
and that seems to converge to a relatively close approximation (I guess we could modify the tolerance if we wanted it closer):
< rcurtin>
$ ./a.out
< rcurtin>
aEst = 7.20323
< rcurtin>
aEst = 7.20331
< rcurtin>
7.20331(Should be: 7.3)
< mentekid>
yep I get exactly the same thing
< rcurtin>
4.55148(Should be: 4.5)
< mentekid>
I tried to simplify the fraction
< mentekid>
and probably fumbled along the way
< rcurtin>
yeah, maybe it can be simplified, but I haven't thought very much about how
< rcurtin>
I had that happen recently too... the hashing paper I submitted, it turns out, had a big error in the proof
< mentekid>
I'll stick with the Minka formula I think
< rcurtin>
I accidentally expanded (1 - a) * (1 + a) to (1 + a^2) instead of (1 - a^2)...
< mentekid>
it's not really worth optimizing the loop; it seems to converge in 3-4 iterations or fewer, even with a loose tolerance
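For reference, the iteration under discussion appears to be the generalized Newton update for the shape parameter from Minka's "Estimating a Gamma distribution" note (Equation 10). The sketch below is only an illustration of that update using the boost digamma/trigamma functions, not the actual program being run in the conversation; the function name EstimateAlpha and the tolerance parameter are made up.

    // Sketch of Minka's shape-parameter update for fitting a Gamma distribution.
    #include <armadillo>
    #include <cmath>
    #include <boost/math/special_functions/digamma.hpp>
    #include <boost/math/special_functions/trigamma.hpp>

    double EstimateAlpha(const arma::vec& x, const double tol = 1e-8)
    {
      const double meanLogX = arma::mean(arma::log(x));
      const double logMeanX = std::log(arma::mean(x));
      const double s = logMeanX - meanLogX;

      // Initial approximation suggested in the Minka note.
      double a = (3.0 - s + std::sqrt((s - 3.0) * (s - 3.0) + 24.0 * s)) /
          (12.0 * s);

      // Generalized Newton iteration; typically converges in a few steps.
      double aOld;
      do
      {
        aOld = a;
        const double num = meanLogX - logMeanX + std::log(a) -
            boost::math::digamma(a);
        const double den = a * a * (1.0 / a - boost::math::trigamma(a));
        a = 1.0 / (1.0 / a + num / den);
      } while (std::abs(a - aOld) > tol);

      return a;
    }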
< rcurtin>
luckily I was still able to preserve the proof result when I fixed it, but still, bah! you check it over again and again and still there's something wrong you didn't see... :)
< mentekid>
yeah, I did it a few times and got different results each time
< mentekid>
I think I'll actually plug it in wolfram|alpha and see if they can do anything smart
< rcurtin>
hm, even when I set the tolerance to 1e-10, I still get 7.20331, not 7.3
< rcurtin>
but I guess that since there are only 200 samples, it is possible that the generated points actually fit the distribution better with 7.20331
< mentekid>
yeah I was going to say that, a bigger sample might come closer to the real alpha
< rcurtin>
yeah, with 200k points I'm getting 7.28
< rcurtin>
ah, there we go... you have to set the seed for the std::default_random_engine
< rcurtin>
now I get different results each run, after adding
< rcurtin>
generator.seed(time(NULL));
< mentekid>
I was wondering why it was the same result every time
< mentekid>
cool :)
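A minimal sketch of the seeding fix, assuming the test program draws its samples with std::gamma_distribution (shape 7.3, as in the output above): without the seed call, std::default_random_engine starts from the same state every run, which is why the results were identical.

    #include <ctime>
    #include <random>
    #include <vector>

    int main()
    {
      std::default_random_engine generator;
      generator.seed(time(NULL));  // otherwise every run repeats the same sequence

      std::gamma_distribution<double> dist(7.3, 1.0);  // shape, scale
      std::vector<double> x(200);
      for (double& xi : x)
        xi = dist(generator);  // samples to fit the shape parameter against
    }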
< rcurtin>
when you implement this in the mlpack code, see if you can find some way to include the boost digamma and trigamma headers only inside the implementation, in such a way that they won't be included anywhere else
< rcurtin>
whether that is really possible, I am not sure; it depends on whether or not the code is templatized
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#1228 (master - f21439c : Ryan Curtin): The build is still failing.
< rcurtin>
if it is, you might be able to use explicit template instantiations
< rcurtin>
(this is in the interest of trying to reduce compile time)
< mentekid>
I'm not sure how to check if it is included only in my class
< rcurtin>
basically, if you include the boost headers in any hpp file, then any mlpack code that includes that hpp file will also include the boost headers
< mentekid>
the code is template-based, the input argument is a template
< rcurtin>
if you instead include it in a cpp file, then it won't be included anywhere else
< rcurtin>
ok; don't spend too much time on it, but I may play around with it once you submit a PR and see if I can figure anything out
< rcurtin>
I am trying to do the same with some other mlpack code, because compile time is pretty long...
< mentekid>
so if I develop my class in an hpp file and include boost at the top, any file including my file will need to recompile digamma/trigamma. I see
< mentekid>
I can implement the class in a cpp file instead, that shouldn't be a problem... I'm not sure how it would be possible for it to be included then
nilay has joined #mlpack
< rcurtin>
yeah, exactly, things that are included in cpp files only will not be included by the rest of mlpack
< rcurtin>
in fact, if we managed to move all of our dependencies on any Boost library into cpp files only, then the Boost headers would only be a build-time dependency and you would not need them once mlpack was built (unless you were using some other Boost functionality)
< mentekid>
might be a stupid question - would what you said be doable this way: define the class in a gamma_dist.hpp, then implement the functions in gamma_dist_impl.cpp where I include the boost functions?
< mentekid>
I'm not sure if that's possible though - can I do #include <gamma_dist_impl.cpp> or somehow link against the implementation of my class?
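One way that split could look, using the hypothetical file names from the question above: the Boost headers are visible only to the one translation unit, and the explicit instantiation at the bottom makes the compiled template definition available to code that only sees the header.

    // gamma_dist.hpp -- declarations only, no Boost includes.
    class GammaDistribution
    {
     public:
      template<typename VecType>
      void Train(const VecType& observations);
    };

    // gamma_dist_impl.cpp -- Boost is pulled in here and nowhere else.
    #include <armadillo>
    #include <boost/math/special_functions/digamma.hpp>
    #include <boost/math/special_functions/trigamma.hpp>
    #include "gamma_dist.hpp"

    template<typename VecType>
    void GammaDistribution::Train(const VecType& observations)
    {
      // ... Minka-style iteration using boost::math::digamma()/trigamma() ...
    }

    // Explicit instantiation for the vector types callers actually use, so the
    // template definition does not have to live in the header.
    template void GammaDistribution::Train<arma::vec>(const arma::vec&);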
nilay has quit [Ping timeout: 250 seconds]
nilay has joined #mlpack
nilay has quit [Ping timeout: 250 seconds]
< mentekid>
rcurtin: So, I ended up doing it like the other distributions, but I'm not sure how to link my code. In mlpack/core/dists/ I created a gamma_distribution.hpp and a gamma_distribution.cpp. I added both to the CMakeLists. They compile fine.
< mentekid>
The problem is when I #include "mlpack/core/dists/gamma_distribution.hpp", the compiler can't find the implementation of the functions (which are in gamma_distribution.cpp along with the boost #include directives)
< mentekid>
(I am compiling with g++ simpleTest.cpp -I include/ -L lib/ -larmadillo -std=c++11)
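For what it's worth, missing implementations at this point usually just mean the test program isn't being linked against the mlpack library that now contains gamma_distribution.cpp; assuming the built library sits in lib/, adding -lmlpack to the command above would be the likely fix:

    g++ simpleTest.cpp -I include/ -L lib/ -lmlpack -larmadillo -std=c++11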
nilay has joined #mlpack
< zoq>
nilay: There is something wrong with the modified pooling layer.
mentekid has quit [Ping timeout: 250 seconds]
< nilay>
zoq: could you pinpoint the error, or should I look into it?
< zoq>
nilay: It would be great if you could take a look, maybe you see something.
nilay_ has joined #mlpack
nilay_ has quit [Client Quit]
< zoq>
nilay: if I set stride = kSize it works
nilay has quit [Ping timeout: 250 seconds]
nilay has joined #mlpack
< nilay>
zoq: ok, but don't you think it's strange that the error comes after "reached here" is printed?
< zoq>
nilay: yeah, but it's definitely a side effect of the pooling layer