verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
govg has quit [Ping timeout: 250 seconds]
govg has joined #mlpack
govg has quit [Ping timeout: 244 seconds]
govg has joined #mlpack
lozhnikov has quit [Ping timeout: 260 seconds]
lozhnikov has joined #mlpack
a-l-e has joined #mlpack
a-l-e has quit [Ping timeout: 260 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Client Quit]
sumedhghaisas_ has joined #mlpack
sumedhghaisas_ has quit [Client Quit]
sumedhghaisas__ has joined #mlpack
sumedhghaisas__ has quit [Client Quit]
sumedhghaisas_ has joined #mlpack
sumedhghaisas_ has quit [Client Quit]
sumedhghaisas__ has joined #mlpack
sumedhghaisas__ has quit [Client Quit]
sumedhghaisas_ has joined #mlpack
sumedhghaisas_ has quit [Client Quit]
sumedhghaisas__ has joined #mlpack
sumedhghaisas__ has quit [Client Quit]
sumedhghaisas_ has joined #mlpack
sumedhghaisas_ has quit [Client Quit]
sumedhghaisas__ has joined #mlpack
sumedhghaisas__ has quit [Client Quit]
sumedhghaisas_ has joined #mlpack
sumedhghaisas_ has quit [Read error: Connection reset by peer]
govg has quit [Ping timeout: 260 seconds]
govg has joined #mlpack
< zoq> I'm halfway through #825 ... a lot of code.
Cooler_ has quit [Quit: Saindo]
< rcurtin> yeah it is a huge amount, sorry to make so much work :P(
< rcurtin> ":(", accidentally hit the P somehow
kartik_ has joined #mlpack
< kartik_> hey there .. I've written a C++ program after building mlpack from source. I've added the paths too by installing it, and the command-line interface is working all fine.
< rcurtin> kartik_: glad to hear you got it working
< kartik_> thanks.. the error comes when I run the C++ code, which loads a matrix from a CSV file placed in the same folder
< kartik_> I'm not able to build the source file, sir
< kartik_> here is the error
< kartik_> In file included from /usr/include/c++/5/cstdint:35:0, from /usr/local/include/mlpack/prereqs.hpp:28, from /usr/local/include/mlpack/core.hpp:207, from mlpack_kartik.cpp:2: /usr/include/c++/5/bits/c++0x_warning.h:32:2: error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support must be enabled with the -std=c++11 or -std=gnu++11 compiler o
< rcurtin> "This support must be enabled with the -std=c++11 or -std=gnu++11 compiler"
< rcurtin> add -std=c++11 to your compiler flags :)
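A quick way to check just the compile step is to pass -c so the linker never runs; this is only a sketch using the file name from the paste above, and the link flags are worked out further below.

    g++ -std=c++11 -c mlpack_kartik.cpp   # compile only, produces mlpack_kartik.o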
< kartik_> <rcurtin> I have a very long error while compiling .. but let's see if this fixes it .. thanks
< rcurtin> kartik_: yeah, often the first error is the only important one; lots of C++ compilers like to print tons and tons and tons of lines of error output
< rcurtin> sometimes I have seen up to 10000 lines of error output when compiling some mlpack code with a simple syntax error in it! :)
< kartik_> yeah it's actually that 10000 one only .. :D
govg has quit [Ping timeout: 250 seconds]
govg has joined #mlpack
< kartik_> i used the c++11 flag this time .. but wasn't able to resolve the issues
< kartik_> gcc -std=c++11 mlpack_kartik.cpp /tmp/cci1tNJa.o: In function `arma::arma_incompat_size_string(unsigned long long, unsigned long long, unsigned long long, unsigned long long, char const*)': mlpack_kartik.cpp:(.text+0x63): undefined reference to `std::__cxx11::basic_stringstream<char, std::char_traits<char>, std::allocator<char> >::basic_stringstream(std::_Ios_Openmode)'
< kartik_> my code just loads a matrix.csv file placed in the same folder and saves it again in a csv file..
< kartik_> should i reinstall mlpack from source..
< kartik_> sorry for pasting such long silly errors..
< zoq> kartik_: You have to link against mlpack: 'g++ -std=c++11 -lmlpack -larmadillo mlpack_kartik.cpp'
< kartik_> $ g++ -std=c++11 -lmlpack -larmadillo mlpack_kartik.cpp /tmp/ccvD3aL0.o: In function `bool mlpack::data::Load<double>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, arma::Mat<double>&, bool, bool)':
< kartik_> here is my short code
< kartik_> #include <mlpack/core.hpp> using namespace mlpack; int main() { arma::mat data; data::Load("matrix.csv", data, true); arma::mat cov = data; data::Save("cov.csv", cov, true); }
< kartik_> <zoq> matrix.csv is in the same folder
< rcurtin> kartik_: that's a linker error, I'm not sure what is being linked against; take a closer look at the error
< rcurtin> maybe you need to link against a boost library like -lboost_program_options or -lboost_serialization
< zoq> Also, do you build on macOS?
< kartik_> ubuntu
< zoq> I remember that g++ is sometimes picky about the 'right' order. I guess this one is the correct one: g++ mlpack_kartik.cpp -std=c++11 -lmlpack -larmadillo -lboost_serialization -lboost_program_options
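For reference, here is the one-line program pasted above laid out as a source file, together with the full command suggested here; whether -lboost_program_options is actually needed depends on how mlpack was built, so treat the exact library list as a starting point.

    // mlpack_kartik.cpp: load matrix.csv and write it back out as cov.csv.
    #include <mlpack/core.hpp>

    using namespace mlpack;

    int main()
    {
      arma::mat data;
      data::Load("matrix.csv", data, true);   // fatal = true: fail loudly if loading fails
      arma::mat cov = data;
      data::Save("cov.csv", cov, true);
    }

compiled and linked with:

    g++ mlpack_kartik.cpp -std=c++11 -lmlpack -larmadillo -lboost_serialization -lboost_program_options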
kartik_ has quit [Ping timeout: 260 seconds]
kartik_ has joined #mlpack
< rcurtin> kartik_: another thing you could do for linking is use pkg-config (if you have that installed)
< rcurtin> g++ mlpack_kartik.cpp -std=c++11 `pkg-config --libs mlpack`
< rcurtin> I think that will work... we recently started shipping pkg-config .pc files, so maybe that will work... maybe :)
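pkg-config can also supply the include flags; adding --cflags on top of the --libs form shown above is an assumption, and it only works if mlpack's .pc file is installed somewhere pkg-config searches.

    g++ mlpack_kartik.cpp -std=c++11 `pkg-config --cflags --libs mlpack`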
< kartik_> thanks a lot rcurtin .. it worked, awesome
< rcurtin> oh, pkg-config worked?
< kartik_> yes it did..got the a.out file :)
< rcurtin> ok, great, I will keep that in mind in the future then
< kartik_> just one more silly thing.. I've built and installed both the 2.1 version and the master version of mlpack
< kartik_> I hope it won't cause any trouble in the future
< rcurtin> that may cause a problem, but I am not certain
< kartik_> ohkae .. thanks :)
kartik_ has quit [Ping timeout: 260 seconds]
govg has quit [Ping timeout: 246 seconds]
sumedhghaisas has joined #mlpack
cult- has joined #mlpack
< cult-> hi, anyone here?
< rcurtin> cult-: hi there, I am here now
< cult-> hi
< rcurtin> maybe others too but I can only speak for myself :)
< cult-> my question is about HMMs with a Gaussian distribution; I can't get past this. How should I define the distribution for my training data? I tried HMM<GMM> and also just HMM<GaussianDistribution> hmm(3, GaussianDistribution(2)); but every time the Viterbi output was just 0's
< cult-> if I have everything in place with HMM<GaussianDistribution> hmm(initial, transition, emission); I get a proper Viterbi result, but I have no idea how to define the distribution there from the data
< rcurtin> the hmm(3, GaussianDistribution(2)) call will give you a working HMM object but you'll need to call Train() with your data to make it a meaningful model
< rcurtin> I'm not sure I know what you mean by "define the distribution there from the data"
< cult-> well ok i did the training too, unlabeled
< cult-> then, i take the same observations for the Predict, and i expect that will return the states from the data (i predict on the training data)
< cult-> but no, just 0s
< rcurtin> I know that HMMs default-initialize the transition matrix to 1/numClasses for each element
< rcurtin> after you train, what are the initial and transition matrices?
< rcurtin> you can access them with hmm.Initial() and hmm.Transition()
< cult-> one moment
< rcurtin> there is some similar code to what you are doing as a test: src/mlpack/tests/hmm_test.cpp, line 655
< rcurtin> so it's definitely possible to train a GaussianDistribution HMM on unlabeled data
< rcurtin> we just need to figure out what's different between the two situations :)
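A minimal sketch of the flow being discussed, loosely following the hmm_test.cpp case mentioned above: build an HMM with default-initialized parameters, run unlabeled (Baum-Welch) Train(), then Predict() on the same sequence. The header paths, namespaces, and the arma::Row<size_t> state type are assumptions about the mlpack 2.x API, and matrix.csv stands in for whatever the observations really are.

    #include <mlpack/core.hpp>
    #include <mlpack/core/dists/gaussian_distribution.hpp>
    #include <mlpack/methods/hmm/hmm.hpp>
    #include <vector>

    using namespace mlpack;
    using namespace mlpack::hmm;
    using namespace mlpack::distribution;

    int main()
    {
      // One observation sequence: 2-dimensional points, one column per time step.
      arma::mat observations;
      data::Load("matrix.csv", observations, true);

      // 3 hidden states, each emitting from a 2-dimensional Gaussian.
      HMM<GaussianDistribution> hmm(3, GaussianDistribution(2));

      // Unlabeled (Baum-Welch) training takes a vector of sequences.
      std::vector<arma::mat> sequences{ observations };
      hmm.Train(sequences);

      // Most likely (Viterbi) state sequence for the training data.
      arma::Row<size_t> predictions;
      hmm.Predict(observations, predictions);
      predictions.print("predicted states");
    }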
< cult-> how can i print hmm.Transition() ?
< cult-> just access each element?
< rcurtin> std::cout << hmm.Transition() :)
< rcurtin> (or whatever stream you like)
< rcurtin> the Armadillo matrices are nice like that, they support output to streams
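Continuing in the same program as the sketch above, the trained parameters can be dumped along these lines; Mean() and Covariance() are the GaussianDistribution accessors assumed here.

    std::cout << "initial:\n" << hmm.Initial()
              << "transition:\n" << hmm.Transition();
    for (size_t i = 0; i < hmm.Emission().size(); ++i)
    {
      std::cout << "emission " << i << " mean:\n" << hmm.Emission()[i].Mean()
                << "covariance:\n" << hmm.Emission()[i].Covariance();
    }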
< cult-> ah sorry, I did print it, there's just too much output
< rcurtin> hm, shouldn't it be a 3x3 matrix?
< cult-> yes
< cult-> 0.3333
< cult-> 0.3333
< cult-> 0.3333
< cult-> and the same with 3x3
< cult-> that's fine
< cult-> but what about the emissions?
< cult-> mean: -1.0114
< cult-> -1.0422
< cult-> covariance: 19.3383 18.4243
< cult-> 18.4243 19.4789
< cult-> so everything's fine, but when I do hmm.Predict(observations[0], predictions); every prediction is 0
< rcurtin> what is observations[0]? is that your full training sequence?
< rcurtin> that first emission you printed looks fine... are the second and third emission distributions roughly the same?
< cult-> observations are the full training seq yes
< cult-> this is after Train() -> http://pastebin.com/eJJ4CEyC
< cult-> in order: initial, transition, emission mean, emission covariance
< cult-> so maybe the training with hmm(3, GaussianDistribution(2)) didn't work?
< rcurtin> yeah, but you have 3 different emission distributions, one for each class
< rcurtin> can you also print hmm.Emission()[1] and hmm.Emission()[2]?
< cult-> sure, one moment
< rcurtin> thanks
< rcurtin> hmmm, I wonder if the optimization starting from a uniform matrix is causing the problem
< rcurtin> can you try adding this before Train():
< rcurtin> arma::mat c = arma::randu<arma::mat>(3, 3);
< rcurtin> for (size_t i = 0; i < 3; ++i)
< rcurtin> c.row(i) /= arma::accu(c.row(i));
< rcurtin> hmm.Train() = c;
< cult-> hmm.Train() = c;
< rcurtin> ack, sorry, change all the calls to .row() with .col()
< rcurtin> ah, sorry! hmm.Transition() = c
< rcurtin> typing faster than I am thinking :)
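Putting the two corrections together, the suggested workaround as it would go right before Train() (normalizing columns rather than rows, and assigning to hmm.Transition(), per the fixes above):

    // Start Baum-Welch from a random column-stochastic transition matrix
    // instead of the uniform default.
    arma::mat c = arma::randu<arma::mat>(3, 3);
    for (size_t i = 0; i < 3; ++i)
      c.col(i) /= arma::accu(c.col(i));
    hmm.Transition() = c;
    hmm.Train(sequences);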
< cult-> ok
< rcurtin> it looks to me like your training set consists of three pretty separated gaussians
< rcurtin> but for some reason the training code is not picking up on that, and the optimization is failing with a perfectly uniform transition matrix
< cult-> true
< cult-> and it worked
< cult-> i show the output once again
< cult-> so they are identical except that hmm.Emission()[0] and hmm.Emission()[1] are swapped
< cult-> rcurtin: thank you very much for your help!
< rcurtin> I'm happy to help
< cult-> i will try this with real data and see if i still need the hack
< rcurtin> I put a lot of time into that HMM implementation back many years ago, I am very happy to see it used :)
< cult-> well
< rcurtin> I think that what I should do is change the initialization of the transition and initial matrices
< cult-> i am hesitating between ghmm and mlpack for hmm
< cult-> by the way it would be nice to have EM besides Baum-Welch for training
< rcurtin> I agree, that would be a nice thing
< rcurtin> ghmm is quite old, is it still maintained?
< rcurtin> I used to use HTK many years ago for speech HMMs but that library has died (actually it was already dead when I was using it in 2007!)
< cult-> i don't know, but with your community and help I see a good chance of picking mlpack ;)
< rcurtin> if you like you can open a bug on github asking for EM support for HMMs, and maybe someone will come along and do it
< cult-> awesome
< rcurtin> currently my mlpack time is pretty backlogged, but maybe someday I might get around to it... :)
< cult-> it will be used very well, i will be back soon
< cult-> good night and thanks!
cult- has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
sumedhghaisas has quit [Quit: Ex-Chat]
sumedhghaisas has joined #mlpack