verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
govg has quit [Ping timeout: 260 seconds]
govg has joined #mlpack
govg has quit [Ping timeout: 255 seconds]
sumedhghaisas has quit [Ping timeout: 240 seconds]
govg has joined #mlpack
sumedhghaisas has joined #mlpack
partobs-mdp has joined #mlpack
kris1 has joined #mlpack
vivekp has quit [Ping timeout: 248 seconds]
sumedhghaisas has quit [Ping timeout: 240 seconds]
sumedhghaisas has joined #mlpack
partobs-mdp has quit [Ping timeout: 240 seconds]
sumedhghaisas has quit [Ping timeout: 240 seconds]
< jenkins-mlpack> Ryan Curtin: Fail gracefully when Python packages are not found.
govg has quit [Ping timeout: 255 seconds]
Erwan_ has joined #mlpack
< Erwan_> Hi there
< Erwan_> another question about serialization of GMMs
< Erwan_> I want to serialize a map of std::string to mlpack::...::GMM
< Erwan_> Should I loop over the map and call CreateNVP() on each item? Or should I use CreateArrayNVP() directly on my map?
< Erwan_> The first method produces a "count" exception from boost
< rcurtin> Erwan_: I think it would be possible to modify the serialization shims so that collections of objects with Serialize() can be properly serialized using boost's tools
< rcurtin> but certainly for now it would be much easier to just call CreateNVP() on each item
< rcurtin> CreateArrayNVP() is for C-style arrays of objects
< Erwan_> I was a bit vague about the exception: it is raised when trying to deserialize
< Erwan_> We serialize like this:
< Erwan_> for (auto it : gmm_effectModels_)
< Erwan_> {
< Erwan_>   ar & mlpack::data::CreateNVP(it.second, it.first);
< Erwan_> }
< Erwan_> which nicely fills the archive
< Erwan_> But when we deserialize, we get the following exception:
< Erwan_> terminate called after throwing an instance of 'boost::archive::xml_archive_exception'  what():  count
< rcurtin> hmm, could this be because the gmm_effectModels_ map is not filled?
< Erwan_> is it
< Erwan_> it is* (sorry)
< rcurtin> I think you would need to make a first pass to ensure that all the right keys are present in the gmm_effectModels_ map,
< rcurtin> ah, ok, it is, never mind then
< rcurtin> it seems to me that there should be no problem with this code; do you think you can isolate a little more of what is failing?
< Erwan_> in motions_sequences_ we have the gap_surface tag
< Erwan_> gap_surface is the map key; the map value is then a GMM from mlpack
< rcurtin> at a quick glance it looks ok to me; can you see how far it gets with deserialization before a failure?
< Erwan_> Just at gap_surface :)
< rcurtin> unfortunately I may go AFK here; I am on a flight and at some point they will tell me I have to turn off my laptop as we land...
< rcurtin> ok; do you think you can compile with debugging symbols and get a backtrace of exactly where the exception is raised? that could help diagnose what the issue might be
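For reference, an untested sketch of the kind of first pass rcurtin describes: storing the entry count and each key explicitly, so that loading can recreate the map entries before CreateNVP() fills in each GMM. The function name, the NVP names, and the mlpack::gmm::GMM type are illustrative assumptions, not Erwan_'s actual code.

    // Sketch only: save/load a std::map<std::string, GMM> by writing the
    // count and the keys first, so deserialization knows what to rebuild.
    template<typename Archive>
    void SerializeEffectModels(Archive& ar,
                               std::map<std::string, mlpack::gmm::GMM>& models)
    {
      size_t count = models.size();
      ar & mlpack::data::CreateNVP(count, "count");

      if (Archive::is_loading::value)
      {
        for (size_t i = 0; i < count; ++i)
        {
          // Load the key, then let CreateNVP() fill the newly created entry.
          std::string key;
          ar & mlpack::data::CreateNVP(key, "key");
          ar & mlpack::data::CreateNVP(models[key], key);
        }
      }
      else
      {
        for (auto& it : models)
        {
          std::string key = it.first;
          ar & mlpack::data::CreateNVP(key, "key");
          ar & mlpack::data::CreateNVP(it.second, key);
        }
      }
    }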
govg has joined #mlpack
< Erwan_> Sorry, I don't have more time for that now; I'll do it over the weekend
< Erwan_> I'll be back on Monday :)
< Erwan_> Thanks for your time anyway
Erwan_ has quit [Quit: Page closed]
partobs-mdp has joined #mlpack
< partobs-mdp> zoq: rcurtin: I finally managed to grind through the compile errors :) Now I've got a working TreeMemory compatible with FFN, and I'm now working on the actual HAM unit, which should already be reasonably close
< partobs-mdp> Although I should admit that it's a pity that the compiler is more of an obstacle than a help
< partobs-mdp> Ironically, those are about the same words I said about SVN in our previous discussion with rcurtin ^_^
kris1 has quit [Quit: kris1]
< zoq> Agreed; especially if you use templates, the compiler output is sometimes just garbage. But nice to hear that you could solve the issues.
mikeling has quit [Quit: Connection closed for inactivity]
< bvr> I am trying to run a simple linear regression with toy data representing
< bvr> line y = x, but I fail miserably here. Where am I going wrong?
< bvr>
< bvr> int main()
< bvr> {
< bvr>   // create data y = x
< bvr>   arma::mat input(
< bvr>       "1 2 -3 -1.5 8 7 4;"
< bvr>       "1 2 -3 -1.5 8 7 4");
< bvr>
< bvr>   // split x and y
< bvr>   arma::mat data = input.rows(0, 0);
< bvr>   arma::rowvec responses = input.row(1);
< bvr>
< bvr>   // train the model
< bvr>   mlpack::regression::LinearRegression lr;
< bvr>   lr.Train(data.t(), responses.t());
< bvr>
< bvr>   // output parameters
< bvr>   std::cout << lr.Parameters() << std::endl;
< bvr>
< bvr>   return 0;
< bvr> }
< bvr>
< bvr> The program above gives the following output:
< bvr>    0.0068
< bvr>    0.0068
< bvr>    0.0137
< bvr>   -0.0205
< bvr>   -0.0103
< bvr>    0.0547
< bvr>    0.0479
< bvr>    0.0274
keonkim has joined #mlpack
< zoq> bvr: Hello, LinearRegression expects the responses to be a rowvec.
< zoq> bvr: lr.Train(data.t(), responses.t()); -- remove the transpose from the second parameter.
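For reference, a minimal corrected sketch (untested). mlpack stores one data point per column, so the 1x7 data matrix already has seven one-dimensional points and the first transpose should arguably go as well; passing data.t() (a 7x1 matrix) makes mlpack see a single seven-dimensional point, which would explain the eight parameters in the output above. The headers and the y = x expectation are the only additions here.

    #include <mlpack/core.hpp>
    #include <mlpack/methods/linear_regression/linear_regression.hpp>

    int main()
    {
      // One-dimensional data: a 1x7 matrix holds seven points, one per column.
      arma::mat data("1 2 -3 -1.5 8 7 4");
      arma::rowvec responses("1 2 -3 -1.5 8 7 4");

      mlpack::regression::LinearRegression lr;
      lr.Train(data, responses);

      // For y = x the parameters should come out close to (0, 1):
      // the intercept and the slope.
      std::cout << lr.Parameters() << std::endl;

      return 0;
    }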
keonkim has quit [Client Quit]
< partobs-mdp> rcurtin: zoq: Almost got HAMUnit to compile, except for a couple of bugs. Could you take a look at the issue? (The latest code is in the PR)
< partobs-mdp> I have fixed all the issues I was able to fix on my own, but those two errors just don't make sense T_T
< zoq> partobs-mdp: Let's wait for the Travis build to fail so we can see the error log :)
partobs-mdp has quit [Remote host closed the connection]
keonkim has joined #mlpack
< kris1> zoq: I do not understand why the ssRBM is failing the second Travis test
< kris1> The test passes on my local system. Is there any config difference between test 1 and test 2? I was not able to find any.
< zoq> The second builds with -DDEBUG=ON; in this case armadillo checks e.g. matrix dimensions instead of just accepting everything the user wrote.
< zoq> The default build config is -DDEBUG=OFF.
< zoq> You should be able to reproduce the issue if you use: 'cmake -DDEBUG=ON .. && make'
< zoq> If you can't reproduce the issue, let me know.
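To illustrate the kind of check this toggles (a hedged sketch; if I recall the build setup correctly, mlpack's -DDEBUG=OFF config defines ARMA_NO_DEBUG, which strips armadillo's size and bounds checks):

    #include <armadillo>

    int main()
    {
      arma::mat A(3, 4, arma::fill::randu);
      arma::mat B(5, 2, arma::fill::randu);

      // With armadillo's debug checks on, this incompatible multiplication
      // throws std::logic_error at runtime; with ARMA_NO_DEBUG defined, the
      // sizes are not checked and the behavior is undefined.
      arma::mat C = A * B;

      return 0;
    }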
< lozhnikov> kris1: it seems the second test exceeded the time limit (30 min)
< lozhnikov> 100/113 Test #84: RbmNetworkTest ................... Passed 1276.57 sec
< zoq> My guess is we missed initializing some parameter, so that we use MAXSIZE for the allocation.
< kris1> lozhnikov: Were you able to look at the GAN PR by any chance? The gradients are now in the range of 1e-0 to 1e-5, which seems reasonable to me. I tried training the network for 1000 iterations / 1 epoch, but the results were not good. The blog post that I was following said they got their results after around 100 epochs or so... but they were training with the full data and I am training with only part of the data.
< kris1> The problem is that the training is pretty slow on my system; one epoch took around 20-25 minutes.
< lozhnikov> kris1: I refactored your PR, but I didn't get good results yet