verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
govg has quit [Ping timeout: 240 seconds]
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 268 seconds]
vivekp has joined #mlpack
govg has joined #mlpack
govg has quit [Ping timeout: 256 seconds]
govg has joined #mlpack
vivekp has quit [Ping timeout: 264 seconds]
vivekp has joined #mlpack
travis-ci has joined #mlpack
< travis-ci>
ShikharJ/mlpack#52 (GAN - b3c3da2 : Shikhar Jaiswal): The build has errored.
< alsc>
I am trying to get the types and weights out of an FFN model, with OutputVisitor.... messy
< rcurtin>
alsc: you could use FFN::Parameters() but I am not sure that gets you exactly what you need
< rcurtin>
it would be possible to add some kind of FFN::Get<LayerType>(size_t) function that returns a layer, then you could do FFN::Get<LayerType>(size_t).Parameters(), but you would need to know the layer type itself when you called that
< rcurtin>
that could be done via boost::variant::get(), which would throw an exception if the wrong LayerType was specified for a layer
< alsc>
rcurtin: thanks yeah I ended up using variant::get
< alsc>
it's kind of clumsy as I am just interested in the linear layers' weights, so I am relying on auto& linearLayer = get<Linear<arma::mat, arma::mat>*>(layers[li]); not throwing an exception
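A minimal sketch of the variant::get approach being discussed, assuming the FFN's internal vector of layer variants is reachable (alsc exposes it as FFN::network below); the exact variant type name depends on the mlpack version, so the container type is left as a template parameter:

    #include <iostream>
    #include <boost/variant.hpp>
    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>

    using namespace mlpack::ann;

    // Walk the FFN's layer list and print the dimensions of every Linear<>
    // layer's flat parameter matrix.
    template<typename LayerContainer>
    void PrintLinearParameters(LayerContainer& layers)
    {
      for (size_t li = 0; li < layers.size(); ++li)
      {
        // boost::get throws boost::bad_get when the variant does not hold a
        // Linear<arma::mat, arma::mat>* at this position.
        try
        {
          auto* linearLayer = boost::get<Linear<arma::mat, arma::mat>*>(layers[li]);
          const arma::mat& p = linearLayer->Parameters();
          std::cout << "layer " << li << ": " << p.n_rows << " x " << p.n_cols
                    << " parameters" << std::endl;
        }
        catch (const boost::bad_get&)
        {
          // Not a Linear<> layer; skip it.
        }
      }
    }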
< alsc>
but it seems to work. now the only thing that's kind of unexpected is that the n_rows and n_cols of .Parameters() aren't what I thought
< rcurtin>
I guess I am not sure how we could make it less clumsy though; I think the best we could give would basically be "auto& linearLayer = network.Get<Linear<>>(li)"
< alsc>
like: I constructed a network with the following lines
< alsc>
hold on
< rcurtin>
yeah, the linear layer looks to store memory all in one column: "weights.set_size(outSize * inSize + outSize, 1);"
< alsc>
ahhh
< alsc>
so is that the last one the biases?
< rcurtin>
hmm, I see that internally it has a 'weight' and 'bias' member that would be much more suited for what you want
< rcurtin>
but those aren't made accessible through a function or anything
< alsc>
in fact it wasn't a multiple of the layer size, let me check
< rcurtin>
but yeah, the last outSize parameters are the biases
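A small sketch of slicing that flat column back apart, assuming the layout just described (the first outSize * inSize entries are the column-major weight matrix, the last outSize entries are the biases):

    #include <mlpack/core.hpp>

    // Split a Linear<> layer's flat Parameters() column into an
    // (outSize x inSize) weight matrix and an outSize-element bias vector,
    // assuming the weights.set_size(outSize * inSize + outSize, 1) layout
    // from linear_impl.hpp.
    void UnpackLinear(const arma::mat& parameters,
                      const size_t inSize,
                      const size_t outSize,
                      arma::mat& weight,
                      arma::vec& bias)
    {
      // First outSize * inSize elements: the weight matrix, column-major,
      // matching the internal 'weight' alias.
      weight = arma::reshape(parameters.rows(0, outSize * inSize - 1),
                             outSize, inSize);
      // Last outSize elements: the biases.
      bias = parameters.rows(outSize * inSize, outSize * inSize + outSize - 1);
    }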
< alsc>
uhmm super weird. where's that weights.set_size(outSize * inSize + outSize, 1) ?
< rcurtin>
linear_impl.hpp:35
< rcurtin>
I think that's the constructor though, maybe one of the visitors is doing something else weird to it
< alsc>
ah yes
< alsc>
I get back to you with a pastebin, one sec
< alsc>
yeah that's it!
< alsc>
this made me spot a bug in my code actually, hehe
< alsc>
ah there's the assignment of n_rows and n_elems in the 2nd constructor
< alsc>
kind of awkward because Reset() is called at training time... in this case I am loading with boost::archive and I get all the dimensions squashed
< alsc>
(I am coding a vanilla decoder network in pure C that can use these coefficients)
< alsc>
shall I just call Reset(); on line 36?
< rcurtin>
no, I think one of the visitors calls Reset() when FFN::Reset() is called
< rcurtin>
I guess I am a little confused about how the linear layer you are getting has the wrong size, can you tell me more of what the issue is?
< alsc>
ah, well it's squashed into one dimension
< rcurtin>
yeah; if there was public access to the 'weight' and 'bias' members I think that those would be in the format you expect
< alsc>
ahh I see!
< alsc>
ok, I'll add it
< alsc>
I already had to expose FFN::network btw
< rcurtin>
yeah, I guess you could add a wrapper around variant::get<> if you wanted
< rcurtin>
and we could merge that also
< alsc>
Weights() and Biases() ?
< rcurtin>
nah, I'd just go with the capitalized version of what the internal member is called, so Weight() and Bias() (that would match the rest of the mlpack code)
< rcurtin>
if that works for you :)
< rcurtin>
(the other option is to change the internal names, which I guess is just fine, Linear<> is super simple anyway)
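For reference, a sketch of what those accessors could look like inside the Linear<> class declaration in linear.hpp, following the usual mlpack getter/modifier pattern ('weight' and 'bias' are the existing internal members):

    //! Get the weight matrix (outSize x inSize).
    OutputDataType const& Weight() const { return weight; }
    //! Modify the weight matrix.
    OutputDataType& Weight() { return weight; }

    //! Get the bias vector (outSize x 1).
    OutputDataType const& Bias() const { return bias; }
    //! Modify the bias vector.
    OutputDataType& Bias() { return bias; }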
< alsc>
hehe yeah
< alsc>
ok testing it
ImQ009 has joined #mlpack
< alsc>
rcurtin: ok works
< alsc>
the wrapper around variant::get<> you mean as a template method of FFN?
< vmg27>
error : libc++abi.dylib: terminating with uncaught exception of type boost::archive::archive_exception: input stream error-Undefined error: 0 unknown location:0: fatal error: in "AdaBoostTest/PerceptronSerializationTest": signal: SIGABRT (application abort requested) /mlpack/src/mlpack/tests/serialization.hpp:215: last checkpoint
< vmg27>
any help?
< alsc>
sorry, no idea, but "unknown location" looks like it has to do with the paths
sameeran has joined #mlpack
sameeran has left #mlpack []
< zoq>
vmg27: Do all serialization tests fail or just the AdaBoost one? e.g. SparseCodingTest
< zoq>
alsc: Looks good, not sure about the getNumberOfLayers name, do you think NetworkSize works as well?
alsc has quit [Quit: alsc]
< vmg27>
Yeah.. SparseCodingTest is failing too
< zoq>
How did you install mlpack and boost?
alsc has joined #mlpack
< alsc>
yeah sounds good, shall I?
< zoq>
alsc: Would you like to open a PR or should I cherry-pick the changes from your repo?
< alsc>
I don't know how to do a PR with non-consecutive commits
< alsc>
what's the difference?
< alsc>
ok last commit d0c44f6b17314991e84ed11de1f10aa24e9682ff renames it
< zoq>
Cherry-pick can be used to pull a single commit. What about creating another branch and redoing the changes over there?
< alsc>
branching from?
< vmg27>
I installed Boost via brew and followed the steps on the website to build mlpack, except for installing the dependencies
< alsc>
my master is not up to date with mlpack's
< zoq>
alsc: I see, and can you update the master branch: git remote add upstream https://github.com/mlpack/mlpack.git && git fetch upstream && git checkout master && git rebase upstream/master?
< zoq>
alsc: If not, I guess it's easier to cherry-pick the commit
< alsc>
it will have lots of conflicts....
< alsc>
yes please
< zoq>
alsc: Okay, let me do this later today, does this sound good?
< alsc>
yup sure
< alsc>
I already have the part we talked about last time committed on master
< alsc>
termination policies
< zoq>
ah nice
< alsc>
so it has diverged quite a lot from mlpack's master
< alsc>
in fact I am using it quite a lot... I am passing a lambda as the termination policy so it's handy, local to the calling code... computing validation accuracy, saving models, and plotting from there
< alsc>
zoq: I have refactored some of the default parameters of SGD into the default termination policy... you'll be able to follow it easily. let me know if I can be of help
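Purely illustrative sketch of the kind of lambda alsc is describing; this termination-policy hook exists in his fork, not in mlpack's released SGD, and the helpers named here are hypothetical stand-ins for the caller's own code:

    // Hypothetical helpers: ComputeValidationAccuracy and SaveModel stand in
    // for whatever the calling code does; only the lambda shape matters here.
    auto terminationPolicy = [&](const size_t epoch, const double objective)
    {
      const double valAcc = ComputeValidationAccuracy(model, validX, validY);
      SaveModel(model, epoch);  // checkpoint after every epoch
      // Returning true tells the optimizer to stop.
      return (epoch >= maxEpochs) || (valAcc >= targetAccuracy);
    };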
< zoq>
alsc: I guess, for the termination feature your plan is to open a PR?
< alsc>
probably yes, I should have branched though
< alsc>
what do you think could be the best way?
< alsc>
maybe I could branch to something where I revert to what's in mlpack/master
< alsc>
then cherry pick into there
< zoq>
as long as you only work on a single feature you can easily use the master branch; after the feature is merged, updating is easy, and in the future you should probably open a new branch :)
< alsc>
fact is that I have trimmed a lot of CMake and testing from my master, the stuff I don't need
< alsc>
maybe I'll just fork a new one into my personal account and change stuff in there