verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
keonkim has joined #mlpack
kris1 has joined #mlpack
kris1 has quit [Quit: kris1]
govg has joined #mlpack
partobs-mdp has joined #mlpack
< partobs-mdp>
zoq: rcurtin: Right now migrating back to LayerTypes. To this end, I'm trying to merge Sumedh's code into my PR. However, I get this compilation error (from gru_impl.hpp):
< partobs-mdp>
outputHidden2GateModule = new LinearNoBias<>(outSize, outSize);
< partobs-mdp>
(weird stuff, only last line is there)
< partobs-mdp>
The strange thing is that it crashes on LinearNoBias (complaining that it doesn't have all the template arguments) but doesn't crash on Linear
< partobs-mdp>
Obviously, it just doesn't see LinearNoBias in using LayerTypes = boost::variant<...>, but why?
< zoq>
partobs-mdp: What does layer_types.hpp look like?
< zoq>
partobs-mdp: looks good, I'll have to take a closer look into the issue
kris1 has joined #mlpack
< zoq>
partobs-mdp: I can't test it right now, but what happens if you put 'template<typename InputDataType, typename OutputDataType> class GRU;' after the LSTM in layer_types.hpp?
< partobs-mdp>
zoq: Didn't work :(
< zoq>
partobs-mdp: Okay, I guess what you could do is to remove the GRU related code; I'll see if I can take a closer look into the issue in the next few hours.
< zoq>
partobs-mdp: Including "gru.hpp" after "lstm.hpp" in "layer.hpp" should solve the problem.
< kris1>
lozhnikov: I think the O'Reilly example that you mentioned used the batch norm layer.
< kris1>
I don’t think we have that in mlpack right now.
< zoq>
Currently it does not work with the convolution layer.
< kris1>
Hmmm well i needed it just for that……
< zoq>
ah, okay, in this case you probably have to skip the layer for now
< partobs-mdp>
zoq: That resolved the issue, but there is still a long way to go - I've got a huge compiler error message. The latest version is pushed.
< zoq>
Looks like you missed some files: 'visitor/reset_cell_visitor.hpp' file not found
< kris1>
zoq: Do we have some equivalent of the reshape layer available?
< partobs-mdp>
zoq: Added reset_cell_visitor and reset_cell_visitor_impl to CMakeLists, still getting an error message - it's huge, but rather repetitive (it mostly complains about some boost::variant issue)
partobs-mdp has quit [Remote host closed the connection]
< zoq>
kris1: What does the reshape layer do?
< kris1>
Well, the example I am looking at is something like this: there is a linear layer whose output would be a column vector, which is reshaped into a 3d matrix with channels = 1 to feed into a CNN. The parameters of the linear layer are being learned as well.
< zoq>
kris1: You don't need a Reshape layer, the conv layer handles the reshape for you: take a look at the cnn test
< zoq>
I see, so in mlpack you don't need a reshape layer
< kris1>
I think i get it thanks...
< zoq>
if you need help with the model definition let me know
< kris1>
yup sure…
sheogorath27 has left #mlpack []
shikhar has joined #mlpack
< zoq>
partobs-mdp: 'mlpack/methods/visitor/forward_with_memory_visitor.hpp' file not found: we could just remove the header for now, it's only used by the NTM model
< zoq>
or maybe not ...
< zoq>
partobs-mdp: Looks like you forgot to add FFN&lt;NegativeLogLikelihood&lt;&gt;, RandomInitialization&gt;* in layer_types.hpp
< zoq>
partobs-mdp: You should use: boost::apply_visitor(ForwardVisitor(std::move(h), std::move(searchOutput)), search); instead of boost::apply_visitor(ForwardVisitor(std::move(h), std::move(searchOutput), search));
< zoq>
partobs-mdp: Also it looks like the TreeMemory uses the FFN class instead of LayerTypes.
< zoq>
partobs-mdp: And you might need to switch to LayerTypes instead of LayerTypes&.
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
< rcurtin>
just a heads-up: there will be some downtime for masterblaster probably late this week or next week; I've managed to convince some people to install a Titan X GPU
< rcurtin>
it seems like no long-running benchmark jobs or anything are running, so I think this should be no problem
< zoq>
no way ... awesome :)
< rcurtin>
too early to celebrate yet, but it seems likely at this point :)
< rcurtin>
a second Titan X should be able to be added a few weeks later, but we need to order some extra hardware and new power supplies for that
< zoq>
also, I forgot ... I'll keep my excitement low at least for the moment
< rcurtin>
I guess the peaks are probably big jobs starting
< rcurtin>
Erwan_: sorry for the slow response, I have been traveling
< rcurtin>
I don't do anything special for deserialization, typically that is just used in the mlpack main programs with 'data::Load()' and 'data::Save()'
< rcurtin>
Erwan_: if you want to open a bug report, at this point it sounds like what is going on in your case is a little complex, so maybe that is the easier way to solve it instead of over IRC
shikhar has quit [Quit: WeeChat 1.7]
vivekp has quit [Ping timeout: 248 seconds]
vivekp has joined #mlpack
govg has quit [Ping timeout: 240 seconds]
mikeling has joined #mlpack
vivekp has quit [Ping timeout: 248 seconds]
vivekp has joined #mlpack
< kris1>
Hi, zoq, are you there?
< kris1>
I have implemented that example, I am just having difficulty with the generator part
< kris1>
I am confused about what the padding size should be for the generator network. Since the strategy is "same" padding, the input and output dimensions should be the same
< kris1>
padding size is coming out to be around 29, which seems wrong to me....
< kris1>
lozhnikov: I also tried a classification test for GAN but the results were not good….. I used gaussian(0, 1) as real data and uniform(-5, +5) as noise and trained the GAN using that
< kris1>
Then I generated further data using the same distributions and tried to predict their labels using the Discriminator. I was getting around 33% accuracy, I don't know why
< kris1>
But i did not explore the idea further.
vivekp has quit [Ping timeout: 246 seconds]
vivekp has joined #mlpack
mikeling has joined #mlpack
vivekp has quit [Ping timeout: 260 seconds]
kris1 has joined #mlpack
vivekp has joined #mlpack
< kris1>
Figured out the convolution part....
kris1 has quit [Quit: kris1]
kris1 has joined #mlpack
< rcurtin>
zoq: as I go through the static analyzer output, I came across this one: