verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
aashay has quit [Quit: Connection closed for inactivity]
govg has quit [Ping timeout: 260 seconds]
kesslerfrost has joined #mlpack
kesslerf_ has joined #mlpack
kesslerfrost has quit [Ping timeout: 240 seconds]
kesslerf_ has quit [Ping timeout: 260 seconds]
kesslerfrost has joined #mlpack
kesslerfrost has quit [Read error: Connection reset by peer]
kesslerfrost has joined #mlpack
vivekp has quit [Ping timeout: 268 seconds]
aashay has joined #mlpack
vivekp has joined #mlpack
kesslerfrost has quit [Read error: Connection reset by peer]
kesslerfrost has joined #mlpack
agneet42 has quit [Read error: Connection reset by peer]
govg has joined #mlpack
vinayakvivek has joined #mlpack
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
kesslerfrost has quit [Ping timeout: 240 seconds]
kesslerfrost has joined #mlpack
kesslerfrost has quit [Read error: Connection reset by peer]
kesslerfrost has joined #mlpack
bldskspod has joined #mlpack
< bldskspod> Is mlpack participating in GSoC this year?
BigChief has joined #mlpack
< BigChief> Accepted organisations will be announced on Feb 27th.
bldskspod has quit [Quit: Page closed]
BigChief has left #mlpack []
kesslerfrost has quit [Ping timeout: 240 seconds]
kesslerfrost has joined #mlpack
vivekp has quit [Ping timeout: 240 seconds]
kesslerfrost has quit [Quit: kesslerfrost]
vivekp has joined #mlpack
mikeling has joined #mlpack
kris has joined #mlpack
< kris> So, I am implementing the fan-in visitor pattern.
< kris> I was going through some of the visitor patterns, e.g. the output width one; they have conditions like HasCheckModel and HasCheckOutputWidth.
< kris> But when I try to run something like boost::apply_visitor(OutputWidthVisitor(), model); it results in an error.
< kris> I can't understand why. Can anyone explain?
< kris> Okay, I think the model layers have to implement the OutputWidth() function. Am I right?
< kris> At present, does any layer implement the OutputWidth() function?
kris has left #mlpack []
kris has joined #mlpack
< zoq> kris: Hello, if you call boost::apply_visitor(OutputWidthVisitor(), model); make sure model is of type LayerTypes.
< zoq> kris: Also the conv layer implements the output width function.
< kris> Okay, maybe this is a sklearn thing. But isn't the model something like FFN? So the model would consist of layers of type LayerTypes, right?
< zoq> That is right, but the visitor expects a single layer. You can take a look at line 292 in ffn_impl.hpp.
< zoq> kris: What you'd like is a combination of OutputWidthVisitor and ParametersVisitor. If a layer implements the Parameters() function, you can do return layer->Parameters().n_rows for the input size and layer->Parameters().n_cols for the output size.
< zoq> About the model: let's say model is of type FFN, then you can do:
< zoq> for (size_t i = 0; i < model.Model().size(); ++i)
< zoq> {
< zoq> boost::apply_visitor(OutputWidthVisitor(), model.Model()[i]);
< zoq> }
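A minimal sketch of the fan-in idea described above, combining the OutputWidthVisitor/ParametersVisitor suggestion into one visitor. It assumes LayerTypes is a boost::variant over layer pointers, as in mlpack's ann code; FanInVisitor and its helper functions are illustrative names, not existing mlpack classes.

    #include <boost/variant.hpp>
    #include <armadillo>
    #include <cstddef>

    // Illustrative visitor: if a layer exposes Parameters(), read the fan-in off
    // as Parameters().n_rows; otherwise fall back to 0.
    class FanInVisitor : public boost::static_visitor<size_t>
    {
     public:
      template<typename LayerType>
      size_t operator()(LayerType* layer) const
      {
        return FanIn(layer, 0);
      }

     private:
      // Selected when LayerType provides Parameters(); returns the weight rows.
      template<typename LayerType>
      auto FanIn(LayerType* layer, int) const
          -> decltype(layer->Parameters().n_rows, size_t())
      {
        return layer->Parameters().n_rows;
      }

      // Fallback for layers without trainable parameters.
      template<typename LayerType>
      size_t FanIn(LayerType* /* layer */, long) const { return 0; }
    };

    // Usage, following the loop above:
    //   for (size_t i = 0; i < model.Model().size(); ++i)
    //     size_t fanIn = boost::apply_visitor(FanInVisitor(), model.Model()[i]);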
< kris> Okay, so model.Model()[i] is equivalent to network[i]?
< zoq> yes, model.Model() returns the network parameter, which is a vector of type std::vector<LayerTypes>
< kris> Aha thanks.....
< kris> :)
< kris> This is more of a conceptual question. I have seen visitor patterns in which the implementation for all the types is given in the overloaded operator() method. They say it helps separate the algorithm from the data structure. But in the case of, let's say, mlpack, we implement all the functions not in the operator but in the layers themselves. Is there any specific reason for that?
< kris> Do you understand the question?
< kris> zoq
< zoq> Not sure I completely understand your question, but if you implement everything in the overloaded operator, doesn't that mean that every data structure works in the same way? E.g. let's say we have a queue and a stack; I can call insert or remove on both datatypes since there is already an abstraction.
< zoq> In the case of mlpack, the abstraction is implemented in each layer, so that the visitor can call e.g. Forward or Backward on the datatype.
< kris> Actually, you overload the implementation for each type in the operator() method. So we would have one implementation for the stack and another for the queue, the same as in the boost::variant example.
< zoq> I guess in a small project it does make sense to implement some functions in a single class, but I'm not sure it does for a bigger one. If someone would like to implement a new layer outside of mlpack, extending the existing visitor class isn't that easy, but implementing a new class is straightforward.
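A toy contrast of the two styles being discussed, using the queue/stack example from the conversation. The class names are made up for illustration only; none of this is mlpack or Boost library code.

    #include <boost/variant.hpp>
    #include <cstddef>
    #include <queue>
    #include <stack>

    using Container = boost::variant<std::queue<int>, std::stack<int>>;

    // Style A: all type-specific logic lives inside the visitor. Adding a new
    // container type means editing this class (and every visitor like it).
    struct SizeVisitorA : boost::static_visitor<size_t>
    {
      size_t operator()(const std::queue<int>& q) const { return q.size(); }
      size_t operator()(const std::stack<int>& s) const { return s.size(); }
    };

    // Style B (the mlpack approach): each type implements a common interface
    // (here size()), and one generic overload simply forwards to it. A new type
    // only needs to implement size(); the visitor never changes.
    struct SizeVisitorB : boost::static_visitor<size_t>
    {
      template<typename T>
      size_t operator()(const T& t) const { return t.size(); }
    };

    // Usage:
    //   Container c = std::stack<int>();
    //   size_t n = boost::apply_visitor(SizeVisitorB(), c);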
travis-ci has joined #mlpack
< travis-ci> eddelbuettel/mlpack#3 (master - fa2d416 : Dirk Eddelbuettel): The build passed.
travis-ci has left #mlpack []
< kris> zoq: Yeah, I get it now, thanks. I will submit the Xavier init PR by tonight or tomorrow.
kesslerfrost has joined #mlpack
< kris> zoq: model.Model() gives me an error saying that the FFN module has no member named Model()
< kris> *named Model
< zoq> kris: Ah, I haven't pushed that yet; you can add std::vector<LayerTypes>& Model() { return network; } to ffn.hpp.
kesslerfrost has quit [Ping timeout: 240 seconds]
kesslerfrost has joined #mlpack
kesslerfrost has quit [Read error: Connection reset by peer]
kesslerf_ has joined #mlpack
kesslerfrost has joined #mlpack
kesslerf_ has quit [Ping timeout: 240 seconds]
darkknight__ has quit [Ping timeout: 260 seconds]
mikeling has quit [Quit: Connection closed for inactivity]
kesslerfrost has quit [Ping timeout: 240 seconds]
kesslerfrost has joined #mlpack
kesslerfrost has quit [Ping timeout: 260 seconds]
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#1868 (master - db088bd : Ryan Curtin): The build is still failing.
travis-ci has left #mlpack []
kesslerfrost has joined #mlpack
kesslerfrost has quit [Read error: Connection reset by peer]
kesslerfrost has joined #mlpack
kesslerfrost has quit [Read error: Connection reset by peer]
kesslerfrost has joined #mlpack
kesslerfrost has quit [Ping timeout: 240 seconds]
nish21 has joined #mlpack
< nish21> I noticed something in the code for the Adam optimizer. In the for loop at line 73 in adam_impl.hpp, currentFunction is incremented along with the iteration count. Isn't it missing a modulo numFunctions?
< nish21> Suppose numFunctions is 3; visitationOrder will have 3 elements. In the 4th iteration of the loop, wouldn't we read past the end of visitationOrder and get garbage?
< zoq> nish21: In that case 'if ((currentFunction % numFunctions) == 0)' matches and resets currentFunction, so currentFunction can't be > numFunctions.
< nish21> Ahhh, yes. I missed that detail.
< zoq> nish21: Nevertheless, nice that you looked over the code.
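For reference, a stripped-down sketch of the loop bookkeeping being discussed; the variable names follow the conversation, but this is not the actual adam_impl.hpp code.

    #include <cstddef>

    // Sketch of the iteration/epoch pattern: currentFunction is incremented every
    // iteration, but the modulo check at the top of the loop resets it at each
    // epoch boundary, so visitationOrder[currentFunction] always stays a valid
    // index in [0, numFunctions).
    void IterationSketch(size_t numFunctions, size_t maxIterations)
    {
      size_t currentFunction = 0;
      for (size_t i = 0; i < maxIterations; ++i, ++currentFunction)
      {
        // Is this iteration the start of a new sequence (epoch)?
        if ((currentFunction % numFunctions) == 0)
        {
          // ... epoch-level work, e.g. reshuffling the visitation order ...
          currentFunction = 0;
        }

        // const size_t selected = visitationOrder[currentFunction];
        // ... take a gradient step on function 'selected' ...
      }
    }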
kesslerfrost has joined #mlpack
kesslerfrost has quit [Ping timeout: 240 seconds]
kesslerfrost has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#1869 (master - 2e565e7 : Ryan Curtin): The build was fixed.
travis-ci has left #mlpack []
kesslerfrost has quit [Ping timeout: 240 seconds]
kesslerfrost has joined #mlpack
nish21 has quit [Ping timeout: 260 seconds]
kesslerfrost has quit [Read error: Connection reset by peer]
kesslerfrost has joined #mlpack
agneet42 has joined #mlpack
kesslerfrost has quit [Read error: Connection reset by peer]
kesslerfrost has joined #mlpack
kesslerf_ has joined #mlpack
kesslerfrost has quit [Ping timeout: 240 seconds]
kesslerf_ has quit [Quit: kesslerf_]
agneet42 has quit [Remote host closed the connection]
aashay has quit [Quit: Connection closed for inactivity]
vinayakvivek has quit [Quit: Connection closed for inactivity]