verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
manish7294 has joined #mlpack
cjlcarvalho has joined #mlpack
< manish7294> rcurtin: the BINDING_TYPE macro and its value are defined in mlpack_main.hpp, which always defines BINDING_TYPE no matter what the situation: https://github.com/mlpack/mlpack/blob/fd59d030ca31f51cc9a4864eb8f892266bfd1807/src/mlpack/core/util/mlpack_main.hpp#L26 The actual error is in the order in which random.hpp and mlpack_main.hpp are included in each method's main.cpp file.
< manish7294> In the main.cpp files, random.hpp (which uses BINDING_TYPE) is included well before mlpack_main.hpp (which defines BINDING_TYPE), and this is the case in almost all of the methods.
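(A self-contained C++ sketch of the preprocessor pitfall manish7294 describes; the stand-in "headers" and the fixedSeed flag are hypothetical, just to show the mechanism:)

    #include <iostream>

    // Stand-in for random.hpp: reads BINDING_TYPE. Since the macro is not
    // defined yet, the preprocessor treats it as 0 and silently takes the
    // wrong branch.
    #if BINDING_TYPE == 1
    static const bool fixedSeed = true;
    #else
    static const bool fixedSeed = false;
    #endif

    // Stand-in for mlpack_main.hpp: defines BINDING_TYPE, but too late for
    // the check above.
    #define BINDING_TYPE 1

    int main() { std::cout << fixedSeed << '\n'; }  // prints 0, not 1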
manish7294 has quit [Ping timeout: 252 seconds]
cjlcarvalho has quit [Ping timeout: 248 seconds]
lozhnikov has quit [Ping timeout: 268 seconds]
lozhnikov has joined #mlpack
vpal has joined #mlpack
vivekp has quit [Ping timeout: 260 seconds]
vpal is now known as vivekp
< jenkins-mlpack> Yippee, build fixed!
< jenkins-mlpack> Project docker mlpack nightly build build #370: FIXED in 2 hr 46 min: http://masterblaster.mlpack.org/job/docker%20mlpack%20nightly%20build/370/
< Atharva> zoq: The `Backward()` function in the FFN class doesn't go over the first layer of the network, but when the first layer is a Sequential layer, it should. Otherwise, the layers inside the Sequential object are left with empty errors.
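(A simplified sketch of the loop Atharva is referring to; the real FFN class applies BackwardVisitor to boost::variant layer types rather than calling a plain member function:)

    #include <cstddef>
    #include <vector>

    struct Layer
    {
      void Backward() { /* compute this layer's error (delta) */ }
    };

    void Backward(std::vector<Layer>& network)
    {
      if (network.empty())
        return;

      // The loop stops before index 0, so the first layer's Backward() is
      // never called. That is fine for, say, a Linear layer, but if
      // network[0] is a Sequential layer, the layers inside it never
      // receive their errors.
      for (std::size_t i = network.size() - 1; i > 0; --i)
        network[i].Backward();
    }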
vivekp has quit [Ping timeout: 240 seconds]
vivekp has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> manish7294/mlpack#54 (evalBounds - 35af793 : Manish): The build was fixed.
travis-ci has left #mlpack []
< zoq> Right, in most cases the backward call on the first layer isn't needed; the seq layer is an exception. I think there are two solutions here: either we add an identity layer, or we check inside the FFN class whether the layer implements the Model function.
< Atharva> zoq: Even if we add an identity layer, we will have to check if the layer implements the Model function. Instead, in that case, we can just call the BackwardVisitor once more. Am I right here?
< zoq> If we add an Identity layer before the seq layer, we will call Backward() on the seq layer, since it's then the second layer and not the first. Perhaps I missed something?
< zoq> If we check for the Model function, which acts as an indicator, we don't have to insert an extra identity layer.
< Atharva> zoq: Yes, you are right. My question is, do we ask the users to add the identity layer before the seq layer, or do we add it ourselves? In the latter case, we would have to check for the Model function anyway, right?
< zoq> Atharva: Right, I guess the second idea might be the way to go: less user interaction. What do you think?
< Atharva> zoq: I think that's better too. So, while adding a layer, we would have to check whether it has the Model() function and whether it is the first layer of the network. If so, we add an Identity layer before it.
< Atharva> Or, another option could be to check whether the first layer has the Model() function and just run the BackwardVisitor on it if it does. In this case, we don't have to add an extra layer, since only the backward pass is concerned with it.
< zoq> Agreed, that's easier.
< Atharva> zoq: Okay then, I will make these changes in one of my PRs.
< zoq> Great, but don't feel obligated; we could use the identity solution for now, if you like.
< Atharva> zoq: It's not a problem, I have already made a lot of changes locally, and these are minor.
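(A sketch of the check the two settle on, in plain C++17; Linear and Sequential here are hypothetical stand-ins, and mlpack's visitors use a similar SFINAE-based Model() detection internally:)

    #include <iostream>
    #include <type_traits>
    #include <utility>

    // Detect at compile time whether a layer type exposes a Model() function.
    template<typename T, typename = void>
    struct HasModel : std::false_type { };

    template<typename T>
    struct HasModel<T, std::void_t<decltype(std::declval<T&>().Model())>>
        : std::true_type { };

    struct Linear { };                    // ordinary layer: no Model()
    struct Sequential { int& Model(); };  // holds child layers

    int main()
    {
      // Only a first layer that holds a model (e.g. Sequential) needs the
      // extra backward call, so the layers it contains get their errors.
      std::cout << HasModel<Linear>::value << '\n';      // prints 0
      std::cout << HasModel<Sequential>::value << '\n';  // prints 1
    }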
cjlcarvalho has joined #mlpack
ImQ009 has joined #mlpack
cjlcarvalho has quit [Remote host closed the connection]
cjlcarvalho has joined #mlpack
cjlcarvalho has quit [Ping timeout: 248 seconds]
ImQ009 has quit [Ping timeout: 245 seconds]
ImQ009 has joined #mlpack
ImQ009 has quit [Quit: Leaving]