verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< tham>
If the speed is fast enough, I would rewrite the CSV reader based on it
Stellar_Mind has joined #mlpack
< zoq>
tham: The approach looks interesting. I'm not sure if it's faster than e.g. the randomized SVD method, which is fast if m << n; that's the case for the edge boxes method (rank = 1).
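A minimal sketch of the randomized SVD zoq refers to, following the standard Halko et al. range-finder recipe with Armadillo (which mlpack builds on); the function name and the oversampling constant are illustrative, not mlpack API:

    #include <algorithm>
    #include <armadillo>

    // Sketch: rank-k randomized SVD of A (m x n); a small k keeps every
    // intermediate product cheap.
    void RandomizedSVD(const arma::mat& A, const arma::uword rank,
                       arma::mat& U, arma::vec& s, arma::mat& V)
    {
      // Oversample a little so the random projection captures the range of A.
      const arma::uword k = std::min(rank + 5, std::min(A.n_rows, A.n_cols));

      // Random test matrix; Y = A * Omega samples the column space of A.
      arma::mat Omega(A.n_cols, k, arma::fill::randn);
      arma::mat Q, R;
      arma::qr_econ(Q, R, A * Omega);  // orthonormal basis for that sample

      // Project A into the small subspace and decompose the k x n matrix.
      arma::mat B = Q.t() * A;
      arma::mat Ub;
      arma::svd_econ(Ub, s, V, B);
      U = Q * Ub;  // lift the left singular vectors back to the original space
    }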
< zoq>
tham: I wasn't able to look into the build issue (#681). If you think that splitting the implementation could decrease the build time, I think we should implement and test that first.
< tham>
zoq : in that case, I will try to implement the algorithm first
< tham>
about #681, there is another solution I haven't tried:
< tham>
put all of the implementation details into the .cpp file,
< tham>
and use a switch/case to deal with the different types,
< tham>
e.g. if (is_double) LoadCSV<double>(...)... things like that
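A hedged sketch of the idea tham describes for #681: keep the templated LoadCSV definition in a single .cpp so only that translation unit pays the Boost Spirit compile cost, and dispatch on a runtime type tag. DataType and LoadCSVAny are hypothetical names, not mlpack API:

    // load_csv.cpp -- the only file that includes the Boost Spirit headers.
    #include <string>
    #include <armadillo>

    enum class DataType { kDouble, kFloat, kInt };

    // Templated implementation; defined (and explicitly instantiated) in
    // this .cpp, so no header ever pulls in the parser.
    template<typename T>
    bool LoadCSV(const std::string& filename, arma::Mat<T>& matrix);

    // Non-template entry point; a switch picks the right instantiation.
    bool LoadCSVAny(const std::string& filename, DataType type, void* out)
    {
      switch (type)
      {
        case DataType::kDouble:
          return LoadCSV<double>(filename, *static_cast<arma::Mat<double>*>(out));
        case DataType::kFloat:
          return LoadCSV<float>(filename, *static_cast<arma::Mat<float>*>(out));
        case DataType::kInt:
          return LoadCSV<int>(filename, *static_cast<arma::Mat<int>*>(out));
      }
      return false;
    }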
< tham>
fast-cpp-csv-parser looks promising too; I will give that library a try as well
< zoq>
hm, that doesn't sound like the best solution to me; maybe I can find some time to test your code
< tham>
zoq : definitely not the best solution
< tham>
I knew the compile times of Boost Spirit are slow, but I never expected a single header file could slow down the compile time by almost 2x
tham has quit [Quit: Page closed]
< nilay>
zoq: How do I get the error info in the inception layer, to go backwards?
Stellar_Mind has quit [Ping timeout: 240 seconds]
K4k is now known as Guest38093
< nilay>
we set the first delta object,
< nilay>
and then calculate the gradient and go back?
< zoq>
The second argument of the Backward function is the error from the previous layer. That's the error for the gradient.
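For orientation, the Backward signature under discussion has this general shape in mlpack's ann layers (a sketch; exact types vary per layer), with comments paraphrasing zoq's explanation:

    template<typename eT>
    void Backward(const arma::Mat<eT>& /* input */, // input before the activation;
                                                    // unused by many layers
                  const arma::Mat<eT>& gy,          // error handed back from the
                                                    // previous layer (backward order)
                  arma::Mat<eT>& g);                // computed error to pass on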
< zoq>
nilay: You could use the network from the ConvolutionalNetworkTest to test the implementation.
< zoq>
e.g. use the inception layer for the second ConvLayer.
mentekid has quit [Remote host closed the connection]
Stellar_Mind has joined #mlpack
< nilay>
why is the first argument unused?
< nilay>
zoq: should I assume the error that we get from the previous layer is concatenated at the top? Then we split it and find the gradients (g).
< nilay>
also, once we have the 4 separate errors going back to the input layer, we should sum all those errors and return them, right?
< zoq>
The first argument is the input before the activation, which isn't interesting for e.g. the convolution layer, but it's interesting for e.g. the reinforcement layer.
< zoq>
You mean to accumulate the error from the 1x1 conv, 3x3 conv, ...?
< nilay>
yes
< zoq>
okay, yes
< zoq>
I'm not sure what you mean by "split the concatenated error from the top".
< nilay>
we must provide the input gy to the layer
< nilay>
the second argument, the error from the gradient, will be the input to the inception layer
mentekid has joined #mlpack
< zoq>
Do you mean what you have to use as input for the backward function inside of the inception layer? Because it's unused?
< nilay>
will the backward function start like this: base1.Backward(error, g1); bias1.Backward(g1, g2); conv1.Backward(g2, output)?
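A hedged sketch of what that chain could look like inside a hypothetical InceptionLayer::Backward, assuming the three-argument signature discussed above; the member names, the slice split (n1), and the OutputParameter() plumbing are illustrative:

    // Split the concatenated error from the next layer into per-path slices,
    // run each path's chain in reverse, and sum the per-path input errors.
    template<typename eT>
    void Backward(const arma::Cube<eT>& /* input (unused) */,
                  const arma::Cube<eT>& gy,
                  arma::Cube<eT>& g)
    {
      // n1 (illustrative): number of output maps produced by the 1x1 path.
      arma::Cube<eT> gy1 = gy.slices(0, n1 - 1);

      // 1x1 path, walked in reverse: activation -> bias -> convolution.
      arma::Cube<eT> g1, g2, gPath1;
      base1.Backward(base1.OutputParameter(), gy1, g1);
      bias1.Backward(bias1.OutputParameter(), g1, g2);
      conv1.Backward(conv1.OutputParameter(), g2, gPath1);

      // ... same for the 3x3 conv, 5x5 conv, and pooling paths ...

      // All four paths read the same input, so their errors add up.
      g = gPath1; // + gPath3 + gPath5 + gPathPool
    }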