verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
tham has joined #mlpack
marcosirc has joined #mlpack
< marcosirc> zoq:
< marcosirc> Hi! I think Jenkins has finished.
< marcosirc> (the benchmarking)
marcosirc_ has joined #mlpack
marcosirc_ has quit [Client Quit]
tham has quit [Quit: Page closed]
< marcosirc> Is it shown online?
marcosirc has quit [Quit: WeeChat 1.4]
nilay has joined #mlpack
nilay has quit [Ping timeout: 250 seconds]
Mathnerd314 has quit [Ping timeout: 272 seconds]
mentekid has joined #mlpack
mentekid has quit [Ping timeout: 250 seconds]
Stellar_Mind has joined #mlpack
mentekid has joined #mlpack
tham has joined #mlpack
tham has quit [Quit: Page closed]
Stellar_Mind has quit [Ping timeout: 250 seconds]
nilay has joined #mlpack
nilay has quit [Ping timeout: 250 seconds]
tham has joined #mlpack
< tham> zoq nilay : Hi, about the PCA algorithm
< tham> Could this algorithm satisfy the requirements of edge boxes? https://maxwell.ict.griffith.edu.au/spl/publications/papers/prl07_alok_pca.pdf
< tham> zoq implemented randomized PCA; this one looks like a different algorithm
< tham> the performance looks quite good for large-scale data too
< tham> I can implement this algorithm this week; hope it helps the project
< tham> Before doing that, I want to ask: has anyone tried this algorithm before? Thanks
< tham> Or does armadillo already have one?
< tham> Sorry, not armadillo but mlpack. I checked mlpack's PCA; it looks like standard PCA
nilay has joined #mlpack
< tham> About issue #681
< tham> Does anyone have a better idea?
< tham> If the speed is fast enough, I would rewrite the CSV reader based on it
Stellar_Mind has joined #mlpack
< zoq> tham: The approach looks interesting. I'm not sure if it's faster than e.g. the randomized SVD method, which is fast if m << n; that's the case for the edge boxes method (rank = 1).
< zoq> tham: I wasn't able to look into the build issue (#681). If you think that splitting the implementation could decrease the build time, I think we should solve and test it first.
< tham> zoq : in that case, I will try to implement the algorithm first
< tham> about #681, there is another solution I haven't tried
< tham> put all of the implementation details into the .cpp
< tham> use a switch case to deal with the different types
< tham> if(is_double) LoadCSV<double>.... things like that
< tham> fast-cpp-csv-parser looks promising too; I will give that library a try as well
< zoq> hm, doesn't sound like the best solution to me, maybe I can find some time to test your code
< tham> zoq : definitely not the best solution
< tham> I knew the compile times of Boost Spirit were slow, but I never expected a single header file could almost double the compile times
tham has quit [Quit: Page closed]
< nilay> zoq: How do I get the error info in the inception layer, to go backwards?
Stellar_Mind has quit [Ping timeout: 240 seconds]
K4k is now known as Guest38093
< nilay> we set the first delta object
< nilay> and then calculate the gradient and go back?
< zoq> The second argument of the Backward function is the error from the previous layer. That's the error for the gradient.
< zoq> nilay: You could use the network from the ConvolutionalNetworkTest test, to test the implementation.
< zoq> e.g. use the inception layer for the second ConvLayer.
mentekid has quit [Remote host closed the connection]
Stellar_Mind has joined #mlpack
< nilay> why is the first argument unused?
< nilay> zoq: should I assume the first error that we get from the previous layer is concatenated at the top, then we split it and find gradients (g)?
< nilay> also, once we have 4 separate parameters to the input layer, we should sum all those errors and return them, right?
< zoq> The first argument is the input before the activation, which isn't interesting for e.g. the convolution layer, but it's interesting for e.g. the reinforcement layer.
< zoq> You mean to accumulate the error from the 1x1 conv, 3x3 conv ...?
< nilay> yes
< zoq> okay, yes
< zoq> I'm not sure what you mean with split the concatenated error from the top.
< nilay> we must provide the input gy
< nilay> to the layer
< nilay> the second argument, the error from the gradient, will be input to the inception layer
mentekid has joined #mlpack
< zoq> Do you mean what you have to use as input for the backward function inside of the inception layer? Because it's unused?
< nilay> will the backward function start like this: base1.Backward(error, g1); bias1.Backward(g1, g2); conv1.Backward(g2, output)?
< zoq> no, you have to specify a dummy input
< nilay> base1.Backward(someinput, error, g1); bias1.Backward(someinput, g1, g2); conv1.Backward(someinput, g2, output)
< zoq> yes, right
Stellar_Mind has quit [Ping timeout: 276 seconds]
nilay has quit [Ping timeout: 250 seconds]
mentekid has quit [Ping timeout: 244 seconds]
Mathnerd314 has joined #mlpack
mentekid has joined #mlpack
Guest38093 has quit [Quit: WeeChat 1.4]
K4k has joined #mlpack
nilay has joined #mlpack
< nilay> Where is the CNN<>::Gradient function called?
< zoq> nilay: UpdateGradients<>(network); (cnn_impl.hpp) calls the Update(..) function (cnn.hpp), which calls Gradient(...) on each layer.
< zoq> line 384 in cnn.hpp
< nilay> yeah, but then I tried to find where UpdateGradients<> gets called ...
< zoq> cnn_impl.hpp line 256
< nilay> ok ok thanks
< zoq> The main routine is in cnn_impl.hpp.
nilay has quit [Ping timeout: 250 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Ping timeout: 246 seconds]
sumedhghaisas has joined #mlpack
mentekid has quit [Ping timeout: 246 seconds]
mentekid has joined #mlpack
marcosirc has joined #mlpack
< zoq> marcosirc: Unfortunately, we have to rerun some benchmarks. The last build id wasn't correct; it's fixed in: https://github.com/zoq/benchmarks/commit/8dc1cb9b9c374d480f833effc49fc283fb70314
< zoq> We just have to rerun the base-cases benchmark; I also fixed a problem in the base-cases routine: https://github.com/zoq/benchmarks/commit/8593f8fda777aecdb3f0ea2c130329d792b96e42
< marcosirc> zoq: Ok, no problem. Thanks!
< sumedhghaisas> @rcurtin: I am getting an error in the mlpack build... when linking mlpack to mlpack_test. ld returned 1...
< sumedhghaisas> ../../../lib/libmlpack.so.2.0: undefined reference to `vtable for __gnu_cxx::recursive_init_error'
< sumedhghaisas> any idea??
sumedhghaisas has quit [Ping timeout: 252 seconds]
mentekid has quit [Ping timeout: 244 seconds]
Karl_ has quit [Ping timeout: 250 seconds]