verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
travis-ci has joined #mlpack
< travis-ci>
ShikharJ/mlpack#88 (ResizeLayer - 545ff57 : Shikhar Jaiswal): The build has errored.
< rcurtin>
zoq: correct me if I am wrong... it seems like the ppa resolution failures aren't causing the build to fail but instead the build is timing out
< zoq>
right, the first time it failed because it couldn't fetch the pandas package
< rcurtin>
oh, sorry, I must have looked at the wrong log... let me look again
< zoq>
I restarted the build, to see if this is just temporary
< rcurtin>
ah, got it, ok
< rcurtin>
it looks like the trusty images take significantly longer to compile and run than the xenial images
< rcurtin>
yeah, I see what you mean... I guess it is just random
< zoq>
let's hope it is
< zoq>
we have enough jobs in the queue to get some more information
< rcurtin>
agreed
Prabhat-IIT has joined #mlpack
< Prabhat-IIT>
zoq: rcurtin: After I fetched some changes in my local repo of mlpack, I'm facing a build issue: "No rule to make target '../src/mlpack/core/boost_backport/collections_load_imp.hpp', needed by 'src/mlpack/cotire/mlpack_CXX_prefix.hxx.gch'". Can I get a hint about what might have gone wrong?
scrcro has quit [Ping timeout: 260 seconds]
< rcurtin>
Prabhat-IIT: that sounds like a CMake issue... try wiping out your build directory and creating a new one...
< Prabhat-IIT>
rcurtin: thanks it worked :)
< rcurtin>
sure, happy to help
govg has joined #mlpack
Prabhat-IIT has quit [Ping timeout: 260 seconds]
AD_ has quit [Ping timeout: 260 seconds]
cyberp0ng has joined #mlpack
travis-ci has joined #mlpack
< travis-ci>
yashsharan/models#22 (master - 22a3119 : yash sharan): The build has errored.
sumedhghaisas has quit [Ping timeout: 268 seconds]
sumedhghaisas2 has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 256 seconds]
sumedhghaisas2 has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
AD_ has joined #mlpack
yamidark has joined #mlpack
yamidark has quit [Client Quit]
manish7294 has joined #mlpack
< manish7294>
zoq: rcurtin: Neighbor search only works with the 4 variants of LMetric. Whenever I try to use MahalanobisDistance or IPMetric I get a bunch of errors; according to the errors, those two are not being recognized as metrics. Is that how it is supposed to be?
daivik has joined #mlpack
daivik has quit [Client Quit]
daivik has joined #mlpack
< prashanthd>
rcurtin: while trying to build bindings for kernel PCA, can a loop be used to test each kernel?
sumedhghaisas2 has quit [Ping timeout: 256 seconds]
< manish7294>
prashanthd: I suppose you are referring to the kernel_pca bindings test. If I understand correctly, you want to test a particular thing for all kernels. If that's the case then you can surely use a loop to test each kernel, but the implementation might be a bit awkward; I think an array of kernels will be key here. I would recommend testing each of them individually,
< manish7294>
as using a loop can be somewhat cumbersome.
< rcurtin>
manish7294: the BinarySpaceTree will only work with LMetric, so if you want to use a different metric you will need a different tree type
sumedhghaisas2 has joined #mlpack
< manish7294>
I am using KDTree
< dk97[m]>
rcurtin: could you please have a look at the he and lecun initialization PR?
< dk97[m]>
I have added the tests
sumedhghaisas has quit [Read error: Connection reset by peer]
< manish7294>
rcurtin: The thing is, the error is also stating that MahalanobisDistance is not in the scope mlpack::metric.
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Read error: Connection reset by peer]
ank04 has quit [Quit: Page closed]
ank04 has joined #mlpack
ank04 has quit [Client Quit]
ank04 has joined #mlpack
< ank04>
Hey @sumedhghaisas2, I am a final year student pursuing a bachelor's in computer science. I want to participate in GSoC for the variational autoencoder project. I have read both the papers provided on the ideas page. I learnt that VAEs are an advanced version of GANs.
< ank04>
I have read a couple of blogs regarding this. For deeper understanding I am watching some YouTube videos. I have installed mlpack on my system and will start solving some sample programs. I want to implement a VAE on my own in a couple of weeks for better understanding.
Harps has joined #mlpack
ank04 has quit [Client Quit]
< prashanthd>
@manish7294 thanks
< manish7294>
rcurtin: I tried using the standard cover tree with the Mahalanobis metric explicitly included, and I still got the same bunch of errors.
AD_ has quit [Quit: Page closed]
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Read error: Connection reset by peer]
ImQ009 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 248 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
robertohueso has joined #mlpack
< rcurtin>
manish7294: I can't really help unless you provide me with the actual errors
< rcurtin>
and KDTree is a typedef of BinarySpaceTree
< robertohueso>
I'm currently implementing KDE, and I'm having trouble because of underflow when I sum up probabilities. I thought of using logarithms, is there any implemented function to get the logarithm of a sum? i.e. log(x_1 + x_2 + ... + x_n)
< manish7294>
Sorry, I realized that just after I messaged. I will send a link to the errors.
davidinouye has joined #mlpack
< davidinouye>
Hi, I am trying to build a Python scikit-learn wrapper for density estimation trees (DET) but using the python bindings, I cannot access the parameters of the model (i.e. split values etc.).
< zoq>
robertohueso: What about arma::trunc_log(arma::sum(...))?
< davidinouye>
Does anyone know how I might export or inspect the model in Python?
Harps has quit [Ping timeout: 260 seconds]
< rcurtin>
davidinouye: yeah, the bindings are built in such a way that parameters are not easily accessible. this is because of the need to provide a unified interface across languages
< rcurtin>
unfortunately, if you really want to provide fine-grained access to details of the DET class, your best bet is likely to handwrite a Cython .pxd file to provide the right members that you need
< rcurtin>
and then you can cast the 'output_model' result in the dict to the right class... you can take a look at the generated det.pyx file in your build directory under src/mlpack/bindings/python/mlpack/
< rcurtin>
manish7294: looks like you need to specify metric::MahalanobisDistance
< rcurtin>
"note: expected a type, got ‘MahalanobisDistance’"
< manish7294>
rcurtin: I tried experimenting with that, and the error I linked to is after explicitly including the Mahalanobis metric.
< manish7294>
With mlpack::metric::MahalanobisDistance I got 'MahalanobisDistance is not defined in metric'.
< davidinouye>
rcurtin: Okay thanks! Didn't know about the build/src/mlpack.... directory that has the generated pxd files. I'll look into that and see if I can come up with something.
< rcurtin>
it's a template class so you will also have to specify the template parameter TakeRoot (or at least add '<>' to the type)
< manish7294>
Yeah, done that too
< rcurtin>
davidinouye: sure, let me know if I can help out more. The bindings were made in this way to reduce the necessary maintenance to keep them in sync across languages
< rcurtin>
davidinouye: unfortunately, that means that, e.g., if you build a decision tree in Python you can't necessarily inspect it. so it's a little difficult to work around the problem you are having
< rcurtin>
manish7294: ok, so, what was the error then? I can't help you debug unless you provide me with full details of what's going on. I am sure this should work, it looks to me like a minor issue in the code
< manish7294>
rcurtin: I shall check it all over again in case something is wrong on my side; otherwise I will just be keeping you unnecessarily busy :)
< rcurtin>
I can glance and give you advice quickly, but I need error messages to be able to do that
< rcurtin>
zoq: yeah, the option is commented out also
< rcurtin>
years ago when I looked over Pari's implementation, it was not clear how to do the volume regularization in logspace
< rcurtin>
I think there's an issue open somewhere on github for it
< zoq>
I see, I will see if I can find the issue and probably add it to the main file.
< rcurtin>
sounds good, thanks
< manish7294>
rcurtin: Thanks! By following your advice and explicitly including mahalanobis_distance.hpp I got it working. But without including mahalanobis_distance.hpp I am getting https://pastebin.com/EJAAVyhC, whereas it works fine for LMetric.
< rcurtin>
right, mahalanobis_distance.hpp does need to be included
< rcurtin>
glad to hear you got it figured out
< manish7294>
Thanks again!
vivekp has quit [Ping timeout: 245 seconds]
vivekp has joined #mlpack
daivik has joined #mlpack
govg has quit [Ping timeout: 252 seconds]
govg has joined #mlpack
< caladrius[m]>
rcurtin: I should add the information related to FReLU to the mlpack-2.2.5 documentation, right?
witness has quit [Quit: Connection closed for inactivity]
< rcurtin>
caladrius[m]: no, because FReLU is not a part of mlpack 2.2.5
< caladrius[m]>
exactly where should I put it then? I cannot figure out where the documentation of the master branch is
< dk97[m]>
zoq: rcurtin I had a question about the backward and gradient methods in ann layers.
< dk97[m]>
I cannot understand what the difference between the two is.
< zoq>
dk97[m]: Backward performs a backpropagation step through the layer, with respect to the given input. Gradient computes the gradient of the layer's own parameters, given the input.
< dk97[m]>
So Backward calculates the backpropagation of the error through the layer, with respect to the input that was given to the layer?
< dk97[m]>
I am not able to understand the gradient of the layer. What exactly is meant by 'with respect to its own input'?
< dk97[m]>
zoq:
robertohueso has quit [Quit: Leaving.]
< zoq>
We could also say, with respect to the output of the forward step.
< zoq>
Note, not every layer implements a Gradient function, just the layers that need to update their own parameters.
tihcra has joined #mlpack
< dk97[m]>
So, we add a Gradient function to those layers that have trainable parameters?
< zoq>
correct
< dk97[m]>
But the internal definition of both the functions looks the same to me...
< dk97[m]>
The internal definition is the same then?
< zoq>
If you mean the function header, yes, we just pass the layer-specific parameter.
ShikharJ has joined #mlpack
< dk97[m]>
okay got it!
< dk97[m]>
Also, I have fixed the builds of dropout layer in the PR.
< dk97[m]>
Do let me know if anything else is needed.
robertohueso has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Ping timeout: 252 seconds]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
davidinouye has quit [Quit: Page closed]
Darkshadow has joined #mlpack
Darkshadow has quit [Client Quit]
ShikharJ has quit [Quit: Page closed]
< dk97[m]>
zoq: I have added the tests for alpha_dropout layer to the PR. :)
ImQ009 has quit [Read error: Connection reset by peer]
robertohueso has quit [Ping timeout: 256 seconds]
robertohueso has joined #mlpack
tihcra has quit [Quit: Page closed]
marioloko has joined #mlpack
Samir has joined #mlpack
petris has quit [Ping timeout: 260 seconds]
petris has joined #mlpack
petris has quit [Max SendQ exceeded]
petris has joined #mlpack
petris has quit [Max SendQ exceeded]
petris has joined #mlpack
petris has quit [Max SendQ exceeded]
petris has joined #mlpack
petris has quit [Max SendQ exceeded]
Samir has quit [Ping timeout: 268 seconds]
< rcurtin>
caladrius[m]: sorry for the slow response; I meant to put it in the comments for the class itself
petris has joined #mlpack
< rcurtin>
robertohueso: sorry I somehow did not see your message for KDE
< rcurtin>
I think you might also be able to use std::log(arma::sum()) but I am guessing you are having underflow issues in high dimensions?
< rcurtin>
it might be worth working with lower dimensional data to start with (if you have not done that already)
< rcurtin>
hang on sorry I am thinking of DETs. with KDE you should really only get underflow when the bandwidth is too small, although I guess it is possible in very very high dimensions