verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
prakhar_code[m] has quit [Ping timeout: 256 seconds]
killer_bee[m] has quit [Ping timeout: 276 seconds]
prakhar_code[m] has joined #mlpack
killer_bee[m] has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> manish7294/mlpack#71 (impBounds - 74236a6 : Manish): The build failed.
travis-ci has left #mlpack []
travis-ci has joined #mlpack
< travis-ci> manish7294/mlpack#6 (impBounds - 74236a6 : Manish): The build is still failing.
travis-ci has left #mlpack []
< jenkins-mlpack2> Project docker mlpack nightly build build #12: FAILURE in 4 hr 46 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/12/
caiojcarvalho has quit [Ping timeout: 260 seconds]
prakhar_code[m] has quit [Remote host closed the connection]
killer_bee[m] has quit [Read error: Connection reset by peer]
jenkins-mlpack has quit [Ping timeout: 260 seconds]
jenkins-mlpack has joined #mlpack
prakhar_code[m] has joined #mlpack
sourabhvarshney1 has joined #mlpack
< sourabhvarshney1> zoq: Hey!! Sorry for the excuses I made. I have now got an internship. Can I continue my project?
sourabhvarshney1 has quit [Ping timeout: 252 seconds]
< zoq> sourabhvarshney1: Hello there, no worries at all, sure let me know what you need.
< Atharva> Has anybody used nvblas for armadillo?
< zoq> Atharva: I used it some time ago.
< Atharva> zoq: How did you link it with g++, or do I need to install armadillo again?
< Atharva> Also, the documentation says it is installed along with CUDA, so I am assuming I already have it after installing CUDA
< zoq> Atharva: You should rebuild armadillo with BLAS_LIBRARY=/path/to/nvblas in the cmake step.
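A minimal sketch of that rebuild step, run from an unpacked Armadillo source tree; the libnvblas.so path below is an assumption (CUDA installs typically place it under the CUDA lib directory), so adjust it to your system:

    # hypothetical path; point BLAS_LIBRARY at wherever your CUDA install put libnvblas.so
    cmake -DBLAS_LIBRARY=/usr/local/cuda/lib64/libnvblas.so .
    make
    sudo make install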
< Atharva> zoq: Okay, I will let you know how that goes.
< Atharva> How was the performance by the way?
< zoq> Atharva: Also, I used nvprof to get some profiling info.
< Atharva> Okay
< zoq> Atharva: That depends on the method; in some cases it was even slower than OpenBLAS.
< Atharva> Oh, okay, let's see how much speedup I get
killer_bee[m] has joined #mlpack
< Atharva> zoq: I am a little confused with the `TransposedConvOutSize()` function.
< Atharva> It's not giving results that I expect.
< Atharva> For example, if input width = 14, stride = 1, padding = 1, then shouldn't output width be 16?
< Atharva> But it gives it as 18
< Atharva> Also, if I change the padding to 2, it still gives output width 18
< Atharva> and filter size = 5
< Atharva> Can you tell me what this function is evaluating, because it doesn't seem like the inverse of `ConvOutSize()`
vivekp has quit [Ping timeout: 268 seconds]
vivekp has joined #mlpack
< ShikharJ> zoq: Are you there?
< jenkins-mlpack2> Yippee, build fixed!
< jenkins-mlpack2> Project docker mlpack nightly build build #13: FIXED in 3 hr 25 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/13/
< zoq> Atharva: The outSize should change; I'll have to look into it. Perhaps Shikhar has an idea?
< zoq> ShikharJ: I'm here now.
< ShikharJ> zoq: Thanks for your suggestions, they worked for BinaryRBM, and I'm currently debugging for SpikeSlabRBM.
< ShikharJ> zoq: I'm getting about 75% accuracy for BinaryRBM, which is comparable to the SoftmaxRegression accuracy.
caiojcarvalho has joined #mlpack
< ShikharJ> zoq: I'm unable to determine what the ideal performance benchmark for our RBM class should be.
< Atharva> ShikharJ: Did you face any issues with the transposed conv layer?
< ShikharJ> Atharva: Like what exactly?
< Atharva> Something like outsize being wrong
< Atharva> Hmm, when you provide the input width, stride, padding, and filter size, what expression do you use to calculate the output width?
< Atharva> In my case, transposed conv is returning wrong output sizes
vivekp has quit [Read error: Connection reset by peer]
< ShikharJ> Atharva: It should be noted that in the case of mlpack, the way of computing the output width is different.
< Atharva> Okay, how exactly?
< ShikharJ> Atharva: See convolution_rules/naive_convolution.hpp, line 98 and onwards.
< Atharva> Okay, I will check
< Atharva> Thanks!
< ShikharJ> Atharva: You should also look here for the formulas that were used: https://arxiv.org/pdf/1603.07285.pdf .
vivekp has joined #mlpack
< zoq> ShikharJ: I got the same results; we could see if we can reproduce this: https://www.pyimagesearch.com/2014/06/23/applying-deep-learning-rbm-mnist-using-python/ What do you think?
< zoq> ShikharJ: About the transposed conv operation: if we change the padding, the outsize should still change.
< ShikharJ> zoq: outsize is the number of output channels we want a particular slice to have. I'm not sure how that should change with padding?
< Atharva> I think there has been a confusion, I meant output width
< zoq> right, output width
< ShikharJ> zoq: Atharva : Isn't that the case currently?
< Atharva> Sorry, I didn’t understand what case you are talking about. I was talking about the case when changing the padding doesn’t change the output height and width, everything else being constant.
< ShikharJ> Atharva: Ah, you should use this formula for calculating what output size you want:
< ShikharJ> size_t out = std::floor(size - k + 2 * p) / s; return out * s + 2 * (k - p) - 1 + ((((size + 2 * p - k) % s) + s) % s);
< ShikharJ> It is there in transposed_convolution.hpp. It is a general formula derived from the above paper.
< Atharva> ShikharJ: yeah, I saw that, I will try using this.
< ShikharJ> Atharva: Try substituting the values in the above formula and check if it changes the output width or not (it would be because of the first statement and the 2*(k - p) term).
< Atharva> zoq: could it be a specific case where even after changing the padding the output width didn’t change
< Atharva> Because in other cases it does work
< Atharva> I am outside right now, I will get back on this
< ShikharJ> I'm guessing if you substitute a k which is less than p, then that would try to decrease the output width.
< ShikharJ> But you also have to make sure that the first statement doesn't go negative, or the computations would be wrong.
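A minimal standalone sketch of the formula quoted above (the function name follows the chat; the test values in main() are just the examples from this discussion), which reproduces the behaviour Atharva observed:

    #include <cstddef>
    #include <iostream>

    // Output width of a transposed convolution, following the snippet quoted
    // from transposed_convolution.hpp above. The unsigned arithmetic assumes
    // size + 2*p >= k and k >= p; otherwise the subtractions wrap around.
    std::size_t TransposedConvOutSize(const std::size_t size, const std::size_t k,
                                      const std::size_t s, const std::size_t p)
    {
      const std::size_t out = (size - k + 2 * p) / s;  // integer (floor) division
      return out * s + 2 * (k - p) - 1 +
          ((((size + 2 * p - k) % s) + s) % s);
    }

    int main()
    {
      std::cout << TransposedConvOutSize(14, 5, 1, 1) << '\n';  // prints 18
      std::cout << TransposedConvOutSize(14, 5, 1, 2) << '\n';  // still 18
      // For s = 1 the padding terms cancel and the result is size + k - 1.
    }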
ImQ009 has joined #mlpack
navdeep has joined #mlpack
< navdeep> Hi I just started using mlpack
< navdeep> I was looking for a list of the C++ compiler dependencies for the different versions
< navdeep> Apparently, I made my app using mlpack 3.0.2 on my mac with cc version 7.3.0
< ShikharJ> navdeep: Welcome. I use the same gcc version, and I don't think I face any issues. What exactly is the problem that you're facing?
travis-ci has joined #mlpack
< travis-ci> manish7294/mlpack#72 (tree - 911327d : Manish): The build has errored.
travis-ci has left #mlpack []
< navdeep> It works fine on mac
< navdeep> But I get an error on a Linux box which has compiler version 5.4.0
< navdeep> and I just learnt my production compiler version would be 4.9.0
< ShikharJ> navdeep: Can you post the error? Maybe we can try and replicate?
< ShikharJ> navdeep: As far as I can tell, mlpack doesn't set a dependency on compiler versions. But I can't say which versions the current release has been tested against.
< navdeep> Sure..let me print errors..it's a different machine
< navdeep> Is there any place where I can upload file?
< ShikharJ> navdeep: Can you make use of pastebin?
< navdeep> checking
< navdeep> I am having this error while compiling the app on the Linux box
< navdeep> same app runs fine on mac
< navdeep> on the mac, though, I am using Xcode and have set up the flags and library dependencies in Xcode
< navdeep> here I am compiling the app using the command line
< ShikharJ> navdeep: Thanks for the information, I'll take a look shortly.
< zoq> "error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support must be enabled with the -std=c++11 or -std=gnu++11 compiler options."
< zoq> Looks like if you build with -std=c++11 you are fine.
< zoq> Your command should look something like:
< zoq> g++ test.cpp -o test -std=c++11 -lmlpack -larmadillo -lboost_serialization -lboost_program_options
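A minimal sketch of a test.cpp that the quoted command would compile; it assumes only that mlpack and Armadillo are installed, and its contents are purely illustrative:

    #include <mlpack/core.hpp>

    int main()
    {
      // Build a random 3x3 matrix and print it, just to verify that the
      // headers, libraries and -std=c++11 flag are all wired up correctly.
      arma::mat A = arma::randu<arma::mat>(3, 3);
      A.print("A:");
      return 0;
    }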
< zoq> ShikharJ: Haven't tested it yet, but do you think that the padding depends on other parameters like the kernel size?
< ShikharJ> zoq: I'm pretty convinced of the above equation. I derived it directly from the papers, and tested it on all the examples of transposed convolutions I could find.
< Atharva> ShikharJ: I am not using a filter size less than the padding.
< ShikharJ> zoq: Also, if you look at the https://arxiv.org/pdf/1603.07285.pdf relation 14, and see the value of p', that would answer your question.
< navdeep> -std=gnu++11 I was just trying that out
< navdeep> thanks a lot guys
< navdeep> I am very excited to use mlpack and very impressed by the support mechanism
< ShikharJ> Atharva: Can you tell me what exact parameters you are using? Maybe I can help with that?
< Atharva> <TransposedConvolution<> >(16, 1, 5, 5, x, y, z, w, 14, 14) This always returns output height and width as 18, no matter what values of w,x,y,z I use
< Atharva> It only varies with the filter size
< Atharva> maybe you can create a module and try to reproduce it
< Atharva> the same is happening with any filter size: no matter what padding and stride, the output width and height depend only on the filter size
< Atharva> I have observed that this constant value for a given filter size is what you get with stride = 1 and padding = 0 in the expression s * (inputWidth - 1) + k - 2*p
< ShikharJ> Atharva: For the case when stride = 1, the expression would evaluate to (size - 1 + k); So padding wouldn't have any effect at all.
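(Expanding the quoted formula for s = 1: out = size - k + 2*p, the remainder term is 0, and the result is (size - k + 2*p) + 2*(k - p) - 1 = size + k - 1, so the padding terms cancel exactly; e.g. 14 + 5 - 1 = 18 for the example above.)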
< Atharva> ShikharJ: Okay, but the same thing happens for any value of stride, as I said stride and padding are having no effect at all
navdeep has quit [Quit: Page closed]
navdeep has joined #mlpack
< navdeep> any chance of adding SVM to the list of algorithms?
< ShikharJ> Atharva: Sorry, I was away for dinner. Now that I think of it, it seems that this would hold true for all cases where (i + 2*p - k) is positive.
caiojcarvalho has quit [Quit: Konversation terminated!]
navdeep has quit [Ping timeout: 252 seconds]
< ShikharJ> Atharva: So from the example you posted above, (size=14, s=1, p=1, k=5) has an equivalent transposed convolution as follows:
< ShikharJ> It is equivalent to convolving a 12x12 matrix (o = (i + 2*p - k)/s + 1) with padding 3 (p` = k - p - 1), kernel 5 (k` = k) and stride 1 (s` = 1).
< ShikharJ> So if you change the padding now to 0 (p = 0), the equivalence comes out as follows:
< ShikharJ> It would be equivalent to convolving a 10x10 matrix with padding 4, kernel 5 and stride 1.
< ShikharJ> Similarly, for p = 4 (the maximum you can set the padding parameter to):
< ShikharJ> It would relate to a 18x18 matrix with padding 0, kernel 5 and stride 1.
< ShikharJ> Atharva: So for your use, set (size=14, p=3, s=1 and k=5). It would be equivalent to using a 16x16 matrix with padding 1 and stride 1 as well. Though, when you take padding into account, the final output would be 18. But in the case of transposed convolutions, you would never get purely zero-padded columns on the output. Hence you should try changing the kernel size to fit the actual output that you desire.
< ShikharJ> That is, the output that includes the padded columns as well.
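(Checking the cases against the relations quoted above, o = (i + 2*p - k)/s + 1 and p` = k - p - 1, with i = 14, k = 5, s = 1: p = 1 gives o = 12 and p` = 3; p = 0 gives o = 10 and p` = 4; p = 4 gives o = 18 and p` = 0; p = 3 gives o = 16 and p` = 1, matching the equivalent convolutions described.)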
< zoq> ShikharJ: Thanks for the clarification and the reference, pretty sure I missed a detail on my side.
< zoq> ShikharJ: Also what do you think about the rbm test?
< Atharva> ShikharJ: Thanks for explaining it!
< zoq> navdeep: The main issue about adding SVM support is to provide an implementation that offers something that you don't get from another library. One idea might be to provide a faster implementation, but I think in this niche it's difficult to beat something like libSVM.
< ShikharJ> zoq: I'll go through the link shortly. I was fixing up the ssRBM test for now.
< zoq> ShikharJ: Okay, great.
yaswagner has joined #mlpack
< Atharva> ShikharJ: I didn't get what you mean when you said p = k - p - 1
< ShikharJ> Atharva: It's p` = k - p - 1; The padding of the output matrix is p`.
< Atharva> Okay, so the output matrix in transposed convolution comes padded with zeros?
< ShikharJ> Atharva: It comes padded, not with zeros though.
< ShikharJ> Read my message above " in the case of transposed convolutions, you would never get pure zero padded columns on the output".
< ShikharJ> Since a transposed convolution is essentially just a backwards convolution.
< Atharva> So if I want to go from size 14 to 28, I need to use filter size 15?
< ShikharJ> Yeah.
< ShikharJ> Since the equivalent stride for the bigger matrix will always remain one.
< Atharva> Okay, thanks for the clarification and sorry for the trouble.
< ShikharJ> And the p parameter that you choose would then determine what the corresponding original matrix was, and the padding that it used to get to the size of 28.
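(Sanity check with the formula above: for s = 1 the output width is size + k - 1, so 14 + 15 - 1 = 28.)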
cjlcarvalho has joined #mlpack
< ShikharJ> zoq: Ok, ssRBM test seems to give a solid 82% accuracy on my system. I'll see if I can get the accuracy of BinaryRBM up.
< zoq> ShikharJ: Okay, already rechecked most of the code.
schizo has joined #mlpack
schizo has quit [Quit: Page closed]
ImQ009 has quit [Quit: Leaving]
killer_bee[m] has quit [Ping timeout: 240 seconds]
prakhar_code[m] has quit [Ping timeout: 256 seconds]
< ShikharJ> lozhnikov: zoq: I couldn't get the BinaryRBM accuracy above the SoftmaxClassifier accuracy. I tried a few variations with the VisibleMean() and HiddenMean() methods, but I didn't see any major improvement. I'll take a look at the link tomorrow.
< zoq> ShikharJ: Okay, I'll see if I can think of anything.
prakhar_code[m] has joined #mlpack
yaswagner has quit [Ping timeout: 252 seconds]
killer_bee[m] has joined #mlpack