verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< kris1> zoq: Don't you think I would have to implement the Optimize method in all the policies? Like the PCA example, where every policy implements the Apply method.
< arunreddy_> kris1: There would be other policies that directly influence the updates in the iteration.
< arunreddy_> for e.g. momentum policy class.
< arunreddy_> *working on it right now.
arunreddy_ is now known as arunreddy
< kris1> Okay, but suppose I implement a function f in some policy with parameters param1 and param2. Then I would have to implement this function in every policy with the same parameters, right?
< kris1> Because if policy.function() is not present in one of the policies passed as the template parameter, it would result in an error.
< kris1> arunreddy:
< arunreddy> How about having typed parameters set to default values? If you don't declare them, it resorts to the defaults; if you declare them, it uses the new parameters.
< arunreddy> rcurtin_: What do you suggest?
< kris1> Yes, but you still need to declare these with the same number of parameters, right?
< arunreddy> zoq
< arunreddy> yeah
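To illustrate kris1's point with a toy example (all names here are hypothetical, following the PCA-style policy pattern, not mlpack's actual API): the host template calls the policy member unconditionally, so every policy type must define it with a compatible signature, or the instantiation fails to compile.

    #include <armadillo>

    class PolicyOne
    {
     public:
      // Every policy must expose Update with this signature...
      void Update(arma::mat& iterate,
                  const arma::mat& gradient,
                  const double stepSize)
      {
        iterate -= stepSize * gradient;
      }
    };

    class PolicyTwo
    {
     public:
      // ...because Optimizer::Step below calls it unconditionally.
      void Update(arma::mat& iterate,
                  const arma::mat& gradient,
                  const double stepSize)
      {
        iterate -= 0.5 * stepSize * gradient; // Some alternative strategy.
      }
    };

    template<typename UpdatePolicy = PolicyOne>
    class Optimizer
    {
     public:
      void Step(arma::mat& iterate,
                const arma::mat& gradient,
                const double stepSize)
      {
        // A compile error results here if UpdatePolicy lacks a matching Update.
        policy.Update(iterate, gradient, stepSize);
      }

     private:
      UpdatePolicy policy;
    };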
flyingpot has joined #mlpack
flyingpot has quit [Ping timeout: 240 seconds]
< arunreddy> kris1: As you update the step size, how about having a function GetStepSize(params...) that returns the step size for the different policy classes?
< kris1> Yes, I can add that.
< kris1> Do we really need constructors for the policy classes? I don't see a need.
< arunreddy> Refer to the "C++ Features > Templates" section in the design guidelines.
< kris1> That doesn't say we need a constructor for every class.
< arunreddy> Yeah. If you look at the LMetric class, it uses different template parameters to pick the right evaluation function.
< kris1> zoq: i have done something like this https://gist.github.com/kris-singh/e9ab5ebe4b54175fd860204d33e85597
< kris1> arunreddy: this is what I am doing.
< kris1> I will now also add the GetStepSize(params) function, and I will convert all the variable names to camelCase.
< arunreddy> A suggestion: why don't you pass a reference to the decay type to the constructor?
< arunreddy> The design document suggests using references as and when possible.
< arunreddy> kris1
< kris1> Yes, I will edit that; I saw that too.
< kris1> Though I feel there is a better way to use the policy class, something like lmetric.hpp.
< kris1> Anyway, I should sleep a bit; I have a class at 11.
< arunreddy> Back-to-back all-nighters..
< arunreddy> get some rest.
< zoq> kris1: https://gist.github.com/zoq/ba79b34e51d0a99aca907157e45770ea basically all policies depend on the iteration (time); for the rest we can use the constructor.
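To make that concrete, a hypothetical step-size policy in the style zoq describes (the name and signature are illustrative): per-iteration state arrives as a call argument, while fixed hyperparameters are set once in the constructor.

    #include <cstddef>

    class DecayPolicy
    {
     public:
      explicit DecayPolicy(const double decay = 0.01) : decay(decay) { }

      // The iteration (time) is passed in on every call; decay was fixed
      // at construction.
      double StepSize(const double baseStepSize, const size_t iteration) const
      {
        return baseStepSize / (1.0 + decay * iteration);
      }

     private:
      double decay;
    };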
< arunreddy> zoq: So do you suggest overloaded constructors for SGD?
< zoq> arunreddy: Either overloading or specialize the constructor for each policy.
< zoq> Depending on what you'd like to do, overloading might not be possible:
kris1 has quit [Quit: Leaving.]
< zoq> does not work
< zoq> ah, one should be PolicyOne and the other PolicyTwo
< zoq> But I think in the momentum case it should work without using enable_if.
< arunreddy> Can't we have something like https://gist.github.com/zoq/ba79b34e51d0a99aca907157e45770ea
< arunreddy> check the comment
< arunreddy> zoq
< zoq> see my comment
< zoq> It's totally fine not to use enable_if; it might be easier, not sure.
< zoq> it should be Optimizer<PolicyTwo> and not just Optimizer
< zoq> This case is kinda special because we can't use float/double as template parameters. Otherwise we could do Optimizer<PolicyTwo<0.3> > optimizer;
< arunreddy> I thought about that, but passing 0.3 as a template parameter is not that clean.
< arunreddy> Refer to my comment in policy.hpp
< arunreddy> How about having an Optimizer<true> for the vanilla PolicyOne?
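Side note on why Optimizer<true> is possible but Optimizer<PolicyTwo<0.3>> is not: integral types such as bool are valid non-type template parameters, while float/double are not (before C++20). The usual workaround, sketched here with hypothetical names, moves the value into the policy's constructor:

    // Integral non-type template parameters are allowed:
    template<bool Vanilla>
    class FlagOptimizer { };

    // ...but floating-point ones are ill-formed before C++20:
    // template<double StepSize> class PolicyTwo { };

    // Workaround: keep the policy a plain type and pass 0.3 at construction.
    class PolicyTwo
    {
     public:
      explicit PolicyTwo(const double stepSize = 0.3) : stepSize(stepSize) { }

     private:
      double stepSize;
    };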
< arunreddy> And to add to it: SGD with momentum has to store the velocity matrix across iterations..
< zoq> hm, I think I would go with an Empty policy class instead of Momentum<false>.
flyingpot has joined #mlpack
< zoq> About the velocity matrix: you can hold that matrix inside the Momentum policy class, right?
< arunreddy> Yeah, only in the Momentum policy class, but not in the Empty policy class.
< zoq> That would just do vanilla SGD - template<typename DecomposableFunctionType, typename UpdatePolicy = EmptyUpdatePolicy> SGD
< zoq> since the update function in EmptyUpdatePolicy does nothing.
flyingpot has quit [Ping timeout: 260 seconds]
< zoq> SGD<FunctionType, Momentum> optimizer; uses SGD with momentum
< zoq> the cool thing is that if someone wants to use another learning rate update strategy, they can just define another policy
< arunreddy> Now Momentum will have two functions: one for initialization of the velocity based on iterate.n_rows and iterate.n_cols,
< arunreddy> and the second for updating it in every iteration.
< arunreddy> What do you think?
< arunreddy> Yeah, for Nesterov Momentum there can be another UpdatePolicy class like NesterovMomentum..
< zoq> not sure I get your point; maybe it's too late
< arunreddy> Sorry for keeping you late.
< zoq> It might be possible to use the constructor of the policy class instead of an init function; not sure that's a good idea, I'd have to think about that. But yeah, the rest looks good. I think you can do it for now with the init function; if we can think of something better, the change would be easy. What do you think?
< arunreddy> Sounds like a plan, I can start with it. At the constructor level, we don't know the size of the iterate matrix.
< arunreddy> I am still thinking about how to do it at the constructor level. But for now I will get it moving.. :)
< zoq> yeah, in that case we have to create the policy instance after we know the size.
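A minimal sketch of the two-function momentum policy discussed above (names hypothetical, not mlpack's final API); the velocity is sized in Initialize precisely because, as just noted, the iterate's dimensions aren't known at construction time:

    #include <armadillo>

    class MomentumUpdate
    {
     public:
      explicit MomentumUpdate(const double momentum = 0.9) : momentum(momentum) { }

      // First function: called once the size of the iterate matrix is known.
      void Initialize(const size_t rows, const size_t cols)
      {
        velocity = arma::zeros<arma::mat>(rows, cols);
      }

      // Second function: called every iteration. The velocity matrix persists
      // across calls inside the policy, as discussed; whether the policy applies
      // the step itself or only adjusts it is exactly the design question here.
      void Update(arma::mat& iterate,
                  const double stepSize,
                  const arma::mat& gradient)
      {
        velocity = momentum * velocity - stepSize * gradient;
        iterate += velocity;
      }

     private:
      double momentum;
      arma::mat velocity;
    };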
< zoq> You are in UTC -7h if I remember right?
< arunreddy> That makes the initialization a runtime step, then.
< arunreddy> Yeah.
< arunreddy> How about you? Are you on UTC+1?
< zoq> some hours left :)
< zoq> yes, right
< arunreddy> looks like it is getting super late for you.
< zoq> About to get some sleep :)
< arunreddy> ok good night.
vivekp has quit [Ping timeout: 260 seconds]
vpal has joined #mlpack
vinayakvivek has joined #mlpack
flyingpot has joined #mlpack
shihao has joined #mlpack
< shihao> Hi there, I am a little confused about the process of calculating probabilities in NBC.
< shihao> In order to decrease floating point errors, we calculate a sum of logs. Here is the code: testProbs.col(i) += (data.n_rows / -2.0 * log(2 * M_PI) - 0.5 * log(arma::det(arma::diagmat(variances.col(i)))) + exponents);
< shihao> Why does the probability need to be scaled by data.n_rows?
< shihao> And I think the coefficient here, '-2.0 * log(2 * M_PI)', should be -0.5 * log(2 * M_PI), like the second component.
< shihao> I guess the idea is log(Pr1) + log(Pr2) == log(Pr1*Pr2). But when I calculate e^log(Pr1*Pr2), the results don't lie between 0 and 1.
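For reference on the two questions above, writing d for data.n_rows (the number of features), the quoted line computes the log-density of a diagonal-covariance Gaussian:

    \log \mathcal{N}(x \mid \mu, \Sigma) = -\frac{d}{2}\log(2\pi) - \frac{1}{2}\log\lvert\Sigma\rvert - \frac{1}{2}(x - \mu)^{\top}\Sigma^{-1}(x - \mu)

The -0.5 log(2π) coefficient appears once per feature and sums to -(d/2) log(2π), which is where the data.n_rows / -2.0 factor comes from. Note also that these are unnormalized per-class log-densities, and a density can exceed 1, so exponentiating them need not give values in [0, 1].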
paws has joined #mlpack
< paws> Hi, I'm interested in participating in GSoC. Where do I get started?
< shihao> The docs are good: http://mlpack.org/docs.html.
< shihao> Maybe you can pick an entry-level issue to get started.
usama has joined #mlpack
Narayan has joined #mlpack
< paws> ok thanks
usama has quit [Ping timeout: 260 seconds]
< arunreddy> this can be helpful too. http://www.mlpack.org/involved.html
< shihao> Hi arunreddy. Do you have any idea about what I asked before?
shihao has quit [Quit: Page closed]
< arunreddy> shihao: Sorry, I didn't see your message.
< arunreddy> So, it's not the regular naive Bayes, but the GMM version of it.
< arunreddy> The probability calculation is based on the pdf of a normal distribution.
< arunreddy> You can refer to eq. (3) in section 2.1 to get an idea.
< arunreddy> Also note that they are conditional probabilities; they don't necessarily sum to 1.
< arunreddy> P(A|C) + P(B|C) != 1
< arunreddy> zoq: Made some progress based on our discussion. https://github.com/arunreddy/mlpack/tree/sgd_momentum_policy/src/mlpack/core/optimizers/sgd
< arunreddy> There is some issue with the Optimize declaration being used by the regularized SVD function; I couldn't get the build working.
< arunreddy> The following is the build error: http://pastebin.com/HqUrVZMx
< arunreddy> Maybe I should dig a little into the regularized SVD implementation.
Narayan has quit [Ping timeout: 260 seconds]
deepooja has joined #mlpack
< deepooja> Hello
< arunreddy> hello..
< deepooja> I am here for the GSoC 2017
paws has quit [Ping timeout: 260 seconds]
< deepooja> I can see some really good ideas here https://github.com/mlpack/mlpack/wiki/SummerOfCodeIdeas
< arunreddy> me too.
< deepooja> Who is the mentor here, arunreddy?
< arunreddy> it's rcurtin_ and zoq
< deepooja> Oh thank you arunreddy
< deepooja> I am Deepanshu Thakur, a final-year student at Arya College of Engineering and I.T., Jaipur, India
< deepooja> I am currently working with Celebal Corp (celebal.com) as a data science trainee
< deepooja> What about you, arunreddy?
< arunreddy> I am a third year PhD student at Arizona State University, USA.
< deepooja> Oh good :) Aren't you feeling sleepy right now?
< deepooja> Which idea are you willing to work on?
< arunreddy> recurrent neural networks.
< deepooja> cool :)
< arunreddy> :)
< arunreddy> you have something in mind?
< deepooja> yes, I would love to work on either Reinforcement Learning or the Fast k-centers Algorithm & Implementation
< deepooja> is this your first gsoc?
< arunreddy> sweet. Yes it is.
< deepooja> good all the best :)
< deepooja> Except for rcurtin_ and zoq, is everyone else a GSoC participant?
< govg> I'm not, but I assume everyone else is.
< deepooja> Hi govg
< govg> Hi
< deepooja> How long have you been on this channel, and how are you involved with mlpack?
< govg> I'm not affiliated to mlpack in any way.
< govg> I used to frequent this channel from around 2013, I guess, when I had to use it. I also maintained the Arch Linux package for a while; I still lurk around here.
< govg> Or maybe 2014, dunno.
< deepooja> haha that's a really long time.
< govg> Yeah I tend to just add channels into my IRC client and forget about them.
< deepooja> haha, which IRC client are you using?
< govg> Though now I have some passing familiarity with mlpack, by virtue of this being one of the only channels I'm active on :)
< govg> irssi
< deepooja> Since you have some familiarity with mlpack, can you please tell me more about this project?
< govg> Project as in mlpack?
< govg> I do not know much about the current proposals, sorry.
< govg> You should wait for zoq or rcurtin to respond, they will do so eventually.
< deepooja> Nah, I am not talking about the current proposals, but yes, I was going through mlpack's GitHub page and I am enjoying the introduction.
< govg> Okay.
Thyrix has joined #mlpack
vinayakvivek has quit [Quit: Connection closed for inactivity]
vinayakvivek has joined #mlpack
Vladimir_ has quit [Quit: Page closed]
pg has joined #mlpack
flyingpot_ has joined #mlpack
flyingpot has quit [Ping timeout: 240 seconds]
hxidkd has joined #mlpack
flyingpot_ has quit [Read error: Connection reset by peer]
flyingpot has joined #mlpack
pg_ has joined #mlpack
kris1 has joined #mlpack
pg has quit [Ping timeout: 260 seconds]
< pg_> Hello, I've gone through the list of ideas for 2017.. How can I get started with the initial contribution?
shikhar has joined #mlpack
kris2 has joined #mlpack
flyingpot_ has joined #mlpack
flyingpot has quit [Ping timeout: 240 seconds]
kris1 has quit [Ping timeout: 246 seconds]
flyingpot has joined #mlpack
flyingpot_ has quit [Ping timeout: 240 seconds]
flyingpot_ has joined #mlpack
flyingpot has quit [Ping timeout: 264 seconds]
hxidkd has quit []
pg_ has quit [Ping timeout: 260 seconds]
hxidkd has joined #mlpack
hxidkd has quit [Client Quit]
Thyrix has quit [Quit: Thyrix]
flyingpot_ has quit [Ping timeout: 240 seconds]
irakli_p has joined #mlpack
irakli_p has quit [Client Quit]
vikas has quit [Ping timeout: 260 seconds]
Thyrix has joined #mlpack
dhawalht has joined #mlpack
dhawalht has quit [Quit: Page closed]
flyingpot has joined #mlpack
mikeling has joined #mlpack
diehumblex has joined #mlpack
flyingpot has quit [Read error: Connection reset by peer]
flyingpot has joined #mlpack
kris2 has quit [Ping timeout: 260 seconds]
shikhar has quit [Ping timeout: 260 seconds]
Thyrix has quit [Ping timeout: 240 seconds]
Thyrix has joined #mlpack
vpal is now known as vivekp
chvsp has joined #mlpack
pushpendra has joined #mlpack
pushpendra has quit [Ping timeout: 260 seconds]
topology has joined #mlpack
Thyrix has quit [Quit: Thyrix]
dineshraj01 has joined #mlpack
flyingpot has quit [Ping timeout: 246 seconds]
topology has quit [Ping timeout: 260 seconds]
dineshraj01 has quit [Read error: Connection reset by peer]
mayank has joined #mlpack
topology has joined #mlpack
shikhar has joined #mlpack
deepooja has quit [Ping timeout: 260 seconds]
shihao has joined #mlpack
topology has quit [Ping timeout: 260 seconds]
usama has joined #mlpack
< usama> thanks a lot zoq, it WORKED!!!
flyingpot has joined #mlpack
< usama> It was an issue that libopenblas.dll.a was not included. That part is missing from the setup guide.
flyingpot has quit [Ping timeout: 264 seconds]
< zoq> sama: yeah, do you think we should provide a screenshot, or is the note just fine?
< zoq> oops, I got the name wrong
kris1 has joined #mlpack
aditya_ has joined #mlpack
tejank10 has joined #mlpack
tejank10 has quit [Client Quit]
shihao has quit [Quit: Page closed]
< usama> a note would be fine
< rcurtin> usama: I'll update the wiki page today with the information suggested in the ticket zoq referenced
< usama> Thanks
hxidkd has joined #mlpack
< rcurtin> if someone out there is looking for a moderate difficulty issue to solve, here is one that I would like to see solved but don't have time to do: https://github.com/mlpack/mlpack/issues/821
< rcurtin> requires some knowledge of adaboost and C++ debugging skills
hxidkd has quit [Ping timeout: 240 seconds]
yatharth has joined #mlpack
yatharth has quit [Client Quit]
mikeling has quit [Quit: Connection closed for inactivity]
omar__ has joined #mlpack
omar__ has quit [Client Quit]
omar__ has joined #mlpack
omar__ has quit [Client Quit]
flyingpot has joined #mlpack
flyingpot has quit [Ping timeout: 264 seconds]
usama has quit [Ping timeout: 260 seconds]
shikhar has quit [Quit: Page closed]
shihao has joined #mlpack
< arunreddy> zoq: In logistic regression, the optimizer type is declared using the following:
< arunreddy> template<template<typename> class OptimizerType>
< arunreddy> Adding a new typename to the optimizer, SGD<DecomposableFunctionType, UpdatePolicyType>, forces changes across the codebase wherever it is used.
< arunreddy> is it possible to make it more generic?
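A minimal sketch of the mismatch (declarations illustrative, not mlpack's exact code): under the C++11 rules in use at the time, a two-parameter template does not match a template<typename> class template template parameter even if the second parameter has a default, so every caller written against the old form breaks unless an alias restores the single-parameter shape.

    class EmptyUpdate { };

    template<typename DecomposableFunctionType,
             typename UpdatePolicyType = EmptyUpdate>
    class SGD { };

    // Callers expect an optimizer template with exactly one type parameter:
    template<template<typename> class OptimizerType>
    class LogisticRegression { };

    // LogisticRegression<SGD> no longer matches; an alias fixes the arity:
    template<typename DecomposableFunctionType>
    using StandardSGD = SGD<DecomposableFunctionType, EmptyUpdate>;

    LogisticRegression<StandardSGD> lr;  // OK.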
qwe has joined #mlpack
< arunreddy> shihao: Hello..
< shihao> Hi there! Thank you for your answer!
< arunreddy> cool.
< arunreddy> np.
< shihao> Can I ask another question? My math is not very good....
< arunreddy> Sure, go ahead; I will try..
< shihao> Why do we use a mixture model? I guess the features could form a multivariate Gaussian model.
< shihao> I don't understand that weight part.
< arunreddy> For convenience.
< arunreddy> which weight are you referring to?
< shihao> I guess in the code here: testProbs.col(i) += (data.n_rows / -2.0 * log(2 * M_PI) - 0.5 * log(arma::det(arma::diagmat(variances.col(i)))) + exponents);
< shihao> The assumption is that there are four independent univariate Gaussian distributions.
< shihao> Why don't we just use one multivariate Gaussian distribution?
PARADOXST has joined #mlpack
PARADOXST has quit [Client Quit]
shadycs15 has joined #mlpack
< arunreddy> shihao: Just curious how you came up with the number 4.
< shihao> Oh, sorry about that. I used the iris dataset, so there are four features :)
< shihao> I guess I figured it out. The code here treats the distribution of each feature as a univariate Gaussian and then combines them linearly.
< shihao> Is that right?
sai has joined #mlpack
< arunreddy> each P(X|Y) is sampled from a Gaussian.
< arunreddy> All the features combined together form a multivariate Gaussian with a diagonal covariance matrix.
< shihao> Yes, so I think adding log P(X|Y) + log P(Y) is enough. Why does the code invert it and multiply by the number of features?
< arunreddy> The log posterior is computed by summing the log prior, log P(Y), and the log likelihood, SUM_i log P(X_i|Y).
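In symbols, with class label y, features x_1, ..., x_d, and the per-feature Gaussian likelihoods from the diagonal-covariance assumption above:

    \log P(y \mid x) = \log P(y) + \sum_{i=1}^{d} \log P(x_i \mid y) + \text{const}, \qquad \log P(x_i \mid y) = -\frac{1}{2}\log\left(2\pi\sigma_{iy}^{2}\right) - \frac{(x_i - \mu_{iy})^{2}}{2\sigma_{iy}^{2}}

which is exactly what the testProbs line quoted earlier evaluates in vectorized form.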
sai has quit [Quit: Page closed]
kris1 has left #mlpack []
< shadycs15> hi devs
< shadycs15> any word of advice for GSoC 2017 aspirants?
mayank has quit [Quit: Page closed]
< rcurtin> shadycs15: the best advice you'll find is already written on the website: http://www.mlpack.org/gsoc.html
< rcurtin> arunreddy: I saw your email, I will respond shortly
< arunreddy> rcurtin: Thank you.
< shadycs15> rcurtin: are there any warm-up challenges for the reinforcement learning idea?
< rcurtin> shadycs15: like the gsoc.html page says, you'll have to take a look through the list of open issues or see if there is some other bug you can find
< shadycs15> I see. Thanks
qwe has quit [Quit: Page closed]
shadycs15 has quit [Quit: Page closed]
shihao has quit [Quit: Page closed]
chvsp has quit [Quit: Page closed]
aditya_ has quit [Ping timeout: 256 seconds]
< zoq> rcurtin: I thought about the generic optimizer API and wouldn't variadic templates also solve the issue?
chvsp has joined #mlpack
flyingpot has joined #mlpack
flyingpot has quit [Ping timeout: 246 seconds]
qdqds has joined #mlpack
chvsp has quit [Quit: Page closed]
chvsp has joined #mlpack
chvsp has quit [Client Quit]
< rcurtin> zoq: I liked that idea originally too, but what I got hung up on is that you still need a way to specify what those second template parameters are
< rcurtin> variadic templates will allow us to use the defaults, but I did not see a way, other than template typedefs, to set the second template parameter of an optimizer to something custom without changing the default
chvsp has joined #mlpack
< rcurtin> I suppose it's possible there's something I overlooked, but I couldn't come up with a solution for that bit
chvsp has quit [Ping timeout: 260 seconds]
< zoq> rcurtin: right, you still have to use a typedef to provide an alias. One advantage I can think of right now is that using variadic templates you can also do:
< zoq> LogisticRegression<> lr;
< zoq> SGD<LogisticRegressionFunction<>, CustomPolicy1, CustomPolicy2> sgd(lr);
< zoq> lr.Train(sgd);
< zoq> instead of providing another alias that wraps the other template parameter.
< zoq> but providing an alias isn't that big of a deal
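A sketch of why no alias is needed at that call site (signatures hypothetical): Train is itself a member template, so the complete optimizer type, custom policies included, is deduced from the argument.

    template<typename FunctionType, typename... CustomPolicies>
    class SGD
    {
     public:
      explicit SGD(FunctionType& function) : function(function) { }

     private:
      FunctionType& function;
    };

    class LogisticRegression
    {
     public:
      // OptimizerType is deduced as SGD<..., CustomPolicy1, CustomPolicy2>
      // from the argument; the caller never has to spell out an alias.
      template<typename OptimizerType>
      void Train(OptimizerType& optimizer)
      {
        // optimizer.Optimize(...);
      }
    };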
< arunreddy> I almost finished coding with an alias StandardSGD.
< zoq> great :)
< rcurtin> zoq: yeah, that is definitely an advantage, the variadic templates do allow you some more flexibility like that
< arunreddy> I see another problem now: what if we'd like to use MomentumUpdate in LogisticRegression instead of EmptyUpdate?
< arunreddy> using StandardSGD = SGD<DecomposableFunctionType, EmptyUpdate>;
< rcurtin> arunreddy: couldn't you just have another template typedef for MomentumSGD (SGD using MomentumUpdate) and then use, e.g., LogisticRegression<MomentumSGD>?
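Concretely, the template typedef rcurtin suggests might look like this (a sketch building on the hypothetical SGD and MomentumUpdate shapes above; data and responses are placeholders):

    template<typename DecomposableFunctionType>
    using MomentumSGD = SGD<DecomposableFunctionType, MomentumUpdate>;

    // Then, at the call site:
    // LogisticRegression<MomentumSGD> lr(data, responses);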
< rcurtin> zoq: I guess, in the end, it's best to use both strategies
< arunreddy> But that way we don't have the freedom of playing with different SGDs on the fly; it has to be specified upfront.
< arunreddy> and as the number of policy classes increases, the number of combinations required will grow..
< rcurtin> I'm not sure I understand what you mean here
< rcurtin> if you want to use a different type of optimizer, you have to make a template typedef for it, unless, like zoq said, you're in a situation where you can simply pass an instantiated optimizer and it can infer the type with variadic templates
< arunreddy> In neighborhood components analysis, we have "sgd", "minibatch-sgd" and others...
< rcurtin> yes, those are the command-line arguments
< rcurtin> unfortunately it's necessarily the case that the command-line interface (or other bindings) can't give you something as expressive as the C++ interface
< rcurtin> so we can provide a couple of popular optimizers with the command-line interface, but we can't really provide every possible thing, the list of things to handle gets too long
< arunreddy> We create aliases for a few popular implementations?
< arunreddy> a few popular combinations, I meant.
< rcurtin> we can create aliases for basically most of the combinations we implement in the mlpack C++ codebase
< rcurtin> but this is a different thing than what we choose to supply from the command-line interface
< rcurtin> when I say "create aliases in C++" I mean template typedef; "create aliases for command-line interface" means doing string handling and conversion from string to types in, e.g., nca_main.cpp
< rcurtin> I hope that clarifies what I'm talking about, let me know if I can clarify further
< arunreddy> got it.
< arunreddy> Any second thoughts on using variadic templates?
< rcurtin> I don't see a problem with using both variadic templates and template typedefs
< rcurtin> but a key point here is that if you are using variadic templates like in zoq's gist where your code only knows about the first template parameter
< rcurtin> (in the case of the optimizers, that first template parameter is OptimizerType)
< rcurtin> then you can't possibly specify anything except the default template parameter for any of the other template parameters
< rcurtin> except in the situation that zoq mentioned above
< rcurtin> hence, we must also have template typedefs in order that a user can use the class with a specified second, third, fourth template parameter instead of the defaults
< rcurtin> I hope I've explained that ok, let me know if I can clarify
< zoq> I think for the SGD PR it doesn't matter since we should provide an alias anyway. It might be interesting to look into variadic templates in another context/issue, maybe using variadic templates increases the build time, not sure.
< arunreddy> rcurtin, if we use class instead of typename for variadic templates, the order doesn't matter, so we get a little more freedom. I am not quite sure about the execution time and speedup.
< rcurtin> arunreddy: I am not sure what you mean by that; can you explain further?
< arunreddy> rcurtin: I have misunderstood something with template params. Sorry for the confusion. Please ignore.
< rcurtin> ok, no worries :)
flyingpot has joined #mlpack
< rcurtin> zoq: the new github homepage looks great, I am happy every time I load it :)
flyingpot has quit [Ping timeout: 240 seconds]
< zoq> rcurtin: I have to say, I had fun playing with this a little bit :)
< rcurtin> :)
aman11dh has joined #mlpack
< aman11dh> Is there anybody out there?
< arunreddy> aman11dh: hey
< aman11dh> Hey Arun
< aman11dh> I was looking at mlpack lately for starting some C++ dev work.
< aman11dh> Are there any future plans for CUDA integration?
< arunreddy> you should check with rcurtin and zoq
< aman11dh> rcurtin: zoq: Any future plans for mlpack with CUDA?
< rcurtin> this is a difficult topic... CUDA programming primitives are very ugly
< rcurtin> and don't really fit in mlpack too well
< rcurtin> one possible idea is to use NVBLAS, which can run BLAS operations on the GPU when it is predicted that this will give a speedup
< rcurtin> so I have heard that some people using it see pretty decent speedups for some machine learning algorithms
< rcurtin> but personally I would prefer to avoid raw CUDA (or OpenCL) code inside of mlpack, I think mlpack is better at a higher level of abstraction
< rcurtin> I will say, I think eventually Armadillo (the matrix library we use) will also support matrices on the GPU, and then this will allow mlpack to work at a higher level of abstraction
< rcurtin> but this is not something that is ready yet, and it may be some months until it is :)
< aman11dh> I was planning to add integration with cuBLAS or NVBLAS. Though I hate pure CUDA myself :P
< rcurtin> if they are just BLAS replacements, then you can just set Armadillo to use them instead of OpenBLAS or whatever else
< rcurtin> very simple! just a line or two of configuration and suddenly, boom, mlpack on the GPU :)
< rcurtin> it may not be a fully optimal implementation like that, but it's certainly better than nothing
< aman11dh> Let me check about that and get back to you in a few hours :)
< rcurtin> yeah, what I said assumes that Armadillo's documentation for replacing BLAS is good :)
< rcurtin> I think it is, but I have been involved with Armadillo for so long that I can't have an unbiased opinion of whether or not their docs are good
< aman11dh> I agree; as I was checking the FAQs, http://arma.sourceforge.net/faq.html, it looks like they have full support for NVBLAS and ACML.
< zoq> rcurtin: It just works; as you said, it's super straightforward.
< rcurtin> yeah I think you just have to modify config.hpp
aman11dh2 has joined #mlpack
diehumblex has quit [Quit: Connection closed for inactivity]
aman11dh2 has quit [Quit: Page closed]
< arunreddy> rcurtin,zoq: I have an issue with the Train function in LogisticRegression.
< arunreddy> Calling LogisticRegression with the StandardSGD optimizer fails.
< arunreddy> Am I missing something here?
< zoq> arunreddy: Looks good; not sure why the template argument deduction/substitution failed. Have you modified the logistic regression class?
vinayakvivek has quit [Quit: Connection closed for inactivity]
< arunreddy> zoq: - SGD<LogisticRegressionFunction<>> sgdOpt(lrf);
< arunreddy> + StandardSGD<LogisticRegressionFunction<>> sgdOpt(lrf);
< arunreddy> that's the only change.
< arunreddy> More detailed error. http://pastebin.com/RQSQ1sHF
< zoq> hm, it works if I use variadic templates, but it doesn't without, and right now I can't see why it doesn't
< arunreddy> hmm.
< arunreddy> How do you usually debug such errors at build time?
< arunreddy> make -d is not that useful.