verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
flyingpot has joined #mlpack
< zoq> it's not that easy; 'make VERBOSE=1' sometimes helps
flyingpot has quit [Ping timeout: 260 seconds]
< arunreddy> no luck
< zoq> Maybe rcurtin has an idea, I'll take a closer look at the issue tomorrow ... at least it works with variadic templates
< arunreddy> sure. thanks
mikeling has joined #mlpack
flyingpot has joined #mlpack
Paritosh has joined #mlpack
Paritosh has quit [Client Quit]
keebu has joined #mlpack
topology has joined #mlpack
hxidkd has joined #mlpack
hxidkd has quit [Ping timeout: 240 seconds]
flyingpot has quit [Ping timeout: 260 seconds]
keebu has quit [Quit: Page closed]
vinayakvivek has joined #mlpack
shihao has joined #mlpack
< shihao> Hi there! I'm curious whether mlpack or arma has a 'logsumexp' function?
< shihao> Or do we do it step by step?
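(No one answered in the channel; one way to do it step by step is the usual max-shift trick for numerical stability. A minimal Armadillo sketch; logSumExp is a hand-rolled helper, not an existing mlpack/Armadillo function name.)

    #include <armadillo>

    // Column-wise log-sum-exp, written step by step.
    arma::rowvec logSumExp(const arma::mat& x)
    {
      // Shift by the column-wise maximum so exp() cannot overflow.
      const arma::rowvec maxVals = arma::max(x, 0);
      arma::mat shifted = x;
      shifted.each_row() -= maxVals;
      return maxVals + arma::log(arma::sum(arma::exp(shifted), 0));
    }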
flyingpot has joined #mlpack
Thyrix has joined #mlpack
aditya_ has joined #mlpack
kris2 has joined #mlpack
kris2 has left #mlpack []
Thyrix has quit [Quit: Thyrix]
itachi has joined #mlpack
itachi has left #mlpack []
bipster has joined #mlpack
topology has quit [Quit: Page closed]
shihao has quit [Ping timeout: 260 seconds]
flyingpot_ has joined #mlpack
flyingpot has quit [Ping timeout: 246 seconds]
Thyrix has joined #mlpack
bipster has quit [Quit: Page closed]
hxidkd has joined #mlpack
hxidkd has quit [Ping timeout: 240 seconds]
govg has quit [Ping timeout: 240 seconds]
shikhar has joined #mlpack
hxidkd has joined #mlpack
vinayakvivek has quit [Quit: Connection closed for inactivity]
junaid_m has joined #mlpack
junaid_m has quit [Quit: Page closed]
hxidkd has quit []
nikhilweee has joined #mlpack
shikhar has quit [Ping timeout: 260 seconds]
diehumblex has joined #mlpack
flyingpot_ has quit [Ping timeout: 256 seconds]
sdf_ has joined #mlpack
sdf_ has quit [Client Quit]
arnav has joined #mlpack
Ajinkya__ has joined #mlpack
< arnav> I would like to work on Augmented Recurrent Neural Networks project
< arnav> Plus I am even interested in Reinforcement learning project
< arnav> is it possible to work on both?
< arnav> And how much time would I have to take out for the same
< arnav> I just have basic knowledge of c++ but I am good with python
< arnav> And when will the project officially start
< arnav> Can we start before the timeline
< arnav> Please share the exact problem to approach so that I can start researching
Captainfreak has joined #mlpack
Captainfreak has quit [Client Quit]
flyingpot has joined #mlpack
flyingpot has quit [Ping timeout: 268 seconds]
Sinjan_ has joined #mlpack
< Sinjan_> <zoq> There are a few issues added by you. I want to work on them, but I see in the comments section of those issues that there are already a few people working on them. Can I also try to fix such an issue, or should I go for a separate one?
arnav has quit [Ping timeout: 260 seconds]
arnav has joined #mlpack
mikeling has quit [Quit: Connection closed for inactivity]
flyingpot has joined #mlpack
< Sinjan_> I would also like to know: if I work on an issue, should it be one opened by the mentor I plan to work under, or can I work on any issue?
arnav has quit [Ping timeout: 260 seconds]
vinayakvivek has joined #mlpack
Sinjan_ has quit [Ping timeout: 260 seconds]
rutuja has joined #mlpack
mikeling has joined #mlpack
SInjan_ has joined #mlpack
Ajinkya__ has quit [Ping timeout: 260 seconds]
chvsp has joined #mlpack
rutuja has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
kris2 has joined #mlpack
usama has joined #mlpack
< usama> hey everyone
< usama> I had a question about arma::mat
< usama> when I print out a mat, why does it print its TRANSPOSE and not the REAL mat?
< chvsp> usama: Hi. Are you loading a dataset into arma::mat using data::Load?
< usama> YES
< chvsp> usama: So, data::Load() method loads the transpose of a dataset into the arma::mat.
< usama> is that going to affect how my data is interpreted by the algorithm???
< chvsp> It is not the fault of arma::mat; it's just that the data is stored in a transposed fashion
< usama> How can I cout << the real mat?
< chvsp> Just take a transpose of the matrix. Like cout << a.t();
< usama> Thanks!!!!!!!!!!! it worked
< usama> But in my opinion the << operator should print out the real data
< usama> it would be nice to see this changed
Thyrix has quit [Quit: Thyrix]
< chvsp> You're welcome. :) I too felt it was a bit counterintuitive at first, but now I personally don't think it is that big an issue. Although I would certainly like to know the actual reason why it is the way it is.
< usama> yes certainly
< usama> In the meantime i have detected a BUG!!! in the << operator for arma::mat
< usama> when you print out the data the first row isn't PRINTED!!!
< usama> never mind that
< usama> it was just my console buffer
Thyrix has joined #mlpack
kris2 has quit [Quit: Leaving.]
chvsp has quit [Ping timeout: 260 seconds]
flyingpot has quit [Ping timeout: 268 seconds]
Thyrix has quit [Remote host closed the connection]
Thyrix has joined #mlpack
< zoq> usama: Elements are stored with column-major ordering (column by column); numpy, by contrast, is row-major (row by row).
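(A small sketch of what chvsp and zoq describe, assuming the usual data::Load() defaults, which transpose while loading; "data.csv" is a placeholder filename.)

    #include <mlpack/core.hpp>
    #include <iostream>

    int main()
    {
      arma::mat dataset;
      // By default data::Load() transposes on load, so each data point
      // ends up as a column.
      mlpack::data::Load("data.csv", dataset);

      std::cout << dataset;      // column-major view: one point per column
      std::cout << dataset.t();  // rows as they appear in the file
      return 0;
    }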
shikhar has joined #mlpack
< rcurtin> govg: here is the ICML paper I was working on: https://arxiv.org/abs/1703.00410
biswajitsc has joined #mlpack
SInjan_ has quit [Quit: Page closed]
Sinjan_ has joined #mlpack
biswajitsc has quit [Quit: Page closed]
Thyrix has quit [Quit: Thyrix]
< usama> zoq: why is it column-major order? it makes it difficult to work with data, since e.g. the NaiveBayesClassifier works on column-major data
biswajitsc has joined #mlpack
< rcurtin> usama: Armadillo is column major because the tools that it is built on (LAPACK, BLAS) are column-major
< rcurtin> it doesn't make things more difficult, it is simply a different way of thinking about things
< rcurtin> when you have column-major data, the column is contiguous in memory
< rcurtin> so for a machine learning context it becomes most appropriate (due to memory ordering) to consider a single point as a column
< zoq> Sinjan_: Hello, I've seen your message; we can't respond to every message instantly, but we will respond, though it could take some hours. You can always check out the logs at mlpack.org/irc/.
< zoq> Sinjan_: Regarding your question, it depends on the issue, for some issues multiple contributions are possible or you can collaborate on an issue. Also, you don't have to work on an issue that is related to the project or was created by the mentor.
< zoq> Sinjan_: The issues are there for you to get familiar with the codebase. You don't have to make a contribution to be considered, so don't worry if you can't find anything. Also, we are working on adding more issues on GitHub.
flyingpot has joined #mlpack
< rcurtin> usama: sorry, I did not finish the thought, I got interrupted :) anyway, since a single point is a column now, instead of the more typical "point as row" that is used in textbooks, papers, etc.,
< rcurtin> one has to sometimes do transposition on symbolic expressions before implementing them in armadillo
< rcurtin> but in the end, it's functionally identical
< rcurtin> just a different perspective that comes from the FORTRAN roots of BLAS and LAPACK
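(A tiny illustration of the point-as-column convention rcurtin describes; X and w are made-up names. The textbook score w^T x_i becomes a dot product with a contiguous column.)

    #include <armadillo>

    int main()
    {
      arma::mat X(5, 100, arma::fill::randu);  // 100 points in 5 dimensions
      arma::vec w(5, arma::fill::randu);

      // Textbook w^T x_i, with x_i stored as the (contiguous) i-th column.
      const double score = arma::dot(w, X.col(3));
      (void) score;
      return 0;
    }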
flyingpot has quit [Ping timeout: 240 seconds]
< zoq> Sinjan_: Also, are we talking about some specific issue, like the optimizer issue?
aashay has quit [Quit: Connection closed for inactivity]
< zoq> Sinjan_: ahh, I see you commented on the optimizer issue.
< Sinjan_> zoq: I was planning to work on issue #893
< Sinjan_> But I noticed that there are three guys already involved in each of the three sub-projects. Can I still work on one of them?
< Sinjan_> Or should I choose a different one?
< Sinjan_> Other than that I am working on the implementation of policy gradients since I applied for Reinforcement Learning.
< zoq> Sinjan_: hm, you can work on the mentioned algorithms, but I don't think opening another PR makes sense; I would probably choose another issue. Working on policy gradients is nice; that's one area where multiple contributions are welcome, and it shows us that someone is able to solve a problem.
kesslerfrost has joined #mlpack
< Sinjan_> Okay. Then I will work on another issue. Besides that, I submitted a PR for issue #902, although that's a P:trivial one, to get the hang of things. Also, I will soon be done with the policy gradient implementation.
Thyrix has joined #mlpack
TooObvious has joined #mlpack
TooObvious has left #mlpack []
HashCoder has joined #mlpack
aashay has joined #mlpack
< HashCoder> Hello, I would like to work on implementing the Deep Learning Modules. Could anyone provide some tips on getting familiar with the existing neural network library and understanding it?
usama has quit [Ping timeout: 260 seconds]
< zoq> HashCoder: Hello, the Deep Learning Modules project idea has been discussed on the mailing list before: http://mlpack.org/pipermail/mlpack/2017-March/003107.html and http://mlpack.org/pipermail/mlpack/2017-February/003092.html
< zoq> HashCoder: Note that there are many more posts on this in the mailing list archive to search for; those are only some places to get started.
< HashCoder> Hello zoq...Thanks a lot...I will get working :D
< zoq> HashCoder: Sounds good, let us know if you have any further questions.
< HashCoder> Yeah sure ...Thanks :D
kris1 has joined #mlpack
rajat503 has joined #mlpack
< rcurtin> ok, the "new" mlpack build slave called "dealgood" is online now and connected to masterblaster; it has 16 cores, 48GB RAM
< rcurtin> the process is much faster setting it up at Georgia Tech than at Symantec...
< rcurtin> opening the firewall only took one day instead of one year :)
< zoq> arnav: I wouldn't recommend working on both ideas; writing a good implementation for either one of the projects takes time, and writing good tests often takes much more. But if you are confident that you can handle both projects or can find a way to combine them, feel free to do so; you just have to make sure the proposed timeline is reasonable.
< zoq> arnav: Google Summer of Code 2017 timeline: https://developers.google.com/open-source/gsoc/timeline. You can start before the actual coding phase begins; in fact, a lot of people get familiar with the codebase before then.
aditya_ has quit [Ping timeout: 256 seconds]
< zoq> rcurtin: Wow, every time you come up with a new name, I go to http://madmax.wikia.com/wiki/Dr._Dealgood and check where it comes from :)
< rcurtin> I was originally going to go with 'blackfinger', but the support guy I was working with suggested that 'blackfinger' was "too urbandictionary-able"
< zoq> Now I'm checking out urbandictionary .... I see
< rcurtin> yeah, I hadn't thought of that, but when I checked urbandictionary I understood his concern and went with 'dealgood' instead :)
HashCoder has quit [Ping timeout: 260 seconds]
rajat503 has quit [Quit: Page closed]
< kris1> zoq: can you explain this again? I am not able to understand it: https://gist.github.com/zoq/ba79b34e51d0a99aca907157e45770ea
< kris1> what was wrong with the implementation that I gave? https://gist.github.com/kris-singh/e9ab5ebe4b54175fd860204d33e85597
< kris1> I did read the mail log, but I got confused by the variadic templates.
HashCoder has joined #mlpack
< kris1> line 33: you mention using the specified policy; does that mean we will have to implement the Optimize method for every policy? Or are we going to do stepSize = stepSize * Policy.decay_learning_rate?
< kris1> okay, writing that out, I think I got your idea
Sinjan_ has quit [Ping timeout: 260 seconds]
< zoq> kris1: There is nothing wrong with your implementation; some methods expect that the optimizer has a single template parameter (the type of the function), so you can't just use Optimizer<typename, typename> when it expects Optimizer<typename>.
< zoq> kris1: The idea is to create an alias and to use default values for the other template parameters. The variadic template idea is an extension that allows us to pass an optimizer with multiple template parameters, but in this case we have to modify some of the existing code.
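(A sketch of the variadic extension zoq mentions, under the assumption that callers look like mlpack's Train() methods; all names here are stand-ins, not the real mlpack types.)

    // Stand-in for an optimizer with an extra policy parameter.
    template<typename FunctionType, typename UpdatePolicy>
    class SGDType { };

    template<typename FunctionType>
    class Model
    {
     public:
      // A variadic template template parameter accepts optimizers with any
      // number of extra template arguments, so no alias is needed.
      template<template<typename, typename...> class OptimizerType,
               typename... PolicyTypes>
      void Train(OptimizerType<FunctionType, PolicyTypes...>& /* opt */) { }
    };

    int main()
    {
      SGDType<double, int> opt;  // 'double' and 'int' are placeholder types
      Model<double> model;
      model.Train(opt);          // PolicyTypes... deduced as {int}
      return 0;
    }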
< kris1> oh right, I get that. Yes, that seems like an important issue.
topology has joined #mlpack
< kris1> zoq: 1. The parameters in the constructors of all the policy classes have to have the same types, and the number of params to each constructor must be equal, right?
< zoq> kris1: right
< kris1> line 33: you mention using the specified policy; does that mean we will have to implement the Optimize method for every policy? Or are we going to do stepSize = stepSize * Policy.decay_learning_rate?
< kris1> is this line of thought correct
< zoq> Can you send me the link?
< zoq> not sure we are looking at the same file
chvsp has joined #mlpack
< zoq> We are going with stepSize = stepSize * Policy.decay_learning_rate. Every policy defines another approach to update the learning rate.
< zoq> kris1: In the end we have an alias for NAdam<FunctionType, PolicyOne>, another one for NAdaMax<FunctionType, PolicyTwo>, ...
< zoq> kris1: The policy implements the update strategy and we can reuse the Basic NAdam optimizer class, which uses the Policy to update the learning rate.
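(To make the policy idea concrete, a hedged sketch: NoDecay and ExponentialDecay are illustrative names, not the final mlpack API. The optimizer keeps one generic loop and delegates the learning-rate update to the policy.)

    #include <cstddef>

    // Keeps the step size unchanged.
    class NoDecay
    {
     public:
      void Update(double& /* stepSize */, const std::size_t /* iteration */) { }
    };

    // Multiplies the step size by a fixed factor each iteration.
    class ExponentialDecay
    {
     public:
      explicit ExponentialDecay(const double decayRate) : decayRate(decayRate) { }

      void Update(double& stepSize, const std::size_t /* iteration */)
      {
        stepSize *= decayRate;  // i.e. stepSize = stepSize * decay rate
      }

     private:
      double decayRate;
    };

    // Inside a generic optimizer's Optimize() loop one would then call:
    //   decayPolicy.Update(stepSize, i);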
< chvsp> Hi zoq, I couldn't find any implementation of a BatchNorm layer in the current codebase. I think it would be a great addition, as many recent papers have it in their architectures. Your thoughts?
< zoq> chvsp: I agree, would be great to have an implementation.
< rcurtin> arunreddy: I looked through the build log for the LogisticRegression problem you are having, are you sure that the LogisticRegression class hasn't been modified?
< rcurtin> the line that's confusing me is
< rcurtin> template<template<class> class typedef OptimizerType OptimizerType>
Thyrix has quit [Quit: Page closed]
< rcurtin> that doesn't appear in the master code, but it seems like it's in the gcc error messages, so I am thinking, maybe the code was modified?
< rcurtin> also, I clicked on your webpage, I see you will be at SDM 2017, I will be there also :)
< chvsp> zoq: I will look into it. I need to brush up on the nitty gritty of the math. I will code it, if possible, or else I will open an issue. Sounds good?
chvsp has quit [Quit: Page closed]
< zoq> rcurtin: I tested it and I haven't modified the LogisticRegression code. I used:
< zoq> StandardSGD<LogisticRegressionFunction<> > sgdOpt(lrf); in the main.cpp file for a quick test and could see the same error.
< zoq> chvsp: Sounds good for me.
< rcurtin> ah, got it, let me try that
topology has quit [Ping timeout: 260 seconds]
topology has joined #mlpack
nate23 has joined #mlpack
< kris1> zoq: okay, but I think we have to define the optimizer class again, right? Because for SGD we already have the Optimize method implemented; we can't use its code directly. We will have to modify it and write Optimizer(const PolicyType& policy)
< kris1> in the sgd.hpp file we will in the end also have to define the aliases, e.g. SGDNoDecay = SGD<FunctionType, PolicyType1>, etc.
< zoq> kris1: Not sure I get your point, but yes, you have to write another optimizer class for the NAdam case. We can't use the SGD class for Adam.
< kris1> I am implementing the learning rate decay for SGD, not Adam
< rcurtin> zoq: ok, I can reproduce this, this is strange...
< kris1> i think you are confusing this with something else.
< topology> rcurtin: i spent the last 2 days brushing up the fundamentals of kernels and KDE. i also read the section on KDE from your thesis.
< kris1> annealing decay rates for SGD; we talked about it the day before yesterday.
< topology> you referenced a paper by Gray & Moore there
< topology> "Nonparametric Density Estimation: Toward Computational Tractability"
< zoq> kris1: I think I am, sorry.
< rcurtin> topology: yeah, that was the original dual-tree KDE paper... I think that it is a bit difficult to follow but the ideas are all there...
< zoq> kris1: Okay, so yes we write an alias for the decay cases, just as we do with momentum.
< kris1> I have a better idea than implementing another optimizer class: can I not just overload the Optimize method in the SGD class with something like template<typename Policy> Optimize(......, Policy p1)? But I would have to implement the overloaded function for that, and we would basically have a lot of code duplication.
< topology> so i guess "Far-field compression for fast kernel summation methods in high dimensions" by Bill March is where i should start?
< zoq> kris1: Does that answer your question?
arunreddy_ has joined #mlpack
< zoq> kris1: And yes, we have to modify the SGD class so that we can use different policies.
< kris1> Because in both methods (an Optimizer class and overloading the Optimize method) there will be code duplication
mikeling has quit [Quit: Connection closed for inactivity]
< arunreddy_> Hi rcurtin, I haven't made any changes to the code apart from modifying SGD to StandardSGD.
< rcurtin> topology: once you feel that you understand dual-tree KDE, I'd consider just implementing that with no special modifications, because it'll get you more familiar with the code
< rcurtin> and then from there you can start to consider Bill's paper and other approaches
< arunreddy_> rcurtin: Awesome, we can meet at the conference. Count me in for your talk :)
< rcurtin> arunreddy_: yeah, zoq showed me the modification to make to reproduce
< zoq> kris1: You mean we have to duplicate the Optimizer function?
< rcurtin> sounds good about the talk, I will try and make it interesting :) I am looking forward to seeing the presentation for your paper also
< kris1> zoq: Just a min, I will write a gist and send it. That would make things much clearer
< zoq> kris1: okay
< zoq> arunreddy_: Have you pushed the momentum sgd code? Maybe kris could take a look?
< arunreddy_> rcurtin: you are welcome. me too.
< kris1> okay, just a question: can you overload a function like this: double Optimize(arma::mat& iterate); template<typename Policy> double Optimize(arma::mat& iterate)?
huyssenz_ has joined #mlpack
< kris1> or template<typename Policy> double Optimize(arma::mat& iterate, Policy P)
< kris1> I doubt we could do that.
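(For what it's worth, C++ does allow the overloads kris1 asks about: a non-template member function and a function template may share a name, and the argument list, or explicit template arguments, picks between them. A small self-contained check:)

    #include <armadillo>

    class TestOptimizer
    {
     public:
      // Non-template version.
      double Optimize(arma::mat& /* iterate */) { return 0.0; }

      // Template overload; legal even though the name is the same.
      template<typename Policy>
      double Optimize(arma::mat& /* iterate */, Policy /* policy */)
      {
        return 1.0;
      }
    };

    int main()
    {
      arma::mat m(2, 2, arma::fill::zeros);
      TestOptimizer opt;
      opt.Optimize(m);      // picks the non-template version
      opt.Optimize(m, 42);  // picks the template, Policy deduced as int
      return 0;
    }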
< kris1> thanks arun i will have a look
< arunreddy_> kris1: how about adding stepSize = decayPolicyType.GetStepSize(...) at line 117 in https://github.com/arunreddy/mlpack/blob/sgd_momentum_policy/src/mlpack/core/optimizers/sgd/sgd_impl.hpp
< arunreddy_> in the optimizer iteration code.
< zoq> kris1: arunreddy_ is working on momentum and I think you can do the same thing here is the link: https://github.com/arunreddy/mlpack/tree/sgd_momentum_policy/src/mlpack/core/optimizers/sgd
< zoq> yes
< arunreddy_> and the StandardSGD can be something like.. StandardSGD = SGD<FunctionType, EmptyUpdate, NoDecay>
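(Sketching arunreddy_'s suggestion; the policy classes here are empty stand-ins, where the real ones would carry state and an Update() method.)

    // Empty stand-ins for the real policies.
    class EmptyUpdate { };
    class NoDecay { };

    // The fully general optimizer.
    template<typename FunctionType,
             typename UpdatePolicyType = EmptyUpdate,
             typename DecayPolicyType = NoDecay>
    class SGD { /* ... */ };

    // The old single-parameter name, restored via an alias template.
    template<typename FunctionType>
    using StandardSGD = SGD<FunctionType, EmptyUpdate, NoDecay>;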
< rcurtin> arunreddy_: zoq: minimum working example: http://pastebin.com/xSV7EDby
< rcurtin> or, I guess, "minimum failing example"
< rcurtin> if I change it to a.C<H>(h) it compiles fine
< rcurtin> implying that if logistic_regression_main.cpp was changed to read lr.Train<StandardSGD>(sgdOpt); it would compile
< rcurtin> but the compiler should be able to deduce the correct type...
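(Since the pastebin link may not survive, here is a hedged reconstruction of the failing pattern from the snippets quoted above; the identifiers are guesses. The point: template argument deduction cannot see through an alias template, so only the underlying template's arity gets matched.)

    // F plays the role of SGD<FunctionType, UpdatePolicy>; G plays StandardSGD.
    template<typename T1, typename T2> class F { };
    template<typename T1> using G = F<T1, int>;

    struct A
    {
      // Expects a template taking exactly one type parameter.
      template<template<typename> class C, typename H>
      void Run(C<H>& /* c */) { }
    };

    int main()
    {
      G<double> h;     // the actual type is F<double, int>
      A a;
      a.Run<G>(h);     // ok: the one-parameter alias is named explicitly
      // a.Run(h);     // error: deduction sees F<double, int>, and F takes
      //               // two parameters, so it cannot match C
      return 0;
    }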
huyssenz_ is now known as huyssenz
< kris1> arunreddy_: that's what I wanted to do, but I thought we were not going to change template<typename FunctionType>; since in your implementation you have done that, then yes, I think we could do that.
< arunreddy_> rcurtin: perfect. It compiles now.
< arunreddy_> And checked for template<typename D, typename E, typename G, typename H> class F { };
< rcurtin> yeah, I just am not sure that is how it should be... intuitively the original code should compile
< arunreddy_> multiple policy scenario.. compiles fine..
< rcurtin> so now I want to see if this is a gcc bug or if it's correct according to the standard
< zoq> We can't be the first ones to encounter this; it's strange.
< zoq> btw. same with clang
< rcurtin> yeah, clang fails also
qwe has joined #mlpack
< kris1> zoq: https://gist.github.com/kris-singh/ccb96d6fba7bcbc4bf8c18fc963abdd4 check this out. This is what i meant
< kris1> arunreddy_: maybe you could also have a look....
< zoq> If this is standard-conformant, I'll probably modify all places where we do the same and go with variadic templates; manually specifying the optimizer type isn't that intuitive.
< zoq> kris1: Ah I see, yeah, that would result in a lot of code duplication. And you would have to modify the SGD class every time you add another policy.
< zoq> kris1: Also, if you link against mlpack, you can't use your own policy.
< zoq> Ah I think you can
topology has quit [Quit: Page closed]
< kris1> "Another policy" meaning adding something new to the SGD class, yes, other than the learning rate. That would create a lot of duplication in sgd_impl.hpp
< kris1> This came to my mind for backward-compatibility purposes; otherwise I was also in favor of using template<typename T1, typename T2> SGD for adding policies, but that broke the existing code.
< kris1> I wish we had something similar to the decorator pattern in Python; that would make life much easier, though slower :)
< zoq> I think, as long as we provide some default parameters, we provide backward compatibility.
< arunreddy_> zoq: backward compatibility is a good point. So, the non-mlpack code with references to SGD is going to break.
< arunreddy_> Do we do something to support that, or expect them to change to the default StandardSGD?
< kris1> zoq: Yes thats true. maybe something like this https://gist.github.com/kris-singh/e9ab5ebe4b54175fd860204d33e85597
< kris1> arunreddy_: but we have template<typename FunctionType, typename Momentum = Default, typename Decay = Default1>.
< kris1> then even if somebody was using SGD<FunctionType> s() it would work, right?
< rcurtin> well, I don't like using people's time on stackoverflow, but I was not able to find any relevant documentation in the standard to answer my question so I decided to ask:
< arunreddy_> kris1: So the optimizer type used in other parts of the codebase, like regularized_svd and logistic_regression, expects only one "template template parameter"
< rcurtin> hopefully this will garner a response that can help us figure out what is actually going on here
< zoq> We could also do something similar to what we did with PCA (line 145), but instead of a typedef we use an alias, which worked so well :)
< arunreddy_> kris1: with the function type param only. So we use the StandardSGD alias with only one template param. Let me know if you need more clarity on this.
< arunreddy_> zoq: That looks a lot cleaner.
< arunreddy_> Avoids changes across the codebase.
< kris1> arunreddy_: SGD<LogisticRegressionFunction<>> sgd(lrf, 0.005, 500000, 1e-10): does this break with default parameters set?
nate23 has quit [Ping timeout: 260 seconds]
< arunreddy_> kris1: Yes
< arunreddy_> zoq: Will typedef work with templates?
HashCoder has quit [Quit: Page closed]
HashCoder has joined #mlpack
< kris1> ohh....strange.....
flyingpot has joined #mlpack
< zoq> arunreddy_: Not sure what you mean.
< arunreddy_> you can refer to the following email thread.. http://knife.lugatgt.org/pipermail/mlpack/2017-March/003112.html
generic_name_ has joined #mlpack
< arunreddy_> template<typename DecomposableType> typedef SGDType<DecomposableType, EmptyUpdate, NoDecay> SGD;
< arunreddy_> zoq
< zoq> unfortunately no
flyingpot has quit [Ping timeout: 240 seconds]
< arunreddy_> so that makes the suggested approach unusable.
< zoq> We can't use typedef, we have to use "using", if that's what you mean.
< arunreddy_> yeah. we have to use using.
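(In code, the difference zoq and arunreddy_ settle on; SGDType and the policy names are stand-ins.)

    template<typename F, typename U, typename D> class SGDType { };
    class EmptyUpdate { };
    class NoDecay { };

    // Ill-formed: typedef cannot be parameterized.
    // template<typename T> typedef SGDType<T, EmptyUpdate, NoDecay> SGD;

    // Fine: a C++11 alias template.
    template<typename T>
    using SGD = SGDType<T, EmptyUpdate, NoDecay>;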
< kris1> arunreddy_: so right now the problem is that logistic regression etc. are breaking, or have we addressed that by using the StandardSGD alias? Won't that require refactoring all the code that uses SGD?
< arunreddy_> kris1: We have addressed it using StandardSGD alias. But unfortunately that was breaking some other parts of the code, rcurtin suggested the fix a while ago.
< HashCoder> As no tickets are available for the "Essential Deep Learning Modules" project, could someone point out an easy bug for me to get started with?
aashay has quit [Quit: Connection closed for inactivity]
< zoq> HashCoder: You can look through the list of open issues and see if there is any issue you think you can solve. The issues are generally tagged with difficulty.
trapz has joined #mlpack
< kris1> Okay, then I will wait till the issue gets resolved; till then I will work on something else.
< kris1> Maybe I could complete my Xavier init method that's long pending...
HashCoder has quit [Quit: Page closed]
shihao has joined #mlpack
flyingpot has joined #mlpack
< shihao> Hi, is anyone there? I think I have finished issue #593, but I am not sure about the output format of the posteriors.
< shihao> Should I add another command-line option and output the posteriors to a file, like the results of classify?
< shihao> And how should I write a test for this enhancement? Thanks :)
flyingpot has quit [Ping timeout: 240 seconds]
diehumblex has quit [Quit: Connection closed for inactivity]
aditya_ has joined #mlpack
aditya_ has quit [Client Quit]
aditya_ has joined #mlpack
govg has joined #mlpack
< govg> rcurtin: Nice, just saw the link. When do rebuttals start?
aditya_ has quit [Ping timeout: 260 seconds]
kesslerfrost has quit [Quit: kesslerfrost]
< rcurtin> govg: no idea, probably a couple months
qwe has quit [Quit: Page closed]
< kris1> zoq: any friendly intros to the visitor pattern that are easy to understand and have code?
generic_name_ has quit [Quit: Page closed]
< kris1> basically want to implement a visitor
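(No answer followed in the channel; for orientation, a minimal example of the visitor pattern in the boost::variant style that mlpack's ann code uses. These are toy types, not mlpack code.)

    #include <boost/variant.hpp>
    #include <iostream>
    #include <string>

    // The visitor: one operator() per type held by the variant.
    struct PrintVisitor : public boost::static_visitor<void>
    {
      void operator()(const int x) const
      { std::cout << "int: " << x << std::endl; }

      void operator()(const std::string& s) const
      { std::cout << "string: " << s << std::endl; }
    };

    int main()
    {
      boost::variant<int, std::string> v = 42;
      boost::apply_visitor(PrintVisitor(), v);  // prints "int: 42"

      v = std::string("hello");
      boost::apply_visitor(PrintVisitor(), v);  // prints "string: hello"
      return 0;
    }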
shikhar has quit [Ping timeout: 260 seconds]
drewtran_ has joined #mlpack
drewtran has joined #mlpack
drewtran_ has quit [Client Quit]
arunreddy_ has quit [Quit: Page closed]
diehumblex has joined #mlpack
aashay has joined #mlpack
shihao has quit [Quit: Page closed]
< kris1> zoq: are there any tests for the visitor class?
Sinjan_ has joined #mlpack
shihao has joined #mlpack
< shihao> Hi guys, I have created a PR for issue #593: https://github.com/mlpack/mlpack/issues/593. Please review it and let me know if there is any problem. Thanks :)
flyingpot has joined #mlpack
shihao has quit [Client Quit]
flyingpot has quit [Ping timeout: 240 seconds]
< kris1> can you explain this:
< kris1> HasParametersCheck<T, P&(T::*)()>::value
< kris1> what is the use of P&(T::*)() here?
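(A hand-written equivalent of that check; mlpack generates it with a macro, so details may differ. P& (T::*)() is the type "pointer to a member function of T taking no arguments and returning P&", so the check asks whether &T::Parameters exists with exactly that type.)

    template<typename T, typename Sig>
    struct HasParametersCheck
    {
      // Instantiating TypeCheck<Sig, &U::Parameters> only succeeds if
      // &U::Parameters exists and has type Sig.
      template<typename U, U> struct TypeCheck;

      template<typename U> static char Test(TypeCheck<Sig, &U::Parameters>*);
      template<typename U> static long Test(...);

      // True when the first overload is viable (SFINAE).
      static const bool value = (sizeof(Test<T>(0)) == sizeof(char));
    };

    // Usage: HasParametersCheck<MyLayer, arma::mat&(MyLayer::*)()>::value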
trapz has quit [Quit: trapz]
shihao has joined #mlpack
< kris1> zoq: Here is my implementation of the fanin_visitor; can you take a look?
shihao has quit [Ping timeout: 260 seconds]
< kris1> also I don't understand how we are checking with std::enable_if; can you help with that?
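(And how such a check is typically consumed with std::enable_if, building on the HasParametersCheck sketch above; a hedged illustration, not the exact mlpack visitor code. The first overload participates only when T has Parameters(); otherwise the fallback is chosen.)

    #include <armadillo>
    #include <cstddef>
    #include <type_traits>

    // Enabled only when T provides arma::mat& Parameters().
    template<typename T>
    typename std::enable_if<
        HasParametersCheck<T, arma::mat&(T::*)()>::value, std::size_t>::type
    WeightSize(T& layer)
    {
      return layer.Parameters().n_elem;
    }

    // Fallback for layers without Parameters().
    template<typename T>
    typename std::enable_if<
        !HasParametersCheck<T, arma::mat&(T::*)()>::value, std::size_t>::type
    WeightSize(T& /* layer */)
    {
      return 0;
    }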
Sinjan_ has quit [Quit: Page closed]
nikhilweee has joined #mlpack
< kris1> zoq: Looks like I am the only one active right now. Maybe when you get time you could answer these questions.
kris1 has left #mlpack []
vinayakvivek has quit [Quit: Connection closed for inactivity]
prasanna082 has joined #mlpack
prasanna082 has quit [Client Quit]
prasanna082 has joined #mlpack
trapz has joined #mlpack
trapz has quit [Client Quit]
trapz has joined #mlpack