verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< kris1> it works now. Thanks
< kris1> Anyway, it's going to be morning here soon, so I'd better get some sleep. I hope to complete Q-learning by tomorrow.
< zoq> Oh, okay, sounds great, good night.
< arunreddy_> zoq: Can I also do Nesterov momentum? It would be a small addition to momentum SGD.
< arunreddy_> Do you prefer adding a flag to the MomentumSGD class or creating a new NesterovMomentumSGD class?
< zoq> arunreddy_: Someone is already working on Nesterov's Accelerated Gradient method. I think I've seen a paper that integrates Nesterov momentum into Adam; maybe that's another interesting method. I'll see if I can find the paper, I only glanced over it last time.
< zoq> But, I can't search for the paper right now, so expect my response tomorrow.
< arunreddy_> Thanks zoq.
< arunreddy_> zoq: Are you referring to http://cs229.stanford.edu/proj2015/054_report.pdf ?
< arunreddy_> "Incorporating Nesterov Momentum into Adam"
< zoq> The title sounds promising :)
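For reference, a minimal sketch of the classical Nesterov momentum step being discussed, in plain Armadillo. The toy quadratic objective, parameter names, and step sizes are illustrative assumptions only; this is not mlpack's MomentumSGD code.

    #include <armadillo>

    // One Nesterov momentum step per iteration, in the common "lookahead" form:
    //   v     <- mu * v - lr * grad(theta)
    //   theta <- theta + mu * v - lr * grad(theta)
    int main()
    {
      arma::vec theta(10, arma::fill::randu);  // parameters
      arma::vec v(10, arma::fill::zeros);      // velocity
      const double momentum = 0.9, stepSize = 0.01;

      for (size_t i = 0; i < 1000; ++i)
      {
        // Gradient of the toy objective f(theta) = ||theta||^2.
        arma::vec gradient = 2.0 * theta;
        v = momentum * v - stepSize * gradient;
        theta += momentum * v - stepSize * gradient;
      }

      theta.print("theta (should be close to zero):");
      return 0;
    }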
kris1 has quit [Ping timeout: 260 seconds]
mikeling has joined #mlpack
< arunreddy_> :)
flyingpot has joined #mlpack
flyingpot has quit [Ping timeout: 268 seconds]
govg has quit [Ping timeout: 260 seconds]
vinayakvivek has joined #mlpack
govg has joined #mlpack
flyingpot has joined #mlpack
flyingpot has quit [Ping timeout: 240 seconds]
mani has joined #mlpack
mani is now known as Guest8166
< Guest8166> Hi, I am interested in machine learning and C++. I need some help getting started.
Guest8166 has quit [Client Quit]
mani36 has joined #mlpack
Shihao has joined #mlpack
< Shihao> Hi there! I just want to ask a question about issue 593. The 'Classify' function seems to use the sum of log probabilities to calculate the max probability.
wwhite has joined #mlpack
< Shihao> Should I just output that probability value? It seems ugly.
< Shihao> Here is the probability output for one test point: '0.173633 -40.6168 -51.9916'. Even though it's correct, maybe it does not give us an intuitive probability value.
wwhite has quit [Client Quit]
Shihao has quit [Quit: Page closed]
mani36 has quit [Ping timeout: 260 seconds]
Akash__ has joined #mlpack
Akash__ has left #mlpack []
flyingpot has joined #mlpack
flyingpot has quit [Ping timeout: 240 seconds]
harry__ has joined #mlpack
< harry__> Hi, I'd like to contribute. How do I get started?
< govg> harry__: Have you looked at the getting started pages here : http://mlpack.org/gsoc.html
Thyrix has joined #mlpack
kris1 has joined #mlpack
kris1 has left #mlpack []
kris1 has joined #mlpack
kris2 has joined #mlpack
shikhar has joined #mlpack
kris1 has quit [Ping timeout: 260 seconds]
Smeet has joined #mlpack
harry__ has quit [Ping timeout: 260 seconds]
rainbowtiara has joined #mlpack
rainbowtiara has quit [Client Quit]
cifar has quit [Ping timeout: 260 seconds]
vinayakvivek has quit [Quit: Connection closed for inactivity]
mayank has joined #mlpack
< mayank> How do I build only the core files in src/mlpack/?
< shikhar> mayank: `make mlpack` will build the library, if that is what you want
< mayank> But that builds all the files, even the ones I did not change.
< shikhar> Probably the files that were rebuilt depended on the files you changed
flyingpot has joined #mlpack
flyingpot has quit [Ping timeout: 240 seconds]
mayank has quit [Ping timeout: 260 seconds]
Smeet has quit []
shikhar has quit [Quit: Page closed]
kaus19 has joined #mlpack
Narayan has joined #mlpack
< kaus19> Hello, I am new here. I want to contribute to mlpack. Can anyone help me get started?
kaus19 has quit [Ping timeout: 260 seconds]
kaus19 has joined #mlpack
Narayan has quit [Ping timeout: 260 seconds]
mikeling has quit [Quit: Connection closed for inactivity]
flyingpot has joined #mlpack
flyingpot has quit [Ping timeout: 258 seconds]
abhijeet has joined #mlpack
< abhijeet> Hi, I've installed mlpack and I'm trying my hand at it...
< abhijeet> I've used knn just to get familiar with the workflow
< abhijeet> however I have a doubt... how can I set up the environment for development of mlpack ?
< abhijeet> can anyone suggest me how to go about setting a development environment for mlpack ?
< govg> kaus19: Have you looked at the getting started pages here : http://mlpack.org/gsoc.html
< govg> abhijeet: What do you mean by development environment?
kaus19 has quit [Ping timeout: 260 seconds]
< abhijeet> govg: what I meant was... currently I installed mlpack using Ubuntu's package manager, so the relevant files are in /usr/include/mlpack/
< abhijeet> so every time I want to modify them I have to go there and make the changes
< abhijeet> is there another way... where we can make the changes in the cloned repo and use those changes directly?
< govg> So I'm not sure if this is the recommended way, but you can use git to clone the repo, then work on that.
< govg> You don't have to do the make install step, instead you can run the executables directly.
< abhijeet> yeah that's fine... I can clone the repo and make the changes... but then they won't be reflected globally right ?
< govg> No, and that's a good thing, right?
< abhijeet> what I want is to clone the repo make the changes and use those changes like regular mlpack installation
< govg> Clone, make the changes, then run make mlpack_xyz (where xyz is the features you changed)
< govg> And then from the build directory you can find that executable, which can be executed as ./mlpack_xyz
< abhijeet> okk... so basically what you are saying is I have to clone the repo and build mlpack from the source right ?
< govg> One way to force your terminal to use the local copy (instead of /usr/include) is to add the bin folder to your path.
< govg> Yeah, exactly.
< govg> And don't install the built executables.
< govg> Let them live in the /bin folder inside the build directory.
< govg> rcurtin or zoq can specify if there's a better alternative.
< kris2> One question: if we are using DecomposableFunctionType& function in SGD,
< kris2> we are assuming that the function here has two methods:
< abhijeet> okk... thanks so much... will try building from the source
< kris2> 1. NumFunctions() 2. Evaluate().
< kris2> But where is that function defined? I am not able to figure that out.
mentekid has quit [Ping timeout: 268 seconds]
< kris2> Also, the Gradient function is defined in the function class,
< kris2> so I think we would have to create such a function and pass it to SGD, right?
< kris2> I think I get it now.
Thyrix has quit [Quit: Thyrix]
snd has joined #mlpack
govg has quit [Ping timeout: 246 seconds]
flyingpot has joined #mlpack
snd has quit [Ping timeout: 260 seconds]
kris2 has quit [Ping timeout: 264 seconds]
kris1 has joined #mlpack
< vivekp> zoq: Regarding SMORMS3, do you think the following implementation for the second last update step is correct?
< vivekp> double x = arma::as_scalar(g * (g / (g2 + eps)))
< vivekp> iterate -= gradient * std::min(x, lRate) / (arma::sqrt(g2) + eps)
< vivekp> The blog post http://sifter.org/~simon/journal/20150420.html doesn't provide many technical details about the parameters used and the update steps.
< vivekp> I'm relying on guesses based on my previous experience implementing AdaMax and a general understanding of gradient descent optimization algorithms.
< vivekp> So I have a feeling that something's not correct.
< vivekp> Here's my implementation of all the update steps: https://goo.gl/gWXd0H
< vivekp> Do you mind taking a look?
< vivekp> Looks like Simon's blog is down atm. Don't know how long it's been down.
shikhar has joined #mlpack
flyingpot has quit [Ping timeout: 258 seconds]
govg has joined #mlpack
< kris1> Looking at the optimizers in mlpack, I find that in many places (1 - rho) * (dx % dx) is treated as (1 - rho) * square(dx).
< kris1> How is that?
< kris1> Isn't the % operator the modulo operator?
< kris1> Am I missing something?
< shikhar> here % is the Schur product (element wise multiplication)
< kris1> Oh thanks, so you mean it's overloaded in some file.
< kris1> shikhar: aah, thanks very much
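Since this comes up a lot: a tiny Armadillo example of % as the element-wise (Schur) product; nothing mlpack-specific is assumed here.

    #include <armadillo>

    int main()
    {
      arma::mat dx = { { 1.0, 2.0 }, { 3.0, 4.0 } };
      const double rho = 0.9;

      // In Armadillo, % is the element-wise (Schur) product, not modulo,
      // so these two expressions are identical.
      arma::mat a = (1.0 - rho) * (dx % dx);
      arma::mat b = (1.0 - rho) * arma::square(dx);

      a.print("(1 - rho) * (dx % dx):");
      b.print("(1 - rho) * square(dx):");
      return 0;
    }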
< zoq> vivekp: second last is: 'p = p - grad*min(lrate, g*g/(g2 + epsilon))/(sqrt(g2)+epsilon)' ?
< zoq> vivekp: Also note that: '(Where products, divides, and min are element-wise.)' so, you should use % instead of *.
< zoq> kris1: Right, the function has to implement NumFunctions, Evaluate and Gradient.
< zoq> kris1: An example is given in rmsprop_test.cpp, in that case we optimize the SGDTestFunction function which is defined in core/optimizers/sgd/test_function.hpp.
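For readers following along, roughly the shape of a function class that satisfies the NumFunctions()/Evaluate()/Gradient() requirements mentioned above. The toy least-squares objective below is an illustration, not mlpack's SGDTestFunction; the exact signatures SGD expects are documented with the optimizer itself.

    #include <armadillo>

    // A decomposable objective f(x) = sum_i ||x - data_i||^2, split into
    // NumFunctions() separable terms that SGD can visit one at a time.
    class ToySquaredError
    {
     public:
      ToySquaredError(const arma::mat& data) : data(data) { }

      // Number of separable terms in the objective.
      size_t NumFunctions() const { return data.n_cols; }

      // Value of the i-th term at the given coordinates.
      double Evaluate(const arma::mat& coordinates, const size_t i) const
      {
        return arma::accu(arma::square(coordinates - data.col(i)));
      }

      // Gradient of the i-th term at the given coordinates.
      void Gradient(const arma::mat& coordinates,
                    const size_t i,
                    arma::mat& gradient) const
      {
        gradient = 2.0 * (coordinates - data.col(i));
      }

     private:
      const arma::mat& data;
    };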
flyingpot has joined #mlpack
< vivekp> zoq: yes, second last is: 'p = p - grad*min(lrate, g*g/(g2 + epsilon))/(sqrt(g2)+epsilon)'
< vivekp> and yes, I used % instead of * for element wise multiplication except the places where we need to multiply a scalar value with a matrix.
Sinjan_ has joined #mlpack
flyingpot has quit [Ping timeout: 260 seconds]
pg5 has joined #mlpack
< zoq> vivekp: But shouldn't it be: (g % g) / (g2 + epsilon) instead of g * (g / (g2 + eps))?
< kris1> zoq: How do I build only the optimizers? Is there something like 'make mlpack_optimizers'?
< zoq> kris1: no, 'make', but it should only build modified files and files that include the modified file.
< shikhar> zoq: Can you please take a look at https://github.com/mlpack/mlpack/pull/895 and give me a hint
mikeling has joined #mlpack
< rcurtin> shikhar: I will look, hang on
< rcurtin> I was thinking about it yesterday but did not have time to answer
< vivekp> zoq: Are you referring to line 17 here: https://goo.gl/gWXd0H ?
< shikhar> rcurtin: Perhaps we could use sfinae_utility.hpp and check if the type has a raw_print function, instead of using the is_arma_type
pg5 has quit [Ping timeout: 260 seconds]
vinayakvivek has joined #mlpack
< zoq> vivekp: yes
Sinjan_ has quit [Ping timeout: 260 seconds]
< vivekp> I think if we go by the BODMAS rule, g should be first divided by (g2 + eps) and then multiplied.
< rcurtin> shikhar: that's another solution, if you like that more I think it could be fine
< vivekp> Also, I found this http://docs.chainer.org/en/stable/_modules/chainer/optimizers/smorms3.html#SMORMS3 -- might provide some hints
< rcurtin> but the include ordering issue should be pretty simple to take care of... I think :)
< shikhar> Yes. I'm on it
< rcurtin> sure, hopefully it doesn't turn out to be more complex than I thought :)
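As an aside on the SFINAE idea shikhar mentions: a generic C++11 member-detection sketch, written from scratch rather than with mlpack's sfinae_utility.hpp macros, only to illustrate how a "has raw_print" check could look.

    #include <armadillo>
    #include <iostream>
    #include <ostream>
    #include <type_traits>
    #include <utility>

    // HasRawPrint<T>::value is true if T has a member raw_print() that can be
    // called with a std::ostream&.
    template<typename T>
    class HasRawPrint
    {
     private:
      template<typename U>
      static auto Check(int)
          -> decltype(std::declval<U>().raw_print(std::declval<std::ostream&>()),
                      std::true_type());

      template<typename U>
      static std::false_type Check(...);

     public:
      static const bool value = decltype(Check<T>(0))::value;
    };

    int main()
    {
      std::cout << HasRawPrint<arma::mat>::value << std::endl;  // prints 1
      std::cout << HasRawPrint<double>::value << std::endl;     // prints 0
      return 0;
    }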
< mikeling> huyssenz: hey, are you online? Have you made any progress on the RBM and other deep learning modules? :)
< zoq> vivekp: Sure, divide it by (g2 + eps) first, fine with me :)
< vivekp> zoq: Another concern in line 17 is that whether I used to_scalar function correctly. :)
< vivekp> I mean as_scalar
< zoq> vivekp: I'm pretty sure min(lr, x) returns a matrix, since the gradient does not depend on other gradients.
Shihao has joined #mlpack
< zoq> vivekp: numpy.minimum(x, self.lr) also returns a numpy array with the size of x
< Shihao> Hi zoq, I just want to ask a question about issue 593. The 'Classify' function seems to use the sum of log probabilities to calculate the max probability.
< Shihao> Should I just output that probability value? It seems ugly. Here is the probability output for one test point: '0.173633 -40.6168 -51.9916'. Even though it's correct, maybe it does not give us an intuitive probability value.
< vivekp> zoq: okay, I see what's wrong with line 17 and 19. We don't want a scalar value out of g * g / (g2 + eps)
< vivekp> but would arma::min accept different parameter types i.e. double lRate and arma::mat x?
jarvis_ has joined #mlpack
Shihao has quit [Ping timeout: 260 seconds]
jarvis_ has quit [Client Quit]
< zoq> vivekp: I'm not sure, you probably have to use another matrix filled with lr or a for loop.
< zoq> rcurtin: Shihao: I haven't looked at the issue yet, maybe rcurtin had a chance.
< rcurtin> ah, sorry, Shihao, I'll take a look in a few minutes
< vivekp> zoq: Okay thanks, I'll try that
< vivekp> also I should probably write tests and verify the implementation
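For completeness, a sketch of the update step that was just discussed, with the element-wise min handled by arma::clamp() (valid here because (g % g) / (g2 + eps) is non-negative). The state variables are stand-ins; this is not the actual SMORMS3 pull request code.

    #include <armadillo>

    int main()
    {
      const double stepSize = 0.001, eps = 1e-16;

      // Stand-ins for the SMORMS3 state (g, g2), the gradient, and the iterate.
      arma::mat iterate(5, 1, arma::fill::randu);
      arma::mat gradient(5, 1, arma::fill::randu);
      arma::mat g(5, 1, arma::fill::randu);
      arma::mat g2(5, 1, arma::fill::randu);

      // x = (g % g) / (g2 + eps), element-wise.
      arma::mat x = (g % g) / (g2 + eps);

      // Element-wise min(x, stepSize): since every entry of x is >= 0,
      // clamping to [0, stepSize] gives the same result without needing a
      // matrix filled with the step size.
      arma::mat effectiveRate = arma::clamp(x, 0.0, stepSize);

      // p = p - grad % min(lrate, x) / (sqrt(g2) + eps), all element-wise.
      iterate -= gradient % effectiveRate / (arma::sqrt(g2) + eps);

      iterate.print("updated iterate:");
      return 0;
    }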
flyingpot has joined #mlpack
rainbowtiara has joined #mlpack
< zoq> vivekp: We all love tests :)
< rcurtin> I have to wonder, with all of the tests in our large test suite and with all of the machines that are continually building mlpack, how many tons of CO2 have been released while running mlpack tests? or how much power has been used? :)
flyingpot has quit [Ping timeout: 260 seconds]
< rainbowtiara> I have an idea about the methods used in machine learning and I'm looking forward to working on this. Can you tell me how to get started?
< rcurtin> rainbowtiara: hello there, there are some nice pages online about what to do to get started
< zoq> rcurtin: I'm sure, if you open an issue on Github, someone will write some lines to figure it out; getting the Travis and appveyor results might be a problem
< zoq> :)
< rcurtin> haha
jarvis_ has joined #mlpack
< jarvis_> zoq: What is your final word on Inception v4 vs VGGNet? I am mostly satisfied with Inception, let me know if I have the green signal.
< jarvis_> Also, can you please give me your opinion on how I can make a strong proposal. Thanks.
ashk43712_ has joined #mlpack
< rcurtin> jarvis_: there's a lot of information online about putting together a good proposal, I suggest you search around and compile the information that you can
< rcurtin> for mlpack, probably the best guidelines you'll find will be at http://www.mlpack.org/gsoc.html and on the ideas page
< jarvis_> Thanks : )
< vivekp> zoq: All tests pass. I'm going to open a PR within an hour. Thanks for pointing out mistakes in the code.
jarvis_ has quit [Quit: Page closed]
topology has joined #mlpack
Thyrix has joined #mlpack
Shihao has joined #mlpack
< shikhar> rcurtin: I was able to fix the build message by reordering includes in a few files. Easy fix indeed!
< rcurtin> great, I am glad my intuition was right :)
< rcurtin> Armadillo has this nice way to add extra functionality to the Mat and Col and Cube classes by defining a macro, it's quite nice
< rcurtin> you can see its use in arma_extend.hpp, if you like
< rcurtin> but since we depend on the extra functionality, we have to ensure that Armadillo is properly included, hence the warning
chvsp has joined #mlpack
< shikhar> Ah, I see
< shikhar> I was looking into the misalignment problem, and the way arma solves it in print
< Shihao> Hi rcurtin, are you there? Do I need to recalculate the probs if we need to produce a probability between 0 and 1?
< rcurtin> yeah, I was going to answer on the issue but I guess I can answer here
< rcurtin> ideally what we want to return, for each point, is a probability vector for each class that sums to one
< rcurtin> so e.g. for a three-class naive bayes classifier this might be '0.8 0.15 0.05' or something like this
< Shihao> yeah, I see.
< rcurtin> but I think that is not quite your question
kesslerfrost has joined #mlpack
< Shihao> Oh, what if I exponentiate each value, will it be back between 0 and 1?
ashk43712_ has quit [Ping timeout: 260 seconds]
< rcurtin> yes, it should, yeah
kesslerf_ has joined #mlpack
< rcurtin> just e^{log probabilities} will get you that original probability vector
dineshraj01 has joined #mlpack
kesslerfrost has quit [Ping timeout: 246 seconds]
< Shihao> yeah, I will try.
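A tiny worked example of the exp-and-normalize step rcurtin describes, using the three values quoted above; subtracting the maximum first is just the usual trick to avoid overflow and does not change the result.

    #include <armadillo>

    int main()
    {
      // The per-class log values quoted in the discussion above.
      arma::rowvec logProbs = { 0.173633, -40.6168, -51.9916 };

      // Exponentiate and normalize so the result sums to one.
      arma::rowvec shifted = logProbs - logProbs.max();
      arma::rowvec probs = arma::exp(shifted) / arma::accu(arma::exp(shifted));

      probs.print("class probabilities (sum to 1):");
      return 0;
    }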
kesslerf_ has quit [Ping timeout: 246 seconds]
abhijeet has quit [Quit: Page closed]
< govg> rcurtin: mlpack_test has no output except warnings, is it by design?
< govg> Or is there a flag somewhere that can be toggled?
< rcurtin> govg: ideally if everything passes there should be no output except the boost test framework's "no errors" (or whatever the string is)
< rcurtin> I think there are some algorithms that write unsuppressed output to Log::Warn, but maybe that should be disabled
< rcurtin> the Boost Unit Test Framework does have some nice command line options that allow a lot more optinos
kesslerfrost has joined #mlpack
< rcurtin> *options
< rcurtin> like if you run with '-p', it gives a progress bar
< govg> Yeah, that's what I was thinking about.
< govg> There are just a couple of outputs that are warnings: one from the L-BFGS solver and one about not having a proper CSV file.
dineshraj01 has quit [Read error: Connection reset by peer]
< rcurtin> yeah, usually output gets disabled in a certain test that might print by something like 'Log::Warn.ignoreInput = true'
< rcurtin> I think there are some that are missing that, though... if you wanted to go through and find those I would definitely merge it, it would be a nice change :)
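For anyone hunting for those missing spots, the suppression pattern rcurtin quotes above looks roughly like this in a test; the ignoreInput member name is taken from his message, so double-check it against mlpack's PrefixedOutStream before relying on it.

    #include <mlpack/core.hpp>

    int main()
    {
      // Silence expected warnings around the code under test, then restore.
      mlpack::Log::Warn.ignoreInput = true;
      mlpack::Log::Warn << "this expected warning is swallowed" << std::endl;
      mlpack::Log::Warn.ignoreInput = false;

      mlpack::Log::Warn << "warnings print normally again" << std::endl;
      return 0;
    }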
kesslerfrost has quit [Ping timeout: 246 seconds]
Varun has joined #mlpack
< Varun> Hi! I am interested in one of your projects for Google Summer of Code
Varun is now known as Guest65953
< Guest65953> I am confident that I can contribute a lot to it
< zoq> Guest65953: Hello, welcome, have you seen: mlpack.org/gsoc.html, definitely a good starting point.
< Guest65953> I am a bit confused about the procedure to apply
< Guest65953> It says I am supposed to discuss my idea with the organisation before applying.
< Guest65953> Which is the right platform for that?
Thyrix has quit [Quit: Thyrix]
Guest65953 has quit [Quit: Page closed]
kesslerfrost has joined #mlpack
< zoq> Guest65953: Either over the mailing list, IRC or GitHub; also, please look through the list archives for other messages about projects: http://knife.lugatgt.org/pipermail/mlpack/
< zoq> I know he has left, but it looks like a lot of people take a look at the IRC logs.
< rcurtin> I haven't figured out why mailman insists on referring to the hostname for the mlpack archives as knife.lugatgt.org instead of lists.mlpack.org
< zoq> oh, I can also use mlpack.org and it rewrites to knife?
< rcurtin> yeah, it should also work, they both point to the same ip
< rcurtin> but the archives still insist that they should be referred to as knife.lugatgt.org in all the links despite setting the hostname in the mailman config
Shihao has quit [Ping timeout: 260 seconds]
topology has quit [Ping timeout: 260 seconds]
flyingpot has joined #mlpack
< kris1> I want to add annealing rates to the SGD implementation, and I was thinking about how I would implement it.
< kris1> Basically the problem is that overloading won't work, since all 3 decay types take the same parameter types (double, int),
< kris1> so basically I have to figure out which annealing method to call based on the string decay_type.
flyingpot has quit [Ping timeout: 260 seconds]
< kris1> Is this even a good addition?
kesslerf_ has joined #mlpack
kesslerfrost has quit [Ping timeout: 246 seconds]
Narayan has joined #mlpack
< Narayan> Hello, I am trying to install the dependencies of mlpack on OS X. The installation of Armadillo has been stuck at "make bootstrap" for the past hour.
< Narayan> Could anyone tell me what the issue may be?
< zoq> Narayan: Hello, it probably takes some time, I would recommend to use homebrew or MacPorts if you have the chance.
< Narayan> I am using homebrew, sir
< Narayan> brew install homebrew/science/armadillo is the command I used
< Narayan> It's been running for more than an hour :P
kesslerf_ is now known as kesslerfrost
< zoq> Narayan: oh okay, I've also used homebrew without any problems ... do you see any CPU activity?
< Narayan> Yes... it's active... so I should just wait, maybe.
shubhamkmr47 has joined #mlpack
shubhamkmr47 has quit [Client Quit]
coder has joined #mlpack
< coder> Hello, I had certain queries about mlpack
coder is now known as Guest17157
Guest17157 has quit [Client Quit]
kesslerf_ has joined #mlpack
Smeet has joined #mlpack
tejank10 has joined #mlpack
kesslerfrost has quit [Ping timeout: 246 seconds]
shivam has joined #mlpack
Smeet has quit []
shivam is now known as Guest26972
ashk43712_ has joined #mlpack
Guest26972 has quit [Client Quit]
tejank10 has quit [Quit: Page closed]
< kris1> zoq: Should I use enum { no_decay = 0, t_decay = 1, step_decay = 2, exponential_decay },
< kris1> add a decay_factor variable in the constructor,
< kris1> and then during the call to the Optimize function we could check if decay_type == 1 and take the appropriate action?
< kris1> Is the question clear?
< chvsp> Hi zoq. Did you have a look at my PR? If there are any changes to be made, kindly let me know. Thanks :)
< chvsp> PR #892
< zoq> kris1: I would rather use a separate parameter for each and initialize it with some default value. Constructor(const bool decay = true, tDecay = 0, stepDecay = 0, ...) What do you think?
< zoq> chvsp: I'll take a look once I get a chance and make comments.
mikeling has quit [Quit: Connection closed for inactivity]
flyingpot has joined #mlpack
< kris1> zoq: i will do that then
kesslerf_ has quit [Ping timeout: 246 seconds]
flyingpot has quit [Ping timeout: 240 seconds]
< kris1> zoq: But I am in favor of the enum type, as then we don't have to write additional checks like making sure that only one of tDecay or stepDecay is set.
< kris1> With the enum type we would have a single decay rate, and the user can then specify which decay type to use.
kesslerfrost has joined #mlpack
rainbowtiara has quit [Quit: Page closed]
kesslerfrost has quit [Ping timeout: 246 seconds]
kesslerfrost has joined #mlpack
< kris1> do you agree
kesslerfrost has quit [Ping timeout: 246 seconds]
vinayakvivek is now known as vinayak_
vinayak_ is now known as vinayakvivek
kesslerfrost has joined #mlpack
kesslerfrost has quit [Ping timeout: 246 seconds]
shihao has joined #mlpack
< shihao> I guess in the naive Bayes classifier, the features for each class form a multivariate normal distribution, right?
< rcurtin> shihao: yeah, that is the assumption of naive bayes, that each class is multivariate Gaussian
< rcurtin> also with the assumption that the covariance is diagonal
< shihao> I don't quite understand why it's diagonal
< rcurtin> the assumption is that there's no correlation between dimensions, so any off-diagonal elements of the covariance matrix will be 0
< shihao> So instead of multiplying 3 distributions, we can use one distribution.
< shihao> Oh, I see. Assume each feature is independent of the others.
< shihao> Writing code to implement a concept is totally different from just learning it and doing homework.
< shihao> There are a lot of details I missed before.
< rcurtin> yeah, that was a big jump for me when I started working on mlpack :)
< shihao> Thank you, rcurtin!
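To make the diagonal-covariance point concrete: summing independent per-dimension Gaussian log-densities is exactly the log-density of a multivariate Gaussian with a diagonal covariance matrix. A small stand-alone sketch (not mlpack's NaiveBayesClassifier code):

    #include <armadillo>
    #include <iostream>

    // Log-density of x under independent per-dimension Gaussians, i.e. a
    // multivariate Gaussian whose covariance matrix is diagonal.
    double LogLikelihood(const arma::vec& x,
                         const arma::vec& means,
                         const arma::vec& variances)
    {
      const arma::vec diff = x - means;
      return arma::accu(-0.5 * arma::log(2.0 * arma::datum::pi * variances)
                        - 0.5 * (diff % diff) / variances);
    }

    int main()
    {
      arma::vec x = { 1.0, 2.0, 3.0 };
      arma::vec means = { 0.5, 1.5, 2.5 };
      arma::vec variances = { 1.0, 2.0, 0.5 };

      // Adding log-densities dimension by dimension is the same as multiplying
      // the independent 1-D Gaussians.
      std::cout << LogLikelihood(x, means, variances) << std::endl;
      return 0;
    }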
shihao has quit [Ping timeout: 260 seconds]
flyingpot has joined #mlpack
< arunreddy_> rcurtin: For the Momentum policy class do you have something like https://github.com/mlpack/mlpack/tree/master/src/mlpack/methods/amf in mind ?
chvsp has quit [Quit: Page closed]
flyingpot has quit [Ping timeout: 240 seconds]
Narayan has quit [Ping timeout: 260 seconds]
< zoq> kris1: I would go with the single parameter solution, mainly because it's more consistent with the rest of the codebase. I think for a user it would be strange if the API changes between different methods.
< kris1> zoq: https://gist.github.com/kris-singh/e3139300e7726d3c8cb94dcd34731c38 have a look at this
< kris1> The code looks much cleaner this way,
< kris1> I think.
< kris1> zoq: Let me know if this is OK. If it is, I will go ahead and write the tests for this.
< arunreddy_> kris1: The problem with that implementation is that the switch is computed every time an SGD iteration is executed (at runtime)
< zoq> Ah, arunreddy_ is right probably not the best idea.
< arunreddy_> rcurtin has suggested using a policy-based design, which can speed things up since the switch can be avoided entirely in cases where no_decay is set.
< kris1> arunreddy_: Yes, but would this be too much of an overhead, since these are basically just 4 statements being executed?
< kris1> Oh okay, I would have to read up on policy-based design.
< arunreddy_> Check out the discussion on https://github.com/mlpack/mlpack/pull/896
< arunreddy_> and for policy based design implemented in mlplack lib, refer to https://github.com/mlpack/mlpack/tree/master/src/mlpack/methods/amf
< zoq> Another option would be to calculate the step size offset once.
< zoq> ah, I see, you can't; it depends on the time step.
< arunreddy_> But what if you want to have a variable decay?
< kris1> arunreddy_: Sorry, what is variable decay?
< zoq> So, yeah I think writing a policy for each case is a nice idea, that also solves the "enum" questions.
< arunreddy_> The ability to change the decay rate..
< zoq> Which is also consistent with the rest of the codebase, since we do this all the time.
Lakshya1605 has joined #mlpack
< arunreddy_> kris1: As per my understanding, if someone wants to have a different implementation of decay rates, a policy-based design would help.
< kris1> Okay, I will look into the policy-based design.
< kris1> I haven't seen the decay rate change with time though. Is it implemented in TensorFlow or Keras?
< kris1> Annealing learning rates have been implemented there.
< zoq> It's like momentum.
< zoq> Also, as arunreddy_ already pointed out, the amf code is one example; the pca implementation is another.
< zoq> For policy-based design.
< Lakshya1605> Hello, is this specified behaviour or am I doing something wrong? In test/load_save_test.cpp I have found that arma::Col is treated as arma::Mat, so whenever I try to load a matrix into a colvec it does not give an error; rather, it loads it.
shikhar has quit [Quit: Page closed]
< kris1> And I thought this was going to be an easy issue to solve :P. I fear design patterns; since reading about the visitor pattern I am still getting my head around it.
shihao has joined #mlpack
< kris1> Is policy-based design the same as template metaprogramming?
< kris1> arunreddy_: I was looking at the discussion you pointed out, but you would need to check momentum == 1 only at the start and not at every iteration, right?
< kris1> As rcurtin pointed out.
< arunreddy_> kris1, it is related to template metaprogramming.
< arunreddy_> kris1: What if you don't want to check if momentum == 1 in the loop every time?
< arunreddy_> What if you could make it a compile-time flag and avoid the check at runtime? Something like this: http://alumni.media.mit.edu/~rahimi/compile-time-flags/
< kris1> arunreddy_: But that's my question, right? You don't have to; whether momentum is set can also be checked outside the loop, right?
< kris1> Why do you check in every loop iteration?
< arunreddy_> kris1: The update is different for vanilla SGD and momentum SGD.
< kris1> Ohh right, got your point.
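A toy version of the compile-time-flag idea from that link, so the momentum check is resolved per template instantiation instead of on every iteration. Names and numbers are made up for illustration.

    #include <iostream>

    // UseMomentum is fixed at compile time, so the branch below is a constant
    // in each instantiation and can be optimized away entirely.
    template<bool UseMomentum>
    double Step(const double iterate, const double gradient, double& velocity,
                const double stepSize, const double momentum)
    {
      if (UseMomentum)
      {
        velocity = momentum * velocity - stepSize * gradient;
        return iterate + velocity;
      }
      else
      {
        return iterate - stepSize * gradient;
      }
    }

    int main()
    {
      double velocity = 0.0;
      std::cout << Step<true>(1.0, 0.5, velocity, 0.1, 0.9) << std::endl;
      std::cout << Step<false>(1.0, 0.5, velocity, 0.1, 0.9) << std::endl;
      return 0;
    }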
shihao has quit [Quit: Page closed]
ashk43712_ has quit [Quit: Page closed]
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#1899 (master - 4c87d34 : Marcus Edel): The build passed.
travis-ci has left #mlpack []
ashk43712_ has joined #mlpack
< Lakshya1605> It turns out that not only "load_save_test.cpp" but any file in the test folder has the same problem. Can someone suggest how to solve this?
< arunreddy_> zoq: Are there any code style files for mlpack? I have seen the style guidelines at http://www.mlpack.org/trac/wiki/NewStyleGuidelines but didn't come across any IDE code style XML files.
< rcurtin> arunreddy_: I don't believe there are any of these; if you like, you can make something like this or an autolinter. There is an issue open for that somewhere...
< arunreddy_> rcurtin: Thanks, will look into autolinter. Is there a link to "this.." missing?
< rcurtin> not sure what you mean, but I did notice you were looking at the Trac style guidelines; there's a newer version on the Github wiki
< rcurtin> I should probably remove Trac now, we have used github for years so everything there is out of date
Vladimir_ has joined #mlpack
< Vladimir_> Hello
Lakshya1605 has quit [Ping timeout: 260 seconds]
rcurtin_ has joined #mlpack
ashk43712_ has quit [Quit: Page closed]
flyingpot has joined #mlpack
flyingpot has quit [Ping timeout: 240 seconds]
< zoq> Vladimir_: Hello there!
usama has joined #mlpack
< arunreddy> rcurtin_: A quick search on google pointed me to the trac website.
GuilhermeGSousa has joined #mlpack
< usama> Hello everyone
< usama> Hope all is going well
< usama> I was getting started with mlpack on Windows
< usama> I had successfully managed to set up the library with Visual Studio
< usama> But when running some test code I ran into an error.
< usama> Due to the very limited resources for mlpack I could not find the solution after HOURS of searching
< usama> The error is as follows:
< usama> SeverityCodeDescriptionProjectFileLineSuppression State ErrorLNK2019unresolved external symbol sdot_ referenced in function "double __cdecl arma::blas::dot<double>(unsigned __int64,double const *,double const *)" (??$dot@N@blas@arma@@YAN_KPEBN1@Z)mlpackC:\Users\UKThe\Documents\Visual Studio 2015\Projects\mlpack\Practice.obj1
GuilhermeGSousa has quit [Client Quit]
< usama> Please HELP!!
< zoq> usama: Hello, have you seen https://github.com/mlpack/mlpack/issues/881? It looks like you are getting the same error message.
usama has quit [Ping timeout: 260 seconds]
< kris1> zoq: I read about the policy design pattern from a few sources, but the wiki example was the clearest to me: https://en.wikipedia.org/wiki/Policy-based_design. So basically what I mean is that I would implement 4 policy classes:
< kris1> nodecay, tdecay, .....
< zoq> kris1: Yes, right.
< kris1> where the code that is common to all the classes resides in the base class
< zoq> correct
< kris1> let's say an sgd_impl class; and code like the update of the learning rate is implemented in the policy class.
< kris1> But won't I be calling the policy class methods at each iteration anyway?
< kris1> Isn't this just a tradeoff between a switch and a function call?
< kris1> Am I making sense?
< zoq> kris1: If you make the function inline the compiler should place a copy of the code of that function at that point.
< kris1> Yeah okay, but I have the correct idea, right? If yes, then I will start implementing it.
< zoq> yes
< kris1> OK thanks, I will get on it.
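To spell out the plan kris1 is describing, here is a heavily simplified policy-based sketch: each decay policy exposes the same StepSize() method and is picked via a template parameter, so there is no runtime switch. The class and method names here are hypothetical, not the ones that ended up in mlpack.

    #include <cmath>
    #include <cstddef>
    #include <iostream>

    // Keeps the initial step size unchanged.
    class NoDecay
    {
     public:
      double StepSize(const double initial, const size_t /* iteration */) const
      { return initial; }
    };

    // Shrinks the step size exponentially with the iteration count.
    class ExponentialDecay
    {
     public:
      ExponentialDecay(const double rate) : rate(rate) { }

      double StepSize(const double initial, const size_t iteration) const
      { return initial * std::exp(-rate * iteration); }

     private:
      double rate;
    };

    // A toy one-dimensional "SGD" that delegates the step size to its policy.
    template<typename DecayPolicyType>
    class ToySGD
    {
     public:
      ToySGD(const double stepSize, DecayPolicyType decay)
          : stepSize(stepSize), decay(decay) { }

      double Step(const double iterate, const double gradient, const size_t i)
      { return iterate - decay.StepSize(stepSize, i) * gradient; }

     private:
      double stepSize;
      DecayPolicyType decay;
    };

    int main()
    {
      ToySGD<NoDecay> plain(0.1, NoDecay());
      ToySGD<ExponentialDecay> decayed(0.1, ExponentialDecay(0.01));

      std::cout << plain.Step(1.0, 0.5, 100) << std::endl;    // fixed step size
      std::cout << decayed.Step(1.0, 0.5, 100) << std::endl;  // decayed step size
      return 0;
    }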
chvsp has joined #mlpack
flyingpot has joined #mlpack
flyingpot has quit [Ping timeout: 240 seconds]
asingh has joined #mlpack
asingh has quit [Client Quit]
arunreddy has quit [Quit: Page closed]
chvsp has quit [Quit: Page closed]
vinayakvivek has quit [Quit: Connection closed for inactivity]
irakli_p has joined #mlpack
< irakli_p> hello
< irakli_p> is anyone here ?
irakli_p has quit [Client Quit]
< zoq> irakli_p: Hello, I'm here.