naywhayare changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
sumedh_ has joined #mlpack
sumedh_ has quit [Ping timeout: 240 seconds]
sumedh_ has joined #mlpack
Anand_ has joined #mlpack
Anand_ has quit [Ping timeout: 246 seconds]
Anand_ has joined #mlpack
< marcus_zoq> Anand_: I'll spend some time thinking about a better class design, I think we should have a function to build the model, a function to benchmark the model and another function to run the metric.
< marcus_zoq> Anand_: The last two functions should reuse the model, so we don't need to rebuild it. I can sketch something up tonight, if you like. But right now the code that I checked in yesterday should work?
< Anand_> You didn't call the RunMetrics method in RunMethod anywhere?
< marcus_zoq> Anand_: Right, I transferred the code into the NBCScikit function.
< marcus_zoq> So that we can reuse the model from the timing benchmark.
< marcus_zoq> But we need to change the design.
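A minimal sketch of the split marcus_zoq describes; the actual benchmark scripts are Python, so this C++-style outline and every name in it are purely illustrative:

    // Purely illustrative names; the real benchmark code is Python.
    struct NBCModel { /* trained classifier state */ };

    class NBCBenchmark
    {
     public:
      void BuildModel();   // train once and store the model below
      double RunTiming();  // time predictions using the stored model
      void RunMetrics();   // compute metrics, reusing the same model

     private:
      NBCModel model;      // built once, shared by timing and metrics
    };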
Anand_ has quit [Ping timeout: 246 seconds]
Anand_ has joined #mlpack
sumedh__ has joined #mlpack
sumedh_ has quit [Ping timeout: 240 seconds]
Anand_ has quit [Ping timeout: 246 seconds]
udit_s has joined #mlpack
Anand_ has joined #mlpack
udit_s has quit [Ping timeout: 240 seconds]
arcane has joined #mlpack
arcane has quit [Client Quit]
udit_s has joined #mlpack
udit_s has quit [Ping timeout: 240 seconds]
udit_s has joined #mlpack
Anand_ has quit [Ping timeout: 246 seconds]
oldbeardo has joined #mlpack
< oldbeardo> naywhayare: I tried storing the basis vector in CosineNode, it does not work well with the algorithm
< oldbeardo> specifically in step 3(c) of the algorithm, storage of the basis is independent of the priority queue
< oldbeardo> also, for iterating through the priority queue I need to copy it every time
andrewmw94 has joined #mlpack
udit_s has quit [Ping timeout: 265 seconds]
andrewmw94 has quit [Quit: andrewmw94]
udit_s has joined #mlpack
< naywhayare> oldbeardo: this does not make sense; those entries in V correspond exactly to the orthonormalized basis vectors of those nodes in the priority queue
< naywhayare> further, copies should not be necessary if you are using references
udit_s has quit [Ping timeout: 276 seconds]
< oldbeardo> naywhayare: I will explain what the problem is
< oldbeardo> in the first iteration of the algorithm, you pop out the root node
< oldbeardo> so the priority queue is empty
< oldbeardo> now, you calculate the normalized version of the centroid of the left child
< oldbeardo> the orthonormalized version of the right child's centroid cannot be calculated since the left child's vector is not in the priority queue
< oldbeardo> and you cannot push it into the priority queue since you cannot calculate the Monte Carlo estimate without the right child's vector
udit_s has joined #mlpack
< naywhayare> okay... so you modify the modified Gram-Schmidt function's signature to also accept the orthonormalized basis vector for the left child
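A sketch of the suggested signature change, assuming the existing basis vectors are available as a plain vector; the function name and argument layout are assumptions:

    #include <vector>
    #include <armadillo>

    // Orthonormalize 'v' against the existing basis vectors *plus* one
    // extra vector (the left child's basis) not yet in the priority queue.
    void ModifiedGramSchmidt(const std::vector<arma::vec>& basis,
                             const arma::vec& extraBasis,
                             arma::vec& v)
    {
      v -= arma::dot(v, extraBasis) * extraBasis;  // project out the extra vector
      for (size_t i = 0; i < basis.size(); ++i)
        v -= arma::dot(v, basis[i]) * basis[i];    // project out existing basis
      v /= arma::norm(v, 2);                       // normalize the remainder
    }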
govg has joined #mlpack
govg has quit [Changing host]
govg has joined #mlpack
oldbeardo has quit [Ping timeout: 246 seconds]
andrewmw94 has joined #mlpack
< andrewmw94> naywhayare: I've thought about the problem some more, and it seems to me that the best solution would be to have another class that interacts with the user and hides the changes in the root. So we could have RTreeWrapper { public: //methods
< andrewmw94> private: RTreeNode* root; ...
< andrewmw94> };
< andrewmw94> something like that
govg has quit [Ping timeout: 264 seconds]
< naywhayare> but when you then traverse this tree, you have to traverse with RTreeNodes and not RTreeWrapper
< naywhayare> so it will break the existing abstractions
govg has joined #mlpack
govg has quit [Changing host]
govg has joined #mlpack
< naywhayare> I know that I'm not helping by not providing a working alternative
< naywhayare> but I think in this case, maintaining one RectangleTree class and ensuring that the constructor returns the root node means that there's no way around some copying of data every time the root node overflows
< andrewmw94> couldn't you have a traverse(arma::vec& point) method that searches the root node?
< andrewmw94> obviously it would really have the query matrix, but you get the idea
< andrewmw94> ahh, I get it. Existing abstractions.
< naywhayare> well, you could, but the thing is that the code we currently have works for the existing BinarySpaceTree and CoverTree, but if the RectangleTree works significantly differently, then the code we have won't work for the RectangleTree
< naywhayare> yeah
< andrewmw94> hmm. well as far as the API goes, we could have functions that match but ignore, e.g., the referenceNode& argument
< naywhayare> although it's possible to change the abstractions, that's often a ton of work and I think in this case we'd lose easy support for traversing subtrees instead of an entire tree (despite the fact that we have no code that explicitly does this, I'd like to leave the possibility open for future work)
< naywhayare> can you clarify what you mean?
< andrewmw94> well, let's assume that I could make the RTreeWrapper class in an efficient way (I think I know how to do that)
< naywhayare> sure
< andrewmw94> I'm not sure what abstractions you're referring to, but say we wanted something similar to the SingleTreeTraverser class's Traverse() method. We could have a method that takes the same arguments but ignores the ReferenceNode argument. I don't really like that, but I think it would work.
< andrewmw94> As regards searching a particular subtree, we could support that. It's already supported. It's just that usually you want to search the root.
< naywhayare> so then how would that function get the tree itself?
< naywhayare> (if we ignore the referenceNode argument)
< andrewmw94> it would default to using the root node, which would be stored in the wrapper
< andrewmw94> it's easy to track the root, it's just annoying to try to return it from the constructor
< naywhayare> right, but how are you going to have access to the wrapper?
< naywhayare> the traversers are generally standalone classes, because often they may be applicable to more than one type of tree (though not always)
< andrewmw94> Ahh. I didn't realize that
< andrewmw94> even so, I think you could call Traverse(queryIndex, wrapper.Root()) or something like that
< andrewmw94> basically, I think you could do it with just the RTreeNodes if you have the user pass a pointer whenever they add or remove points. But I don't like forcing them to do that (and having the constructor return a misleading address)
< andrewmw94> so I think it makes sense to hide these behind another class
< naywhayare> the options I see are to hide them behind another class, which is going to be difficult to integrate with the existing abstractions, or to make copies whenever the root node overflows (should be ~log N times)
< naywhayare> given that the cost of assembling the tree should be at least ~N log N anyway, I think an additional log N cost isn't a huge deal, so I think we should at least investigate that first
< naywhayare> one potential option which is an extension of your idea is to take all the data stored in a RectangleTree node and store it in some RectangleTreeData class (the name is not great; I can't think of anything better at the moment)
< naywhayare> each RectangleTree node only holds a pointer to its data; so copying a RectangleTree node takes O(1) time (just a single pointer copy)
< naywhayare> however, I think the drawback of this idea is that this will incur a pointer dereference every single time Child(), Point(), or any other RectangleTree function will be called (and often these are called very very many times so I don't think the cost will be negligible)
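A rough shape of the pointer-indirection idea, with all names hypothetical; the comment marks the dereference cost naywhayare worries about:

    #include <vector>
    #include <armadillo>

    class RectangleTree;  // forward declaration

    // All per-node state lives here; nodes hold only a pointer to it.
    struct RectangleTreeData
    {
      std::vector<RectangleTree*> children;
      arma::mat points;  // empty for non-leaf nodes
    };

    class RectangleTree
    {
     public:
      // Each accessor pays one extra pointer dereference -- the drawback
      // noted above, since these are called very many times.
      RectangleTree* Child(const size_t i) const { return data->children[i]; }

      // "Copying" a node is O(1): just this pointer is copied.
     private:
      RectangleTreeData* data;
    };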
< andrewmw94> I don't think that works though, since the matrix is going to be divided between the two nodes unpredictably
< sumedh__> naywhayare: Really sorry to interrupt... Just one little doubt... AMF::Apply() is const.. Hence the changes to update rule are not allowed... I am currently using mutable... Will that be okay?? cause I haven't seen mutable anywhere in the library..
< naywhayare> sumedh__: make AMF::Apply() non-const; it shouldn't be const anyway, because it can modify the update rules
< naywhayare> andrewmw94: I thought every non-leaf matrix was empty, though?
< andrewmw94> yes, but you have to split leaf nodes quite frequently
< naywhayare> sure, but we already talked about an idea to split leaf nodes at the parent level
< naywhayare> you can't avoid the copy there
< sumedh__> not exactly... If you think of it... only tuning parameters should be considered here... it only changes momentum matrix which changes in any scenario...
< naywhayare> andrewmw94: if you're splitting the root, and the root isn't a leaf, then you're just moving RectangleTree pointers around, not making deep matrix copies (unless I've misunderstood)
< sumedh__> so I am making momentum matrix mutable...
< sumedh__> this solves everything... tuning parameters are never changed...
< naywhayare> sumedh__: yeah, I understand. I was pointing out that it's not valid to say AMF::Apply() is const, because as you pointed out, applying the update rules may mean that variables local to the update rules may need to be changed
< andrewmw94> naywhayare: yeah I think so. I didn't realize we were only talking about the root
< naywhayare> andrewmw94: yeah; for non-root splitting, you just do it at the parent level. I'm referring to the idea around 21:15 at http://mlpack.org/irc/mlpack.20140610.html
< naywhayare> I *think* that idea will work, but I could be wrong -- I haven't actually sat down and implemented it
< andrewmw94> it seems like it should, but I'm not sure I understand it entirely.
< naywhayare> ok; how should I clarify my idea?
< sumedh__> naywhayare: both ways are reasonable to me... Will just make it non-const :)
< naywhayare> sumedh__: ok, sounds good
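The resolution in sketch form: Apply() becomes non-const because the update rule keeps its own mutable state (hypothetically, a momentum matrix), so no 'mutable' keyword is needed; the Initialize() call mirrors the commit described further down in the log. The simplified signature and method names are assumptions:

    #include <armadillo>

    template<typename UpdateRuleType>
    class AMF
    {
     public:
      // Non-const: running the optimization mutates updateRule's state.
      double Apply(const arma::mat& V, const size_t rank,
                   arma::mat& W, arma::mat& H)
      {
        updateRule.Initialize(V, rank);  // set up rule state (e.g. momentum)
        double residue = 1.0;            // placeholder convergence logic
        // ... alternately update W and H via updateRule until the
        //     residue is small enough ...
        return residue;
      }

     private:
      UpdateRuleType updateRule;  // owns its own state; no 'mutable' needed
    };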
< andrewmw94> naywhayare: I'm unsure on the children.clear(), children.push_back(copy of this*), children.push_back(b)
< naywhayare> ok, so the part when you are copying the root node
< andrewmw94> yeah
< naywhayare> so, the basic idea here is that we'll copy the root node, then call SplitNode() on the copy of the root node; this will give us two children. then we'll take the real root node, clear its children, and add the two children that SplitNode() gave us
< andrewmw94> ahh. Sounds like it should work.
< naywhayare> the main cost of this technique is the copying of the root node; however, this should only mean copying the member variables of RectangleTree (including the vector of children)
< naywhayare> so this is a maximum of sizeof(RectangleTree) + sizeof(RectangleTree*) * maxNumChildren (or something like that)
< naywhayare> which doesn't have any dependence on the size of the dataset, which is good
< naywhayare> the dependence on the dataset comes from the number of times the root can be split, which is log(N)
< naywhayare> so we can just hope the constant in front of the O(log N) isn't too bad :)
< naywhayare> realistically, though, tree construction time is usually way, way less than tree traversal time for dual-tree algorithms (for single-tree algorithms this can be slightly different depending on the number of query points used)
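Putting naywhayare's description into a sketch (SplitNode's signature here is an assumption): copy the root, split the copy, and hand the two resulting children back to the original root object, whose address never changes.

    // Hypothetical member function; 'children' is this node's vector of
    // child pointers.
    void RectangleTree::SplitRoot()
    {
      // Copying the root copies member variables and the child-pointer
      // vector only -- no dataset-sized matrix copies for a non-leaf root.
      RectangleTree* copy = new RectangleTree(*this);

      RectangleTree* left = NULL;
      RectangleTree* right = NULL;
      copy->SplitNode(left, right);  // splitting the copy yields two children

      children.clear();              // the real root keeps its address...
      children.push_back(left);      // ...and simply adopts the two new
      children.push_back(right);     // children
      delete copy;
    }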
< andrewmw94> for nodes that are not the root, I don't see any reason why we need to change the current algorithm. Did I miss something?
< naywhayare> hm, you could be right
< naywhayare> if you see a way to make it work without modifying the current algorithm, then go ahead and do it
Anand_ has joined #mlpack
< sumedh__> naywhayare: Okay I made all the changes, added momentum to SVDBatchLearning.. The paper mentions a default momentum of 0.9... but it does not give good results on GroupLens or MovieLens...
< sumedh__> Empirically I think 0.2 is better... As it improves time and does not affect residue that much...
< sumedh__> I will make the commit... So you will be able to check the code...
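For reference, the momentum update under discussion, sketched for one factor matrix; the names and sign conventions are illustrative, not mlpack's exact SVDBatchLearning code:

    #include <armadillo>

    // One momentum-accelerated gradient step on a factor matrix W.  The
    // momentum matrix mW persists between calls -- the state that made a
    // const Apply() impossible above.
    void MomentumStep(arma::mat& W, arma::mat& mW, const arma::mat& gradW,
                      const double momentum,  // 0.9 in the paper, 0.2 here
                      const double stepSize)  // e.g. 0.0002 in the paper
    {
      mW = momentum * mW - stepSize * gradW;  // accumulate velocity
      W += mW;                                // take the accelerated step
    }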
< Anand_> Marcus : Did you finalize the design? The current code will also work. We can just call RunMetrics from RunNBCScikit instead of pasting the whole code inline!
< naywhayare> sumedh__: okay, sounds good
< naywhayare> have you tried a momentum of 0.9 on any of the datasets used in the paper?
< naywhayare> Anand_: the metrics you've built work on classifiers, right?
< Anand_> yes
< naywhayare> and you were looking for more mlpack classifiers in your recent email, I think?
< naywhayare> (if I understood right)
< Anand_> Ryan : Actually we want to make the metrics work for as many methods as we can, for the current mlpack methods
< naywhayare> ok, I had some thoughts then
< Anand_> Yeah, go ahead!
< naywhayare> logistic_regression is an out-of-the-box classifier; you might need to make minor revisions to it in the same way you did for NBC
< naywhayare> so that one should be pretty straightforward
< naywhayare> you could use GMMs and HMMs too, but that might be a little more difficult --
< naywhayare> what you can use is GMM::Probability(point) or HMM::Probability(point), which will give you the probability of a point being from a particular GMM
< naywhayare> (or from a particular HMM)
< sumedh__> naywhayare: Yes, MovieLens... with 0.9 they are getting a residue around 0.01... I am getting around 2.3e-6 with 0.2... obviously 0.2 is taking a longer time... but that is comparable to NMF...
< naywhayare> sumedh__: what results do you get when you use a momentum of 0.9?
< naywhayare> Anand_: so you could use labeled data and train two GMMs on two classes, then use GMM::Probability() to figure out which is more likely and use that as a prediction
< Anand_> Ryan : Ok, I will have a look at those methods
< naywhayare> but I don't know if that fits very well into your system. at the very least, logistic_regression would be a good place to start
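The two-GMM trick naywhayare describes, sketched against the GMM interface as it existed around this time; treat the exact constructor and method signatures as assumptions:

    #include <mlpack/methods/gmm/gmm.hpp>

    // Train one GMM per class, then predict the class whose model assigns
    // the point the higher probability.
    size_t ClassifyWithGMMs(const arma::mat& class0Data,
                            const arma::mat& class1Data,
                            const arma::vec& point,
                            const size_t gaussians)
    {
      mlpack::gmm::GMM<> g0(gaussians, class0Data.n_rows);
      mlpack::gmm::GMM<> g1(gaussians, class1Data.n_rows);
      g0.Estimate(class0Data);  // fit the class-0 model
      g1.Estimate(class1Data);  // fit the class-1 model

      return (g1.Probability(point) > g0.Probability(point)) ? 1 : 0;
    }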
< sumedh__> naywhayare: 0.295898 with 0.9
< sumedh__> in 3 sec...
< Anand_> Ok. Currently, what we are doing is making all the metrics work for NBC in all the other libraries too
< jenkins-mlpack> Starting build #1941 for job mlpack - svn checkin test (previous build: FIXED)
< Anand_> Next, I will take logistic_regression then
< naywhayare> Anand_: ok, that seems like a good plan for now
< sumedh__> naywhayare: 5.3e-6 with 0.2... time 14 seconds...
< Anand_> What other methods do you think will fit this with minor changes?
< naywhayare> Anand_: you could use the same kind of tricks for k-nearest-neighbor search (I have been meaning to implement a kNN classifier for ages, maybe I will finally do it), I think NCA, and something similar with collaborative filtering
< naywhayare> although I guess for CF the better measure is RMSE and not one of the classification metrics you've implemented so I don't know if that's entirely applicable
< Anand_> But they should work for K nearest, right?
< naywhayare> Anand_: I think so
< sumedh__> naywhayare: still NMF outperforms SVD: 3.87e-12... time 16 seconds...
< Anand_> And if you implement CF, I will go on with the metric you mentioned!
< Anand_> I will implement RMSE too. Btw, what is it?
< naywhayare> sumedh__: sure, but what I mean is, you've implemented SVD with momentum just like in the paper; shouldn't it give the same results as in the paper?
< naywhayare> root-mean-squared-error; I am not sure exactly how they calculated it for the Netflix Prize
< Anand_> We already have a mean squared error metric
< sumedh__> naywhayare: I guess they used the bigger dataset... 100M...?
< sumedh__> we are using 10M...
< naywhayare> but I think it was something like the root-mean-squared-error of the difference between the predicted ratings and the true ratings
< naywhayare> the MSE metric you've implemented might work in that case, then
< naywhayare> I'd need to look into it a little bit
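RMSE as just described, in a sketch that assumes the predicted and true ratings are aligned vectors:

    #include <cmath>
    #include <armadillo>

    // Root-mean-squared-error of the difference between predicted and true
    // ratings; the existing MSE metric is this value without the sqrt.
    double RMSE(const arma::vec& predicted, const arma::vec& actual)
    {
      const arma::vec diff = predicted - actual;
      return std::sqrt(arma::mean(diff % diff));  // % is elementwise product
    }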
< Anand_> It should work, I guess
< naywhayare> sumedh__: hm, would it be easy for you to re-run the test with the 100M dataset?
govg has quit [Quit: leaving]
< sumedh__> naywhayare: yes, they are using 100M... 1,000,000 users... and they used a 0.0002 learning rate...
< sumedh__> yes sure...
< naywhayare> yeah; I want to make sure we can reproduce their results
< naywhayare> definitely that dataset is too big to use in the boost tests, but if we can at least test it once, that would be good
< naywhayare> then we can use smaller datasets for the boost tests
< sumedh__> naywhayare: sorry, wrong info... they are using 1M and we are using 100K...
< sumedh__> downloading...
< naywhayare> sumedh__: ah, ok
< sumedh__> naywhayare: with learning parameter 0.002 and momentum 0.9... it's diverging in 4 iterations :(
< sumedh__> naywhayare: with learning parameter 0.0000001 and momentum 0.9 ... I am getting 0.38 in 24 seconds...
< sumedh__> without momentum I am getting a residue around 1e-8... but it's taking a longer time...
< jenkins-mlpack> Project mlpack - svn checkin test build #1941: SUCCESS in 32 min: http://big.cc.gt.atl.ga.us:8080/job/mlpack%20-%20svn%20checkin%20test/1941/
< jenkins-mlpack> sumedhghaisas: * Added momentum to SVD batch learning
< jenkins-mlpack> * AMF now calls Initialize on update rule before starting the optimization
< jenkins-mlpack> * Every update rule should now implement Initialize accepting data matrix
< jenkins-mlpack> and rank
< sumedh__> naywhayare: I guess we have to add minResidue again... NMF is not terminating...
Anand_ has quit [Ping timeout: 246 seconds]
< naywhayare> sumedh__: we should be able to reproduce their results for SVD with momentum nearly exactly; are you using the same learning parameter?
< sumedh__> yes... they have used 0.0002 with 0.9 momentum for SVDBatch
< naywhayare> ok, but you gave results for a learning parameter of 0.002
< naywhayare> maybe a typo?
< sumedh__> yeah ... that was a typo...
< naywhayare> ok, so you did use 0.0002 then
< sumedh__> But to be sure I am checking everything again...
< naywhayare> I would look for bugs, then. like I said, we should be able to reproduce their result exactly
< naywhayare> maybe not exactly, but at least close to it
< naywhayare> I have to grab lunch... back later
< sumedh__> okay, these are the iteration residues:
< sumedh__> 0: inf
< sumedh__> 1: 1.24e+60
< sumedh__> 2: inf
< sumedh__> 3: inf
< sumedh__> and over :(
< sumedh__> naywhayare: okay.. I will also check for bugs tonight...
sumedh__ has left #mlpack []
sumedhghaisas has joined #mlpack
< udit_s> naywhayare: I've just added the working tests, the four we talked about.
< udit_s> You could check them out, now; I'll be back after dinner.
oldbeardo has joined #mlpack
< oldbeardo> naywhayare: sorry about disappearing, had stepped out for some work
< oldbeardo> any thoughts about how I should deal with the problem I mentioned?
Anand_ has joined #mlpack
Anand_ has quit [Ping timeout: 246 seconds]
< naywhayare> oldbeardo: see my message at 13:21 http://www.mlpack.org/irc/
< andrewmw94> naywhayare: I'm stuck again. Could you have a look at my latest commit and as usual, uncomment neighbor_search.hpp:16 and then try to compile?
< naywhayare> andrewmw94: it will have to wait for a few hours, I've got some meetings in a few minutes
< naywhayare> I'm sorry for the delay but I will look at it first thing when I get back (4pm EDT)
< naywhayare> (is it EDT now? I think?)
< andrewmw94> I don't really know actually
< andrewmw94> good luck with your meetings though
< oldbeardo> naywhayare: I don't get it, how will that be a generic solution?
Anand_ has joined #mlpack
< jenkins-mlpack> Starting build #1942 for job mlpack - svn checkin test (previous build: SUCCESS)
< oldbeardo> naywhayare: I think I may have found a solution, I will store the basis vectors in the CosineNode objects, but will use a vector of references and a vector of bools (considerBasis)
< oldbeardo> this will get rid of the join_rows() inefficiency
oldbeardo has quit [Quit: Page closed]
< jenkins-mlpack> Project mlpack - svn checkin test build #1942: SUCCESS in 32 min: http://big.cc.gt.atl.ga.us:8080/job/mlpack%20-%20svn%20checkin%20test/1942/
< jenkins-mlpack> andrewmw94: more fixes. Compilation still commented out.
< udit_s> naywhayare: so did you have a chance to look at the code ?
< andrewmw94> he said he has a meeting "I'm sorry for the delay but I will look at it first thing when I get back (4pm EDT)"
< udit_s> Oh. I'm sorry. I just went through the chat history. Thanks, andrew.
< andrewmw94> no problem
sumedhghaisas has quit [Ping timeout: 240 seconds]
Anand_ has quit [Ping timeout: 246 seconds]
sumedhghaisas has joined #mlpack
udit_s has quit [Quit: Leaving]
< naywhayare> andrewmw94: I am looking at the latest revision; I assume you meant to uncomment allknn_main.cpp:275?
< andrewmw94> I don't think it actually matters
< andrewmw94> one moment while I make sure that is the one I think it is
< andrewmw94> yeah, you can leave that as is
< naywhayare> neighbor_search.hpp:16 was already uncommented
< andrewmw94> hmm. Didn't I break the build then? I didn't get a message
< naywhayare> no: 17:26 < jenkins-mlpack> Project mlpack - svn checkin test build #1942: SUCCESS in 32 min: http://big.cc.gt.atl.ga.us:8080/job/mlpack%20-%20svn%20checkin%20test/1942/
< naywhayare> oh! nevermind. I already had it uncommented locally
< naywhayare> trunk has it commented
< naywhayare> anyway, hang on, let's see how that changes things
< andrewmw94> ahh. I was going to say, that's worse because now I can't tell why it works there but not on my computer
< naywhayare> error: ‘RTreeDescentHeuristic’ has not been declared
< naywhayare> that's the same one you're getting, right?
< andrewmw94> wait. Actually I found that one. I had heuristic and hueristic
< andrewmw94> let me recommit
< naywhayare> the line '#define max(a, b) 4*max(a-1, b-1)' is terrifying to me; what's the reason for that?
< andrewmw94> it's currently one with multiple definitions, but I think my include guard should work
< andrewmw94> that line is to break the code to double check that the __MLPACK... is defined
< andrewmw94> it gives an error with another max function that takes one argument
< naywhayare> right, that only happens if I include r_tree_descent_heuristic_impl.hpp before r_tree_descent_heuristic.hpp
< naywhayare> why not just '#error "something is wrong"' ?
< andrewmw94> I didn't know about "#error" I'll use that now
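What the #error suggestion looks like in place of the #define trick; the guard macro name below is a stand-in for the truncated "__MLPACK..." mentioned above:

    // Fail the build with a readable message if the expected include guard
    // has not been defined by the time this file is reached.
    #ifndef __MLPACK_CORE_TREE_RECTANGLE_TREE_R_TREE_DESCENT_HEURISTIC_HPP
      #error "r_tree_descent_heuristic.hpp must be included before its _impl"
    #endif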
< andrewmw94> commit is complete
< naywhayare> ok, sounds good
< jenkins-mlpack> Starting build #1943 for job mlpack - svn checkin test (previous build: SUCCESS)
< andrewmw94> I did the include guards correctly right?
< naywhayare> yeah; but RTreeDescentHeuristic isn't a templated class, so it should be either inlined, or placed in a .cpp file :)
< naywhayare> I'd do inline in this case because EvalNode() is so simple
< naywhayare> adding the inline makes it compile fine... I'll go ahead and commit it
< andrewmw94> yeah, I was going to, and it seems that everything works if I do, but later ones will be more complicated
< andrewmw94> so I need to know how to do those
< naywhayare> if they're quite complex and not templatized, put the implementation of the function in a .cpp file
< andrewmw94> ahh. Ok. Would that fix the error?
< naywhayare> then make sure to add it to the SOURCES variable in the relevant CMakeLists.txt
< naywhayare> yeah, that would fix the error too
< andrewmw94> Now I'm confused. What causes the error?
< naywhayare> in r16684 I just added an inline; that's the other solution
< naywhayare> the error is caused because of multiple compilation units
< andrewmw94> but shouldn't the include guards fix that?
< naywhayare> not across compilation units
< naywhayare> the header guards only apply for one compilation unit
< naywhayare> so having functions declared (but not implemented) across multiple compilation units is okay; that's why you have header files
< naywhayare> but the implementations of functions can't be in multiple compilation units
< naywhayare> otherwise when the compiler puts it all together, it says "I have multiple definitions of this function and no guarantee that they're all the same, so I don't know what to do"
< naywhayare> sorry, to be more clear, not multiple definitions, but multiple implementations
< naywhayare> templated functions and inlined functions are an exception to this rule
< naywhayare> because in both those cases, the implementation has to be available to all compilation units that are using that code
< andrewmw94> so what are the different compilation units?
< naywhayare> allknn_test.cpp, to_string_test.cpp, cf_test.cpp, etc.
< naywhayare> each of those are individually compiled into allknn_test.o, to_string_test.o, cf_test.o, etc.
< naywhayare> which is why in the errors, you see that the multiple definitions stem from allkfn_test.cpp.o, cf_test.cpp.o, etc.
< naywhayare> (sorry, CMake compiled them into *.cpp.o, not just *.o)
< naywhayare> (doesn't make a difference either way)
< andrewmw94> so how does this differ from the other times that I defined functions?
< naywhayare> your other functions are either templatized or inline
< naywhayare> functions that you implement inside the class definition are automatically considered inline
< naywhayare> and if the class itself is templatized, then all of its member functions (except specializations) are templatized and need to be in header files, and the compiler won't have an issue with them having multiple implementations
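The whole exchange in miniature: a non-inline, non-template function defined in a header links fine in one translation unit, but fails with "multiple definition" as soon as two .cpp files include it; marking it inline, or moving the body into a single .cpp file, fixes it. The function name here is hypothetical.

    // eval.hpp -- include guards do NOT help across translation units: if
    // both a.cpp and b.cpp include this header, the linker sees two
    // implementations of EvalNode() and rejects the build...
    //
    //   double EvalNode() { return 4.0; }        // breaks the link step
    //
    // ...unless the function is inline (or templated), in which case every
    // translation unit is allowed to see the implementation:
    inline double EvalNode() { return 4.0; }      // fix 1: mark it inline

    // fix 2: keep only the declaration "double EvalNode();" here and put
    // the body in one .cpp file added to SOURCES in CMakeLists.txt.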
< andrewmw94> so .cpp files and .hpp files are treated differently?
< naywhayare> yes, but by CMake, not the compiler
< naywhayare> you could technically tell the compiler to compile all the cpp files in the same compilation unit (I think) by specifying all the cpp files and hpp files at once, but this doesn't make too much sense and will make compilation slower
< naywhayare> (in general)
< andrewmw94> ahh. I was thinking, I'm nearly certain this goes against what I read in "The C++ Programming Language"
< naywhayare> what I've said is based on my understanding of the compiler, which could be wrong. so if I've said something that directly conflicts, I'd trust that book over my own thoughts
< naywhayare> (also you should tell me if I said something wrong, so I can go update my knowledge base)
< andrewmw94> After quickly thumbing through it trying to find the spot: pgs. 203 and 216, I think. But I think it's for one compile, not something like CMake.
< andrewmw94> I think everything you said is correct as regards CMake, but that's because it works, not because I know
< naywhayare> he uses the word "translation units" which I think is more correct than "compilation units"; either way, (in my world) they mean the same thing
< naywhayare> other than that, I think that what I said is accurate, but I think Stroustrup does a better job explaining
naywhayare has joined #mlpack
< andrewmw94> I like your way of explaining it better actually. From pg 216 it sounds to me like the include guards would work across different translation units
< naywhayare> hah, I'll call Stroustrup up and tell him his book needs revision and to list me as a coauthor. I'm sure it'll work !
< andrewmw94> I believe he gives you a few dollars too
< andrewmw94> though it's not as prestigious as finding an error in TAoCP, of course ;)
< jenkins-mlpack> Project mlpack - svn checkin test build #1943: SUCCESS in 32 min: http://big.cc.gt.atl.ga.us:8080/job/mlpack%20-%20svn%20checkin%20test/1943/
< jenkins-mlpack> andrewmw94: more of the same.
< naywhayare> oh, he actually accepts patches? neat
< naywhayare> andrewmw94: in looking through your code, I noticed lots of Java-like idioms that I think won't work at runtime in C++. if you want me to look through these at some point and provide some feedback, I can do that
< naywhayare> but otherwise I'll wait until you say it's finished
< naywhayare> so that I don't comment on things you already know about and have made a mental note to redo
sumedhghaisas has quit [Ping timeout: 272 seconds]