naywhayare changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
andrewmw94 has quit [Ping timeout: 256 seconds]
andrewmw94 has joined #mlpack
andrewmw94 has quit [Ping timeout: 260 seconds]
sumedh_ has joined #mlpack
Anand has joined #mlpack
Anand has quit [Ping timeout: 246 seconds]
naywhaya1e has joined #mlpack
marcus_z1q has joined #mlpack
marcus_zoq has quit [Ping timeout: 240 seconds]
naywhayare has quit [Ping timeout: 240 seconds]
marcus_z1q is now known as marcus_zoq
sumedh__ has joined #mlpack
sumedh_ has quit [Ping timeout: 264 seconds]
Anand has joined #mlpack
Anand has quit [Quit: Page closed]
Anand has joined #mlpack
Anand has quit [Quit: Page closed]
Anand has joined #mlpack
< Anand> Marcus : The bootstrapping will run for the methods and libraries in the yaml file, right? So, we will run a config file with all libraries and methods integrated with metrics
< marcus_zoq> Anand: Right!
< Anand> Marcus : Ok.
< Anand> I will ask for your help while doing the printing stuff! :)
< Anand> As you will see, I am doing the bootstrapping thing 100 times as of now (for each method) and then normalizing the metrics to create the final metrics. Is 100 good enough? We might want to increase it.
< marcus_zoq> Anand: We can give the user the option to choose another number. Rich Caruana also uses 1000 iterations.
< marcus_zoq> Anand: Ah, sorry, I read 1000.
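(A minimal sketch of the bootstrap scheme described above, assuming a caller-supplied metric function; all names here are illustrative, not the actual benchmark code. Resample with replacement, recompute the metric each time, and average over the iterations.)

    #include <cstddef>
    #include <random>
    #include <vector>

    double BootstrapMetric(const std::vector<double>& truth,
                           const std::vector<double>& pred,
                           double (*metric)(const std::vector<double>&,
                                            const std::vector<double>&),
                           const std::size_t iterations = 100)
    {
      std::mt19937 rng(std::random_device{}());
      std::uniform_int_distribution<std::size_t> pick(0, truth.size() - 1);

      double total = 0.0;
      for (std::size_t i = 0; i < iterations; ++i)
      {
        // Draw a bootstrap sample: same size as the data, with replacement.
        std::vector<double> t(truth.size()), p(pred.size());
        for (std::size_t j = 0; j < truth.size(); ++j)
        {
          const std::size_t k = pick(rng);
          t[j] = truth[k];
          p[j] = pred[k];
        }
        total += metric(t, p);
      }

      // Normalize over the number of bootstrap iterations (100 by default,
      // as in the discussion; user-configurable as marcus_zoq suggests).
      return total / double(iterations);
    }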
Anand has quit [Ping timeout: 246 seconds]
andrewmw94 has joined #mlpack
udit_s has joined #mlpack
< udit_s> naywhayare: Hey.
< udit_s> naywhayare: I was looking at tests for adaboost...
< naywhayare> udit_s: hey there; did you find any good ideas for tests?
< udit_s> naywhayare: well, it's all mathematics; apart from the one which we talked about earlier and another one which tests how the final hypothesis performs, I'm finding it quite tricky to come up with more.
< naywhayare> can you remind me of the one we talked about earlier?
< udit_s> we only weigh those points which we want the perceptron to focus on.
< naywhayare> yeah -- that was just a test for the weighted training for the perceptron, not adaboost itself
< udit_s> okay.
andrewmw94 has joined #mlpack
< udit_s> I'm testing it out with a sample dataset I found.
< naywhayare> okay; I would have thought it would be easy enough to generate a dataset with the required properties
< naywhayare> but if you found one, that's fine too
< udit_s> I was talking about a general dataset for adaboost, not one for the weighted perceptron.
< naywhayare> ah, okay
< jenkins-mlpack> Starting build #2025 for job mlpack - svn checkin test (previous build: SUCCESS)
< jenkins-mlpack> Ryan Curtin: Actually calculate score and base case for first node combination.
< naywhayare> sumedh__: yeah, I am here
< sumedh__> there is no default constructor for sp_mat::const_iterator....
< naywhayare> use sp_mat::begin()
< naywhayare> or begin_col(), begin_row(); does that help?
< sumedh__> the iterator is a member of a class...
< sumedh__> but it is assigned a value in the Initialize function...
< sumedh__> so I need to create an empty iterator...
< sumedh__> how can I call begin() without an object??
< naywhayare> sp_mat::const_iterator it = X.begin()
< naywhayare> or maybe I am not understanding your question?
< sumedh__> here X is provided in the Initialize function of the termination policy...
< naywhayare> yeah, X is just the sparse matrix you want an iterator for
< sumedh__> but the iterator will be constructed in the constructor...
< sumedh__> I don't have X then...
< naywhayare> I don't understand why you need an empty iterator
< sumedh__> Okay, let me explain...
< naywhayare> okay
< sumedh__> Class SVDCompleteIncrementalLearning contains a member arma::sp_mat::const_iterator it...
< sumedh__> but it is to be assigned a value in the Initialize function...
< sumedh__> and not the constructor...
< naywhayare> why are you keeping the iterator as a member of the class, and not just creating one when you need it?
< sumedh__> so the default constructor will be invoked on 'it'...
< sumedh__> complete incremental learning iterates over all the non-zero entries one by one...
< sumedh__> so storing the iterator helps keep track...
< naywhayare> I see what you mean
< naywhayare> let me think for a few moments
< sumedh__> WUpdate and HUpdate are called on each entry...
< sumedh__> ohh okay...
< sumedh__> the last option would be to create a pointer...
< sumedh__> and I was wondering, why not provide a default constructor?? all the STL container iterators provide one...
Anand has quit [Ping timeout: 246 seconds]
< naywhayare> sumedh__: there can't be a default constructor because the iterator must have a reference to the sparse matrix it is iterating over
< naywhayare> I think using a pointer to the const_iterator is the best thing to do here, like you suggested
< naywhayare> I thought about a couple other ideas but they are all basically the same thing, and holding a pointer is the simplest
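(The pointer-to-iterator workaround being agreed on, as a hedged sketch; the method bodies are illustrative, not the actual mlpack code.)

    #include <armadillo>

    class SVDCompleteIncrementalLearning
    {
     public:
      SVDCompleteIncrementalLearning() : it(NULL) { }
      ~SVDCompleteIncrementalLearning() { delete it; }

      void Initialize(const arma::sp_mat& X)
      {
        // The iterator can't be default-constructed, so allocate it here,
        // once the matrix is known.
        delete it; // Safe if it is NULL; allows re-initialization.
        it = new arma::sp_mat::const_iterator(X.begin());
      }

      void Step()
      {
        // Visit the next non-zero entry; (*it).row() and (*it).col()
        // give the position of the current entry.
        ++(*it);
      }

     private:
      // Held by pointer because arma::sp_mat::const_iterator has no
      // default constructor (at the time of this discussion).
      arma::sp_mat::const_iterator* it;
    };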
< sumedh__> naywhayare: then I assume STL iterators must be storing pointers to the object... why can't sparse matrix iterators do the same??
< sumedh__> any special reason??
< naywhayare> I don't see where STL iterators have a default constructor
< naywhayare> I thought you had to call vector::begin() or vector::end()
< sumedh__> no, they do...
< sumedh__> wait, let's check again...
< sumedh__> yes, they do...
< sumedh__> just std::list<size_t>::iterator it;
< sumedh__> is valid...
< sumedh__> I don't know how it is implemented...
< sumedh__> maybe like normal pointers... maybe pointing to a random entry...
< naywhayare> yeah, it probably holds a pointer to the list, and is NULL by default
< naywhayare> yeah, STL iterators are supposed to be default constructible
< naywhayare> alright... so we should patch the Armadillo sparse iterator type, then, I think, so that it's default constructible (or, has a default constructor)
< sumedh__> great... can I do it??
< naywhayare> sure, if you like. we should replace the SpMat<eT>& in iterator_base with SpMat<eT>*
< naywhayare> and then update the class
< naywhayare> if you send me the patch when you've made it, I can test it with all of the sparse matrix tests I have
< naywhayare> then we can send it to Conrad
< naywhayare> we just need to be sure that it doesn't change any of the public API (but it shouldn't)
< sumedh__> sure... can you guide me through the files though?? like which files to look at...
< naywhayare> SpMat_iterators_bones.hpp and SpMat_iterators_meat.hpp
< naywhayare> those should be the only ones you need to change
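(The shape of that change, heavily simplified; the members approximate the real SpMat_iterators_bones.hpp, but this is a sketch, not a verbatim patch.)

    #include <armadillo>  // for arma::SpMat and arma::uword

    template<typename eT>
    class iterator_base_sketch
    {
     public:
      // Now possible: a default constructor leaving the iterator in a
      // singular (unusable until assigned) state.
      iterator_base_sketch() : M(NULL), internal_col(0), internal_pos(0) { }

      iterator_base_sketch(const arma::SpMat<eT>& in_M)
        : M(&in_M), internal_col(0), internal_pos(0) { }

     protected:
      // Was effectively: const arma::SpMat<eT>& M; a reference member
      // rules out both default construction and assignment.
      const arma::SpMat<eT>* M;
      arma::uword internal_col;
      arma::uword internal_pos;
    };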
< sumedh__> naywhayare: and there is another thing that's bugging me... I think there should be row() and col() functions for normal matrix iterators...
< naywhayare> well, but the normal matrix iterator is just eT*
< naywhayare> let me send an email to Conrad about the normal matrix iterators and we'll see what he thinks
< sumedh__> ohh okay... but the policy should be consistent... don't you think??
< sumedh__> that way designing an abstraction is easier...
< naywhayare> I agree, but we have to see what Conrad thinks... the final call is his :)
< sumedh__> do you think it will have any effect on speed??
< naywhayare> not sure
< sumedh__> by the way... meat and bones... I like that :)
< naywhayare> :)
< naywhayare> changing the reference to a pointer in the sparse iterators should have no speed difference
< naywhayare> but I don't know how providing row() and col() for dense matrices will work
< naywhayare> ok, sent an email... let's see what he says
< sumedh__> okay... I am reading those files right now... looks like pretty serious code :)
< naywhayare> you won't need to make any changes to the algorithms implemented, just change a reference to a pointer and provide a default constructor
< naywhayare> it will probably take a while to figure out what the right things to change are, but, when you're done, it shouldn't be too complex :)
< naywhayare> there shouldn't be any template weirdness in the sparse iterator code, either, so you shouldn't need to worry about any metaprogramming or other odd stuff
< sumedh__> yeah... right now I am trying to figure out what to change... but I am having fun :)
< naywhayare> :)
< sumedh__> naywhayare: okay, done... :)
< sumedh__> pretty simple fix...
< naywhayare> ok, go ahead and send it to me and I'll test it
< naywhayare> you should still use a pointer to an iterator in your code, though
< naywhayare> because we can't guarantee that the version of Armadillo will have default-constructible sparse iterators
< naywhayare> so, you should also open a ticket in Trac mentioning that the pointer-to-an-iterator can be replaced with just an iterator when the minimum version of Armadillo has default-constructible sparse iterators
< sumedh__> naywhayare: sent :)
< naywhayare> thanks
< sumedh__> yeah... you are right... there will be lots of new and delete calls...
< sumedh__> :(
< naywhayare> sumedh__: what does an STL iterator do if I create one without an object?
< naywhayare> i.e. if I do 'std::vector<double>::iterator it;'
< naywhayare> then it++ or *it or something like that, what happens?
< naywhayare> segfault?
< sumedh__> yes...
< sumedh__> naywhayare: it generates a segfault...
< sumedh__> do you want to add checking?? I think it's not necessary...
< naywhayare> no, I don't think we should check
< naywhayare> it's very slow
< naywhayare> especially with how often the iterator functions are called
< naywhayare> I just wanted to make sure the behavior is the same as invalid STL iterators
< sumedh__> yes... exactly...
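(For reference, the case being discussed: advancing or dereferencing a default-constructed, "singular", iterator is undefined behavior, which in practice usually means a crash; hence no validity check in the hot iterator path.)

    #include <vector>

    int main()
    {
      std::vector<double>::iterator it; // singular: bound to no container
      // ++it;  // undefined behavior
      // *it;   // undefined behavior; typically a segfault
      return 0;
    }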
< sumedh__> naywhayare: complete incremental learning is working :)
< sumedh__> committing the code now... will write tests tomorrow... will get some sleep now...
< naywhayare> great, good to hear that
< sumedh__> any problem with the default constructor implementation??
< naywhayare> you missed some changes from M. to M->
< naywhayare> but I'm working them out
< naywhayare> it's templated code, so the compiler won't try to compile a function unless it is used
< naywhayare> so my guess is that you didn't compile code that used every possible function
< naywhayare> but I have a test suite that does exactly that, so it's not a problem
< sumedh__> ohh... I didn't get any compiler errors...
< sumedh__> ohh...
< naywhayare> if you are interested, it's in an svn repo hosted at svn://svn.igglybob.com/arma-sparse/
< naywhayare> but I'll work these out, it's not a problem
< sumedh__> missed that... forgot that it's templated code...
< sumedh__> sorry for that...
< naywhayare> no problem :)
< naywhayare> get some sleep :)
< jenkins-mlpack> Starting build #2026 for job mlpack - svn checkin test (previous build: SUCCESS)
sumedh__ has quit [Quit: Leaving]
< udit_s> naywhayare: hey.
< naywhayare> udit_s: hey there
< udit_s> I'm rewriting the adaboost algorithm from scratch now.
< udit_s> Some problems with the last implementation.
< naywhayare> okay; do you need my help with any of it?
< udit_s> Not exactly. Just some reassurances. :)
< udit_s> I've figured out a good way to code AdaBoost.M1
< udit_s> what I will implement is AdaBoost.MH
< udit_s> it boosts based on the Hamming loss function. Thing is,
< udit_s> while implementing it, for each iteration there are two or three matrices of the order of (instances * number of classes) whose elements need to be updated.
< udit_s> That is where I have my doubts.
< naywhayare> okay. for now, implement it in the best way you can see to implement it
< naywhayare> and we can see if there are little tricks later
< naywhayare> does that sound reasonable?
< udit_s> Yeah, okay. Just a little hesitant, that's all.
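(For concreteness, the per-iteration update over those (instances * number of classes) matrices in the Schapire and Singer formulation of AdaBoost.MH; a sketch with illustrative names, using Armadillo, not the implementation being discussed.)

    #include <armadillo>

    // D(i, l): distribution over (instance, label) pairs.
    // Y(i, l): +1 if l is the correct label of instance i, -1 otherwise.
    // H(i, l): the weak hypothesis output for instance i and label l.
    void UpdateDistribution(arma::mat& D,
                            const arma::mat& Y,
                            const arma::mat& H,
                            const double alpha)
    {
      // D_{t+1}(i, l) = D_t(i, l) * exp(-alpha * Y(i, l) * H(i, l)) / Z_t.
      D %= arma::exp(-alpha * (Y % H));
      D /= arma::accu(D); // Z_t: renormalize so the entries sum to one.
    }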
< udit_s> Btw,
< udit_s> any leads on a weighted decision stumps implementation?
< naywhayare> almost certainly we will have to make one up ourselves, unless you can find a paper that used decision stumps for adaboost
< naywhayare> and then maybe we can figure out what they did
< udit_s> Yeah. I've been going through quite a few. I have yet to read any which talk about something similar.