verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
Alvis__ has joined #mlpack
Alvis__ has quit [Ping timeout: 246 seconds]
ironstark has quit [Ping timeout: 240 seconds]
yannis has joined #mlpack
mentekid has quit [Read error: Connection reset by peer]
yannis is now known as Guest6280
Alvis__ has joined #mlpack
bharath has joined #mlpack
bharath has quit [Remote host closed the connection]
sumedhghaisas has joined #mlpack
trapz has quit [Quit: trapz]
chenzhe has quit [Ping timeout: 246 seconds]
trapz has joined #mlpack
chenzhe has joined #mlpack
chenzhe has quit [Ping timeout: 246 seconds]
mikeling has joined #mlpack
trapz has quit [Quit: trapz]
chenzhe has joined #mlpack
chenzhe has quit [Ping timeout: 246 seconds]
chenzhe has joined #mlpack
chenzhe has quit [Ping timeout: 246 seconds]
kris has quit [Ping timeout: 258 seconds]
trapz has joined #mlpack
chenzhe has joined #mlpack
chenzhe has quit [Ping timeout: 246 seconds]
bharath has joined #mlpack
bharath has quit [Remote host closed the connection]
bharath has joined #mlpack
bharath has quit [Ping timeout: 260 seconds]
chenzhe has joined #mlpack
chenzhe has quit [Ping timeout: 246 seconds]
chenzhe has joined #mlpack
trapz has quit [Quit: trapz]
chenzhe has quit [Ping timeout: 246 seconds]
sumedhghaisas has quit [Ping timeout: 260 seconds]
bharath has joined #mlpack
bharath has quit [Remote host closed the connection]
bharath has joined #mlpack
bharath_ has joined #mlpack
bharath has quit [Ping timeout: 260 seconds]
bharath has joined #mlpack
bharath_ has quit [Ping timeout: 258 seconds]
diehumblex has quit [Quit: Connection closed for inactivity]
vinayakvivek has joined #mlpack
bharath has quit [Remote host closed the connection]
bharath has joined #mlpack
bharath has quit [Ping timeout: 240 seconds]
Alvis_ has joined #mlpack
Alvis_ has quit [Remote host closed the connection]
Alvis_ has joined #mlpack
Alvis__ has quit [Ping timeout: 246 seconds]
Alvis__ has joined #mlpack
Alvis_ has quit [Ping timeout: 258 seconds]
Pawan has joined #mlpack
< Pawan>
Is Marcus Edel online?
< Pawan>
I wanted to ask questions about the deep learning module GSoC project
< Pawan>
I hope to turn in the application
Pawan has quit [Ping timeout: 260 seconds]
Pawam has joined #mlpack
< Pawam>
Is proficiency in deep learning a must to apply to any of the projects?
Pawam has quit [Ping timeout: 260 seconds]
ironstark has joined #mlpack
Alvis has joined #mlpack
Alvis__ has quit [Ping timeout: 246 seconds]
Alvis has quit [Ping timeout: 260 seconds]
Alvis has joined #mlpack
pawan_sasanka has joined #mlpack
pawan_sasanka_ has joined #mlpack
pawan_sasanka has quit [Ping timeout: 246 seconds]
pawan_sasanka_ is now known as pawan_sasanka
< pawan_sasanka>
can i find Marcus Edel here?
ironstark has quit [Ping timeout: 256 seconds]
shikhar has joined #mlpack
ironstark has joined #mlpack
pawan_sasanka has quit [Ping timeout: 246 seconds]
Alvis has quit [Ping timeout: 246 seconds]
pawan_sasanka has joined #mlpack
pawan_sasanka has quit [Ping timeout: 268 seconds]
pawan_sasanka has joined #mlpack
pawan_sasanka_ has joined #mlpack
pawan_sasanka has quit [Ping timeout: 268 seconds]
pawan_sasanka_ is now known as pawan_sasanka
< zoq>
pawan_sasanka_: Hello, some proficiency in this area is definitely helpful; unfortunately, we don't have the time over the summer to start from scratch.
ironstark has quit [Ping timeout: 240 seconds]
pawan_sasanka_ has joined #mlpack
pawan_sasanka__ has joined #mlpack
pawan_sasanka has quit [Ping timeout: 268 seconds]
pawan_sasanka__ is now known as pawan_sasanka
pawan_sasanka_ has quit [Ping timeout: 268 seconds]
< pawan_sasanka>
zoq: isn't there May too?
< cult->
what's a good example of serialization in mlpack? i want to convert the model to a binary file, save it, then load it again. is SerializeObjectAll() useful?
< pawan_sasanka>
i could spend the time reading? i'm willing to dive into the literature
trapz has joined #mlpack
pawan_sasanka_ has joined #mlpack
pawan_sasanka has quit [Ping timeout: 268 seconds]
pawan_sasanka_ is now known as pawan_sasanka
< cult->
yeah, i guess if i want to save it to a database then i have to use Boost serialization directly?
shikhar has quit [Ping timeout: 260 seconds]
trapz has quit [Remote host closed the connection]
pawan_sasanka_ has joined #mlpack
pawan_sasanka has quit [Ping timeout: 260 seconds]
pawan_sasanka_ is now known as pawan_sasanka
pawan_sasanka_ has joined #mlpack
pawan_sasanka has quit [Ping timeout: 260 seconds]
pawan_sasanka_ is now known as pawan_sasanka
pawan_sasanka_ has joined #mlpack
pawan_sasanka has quit [Ping timeout: 260 seconds]
pawan_sasanka_ is now known as pawan_sasanka
pawan_sasanka has quit [Ping timeout: 260 seconds]
pawan_sasanka has joined #mlpack
pawan_sasanka has quit [Ping timeout: 256 seconds]
vss has joined #mlpack
jatin has joined #mlpack
Renjie has joined #mlpack
Renjie has quit [Ping timeout: 260 seconds]
bharath has joined #mlpack
jatin has quit [Quit: Page closed]
bharath has quit [Remote host closed the connection]
bharath has joined #mlpack
Trion has joined #mlpack
ironstark has joined #mlpack
vss has quit [Ping timeout: 260 seconds]
naxalpha has joined #mlpack
pawan_sasanka has joined #mlpack
ironstark has quit [Ping timeout: 264 seconds]
bharath has quit [Ping timeout: 240 seconds]
pawan_sasanka_ has joined #mlpack
pawan_sasanka has quit [Ping timeout: 240 seconds]
pawan_sasanka_ is now known as pawan_sasanka
ironstark has joined #mlpack
pawan_sasanka_ has joined #mlpack
shikhar_ has joined #mlpack
pawan_sasanka has quit [Ping timeout: 240 seconds]
pawan_sasanka_ is now known as pawan_sasanka
naxalpha_ has joined #mlpack
naxalpha has quit [Ping timeout: 260 seconds]
sagarbhathwar has joined #mlpack
naxalpha_ has quit [Quit: Page closed]
naxalpha has joined #mlpack
pawan_sasanka_ has joined #mlpack
pawan_sasanka has quit [Ping timeout: 256 seconds]
pawan_sasanka_ is now known as pawan_sasanka
vss has joined #mlpack
Alvis has joined #mlpack
nu11p7r has quit [Quit: WeeChat 1.4]
bharath has joined #mlpack
naxalpha has quit [Ping timeout: 260 seconds]
Alvis has quit [Ping timeout: 256 seconds]
ironstark has quit [Ping timeout: 268 seconds]
pawan_sasanka_ has joined #mlpack
pawan_sasanka has quit [Ping timeout: 256 seconds]
pawan_sasanka_ is now known as pawan_sasanka
naxalpha has joined #mlpack
bharath has quit [Ping timeout: 240 seconds]
sumedhghaisas has joined #mlpack
Alvis has joined #mlpack
shikhar_ has quit [Quit: Page closed]
mohtamohit has joined #mlpack
Alvis has quit [Ping timeout: 240 seconds]
mohtamohit has quit [Ping timeout: 260 seconds]
ironstark has joined #mlpack
sagarbhathwar has quit [Ping timeout: 260 seconds]
ironstark has quit [Ping timeout: 240 seconds]
pawan_sasanka_ has joined #mlpack
pawan_sasanka has quit [Ping timeout: 256 seconds]
pawan_sasanka_ is now known as pawan_sasanka
ironstark has joined #mlpack
< Trion>
zoq: The connection between the server and the agent breaks while the agent is optimizing, and the agent dies after the 10-second timeout
< pawan_sasanka>
marcus?
bharath has joined #mlpack
< Trion>
:P Never mind, found the knob to increase the timeout in the server. Fixed!
kris has joined #mlpack
< kris>
Hey, can somebody tell me how the parameters are updated in the SGD optimizer? i know we do iterate -= stepSize * gradient, but what if we want to update the weights, biases, everything; how do we do that?
< kris>
Also, for all the parameters in the FFN we just have arma::mat parameters, rather than having arma::mat weights and arma::mat bias
< kris>
since the gradient w.r.t. different parameters would be different, even for the same function
< pawan_sasanka>
can someone tell me more about the deep learning module project?
Alvis has joined #mlpack
Trion has quit [Quit: Have to go, see ya!]
< rcurtin>
pawan_sasanka: have you looked at the mailing list archives?
< pawan_sasanka>
could you provide me a link rcurtin ?
< rcurtin>
I am on a phone so I can't do that easily
< rcurtin>
look through mlpack.org
< rcurtin>
kris: Optimize() is called with the weights and biases matrix as the iterate, if I am remembering correctly
< rcurtin>
so that is how those are updated
< kris>
but the FFN module doesn't store the weights and biases separately; i think everything is stored in the parameter matrix
< kris>
i would have to double check that though
< rcurtin>
yes so they are the same matrix so the optimizer can update both simultaneously
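
A minimal sketch of the update being described, using only Armadillo; the matrix sizes and step size below are made up for illustration. Because the FFN keeps all weights and biases flattened into a single parameter matrix, one SGD step updates everything at once:

    #include <armadillo>

    int main()
    {
      // All weights and biases of the network, flattened into one matrix
      // (the same idea as the FFN's single parameter matrix).
      arma::mat parameters(100, 1, arma::fill::randu);

      // Gradient of the loss w.r.t. every parameter, same shape.
      arma::mat gradient(100, 1, arma::fill::randu);

      const double stepSize = 0.01;

      // One SGD step: weights and biases are updated simultaneously.
      parameters -= stepSize * gradient;

      return 0;
    }
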
bharath_ has joined #mlpack
Grant has joined #mlpack
Grant is now known as Guest46503
bharath has quit [Ping timeout: 256 seconds]
naxalpha has quit [Quit: Page closed]
Guest46503 is now known as gwilli
naxalpha has joined #mlpack
< cult->
if i execute Train() on the same AdaBoost object repeatedly, does the model "adapt" each time, OR is it a completely new model each time, regardless of the object's persistence?
< cult->
and, what's the difference between AdaBoost<> and AdaBoostModel? which one should i use?
< cult->
(reading the unit test first, later the example program)
< kris>
rcurtin: but the gradient would be different for both, right? so function.Gradient(iterate, visitationOrder, gradient) computes the gradient w.r.t. the function; here the function, as i understand it, would be the layer, and iterate would be the parameters. just a little bit confusing
< cult->
i am thinking about incremental learning in my previous questions
< rcurtin>
kris: you should investigate the code closely to learn more
< rcurtin>
cult-: I'll answer once I am back at a computer
bharath_ has quit [Remote host closed the connection]
< cult->
rcurtin: thank you, i was just reading the code and it says i can train perceptrons multiple times and AdaBoost will overwrite them completely. i am a little bit confused.
bharath has joined #mlpack
bharath has quit [Ping timeout: 256 seconds]
sheogorath27 has quit [Remote host closed the connection]
ironstark has quit [Ping timeout: 260 seconds]
ironstark has joined #mlpack
Alvis has quit [Ping timeout: 246 seconds]
ironstark has quit [Ping timeout: 240 seconds]
Alvis has joined #mlpack
vss has quit [Ping timeout: 260 seconds]
< gwilli>
Hey everyone, I've been really wanting to help contribute to mlpack for a while, and was planning on applying through GSoC. Is there anyone I could talk to about a potential project idea for more parallel optimizers? It looks like Ryan Curtin might be who I'm looking for?
ironstark has joined #mlpack
vss has joined #mlpack
naxalpha has quit [Ping timeout: 260 seconds]
Alvis has quit [Ping timeout: 246 seconds]
< rcurtin>
cult-: if you call Train() on an AdaBoost object, it will be a new model
< rcurtin>
the AdaBoost<> class is what you should use if you are writing C++ (or, at least, it's what I'd pick); the AdaBoostModel class only exists for the command-line program, so that models that use perceptrons or decision stumps as weak learners can be serialized
< rcurtin>
gwilli: glad you are interested in GSoC; I can try and answer questions you might have, but it might be worthwhile for you to first look over the mailing list and see if the same question has already been asked
< rcurtin>
ah, that's a long URL, but that's a decent attempt at trying to search the list archives :)
< cult->
rcurtin: i am actually saving the serialized model and reloading it. i am not sure what i should use if i want to only update the model with each new training.
< cult->
what should i use = which algorithm would be best
< gwilli>
rcurtin: i did use the search function from the GSoC page on the mlpack site.
< gwilli>
i'll make sure to read through as many of these as i can find though!
< rcurtin>
cult-: if you're not going to change the type of weak learner, then I'd say just serialize the AdaBoost<> model directly
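
A rough sketch of serializing an AdaBoost<> model directly, as suggested above. It assumes the mlpack 2.x data::Save()/data::Load() API and a default-constructible AdaBoost<>; the file name and archive object name are hypothetical, and the training call is elided because the exact Train() signature depends on the mlpack version:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/adaboost/adaboost.hpp>

    using namespace mlpack;

    int main()
    {
      adaboost::AdaBoost<> model;
      // ... train 'model' here via AdaBoost<>::Train() ...

      // Serialize the whole model to a binary file; "adaboost" is the name
      // of the object inside the archive.
      data::Save("adaboost_model.bin", "adaboost", model);

      // Later (possibly in another process), reload it before classifying.
      adaboost::AdaBoost<> loadedModel;
      data::Load("adaboost_model.bin", "adaboost", loadedModel);

      return 0;
    }
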
< rcurtin>
gwilli: great, you can also feel free to send an email to the list and I can answer it when I have a chance (but please realize that things are very busy for me right now as a result of GSoC... :))
Alvis has joined #mlpack
< cult->
correct, but i meant: if i want classification and i want to train the model each time to better understand recent data while retaining memory (such as in the case of perceptrons), which algorithm should i use, and is this called incremental learning?
< gwilli>
rcurtin: awesome, i'll do a formal write up of some possible ideas for a project. And I totally understand that things are busy, and thank you for taking the time to help!
< cult->
i don't want the model to be completely new after Train(); i want it to only improve, not destroy the previous findings.
sheogorath27 has joined #mlpack
< rcurtin>
cult-: yeah, I agree, that would be called incremental learning
< rcurtin>
but unfortunately the current AdaBoost implementation doesn't support that
< cult->
or would that be the correct route: update the weak learner incrementally, serialize that, and after loading the model back in, run AdaBoost on it?
< rcurtin>
the only incremental learning algorithm that we have is Hoeffding trees
< rcurtin>
another option is, if you retain the entire training set from the past, you could train an AdaBoost model on that full dataset plus the new points each time you train
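
A tiny sketch of that "retrain on the full dataset plus the new points" option, using only Armadillo; the variable names and sizes are made up, and the actual AdaBoost<>::Train() call is left as a comment since its signature varies across mlpack versions:

    #include <armadillo>

    int main()
    {
      // Previously seen training data (10 dimensions) and labels.
      arma::mat oldData(10, 1000, arma::fill::randu);
      arma::Row<size_t> oldLabels(1000, arma::fill::zeros);

      // Newly arrived points and labels.
      arma::mat newData(10, 50, arma::fill::randu);
      arma::Row<size_t> newLabels(50, arma::fill::ones);

      // Concatenate old and new, then retrain AdaBoost<> from scratch on it.
      arma::mat allData = arma::join_rows(oldData, newData);
      arma::Row<size_t> allLabels = arma::join_rows(oldLabels, newLabels);

      // model.Train(allData, allLabels, /* weak learner, iterations, ... */);

      return 0;
    }
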
< cult->
that would take really long; what about what i just said above?
< cult->
incremental learning is supported in the perceptron, which is a weak learner for AdaBoost
< rcurtin>
the weak learner in AdaBoost is only used for its parameters (like maxIterations and other parameters)
< rcurtin>
unfortunately adaboost is not an incremental learning algorithm, so it is not clear how to extend it that way
< cult->
Alternate constructor which copies parameters from an already initiated perceptron.
< cult->
and on Train():
< cult->
This training does not reset the model weights, so you can call Train() on multiple datasets sequentially.
< rcurtin>
yes, but that is not how AdaBoost works
< rcurtin>
the AdaBoost algorithm is not an incremental learning algorithm
< cult->
can i just use the perceptron alone, or is it going to be very poor without boosting?
< rcurtin>
you could use perceptrons alone to solve the problem but the performance will probably not be good
< rcurtin>
yeah, exactly, the performance is not likely to be any good
< cult->
i see
< cult->
well, thanks a lot
< rcurtin>
sure, I'm sorry the answer is not "it's easy!" :(
< rcurtin>
if there is an implementation somewhere of 'online boosting', that may be an easier route to a solution
< rcurtin>
Hoeffding trees will outperform perceptrons, but they may not give as good performance as AdaBoost
< rcurtin>
it might be worth a shot though
< rcurtin>
because those definitely support incremental training
< rcurtin>
(under the assumption that your training data comes from a stationary distribution)
< cult->
np, i will take a look at Hoeffding trees
< cult->
however, hmm incremental learning would have been the best
< rcurtin>
I think that the HMM implementation in mlpack does support incremental learning... I think
< rcurtin>
"Train() can be called multiple times with different sequences; each time it
< rcurtin>
is called, it uses the current parameters of the HMM as a starting point
< rcurtin>
for training."
< rcurtin>
so that's *sort of* incremental learning, but not completely
< cult->
thats awesome
< rcurtin>
basically it would be a fully new HMM model, but the initial parameters of the HMM at the start of training would be what the previous model was
< rcurtin>
so if you were going to do this, I think you would at least need to make sure that the new training sequences you were calling Train() with were large (like the same order of magnitude of points as the original training set)
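
A hedged sketch of the warm-start behaviour quoted above, assuming the mlpack 2.x hmm::HMM API with GaussianDistribution emissions; the state count, dimensionality, and data are made up:

    #include <vector>

    #include <mlpack/core.hpp>
    #include <mlpack/methods/hmm/hmm.hpp>

    using namespace mlpack;
    using namespace mlpack::hmm;
    using namespace mlpack::distribution;

    int main()
    {
      // 4-state HMM with 2-dimensional Gaussian emissions.
      HMM<GaussianDistribution> model(4, GaussianDistribution(2));

      // Initial (unsupervised) training on some observation sequences.
      std::vector<arma::mat> oldSequences = { arma::mat(2, 500, arma::fill::randu) };
      model.Train(oldSequences);

      // Later: Train() again on new sequences; the current parameters are the
      // starting point, but the result is effectively a newly fitted model.
      std::vector<arma::mat> newSequences = { arma::mat(2, 500, arma::fill::randu) };
      model.Train(newSequences);

      return 0;
    }
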
< rcurtin>
you can do this from the command-line too, something like
< cult->
i only do it via the headers
pawan_sasanka has quit [Ping timeout: 256 seconds]
< cult->
do you know, approximately, whether the model won't forget the previous weights, etc.?
< rcurtin>
this depends on the data distribution... this is why I say it's not "really" incremental learning
< rcurtin>
basically you are just using the old model as a starting point for the new optimization
< rcurtin>
if the new data you are training on has a very different distribution, then you'll end up with a very different model
< cult->
:(
< cult->
i don't want to use anything other than mlpack
< cult->
:)
< rcurtin>
heh
< rcurtin>
another option is to implement incremental training in HMMs :)
< rcurtin>
but that might be a bit more work
< rcurtin>
it may be worth trying to train a Hoeffding tree and see how it performs, because that actually is incremental learning
< rcurtin>
the Hoeffding tree is meant for the streaming data setting
< cult->
alright, i will learn more about it
< cult->
thanks a lot rcurtin!
< rcurtin>
sure, hope it helps you solve your problem in the end :)
vss has quit [Ping timeout: 260 seconds]
< cult->
i think it will
vss has joined #mlpack
< vss>
rcurtin: i designed an approximation algorithm for the k-center problem based on neighbourhood search. i have shared the link; can you take a look at it? would help a lot :)
gwilli has quit [Quit: Leaving]
vss has quit [Ping timeout: 260 seconds]
< cult->
should i use the Hoeffding tree in real time?! or is it fine to use it in batch, once a day for example?
mikeling has quit [Quit: Connection closed for inactivity]
< rcurtin>
cult-: you could use it real-time
< rcurtin>
since you're writing C++ and not using the command-line program, you can just keep it loaded in memory and train on a new point each time you get one
< rcurtin>
if you call the Train() overload that takes a whole matrix (but batchTraining = false), that just loops over each point and calls the single-point Train() overload, so it's the same thing
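
A rough sketch of that streaming usage, assuming the mlpack 2.x HoeffdingTree API (a constructor taking a data::DatasetInfo and a class count, plus single-point and batch Train() overloads); the dimensions, labels, and data are made up:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/hoeffding_trees/hoeffding_tree.hpp>

    using namespace mlpack;
    using namespace mlpack::tree;

    int main()
    {
      // Three numeric dimensions, two classes.
      data::DatasetInfo info(3);
      HoeffdingTree<> tree(info, 2);

      // Train on one point at a time as it arrives (streaming / real-time).
      arma::vec point = { 0.1, 0.5, -0.3 };
      tree.Train(point, 0 /* label */);

      // Or pass a whole matrix with batchTraining = false: this just loops
      // over the columns and calls the single-point overload.
      arma::mat batch(3, 100, arma::fill::randu);
      arma::Row<size_t> labels(100, arma::fill::zeros);
      tree.Train(batch, labels, false);

      return 0;
    }
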
< cult->
wow
< cult->
it learns in real time
< cult->
thats awesome
< rcurtin>
setting the right learning rate can be difficult to do in practice
< rcurtin>
but it is good for an online setting
Alvis_ has joined #mlpack
< rcurtin>
I was working on implementing a Hoeffding forest some time back, but the results were not as good as expected; there was some bug I did not find the time to figure out
< cult->
who is funding mlpack nowadays?
< rcurtin>
cult-: I work for Symantec, so technically they fund some of the effort (but also some of it comes out of my free time), and of course Google funds GSoC
< rcurtin>
I think most of the other developers are doing it in their free time, or maybe as part of their graduate work or something
< rcurtin>
when I was at Georgia Tech they never funded me explicitly to work on mlpack (well except maybe at the very beginning), I just had to make it a part of my PhD research program
Alvis has quit [Ping timeout: 246 seconds]
< cult->
ok, i will keep it in mind for future work.
Alvis_ has quit [Ping timeout: 260 seconds]
Alvis_ has joined #mlpack
Alvis__ has joined #mlpack
kris has quit [Quit: Leaving.]
Alvis_ has quit [Ping timeout: 246 seconds]
diehumblex has joined #mlpack
chvsp has joined #mlpack
Alvis has joined #mlpack
Alvis__ has quit [Ping timeout: 246 seconds]
govg has quit [Ping timeout: 240 seconds]
nu11p7r has joined #mlpack
govg has joined #mlpack
Alvis has quit [Remote host closed the connection]
Alvis has joined #mlpack
< chvsp>
Hi zoq: I wanted to know how we can use convolutions for RGB datasets like CIFAR10 in the current codebase. In textbook implementations they use a 4-dimensional array, but arma supports up to 3-D arrays only.
< zoq>
chvsp: Hello, I guess you are talking about arma::cube? arma::cube only supports 3 dimensions; anyway, the current codebase doesn't use arma::cube. Instead we use arma::mat to represent 3rd-order tensors, where each col is another slice: in the case of an RGB image (200, 200, 3), col(0) = [200 x 200] is the R channel, col(1) = [200 x 200] is the G channel, and so on.
chenzhe has joined #mlpack
chvsp has quit [Quit: Page closed]
Alvis_ has joined #mlpack
Alvis has quit [Ping timeout: 246 seconds]
chenzhe1 has joined #mlpack
chenzhe has quit [Ping timeout: 246 seconds]
chenzhe1 is now known as chenzhe
chenzhe has quit [Client Quit]
chvsp has joined #mlpack
< chvsp>
Oh, ok. So how do we pass a batch of training examples to the Train() function? Do we send in an arma::mat of dimensions [200*200, 3*num_training_examples], where the RGB channels of each example are placed in adjacent columns?
< zoq>
chvsp: Right, in case of an RGB image cols(0, 2) = image1, cols(3, 5) = image2, ...
< chvsp>
Got it! Thanks
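
A small sketch of the layout described above, using only Armadillo (an illustration, not taken from the mlpack docs): each 200x200 RGB image becomes three adjacent columns of a single arma::mat, one column per channel:

    #include <armadillo>

    int main()
    {
      const arma::uword rows = 200, cols = 200, channels = 3, numImages = 2;

      // [200*200 x 3*numImages]: cols(0, 2) = image 1, cols(3, 5) = image 2, ...
      arma::mat batch(rows * cols, channels * numImages, arma::fill::zeros);

      // Fill the R channel of the first image from a flattened 200x200 matrix.
      arma::mat rChannel(rows, cols, arma::fill::randu);
      batch.col(0) = arma::vectorise(rChannel);

      return 0;
    }
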
chvsp has quit [Quit: Page closed]
Alvis_ has quit [Remote host closed the connection]
Alvis_ has joined #mlpack
< ironstark>
zoq: I have submitted a draft proposal for GSoC 2017. Can you please review it and give me some pointers whenever you find time?
< zoq>
ironstark: Sure, I'll take a look once I get a chance.
< ironstark>
zoq: Thanks
Alvis_ has quit [Remote host closed the connection]
Alvis_ has joined #mlpack
ironstark has quit [Ping timeout: 258 seconds]
Alvis_ has quit [Ping timeout: 246 seconds]
ironstark has joined #mlpack
HoloIRCUser3 has joined #mlpack
HoloIRCUser3 is now known as ironstark_1
minion1 has joined #mlpack
ironstark is now known as ironstark_2
minion1 has quit [Client Quit]
ironstark has joined #mlpack
ironstark has quit [Ping timeout: 240 seconds]
Alvis_ has joined #mlpack
ironstark has joined #mlpack
vinayakvivek has quit [Quit: Connection closed for inactivity]
HoloIRCUser5 has joined #mlpack
ironstark_1 has quit [Ping timeout: 260 seconds]
ironstark_2 has quit [Ping timeout: 260 seconds]
Alvis_ has quit [Ping timeout: 246 seconds]
Alvis_ has joined #mlpack
< ironstark>
zoq: I read your comments on my draft proposal
< ironstark>
I am working on improving it according to your comments. Thank you :)