verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
stephentu has quit [Ping timeout: 272 seconds]
stephentu has joined #mlpack
stephentu has quit [Ping timeout: 246 seconds]
stephentu has joined #mlpack
stephent1 has joined #mlpack
apir8181 has joined #mlpack
apir8181 has quit [Client Quit]
stephent1 has quit [Ping timeout: 272 seconds]
curiousguy13 has quit [Ping timeout: 264 seconds]
curiousguy13 has joined #mlpack
< JaraxussTong>
apir8181: sorry for the late response. I use the Quassel IRC client; it is a distributed IRC program. I set up the Quassel core on my server in San Francisco, purchased from DigitalOcean.
lezorich has quit [Quit: Ex-Chat]
kshitijk has joined #mlpack
kshitijk has quit [Ping timeout: 256 seconds]
kshitijk has joined #mlpack
stephentu has quit [Ping timeout: 256 seconds]
kshitijk has quit [Ping timeout: 256 seconds]
kshitijk has joined #mlpack
curiousguy13 has quit [Ping timeout: 256 seconds]
kshitijk has quit [Ping timeout: 256 seconds]
curiousguy13 has joined #mlpack
kshitijk has joined #mlpack
kshitijk has quit [Ping timeout: 265 seconds]
apir8181 has joined #mlpack
< apir8181>
Hi, does Armadillo have some function like MATLAB's bsxfun?
< apir8181>
It only has the each_col and each_row operations..
< zoq>
apir8181: I think imbue and transform are the closest to the bsxfun function.
< zoq>
apir8181: Btw., did you get the message from JaraxussTong?
< apir8181>
Oh.. I just want to broadcast-add a vector to a matrix.
< apir8181>
Not yet. I have searched through the channel logs. Did I miss something here?
< apir8181>
What about the arma::join_rows operation? Is it costly (due to copying the original matrices)?
< apir8181>
I just want to add an intercept (bias) term to a data matrix.
< zoq>
"JaraxussTong> | apir8181: sorry for the late response. I use the Quassel IRC client; it is a distributed IRC program. I set up the Quassel core on my server in San Francisco, purchased from DigitalOcean."
< apir8181>
aha, thanks :)
< zoq>
Ideally join_rows uses the mem_ptr from the original matrix.
< apir8181>
So, there is no copy cost in this operation?
< zoq>
apir8181: Maybe we need to check the source code or maybe naywhayare knows the answer :)
< naywhayare>
apir8181: join_rows is generally expensive
< naywhayare>
it does incur a copy of the entire matrix
< naywhayare>
I just saw your update of ticket #322
< naywhayare>
the missing intercept is indeed the problem, and at one point I had a patch that fixed it, but I'm not sure where it is now
< naywhayare>
let me see if I can dig it out. it needed to be reworked a little bit, if I remember right...
< naywhayare>
those are the changes I made, but I don't think it's done quite yet
< naywhayare>
maybe you can reuse the changes there? if not, no big deal :)
< naywhayare>
either way, that patch probably needs some additional documentation about the intercept being a part of the parameters matrix
< apir8181>
OK, I will try.
< apir8181>
I could not fully understand where you add the intercept term.
< apir8181>
why not change this line [ hypothesis = arma::exp(parameters * data); ] ?
< apir8181>
and this line [ parameters.randn(numClasses, inputSize); ] to [parameters.randn(numClasses, inputSize + 1);] ?
< apir8181>
In my implementation, I changed the parameters size and computed the gradient of parameters.col(0) specially.
< naywhayare>
apir8181: sorry, I stepped out
< naywhayare>
I ran some tests... yeah, my patch doesn't actually fix the tests (as they are written now)
< naywhayare>
so maybe it is not useful...
< apir8181>
so what is this patch for?
< naywhayare>
I can't remember, it was so long ago :(
< naywhayare>
the patch is just the modified files I had in a working directory
< naywhayare>
I remember working with Siddharth to isolate the issue, and I remember modifying the code to fix it, and thinking "it's not quite done yet..."
< naywhayare>
so maybe I didn't actually manage to fix it and the patch is useless...
< apir8181>
Aha. BTW, it seems that many machine learning projects were rejected from GSoC this year.
< apir8181>
From what I know: shogun, scikit-learn, mlpack, xapian.
< naywhayare>
did scikit-learn apply?
< naywhayare>
I was talking to the shogun guys a few days ago about it
< naywhayare>
I met them at the summit last year... cool guys
< naywhayare>
I think maybe one reason is that only 137 organizations were accepted this year, as opposed to 190 last year, and something like 80 organizations were new this year
< naywhayare>
so maybe they just wanted to try something new? I don't know
kshitijk has joined #mlpack
curiousguy13 has quit [Ping timeout: 246 seconds]
kshitijk has quit [Ping timeout: 272 seconds]
curiousguy13 has joined #mlpack
lezorich has joined #mlpack
< apir8181>
With the same regularization value, it seems that the softmax regression regularization term is more constraining than logistic regression's.
< naywhayare>
apir8181: are you sure? in the two-class case, the derivation of softmax regression reduces exactly to logistic regression
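For the record, the two-class reduction naywhayare refers to is the standard one, writing $\theta_1, \theta_2$ for the two rows of the softmax parameter matrix:

```latex
P(y = 1 \mid x)
  = \frac{e^{\theta_1^\top x}}{e^{\theta_1^\top x} + e^{\theta_2^\top x}}
  = \frac{1}{1 + e^{-(\theta_1 - \theta_2)^\top x}}
  = \sigma\bigl((\theta_1 - \theta_2)^\top x\bigr),
```

i.e. exactly logistic regression with $\theta = \theta_1 - \theta_2$. Note, however, that an L2 penalty $\lambda(\lVert\theta_1\rVert^2 + \lVert\theta_2\rVert^2)$ on the softmax parameters is not the same as $\lambda\lVert\theta_1 - \theta_2\rVert^2$ on the logistic parameters, which could account for the stronger constraint apir8181 observes; this is a plausible explanation, not something confirmed in the discussion.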
< apir8181>
Let me check again.
< naywhayare>
I have to go get lunch, but I'll be back in about an hour