verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
stephentu_ has quit [Ping timeout: 246 seconds]
stephentu_ has joined #mlpack
stephentu_ has quit [Ping timeout: 264 seconds]
jbc_ has quit [Quit: jbc_]
curiousguy13 has quit [Ping timeout: 252 seconds]
curiousguy13 has joined #mlpack
stephentu has quit [Ping timeout: 255 seconds]
stephentu has joined #mlpack
stephentu has quit [Ping timeout: 252 seconds]
sumedhghaisas has joined #mlpack
vedhu63w has joined #mlpack
curiousguy13 has quit [Ping timeout: 256 seconds]
curiousguy13 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 276 seconds]
dhfromkorea has joined #mlpack
jaskaran has joined #mlpack
dhfromkorea has quit [Remote host closed the connection]
vedhu63w has quit [Ping timeout: 245 seconds]
dhfromkorea has joined #mlpack
dhfromko_ has joined #mlpack
dhfromkorea has quit [Ping timeout: 240 seconds]
< jaskaran>
Hi, I'm Jaskaran, and I'm looking forward to contributing to mlpack as part of GSoC '15. Where should I start?
jaskaran has quit [Quit: Page closed]
jaskaran has joined #mlpack
jaskaran has quit [Quit: Page closed]
vedhu63w has joined #mlpack
dhfromko_ has quit [Remote host closed the connection]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Ping timeout: 265 seconds]
jbc_ has joined #mlpack
HeikoS has joined #mlpack
udit_s has joined #mlpack
udit_s has quit [Ping timeout: 246 seconds]
jbc_ has quit [Quit: jbc_]
jbc_ has joined #mlpack
curiousguy13 has quit [Ping timeout: 255 seconds]
stephentu has joined #mlpack
curiousguy13 has joined #mlpack
HeikoS has quit [Quit: Leaving.]
HeikoS has joined #mlpack
< naywhayare>
HeikoS: hello there! :)
< HeikoS>
naywhayare: hi ryan
< HeikoS>
how are things?
< naywhayare>
they're going okay. trying to get some stupid algorithm working in time for the ICML deadline :(
< HeikoS>
actually, let's chat later, gotta go, but I just wanted to ask if I can use a pic of you (and us) for my blog post?
< HeikoS>
haha
< HeikoS>
naywhayare: also working on icml
< HeikoS>
hope both of us make it and can hang out in lille
< HeikoS>
btw I will cycle there :)
< HeikoS>
see you later
< naywhayare>
yeah, sure, feel free :)
< naywhayare>
I don't think I can ride a bike to lille from atlanta though :(
HeikoS has quit [Client Quit]
dhfromkorea has joined #mlpack
vedhu63w has quit [Remote host closed the connection]
dhfromkorea has quit [Remote host closed the connection]
< naywhayare>
stephentu: do you think it's more correct to write "Construct the exact kernel matrix." instead of writing that it's an approximation?
< naywhayare>
I thought that the NaiveKernelRule just calculated the exact matrix
stephentu_ has joined #mlpack
< naywhayare>
hehe, you always log in on a different system as soon as I send a message...
< naywhayare>
19:24 < naywhayare> stephentu: do you think it's more correct to write "Construct the exact kernel matrix." instead of writing that it's an approximation?
< naywhayare>
19:25 < naywhayare> I thought that the NaiveKernelRule just calculated the exact matrix
< stephentu_>
yes probably
< stephentu_>
ok
< stephentu_>
sorry i was reading the source code
< stephentu_>
for kernel PCA
< stephentu_>
and i got confused so i fixed the comment
< stephentu_>
but then you're right, i didn't really fix it :)
< naywhayare>
:)
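(For context, a minimal sketch of what an exact, "naive" kernel matrix computation looks like; the function name and signature here are illustrative, not mlpack's actual NaiveKernelRule API. Points are stored as columns, as in mlpack, and KernelType just needs an Evaluate(a, b) method, which mlpack's kernel classes provide.)

    #include <armadillo>

    // Compute the exact kernel matrix K with K(i, j) = k(x_i, x_j).
    template<typename KernelType>
    void ExactKernelMatrix(const arma::mat& data,
                           arma::mat& kernel,
                           KernelType& k)
    {
      const size_t n = data.n_cols;
      kernel.set_size(n, n);
      for (size_t i = 0; i < n; ++i)
      {
        // The kernel matrix is symmetric, so only compute the lower triangle.
        for (size_t j = 0; j <= i; ++j)
        {
          kernel(i, j) = k.Evaluate(data.col(i), data.col(j));
          kernel(j, i) = kernel(i, j);
        }
      }
    }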
< stephentu_>
also i'm wondering what you think about this
< stephentu_>
i think we might want to introduce a new type of SDP constraint
< stephentu_>
which is like a LinearFunction
< stephentu_>
so if A=11^T
< stephentu_>
it seems foolish to evaluate the dot product when we should just take the sum
< stephentu_>
similarly if A = I
< stephentu_>
well A=I isn't as bad but A=11^T does a lot of useless multiplications
< stephentu_>
do you think that can be optimized away?
< stephentu_>
so LinearFunction will have (a) Evaluate() and (b) GetMatrixForm()
< stephentu_>
i'm not sure how to make it not rely on std::function or virtual method dispatch though
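(A small illustration of the savings being discussed, using Armadillo; these free functions are hypothetical, not part of mlpack's SDP code. For A = 11^T the generic path does O(n^2) multiplications to compute what arma::accu(x) gets with additions alone; for A = I only the n diagonal entries matter.)

    #include <armadillo>

    double EvaluateOnesConstraint(const arma::mat& x)
    {
      // <11^T, X> = sum of all entries of X; A is never materialized.
      return arma::accu(x);
    }

    double EvaluateIdentityConstraint(const arma::mat& x)
    {
      // <I, X> = tr(X).
      return arma::trace(x);
    }

    // The generic path, for comparison: form A and take the full dot product.
    double EvaluateDenseConstraint(const arma::mat& a, const arma::mat& x)
    {
      return arma::dot(a, x);  // O(n^2) multiplies even when A is structured.
    }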
< naywhayare>
I have to go to a meeting... I'll answer when I get back
< stephentu_>
so if x_i and x_j are not direct neighbors
< stephentu_>
but share some neighbor x_k
< stephentu_>
I think there should also be a constraint
< stephentu_>
this is the condition (eta^T eta) > 0
< stephentu_>
in the MVU sdp
< stephentu_>
(eta^T eta)_{ij} > 0
jbc__ has joined #mlpack
jbc_ has quit [Ping timeout: 264 seconds]
jbc__ is now known as jbc_
< naywhayare>
stephentu_: I definitely agree that constraints like A = 11^T could be handled better
< naywhayare>
the trick is how to encode that it can be done better
< naywhayare>
we could introduce some LinearConstraint class or something like that with its own Evaluate() function, but then these have to be stored in the SDP class, so that's another std::vector of whatevers
< naywhayare>
maybe for the case where you have A = 11^T, you just provide an overload of the SDP class entirely?
< naywhayare>
for instance, say you have A=11^T for "some problem", then you create the SomeProblemSDP class that has that constraint baked into its EvaluateConstraint() or something?
< naywhayare>
the other idea in that vein would be to provide a LinearConstraintSDP class that has a more complex API than just SDP and is able to hold LinearConstraints internally
< naywhayare>
as for MVU, it seems like different papers have different ideas...
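(As a sketch, the "baked-in" idea might look like this, with the constraint A = 11^T hard-coded so EvaluateConstraint() never forms A; the class name follows the hypothetical SomeProblemSDP above and is illustrative only.)

    #include <armadillo>

    // Hypothetical problem-specific SDP class: the structure of the
    // constraint matrix A = 11^T is baked in at compile time.
    class SomeProblemSDP
    {
     public:
      double EvaluateConstraint(const size_t /* i */, const arma::mat& x) const
      {
        // <11^T, X> = sum of all entries of X.
        return arma::accu(x);
      }
    };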
< stephentu_>
naywhayare: i think introducing another LinearConstraint class is fine
< stephentu_>
in fact i'm sort of inclined to make everything LinearConstraint
< stephentu_>
and have sparse/dense matrices basically be a subclass
< stephentu_>
well no then we'd have to distinguish between sparse and dense linear constraints
< stephentu_>
ok i guess we can create a LinearConstraint, and assume that it can be expressed as a dense matrix (otherwise what's the point, just use the sparse matrix repr)
< stephentu_>
and then have another vector of those
< stephentu_>
i'm going to assume that the cost of a virtual method dispatch
< stephentu_>
or function pointer invocation is not that bad
< stephentu_>
compared to the calculation
< naywhayare>
let me do a little bit of thinking about this
< naywhayare>
although the cost of virtual dispatch may indeed be "not that bad" in this case, I'd still like to avoid it, because it would introduce inheritance into the codebase
< naywhayare>
and it then becomes harder to argue against in other cases, because people start saying "but you use it there, why can't we use it here? it makes my coding work so much easier and only a little bit slower!"
< naywhayare>
the vtable lookup penalty isn't nothing, though it would mostly be noticed for things like distance metrics
< naywhayare>
function evaluations for an optimizer could be called many thousands of times, though, so I wouldn't be surprised if it was noticeable in that setting too
< stephentu_>
i mean i bet you can do like 1000 vtable lookups
< stephentu_>
in the time it takes to do a matrix multiply
< stephentu_>
but it does go against
< stephentu_>
the mlpack code style
< stephentu_>
so i'm curious if you have any better suggestions
< naywhayare>
let me think for a little bit and see if I can come up with anything
< naywhayare>
the catch is that for each different type of constraint, we have to know about it at compile time
< naywhayare>
what do you think of that idea? it certainly needs some work, but virtual functions aren't necessary since you'll always refer to the SDP type as LinearConstraintSDP (because it's the template parameter to the optimizer)
< stephentu_>
how does LinearConstraint work?
< naywhayare>
hmmm... well, it seems like I haven't solved the problem at all then
< stephentu_>
the only way i can think to solve this
< stephentu_>
at compile time
< stephentu_>
is to use variadic templates from hell
< stephentu_>
or basically you have to hardcode a std::vector for every new type of LinearConstraint
< naywhayare>
well, we do have C++11, so variadic templates from hell is a possibility
< naywhayare>
Udit and I were going to do that for AdaBoost so it could support arbitrary weak learners, but his project never managed to implement support for multiple weak learners
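(A rough C++11 sketch of the "variadic templates from hell" route: one std::vector per constraint type, held in a std::tuple, so every Evaluate() call resolves at compile time with no virtual dispatch. All class and function names here are hypothetical, not existing mlpack code.)

    #include <armadillo>
    #include <cstddef>
    #include <tuple>
    #include <type_traits>
    #include <vector>

    struct OnesConstraint   // A = 11^T: <A, X> is just the sum of X's entries.
    {
      double Evaluate(const arma::mat& x) const { return arma::accu(x); }
    };

    struct DenseConstraint  // General dense A: fall back to the dot product.
    {
      arma::mat a;
      double Evaluate(const arma::mat& x) const { return arma::dot(a, x); }
    };

    // Base case: past the last tuple element; nothing left to evaluate.
    template<std::size_t I = 0, typename... Ts>
    typename std::enable_if<I == sizeof...(Ts), void>::type
    EvaluateAll(const std::tuple<std::vector<Ts>...>& /* constraints */,
                const arma::mat& /* x */,
                std::vector<double>& /* results */)
    { }

    // Recursive case: evaluate every constraint of type I, then recurse.
    // (C++11 has no fold expressions, hence the recursive unpacking.)
    template<std::size_t I = 0, typename... Ts>
    typename std::enable_if<I < sizeof...(Ts), void>::type
    EvaluateAll(const std::tuple<std::vector<Ts>...>& constraints,
                const arma::mat& x,
                std::vector<double>& results)
    {
      for (const auto& c : std::get<I>(constraints))
        results.push_back(c.Evaluate(x));
      EvaluateAll<I + 1, Ts...>(constraints, x, results);
    }

(An SDP class templated as, say, LinearConstraintSDP<DenseConstraint, OnesConstraint, ...> could hold the tuple of vectors and forward to EvaluateAll(); adding a new constraint type then means adding a template argument rather than a vtable.)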
< stephentu_>
hmm
< stephentu_>
i'll see what i can come up with
< naywhayare>
I'm interested too, so I'll think about it and what it would look like
< stephentu_>
man the lengths you will go to
< stephentu_>
to avoid virtual
< stephentu_>
most people would be turned off by variadic templates :)
< naywhayare>
I do want to emphasize a focus on speed with mlpack, which is why I've avoided inheritance in the past
< naywhayare>
if you wanted, you could try using inheritance to implement what's currently there (sparse and dense constraints), and see what the change in runtime is (if any), for a handful of different problem sizes
< naywhayare>
if it's just as fast, I don't really have a valid argument against it...
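(For that experiment, the inheritance-based version would look roughly like this, timed against the template version on a few problem sizes; again, these are hypothetical names, not existing mlpack classes.)

    #include <armadillo>
    #include <memory>
    #include <utility>
    #include <vector>

    // Abstract constraint: each Evaluate() call costs one vtable lookup.
    class Constraint
    {
     public:
      virtual ~Constraint() { }
      virtual double Evaluate(const arma::mat& x) const = 0;
    };

    class DenseConstraint : public Constraint
    {
     public:
      explicit DenseConstraint(arma::mat a) : a(std::move(a)) { }
      double Evaluate(const arma::mat& x) const override
      {
        return arma::dot(a, x);
      }

     private:
      arma::mat a;
    };

    // Heterogeneous constraints in one vector; the dispatch cost in this
    // loop is what the benchmark would measure against the O(n^2) dot
    // product done inside each Evaluate() call.
    double EvaluateAll(const std::vector<std::unique_ptr<Constraint>>& cs,
                       const arma::mat& x)
    {
      double sum = 0.0;
      for (const auto& c : cs)
        sum += c->Evaluate(x);
      return sum;
    }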