verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#2208 (master - 5c70ff9 : Marcus Edel): The build passed.
bharath_ has quit [Read error: Connection reset by peer]
bharath_ has joined #mlpack
bharath has joined #mlpack
bharath_ has quit [Read error: Connection reset by peer]
bharath_ has joined #mlpack
aashay has joined #mlpack
bharath has quit [Ping timeout: 264 seconds]
govg has quit [Ping timeout: 260 seconds]
bharath has joined #mlpack
bharath_ has quit [Ping timeout: 256 seconds]
bharath_ has joined #mlpack
bharath_ has quit [Remote host closed the connection]
bharath_ has joined #mlpack
bharath__ has joined #mlpack
bharath has quit [Ping timeout: 268 seconds]
bharath_ has quit [Ping timeout: 240 seconds]
bharath__ has quit [Ping timeout: 240 seconds]
govg has joined #mlpack
bharath has joined #mlpack
bharath has quit [Ping timeout: 268 seconds]
govg has quit [Ping timeout: 240 seconds]
govg has joined #mlpack
cannon4 has joined #mlpack
cannon4 has quit [Client Quit]
naxalpha has joined #mlpack
< naxalpha>
Hi, is it possible to enable quiet mode while building mlpack? The full build output is a heavy load on AppVeyor and makes it hard to test a few things.
govg has quit [Ping timeout: 240 seconds]
< naxalpha>
I got it: build with verbosity set to quiet.
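For reference: the quiet setting is a standard MSBuild verbosity switch rather than an mlpack option, so on an MSBuild-based AppVeyor build it can be passed through CMake (a minimal sketch; the build directory and configuration are assumptions):

    cmake --build . --config Release -- /m /verbosity:quiet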
trapz has joined #mlpack
trapz has quit [Quit: trapz]
vss has joined #mlpack
vss has quit [Client Quit]
vss has joined #mlpack
trapz has joined #mlpack
Trion has joined #mlpack
naxalpha has quit [Ping timeout: 260 seconds]
trapz has quit [Quit: trapz]
s1998 has joined #mlpack
s1998_ has joined #mlpack
s1998 has quit [Ping timeout: 260 seconds]
vss has quit [Quit: Page closed]
< Trion>
In the convolution layer, inputWidth and inputHeight default to 0; does that mean it will handle the input height and width automatically?
< zoq>
Trion: Yes, unless you specify the width and height.
NikitaDoykov has joined #mlpack
vss has joined #mlpack
< Trion>
I am getting "what(): Mat::init(): requested size is too large" if I don't specify, I was unsure if that was the reason
NikitaDoykov has quit [Client Quit]
< zoq>
Trion: Actually, for the first conv layer you have to specify the width and height; take a look at tests/convolutional_network_test.cpp.
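For reference, a minimal sketch in the spirit of that test file (the single-channel 28x28 input is an assumption; the Convolution<> constructor arguments are inSize, outSize, kW, kH, dW, dH, padW, padH, inputWidth, inputHeight):

    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>

    using namespace mlpack::ann;

    int main()
    {
      FFN<NegativeLogLikelihood<>> model;
      // Only the first conv layer needs explicit input dimensions (28x28
      // here); later layers infer their sizes from the layer before them.
      model.Add<Convolution<>>(1, 8, 5, 5, 1, 1, 0, 0, 28, 28);  // -> 8 maps, 24x24
      model.Add<ReLULayer<>>();
      model.Add<MaxPooling<>>(2, 2, 2, 2, true);                 // -> 8 maps, 12x12
      model.Add<Linear<>>(8 * 12 * 12, 10);
      model.Add<LogSoftMax<>>();
    }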
< Trion>
Thanks :P zoq saves the day again! I hope today the Agent will do some Pew Pew
benchmark has joined #mlpack
benchmark has quit [Client Quit]
shikhar has joined #mlpack
shikhar has quit [Ping timeout: 260 seconds]
shikhar has joined #mlpack
s1998 has joined #mlpack
s1998_ has quit [Ping timeout: 260 seconds]
shikhar has quit [Ping timeout: 260 seconds]
< Trion>
final layer "model.Add<Linear<>>(700, 3)" is returning size 9 vector instead of 3, what can make it happen?
< zoq>
Trion: Strange, can you post the model somewhere?
< zoq>
Trion: Ah, I see what happens here: frame is a matrix, but Predict works on columns, so it outputs a prediction for each column. What you could do is vectorise the input using arma::vectorise(frame) before calling Train and Predict.
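A short sketch of that suggestion (frame, response, and model are the names from this conversation, not fixed API names):

    // Predict() treats every column as a separate data point, so flatten
    // the frame matrix into a single column before Train() and Predict().
    arma::mat input = arma::vectorise(frame);
    model.Train(input, response);
    arma::mat prediction;
    model.Predict(input, prediction);  // one column, one row per output unit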
< Trion>
Can I input a cube of frames? Vectorising the frame will impact the learning.
< zoq>
Not without modifying the code; why would vectorising influence the learning?
< Trion>
A convolution layer will not be able to learn the frame as an image; it will see it as a sequence of numbers instead. So if a pixel is below another one, it will end up in some far-away location. Unless... the conv layer has built-in functionality to rearrange the vector as an image :P
< zoq>
If you specify the inputWidth and inputHeight for the first layer, the conv layer reshapes the input accordingly.
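So, continuing the earlier sketch (sizes assumed): with inputWidth = inputHeight = 28 passed to the first Convolution<> layer, the spatial structure survives flattening.

    // Each column is one flattened 28x28 frame (784 rows); the first conv
    // layer reshapes every incoming column back to 28x28 before convolving.
    const size_t numFrames = 100;
    arma::mat data(28 * 28, numFrames);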
< Trion>
:D Nice!
chit has joined #mlpack
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#2214 (master - 8ad9e93 : Marcus Edel): The build passed.
< Trion>
model.Train is giving "Mat::operator(): index out of bounds", although frame is the same one I used in Predict, and correctedResult is just a copy of the result matrix I got from Predict.
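(The actual fix was posted on the gist zoq mentions below; as a hedged guess only, a common cause of this error at the time was the response format: with NegativeLogLikelihood<>, Train() expects class labels in the range [1, numClasses], one per column, not a copy of the raw score matrix returned by Predict().)

    // Hypothetical sketch: convert Predict() scores ('result', one column
    // per point) into the 1-based class labels that Train() expects.
    arma::mat labels(1, result.n_cols);
    for (size_t i = 0; i < result.n_cols; ++i)
      labels(0, i) = arma::index_max(result.col(i)) + 1;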
chit has quit [Ping timeout: 260 seconds]
richukuttan has joined #mlpack
< richukuttan>
zoq: Am I correct in my understanding that the NPI approach can use programs it has already learned to create larger programs, while the DeepCoder approach does not have this functionality?
< richukuttan>
As in DeepCoder uses a static library as the set of commands it can use, while we can easily update the command list (or program list) of the NPI?
< Trion>
I'll check logs afterwards, have to go for now
Trion has quit [Quit: Have to go, see ya!]
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#2217 (master - f412872 : Marcus Edel): The build passed.
cannon4 has quit [Read error: Connection reset by peer]
chittaranjan has joined #mlpack
< chittaranjan>
Hi, I bet I'm late to jump on the bandwagon. I wanted to work on the K-Centers algorithm implementation project for this year's GSoC. Any pointers to where and how the code would fit in?
naxalpha has joined #mlpack
< rcurtin>
chittaranjan: hi there, the application deadline has not passed yet, but I do think it might be a little late for the k-centers project in particular
< chittaranjan>
Oh! Is there anything else I could work on?
< rcurtin>
yes, there are all kinds of other projects listed on the ideas page too :)
< rcurtin>
if you are still interested in the k-centers project, there is a lot of discussion from previous years on the mailing list:
< rcurtin>
that might be a good place to start searching
< rcurtin>
you'd need to become familiar with dual-tree algorithms and develop one to solve the k-centers problem
< rcurtin>
but unfortunately because GSoC is about code and not research, the idea of the algorithm would basically need to be done by the time of proposal submission
< rcurtin>
zoq: I just heard from the team that hosts masterblaster; it looks like we will ship the system either this week (tomorrow or Thursday) or early next week, and the system will be back online within a couple of days
< rcurtin>
nothing is concrete yet, so I will provide more updates as I hear more...
kris has joined #mlpack
< chittaranjan>
Thanks Ryan. I guess it is indeed a little too late for the k-centers project. Would there be enough time to prepare a worthy proposal for the Benchmarking project though? I just did a little bit of reading up now, and it caught my eye.
< rcurtin>
chittaranjan: sure, I still think you will have to put in a good amount of time to get familiar with the benchmarking system this week, but I agree that that project is much more approachable
< rcurtin>
for that one you'll basically need to be familiar with various machine learning methods that you would be benchmarking, and familiar enough with the benchmarking system to be able to implement new functionality or new methods to it
< vss>
rcurtin: Borja's proposal from last year didn't get through? :o
< chittaranjan>
Thanks, will do.
< rcurtin>
vss: we received 119 applications last year and could only accept 6...
< vss>
rcurtin: tough luck
< rcurtin>
yeah, it is difficult from this end, because we had more than 6 students we would have liked to accept
govg has joined #mlpack
naxalpha has quit [Ping timeout: 260 seconds]
kris has quit [Remote host closed the connection]
vss has quit [Quit: Page closed]
chittaranjan has quit [Ping timeout: 260 seconds]
trapz has quit [Quit: trapz]
trapz has joined #mlpack
< zoq>
Trion: I left a comment on the gist.
< zoq>
rcurtin: Thanks for the update, let's hope everything goes well.
< zoq>
richukuttan: That's right; it's maybe not a good idea to combine both projects. Have you looked over the recursion paper?
< richukuttan>
zoq: Not yet. Will do that.
mikeling has quit [Quit: Connection closed for inactivity]
< richukuttan>
zoq: I have read the article you mentioned (a skim read for now). I understand the use of recursion. However, it will not be able to spawn sub-programs on its own; just like normal recursion, it will only be calling itself.
< richukuttan>
While this will increase the length of inputs it can handle accurately, it still cannot work without the user creating the required sub-programs (or training the NPI for those too).
< richukuttan>
But yes, it is a good read, and I will incorporate the idea of recursion into my proposal.
< richukuttan>
The main problem with the NPI creating subprograms by itself is understanding which subprograms need to be created. For example, we are fine with creating a swap program, but what about a program that simply adds 1 to the input, especially when another addition program already exists? How will the NPI know whether or not to create it?
chenzhe has joined #mlpack
vinayakvivek has quit [Quit: Connection closed for inactivity]
< zoq>
richukuttan: I agree, selecting the right subroutines is an open question and I think that might be an interesting question to look into. I guess for the GSoC context I would first try to reproduce the experiments from the paper.
< richukuttan>
zoq: Yes, I intend to do that. However, I'd also like to test the logical abilities of the NPI using the tests in http://www.readcube.com/articles/10.1038/nature20101, where these tests were done for the DNC. Do you think that's a good idea?
< zoq>
I guess, besides Mini-SHRDLU, it's a reasonable idea.
< richukuttan>
yes, I was talking about the bAbI tests
ironstark has quit [Remote host closed the connection]
< richukuttan>
zoq: please go through my proposal when you have the time.