verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< rcurtin>
zoq: I see you are working on the benchmark checkout job; it looks like you got it to unstable last night
< rcurtin>
I tried to fix the XML read failure problem by stripping invalid characters from the reports with sed, but it looks like that was not successful
< rcurtin>
I think the \\x1b may need to be changed to \x1b, I am not sure
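(For reference: GNU sed interprets \x1b in a pattern as the ESC byte itself, while \\x1b matches a literal backslash followed by "x1b", which would leave the actual escape characters in place. The underlying problem is that the reports contain characters outside the XML 1.0 valid range, such as ESC from ANSI-colored test output. A minimal C++ sketch of that validity predicate, for illustration only:

    #include <cstdint>

    // Sketch: the XML 1.0 specification allows only these character ranges;
    // anything else (including ESC, 0x1B, from ANSI color codes) has to be
    // stripped before an XML parser will accept the report.
    bool IsValidXmlChar(const uint32_t c)
    {
      return c == 0x9 || c == 0xA || c == 0xD ||
             (c >= 0x20 && c <= 0xD7FF) ||
             (c >= 0xE000 && c <= 0xFFFD) ||
             (c >= 0x10000 && c <= 0x10FFFF);
    }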
< rcurtin>
oh, ok---are you installing that as part of the build script, or should I use pip on each of the benchmark systems to bring it up to date?
< rcurtin>
although I guess if the master branch is required I would need to check out and install manually; PyPI only has releases, I think
< zoq>
I did it as part of the build process
< zoq>
but I guess if you'd like to use pip that could also work; not sure if the master branch is necessary, that's just what I used
< rcurtin>
I think xmlrunner is already installed through the master branch, so apparently that version is not new enough
< rcurtin>
I don't see where xmlrunner is getting installed... I don't see anything in the Makefile or in the Jenkins build configuration
< rcurtin>
maybe I am looking in the wrong place? :)
< zoq>
it's the last block in the 'checkout - all nodes' job, marked '# xmlrunner'
< rcurtin>
oh! I commented all of that out
< rcurtin>
I should uncomment it I suppose :)
< rcurtin>
previously it built all of the libraries, so I commented those out; I guess I commented out xmlrunner too
< rcurtin>
so when this build fails, I'll uncomment it and try again
< zoq>
I think the build already failed because mrpt_install isn't executable...
< rcurtin>
ah, oops :)
< zoq>
okay, fixed; restart the job if you like
< rcurtin>
ok, let's try that...
< rcurtin>
I adapted the xmlrunner build a little bit to build into libraries/xmlrunner/ instead of libs/xmlrunner/, and to install to the same place as the other Python packages
< zoq>
sounds good
< zoq>
hm, we should put the build before the checks
< zoq>
let me change that real quick
< rcurtin>
oh! right :)
shikhar has joined #mlpack
mikeling|brb is now known as mikeling
< zoq>
rcurtin: Looks like there is some problem with MATLAB on some nodes.
< zoq>
rcurtin: I can take a closer look unless you have an idea.
shikhar has quit [Quit: WeeChat 1.4]
< rcurtin>
I bet I just have to restart the tunnels
< rcurtin>
getting back from lunch now... will take care of it in a few minutes
< rcurtin>
ok... we will see if it works this time :)
gtank has quit [Ping timeout: 260 seconds]
gtank has joined #mlpack
chenzhe has joined #mlpack
benchmark has joined #mlpack
benchmark has quit [Client Quit]
nish21 has joined #mlpack
chenzhe has quit [Ping timeout: 272 seconds]
vivekp has quit [Ping timeout: 240 seconds]
< zoq>
rcurtin: Looks like you solved the issue :)
< rcurtin>
:)
sumedhghaisas has joined #mlpack
< rcurtin>
hey Sumedh, I was thinking of what you were saying about Iceland when you were there, that it was always freezing cold and rainy
< rcurtin>
I was wishing that was the case here as I walked around in the early summer heat :(
< sumedhghaisas>
rcurtin: Hey Ryan, you are in Iceland right now?
< sumedhghaisas>
it must always be sunny there this time of year
< rcurtin>
no, not in Iceland, I was wishing that I was :)
< rcurtin>
just in hot Atlanta
< sumedhghaisas>
ohh worse... haha
< sumedhghaisas>
Edinburgh is still cold
< sumedhghaisas>
but now I see some sunny days here
< rcurtin>
I looked at the temperature for today, it looks very nice to me
< sumedhghaisas>
yeah... today it was beautiful, until like 7
< sumedhghaisas>
now it's getting colder
< sumedhghaisas>
that's sad... because I took an exam today and I'm still studying for the next one... I hate the weather right now. Everyone is out enjoying it while I am reading about stupid neuroscience encoding and decoding
< rcurtin>
hehe, and I guess by the time you have free time, it won't be nice anymore
< sumedhghaisas>
I am hoping it is... I am taking a week-long trip to Spain before the coding starts
< rcurtin>
nice, where in Spain are you planning to go?
< rcurtin>
I went to Granada for NIPS one year, it was beautiful
< sumedhghaisas>
Vigo... one of my friends here has a big house right next to the beach there
< sumedhghaisas>
no travelling
< sumedhghaisas>
just 7 days of beach fun
< rcurtin>
oh wow, that looks like it will be really nice
< sumedhghaisas>
yeah... that's going to be a sweet break. All this math is making me crazy. I am dreaming about Gaussian and Poisson distributions now
< sumedhghaisas>
wow... Granada is so beautiful... it's like a city on a hill
< rcurtin>
yeah, unfortunately I spent most of the time locked up in a hotel room working on mlpack :(
< rcurtin>
that was the conference where mlpack was announced in 2011, and when I arrived there the library was nowhere near ready...
< sumedhghaisas>
ohh... work always gets in the way. Any summer plans?
< sumedhghaisas>
apart from GSoC :P
< rcurtin>
ICML is in Sydney, so I will be traveling there at the end of July and taking some time to explore :)
< rcurtin>
but I will have to answer my GSoC emails too :)
< sumedhghaisas>
I also wanted to ask you a C++ question... sorry for ruining the awesome discussion... haha. So I am spending whatever time I get working on the neural network architecture we have. Currently, all the layers are stored in a vector.
< sumedhghaisas>
but technically we know the layers at compile time
< rcurtin>
it's ok, let's see if I can answer it :)
< sumedhghaisas>
remember the code I sent you about the architecture?
< rcurtin>
hmm, that was a while ago, let me pull it up...
< rcurtin>
ok, I have it up
< sumedhghaisas>
it resolves the layers at compile time
< sumedhghaisas>
but it's too much work to fit that into this architecture...
< rcurtin>
right, you specify the layers A, B, D as NeuralNetwork<A, B, D>
< sumedhghaisas>
yup... which can also be done without bothering the user, by introducing a create-network function
< sumedhghaisas>
But how much do you think that will affect the performance?
< sumedhghaisas>
because changing the architecture will involve a lot of work now...
< rcurtin>
I have not tried, but I think it's difficult for the user if they have to deal with these very long types to address their network
< sumedhghaisas>
ohh no that can be fixed...
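To make the compile-time idea concrete, here is a minimal sketch under stated assumptions: StaticNetwork, CreateNetwork, and the layer Forward() interface are hypothetical names for illustration, not mlpack's actual API. The layer types become template parameters, so every Forward() call resolves statically, and a factory function plus auto hides the long type from the user:

    #include <tuple>
    #include <utility>

    // Hypothetical compile-time network: the layer types are fixed as
    // template parameters, so no virtual dispatch or vector lookup is
    // needed to call each layer.
    template<typename... Layers>
    class StaticNetwork
    {
     public:
      explicit StaticNetwork(Layers... ls) : layers(std::move(ls)...) { }

      // Pipe the input through every layer in order; assumes each layer
      // exposes T Forward(const T&) for a common data type T.
      template<typename T>
      T Forward(T input)
      {
        return std::apply(
            [&input](auto&... layer)
            {
              // C++17 comma fold: applies the layers left to right.
              ((input = layer.Forward(input)), ...);
              return input;
            },
            layers);
      }

     private:
      std::tuple<Layers...> layers;
    };

    // Factory so the user never spells out the long type:
    //   auto net = CreateNetwork(Linear(10, 20), ReLU(), Linear(20, 2));
    // (Linear and ReLU are placeholder layer types here.)
    template<typename... Layers>
    auto CreateNetwork(Layers... ls)
    {
      return StaticNetwork<Layers...>(std::move(ls)...);
    }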
< rcurtin>
I think that the visitor paradigm and boost::variant<> may work in such a way that there is no extra overhead
< rcurtin>
but I am not sure about that, I have not checked
< sumedhghaisas>
you mean the extra overhead of inheritance?
< sumedhghaisas>
or the vector?
< sumedhghaisas>
because it also involves a virtual table lookup on each call
< rcurtin>
I am not sure, I haven't looked into it
< rcurtin>
in either case the first thing to do would be an empirical comparison, to see if it is worth digging in deeper
< rcurtin>
the boost variant code and the visitor code are very complex, so they might behave in unexpected ways and provide better performance than expected
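For reference, the vector-of-variants pattern under discussion looks roughly like this; the layer types here are made up, and the real mlpack code is considerably more involved:

    #include <boost/variant.hpp>
    #include <vector>

    // Two toy layer types with no common base class.
    struct Linear  { void Forward() { /* ... */ } };
    struct Sigmoid { void Forward() { /* ... */ } };

    // The network holds variants instead of base-class pointers.
    using LayerTypes = boost::variant<Linear, Sigmoid>;

    // A static visitor dispatches on the stored type; apply_visitor
    // branches on the variant's type tag rather than going through a
    // virtual function table.
    struct ForwardVisitor : public boost::static_visitor<void>
    {
      template<typename LayerType>
      void operator()(LayerType& layer) const { layer.Forward(); }
    };

    int main()
    {
      std::vector<LayerTypes> network{ Linear(), Sigmoid(), Linear() };
      for (LayerTypes& layer : network)
        boost::apply_visitor(ForwardVisitor(), layer);
    }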
< sumedhghaisas>
for the user... the code I sent also has a function to create a network, which returns the created network object... which the user can assign to an auto variable
< sumedhghaisas>
yes... that's what I am trying to do... but how?
< rcurtin>
you could build two networks of identical structure, then feed points through them and time that
< rcurtin>
one network with the mlpack ANN code, one network with what you have built
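A minimal harness for that comparison might look like the sketch below; TimeForwardPasses is a hypothetical helper, and NetworkType stands for either of the two implementations, so both can be timed on identical structures and inputs:

    #include <chrono>
    #include <cstddef>

    // Sketch: run many forward passes and return the average seconds per
    // pass, so the two network implementations can be compared directly.
    template<typename NetworkType, typename InputType>
    double TimeForwardPasses(NetworkType& network,
                             const InputType& input,
                             const size_t trials)
    {
      const auto start = std::chrono::steady_clock::now();
      for (size_t i = 0; i < trials; ++i)
        network.Forward(input);
      const auto stop = std::chrono::steady_clock::now();
      return std::chrono::duration<double>(stop - start).count() / trials;
    }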
< sumedhghaisas>
yes, but for that I would have to build an entire framework around mine... exactly like the current framework... haha... okay, so that's the only option
< sumedhghaisas>
okay wait... now I am confused how variant works here...
< sumedhghaisas>
I thought it was just a better way to iterate over the vector
< rcurtin>
yeah... I am not certain of this, it has been a while since I have dug into variant
< rcurtin>
but what I remember is that I was incredibly impressed and variant does things I did not think were possible
< sumedhghaisas>
yeah... it's compile-time unions
< sumedhghaisas>
impressive
< rcurtin>
yeah
< rcurtin>
in the case of the mlpack ANN code, I am not sure exactly how optimized the result is
< rcurtin>
maybe zoq knows, but I am not 100% sure, I have not tested it myself
< sumedhghaisas>
okay... anyway, I have a couple of other changes to make before I can seriously consider it. I talked to zoq about it.
< sumedhghaisas>
also, that softmax thing? I checked many implementations online... no one implements the reduced form... why?
< rcurtin>
I have no idea, I haven't had any time to look into it
< sumedhghaisas>
and from the point of view of optimization... the current form is shift invariant (adding a constant to all inputs changes nothing), right... the reduced form is not... so the reduced form should be better for optimization, right?
< rcurtin>
I am not sure, I have not had any time to think about it in quite some time
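For context, the invariance in question is the standard softmax shift invariance; assuming "reduced form" here means pinning one logit to zero (which matches the invariance point above), the two parameterizations are:

    \operatorname{softmax}(\mathbf{z})_i
      = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}},
    \qquad
    \operatorname{softmax}(\mathbf{z} + c\mathbf{1})
      = \operatorname{softmax}(\mathbf{z})
    \quad \text{for all } c \in \mathbb{R}.

    % Reduced form: fix z_K = 0, leaving K - 1 free parameters and
    % removing the flat direction from the objective.
    p_i = \frac{e^{z_i}}{1 + \sum_{j=1}^{K-1} e^{z_j}} \quad (i < K),
    \qquad
    p_K = \frac{1}{1 + \sum_{j=1}^{K-1} e^{z_j}}.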
< sumedhghaisas>
ahh me too... I am so looking forward to summer. I want to get back to implementing things.
< rcurtin>
I know the feeling; in many ways I haven't been able to write as much code as I'd like :)
< rcurtin>
ok, I think I am headed out for now... talk to you later!
< sumedhghaisas>
yeah... talk to you later
nish21 has quit [Ping timeout: 260 seconds]
mentekid has quit [Quit: Leaving.]
mentekid has joined #mlpack
mikeling has quit [Quit: Connection closed for inactivity]