verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
trapz has joined #mlpack
trapz has quit [Quit: trapz]
wasiqm has joined #mlpack
chenzhe has quit [Ping timeout: 256 seconds]
trapz has joined #mlpack
mikeling has joined #mlpack
trapz has quit [Quit: trapz]
trapz has joined #mlpack
trapz has quit [Quit: trapz]
shivakrishna9 has joined #mlpack
AndroUser has joined #mlpack
< AndroUser>
@zoq thanks for merging my pull request (#27). I am interested in the better benchmarking project listed on the GSoC 2017 project ideas page. How should I proceed now?
vinayakvivek has joined #mlpack
ironstark has joined #mlpack
AndroUser has quit [Remote host closed the connection]
govg_ has joined #mlpack
govg has quit [Ping timeout: 240 seconds]
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
Nax_ has joined #mlpack
ironstark has quit [Ping timeout: 246 seconds]
HoloIRCUser has joined #mlpack
HoloIRCUser has quit [Client Quit]
HoloIRCUser has joined #mlpack
HoloIRCUser is now known as ironstark
slayerjain has joined #mlpack
chenzhe has joined #mlpack
ironstark has quit [Ping timeout: 246 seconds]
HoloIRCUser has joined #mlpack
HoloIRCUser has quit [Ping timeout: 246 seconds]
HoloIRCUser has joined #mlpack
HoloIRCUser has quit [Ping timeout: 246 seconds]
Nax_ has quit [Ping timeout: 260 seconds]
HoloIRCUser has joined #mlpack
HoloIRCUser has quit [Client Quit]
mikeling has quit []
mikeling has joined #mlpack
vivekp has quit [Ping timeout: 264 seconds]
vivekp has joined #mlpack
Nax_ has joined #mlpack
slayerjain has left #mlpack []
vss has joined #mlpack
vinayakvivek has quit [Quit: Connection closed for inactivity]
chenzhe has quit [Ping timeout: 256 seconds]
trapz has joined #mlpack
trapz has quit [Client Quit]
trapz has joined #mlpack
trapz has quit [Client Quit]
jarvis_ has joined #mlpack
< jarvis_>
Hi mentors, I have submitted a draft proposal, based on our earlier discussions, to implement three CNN architectures. Let me know your views!
jarvis_ has quit [Client Quit]
trapz has joined #mlpack
vinayakvivek has joined #mlpack
s1998 has joined #mlpack
vss has quit [Quit: Page closed]
Trion has joined #mlpack
sumedhghaisas has joined #mlpack
thyrix has joined #mlpack
Trion has quit [Remote host closed the connection]
< zoq>
Nax_: I like the idea, so all we have to do is create port files for armadillo and mlpack, right? Since there is already a port for boost.
jenkins-mlpack has quit []
jenkins-mlpack has joined #mlpack
jenkins-mlpack has quit [Client Quit]
jenkins-mlpack has joined #mlpack
jenkins-mlpack has quit [Client Quit]
< zoq>
hm, is masterblaster moving?
jenkins-mlpack has joined #mlpack
jenkins-mlpack has quit []
jenkins-mlpack has joined #mlpack
< rcurtin>
zoq: nope, I was working on the authentication
< rcurtin>
now you should be able to log into masterblaster via your GitHub account
< rcurtin>
can you test and make sure that works for you also?
< zoq>
it works
< rcurtin>
great
< rcurtin>
this will make it easier to give, e.g., GSoC students access to Jenkins without needing to make individual accounts on masterblaster
< zoq>
so any org member can access Jenkins, right?
< rcurtin>
yes, should be
< zoq>
neat
< rcurtin>
the list of administrators can be configured on the 'configure global security' page
< Nax_>
zoq: Exactly!
nish21 has joined #mlpack
trapz has quit [Quit: trapz]
< rcurtin>
zoq: I have matlab + toolboxes installed on dealgood, next I will work on getting that running on the benchmarking systems
< zoq>
rcurtin: sounds great :)
s1998 has quit [Quit: Page closed]
kris1 has joined #mlpack
Trion has quit [Quit: Have to go, see ya!]
< kris1>
zoq: Could you have a look at the GaussianInit PR?
< zoq>
kris1: Just looked over the PR; can you address the comments?
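A sketch for context: the GaussianInit PR adds a Gaussian weight-initialization rule to mlpack's ANN code. Assuming it follows the interface of mlpack's other initialization rules, with a (mean, variance) constructor and an Initialize(W, rows, cols) method (both assumptions based on the PR discussion, not confirmed in this log), usage would look roughly like this:

    // Assumed interface, patterned on mlpack's other init rules.
    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/init_rules/gaussian_init.hpp>

    using namespace mlpack::ann;

    int main()
    {
      GaussianInitialization init(0.0, 0.01);  // mean 0, variance 0.01
      arma::mat W;
      init.Initialize(W, 5, 10);  // Fill a 5x10 weight matrix with N(0, 0.01) draws.
    }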
chenzhe has joined #mlpack
chenzhe has quit [Ping timeout: 240 seconds]
chenzhe has joined #mlpack
nish21 has quit [Ping timeout: 260 seconds]
nish21 has joined #mlpack
< nish21>
hello! could someone explain to me how serialize works?
vss has quit [Quit: Page closed]
< nish21>
when I try to open a model.txt file, is it the output of a serialize call?
mikeling has quit [Quit: Connection closed for inactivity]
trapz has joined #mlpack
chenzhe has quit [Quit: chenzhe]
chenzhe has joined #mlpack
< rcurtin>
nish21: yes, that is using boost::serialization
< rcurtin>
you can serialize as .bin, .txt, or .xml
< rcurtin>
xml is the easiest to understand but the files are very large
< rcurtin>
since each value in the matrix gets its own tag...
nish21 has quit [Ping timeout: 260 seconds]
nish21 has joined #mlpack
< nish21>
I like how easy the XML serialization is to understand :)
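For readers following along: rcurtin is describing mlpack's boost::serialization-based model I/O, where the file extension selects the archive format. Here is a minimal sketch, assuming the mlpack 2.x data::Save()/data::Load() API; LogisticRegression is only a stand-in for whichever model produced nish21's model.txt:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/logistic_regression/logistic_regression.hpp>

    using namespace mlpack;

    int main()
    {
      arma::mat data(10, 100, arma::fill::randu);       // 10 dims, 100 points.
      arma::Row<size_t> labels(100, arma::fill::zeros);
      labels.subvec(50, 99).fill(1);                    // Two classes: 0 and 1.

      regression::LogisticRegression<> lr(data, labels);

      // The extension selects the archive: .bin (compact), .txt (plain
      // text), or .xml (human-readable, but very large, since every
      // matrix element gets its own tag).
      data::Save("model.bin", "model", lr);
      data::Save("model.xml", "model", lr);

      // Loading reverses the process; the object name passed to Load()
      // must match the one used in Save().
      regression::LogisticRegression<> lr2;
      data::Load("model.bin", "model", lr2);
    }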
Nax_ has left #mlpack []
chvsp has joined #mlpack
< chvsp>
Hi zoq: I have tried typecasting, but it didn't work. I also created another variable of type size_t and passed it, but I still get the same error in AppVeyor.
chenzhe has quit [Read error: Connection reset by peer]
chenzhe1 is now known as chenzhe
< zoq>
chvsp: Strange; can you remove the default constructor BatchNorm() and try again?
< chvsp>
zoq: Ok will try.
chenzhe has quit [Quit: chenzhe]
nish21 has quit [Ping timeout: 260 seconds]
nish21 has joined #mlpack
< nish21>
rcurtin: this is regarding the AdaBoost error fix; I have looked at the issue we discussed. We need to push w as well as alpha_t before terminating so the model can classify; does it make sense to push an alpha calculated using DBL_MAX into the model and then terminate?
< nish21>
let me describe the problem more clearly.
< rcurtin>
yes, I think I understand what you mean, let me think about it
< nish21>
alright
< rcurtin>
in Classify(), you need two things: the vector of weak learners wl[], and the vector of alpha values alpha[]
< rcurtin>
in the situation where the weak learner perfectly fits the data on the first iteration, you simply need that single weak learner in wl[] and then the value '1' for alpha[]
< rcurtin>
since alpha[] is the weight applied to each weak learner, if you have only one learner and it got everything right, then its weight should be 1
< rcurtin>
actually, technically, its weight does not matter at all since the vector of probabilities for each point across each weak learner (cMatrix) is then normalized
< rcurtin>
thus 1 is a reasonable value to use
< nish21>
rcurtin: yes, that works. so push 1 to alpha and terminate
< chvsp>
zoq: No, that also doesn't work; it gives the same error.
< rcurtin>
nish21: yeah, this should be a reasonable solution
< nish21>
rcurtin: i just confirmed locally that it works. i'll push the changes soon.
DDOOOqq has quit [Quit: Leaving]
< rcurtin>
nish21: great, thanks
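The fix agreed on above, as a sketch; the names weakLearners, alpha, and err are illustrative, not the actual internals of mlpack's AdaBoost implementation:

    // Early-termination fix: if the weak learner fits the training set
    // perfectly on some iteration, push it together with a weight of 1
    // (its weight is irrelevant after the per-class probability matrix
    // is normalized) and stop boosting, instead of computing a weight
    // from DBL_MAX.
    #include <cmath>
    #include <vector>

    template<typename WeakLearnerType>
    bool BoostingStep(std::vector<WeakLearnerType>& weakLearners,
                      std::vector<double>& alpha,
                      const WeakLearnerType& w,
                      const double err)
    {
      if (err == 0.0)
      {
        weakLearners.push_back(w);
        alpha.push_back(1.0);  // Any positive value works; 1 is simplest.
        return false;          // Terminate boosting.
      }

      // Otherwise weight the learner (shown here with the classic
      // binary AdaBoost formula; mlpack's multiclass variant differs)
      // and continue.
      weakLearners.push_back(w);
      alpha.push_back(0.5 * std::log((1.0 - err) / err));
      return true;
    }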
nish21 has quit [Ping timeout: 260 seconds]
chvsp has quit [Quit: Page closed]
trapz has quit [Quit: trapz]
nish21 has joined #mlpack
< nish21>
rcurtin: actually, after that change, one test (WeakLearnerErrorIris_DS) fails
joshua__ has joined #mlpack
< joshua__>
hello there
< zoq>
joshua__: Hello!
< joshua__>
I just wanted to know what types of projects will be done through GSoC 2017 in mlpack (C++)
jenkins-mlpack has quit [Remote host closed the connection]
< kris1>
zoq: I have updated the changes in GaussianInit. Please let me know if there are some other changes. I have some time right now. Thanks
< joshua__>
@zoq I will be checking on that. Thank you for the prompt reply
< kris1>
Also, for my proposal I am thinking about ssRBM, RBM, BRNN, GAN, and StackGAN.
< zoq>
kris1: Looks good; just some minor style issues that I'll fix after the merge. I have to step out for an hour or so; once I get back I'll merge it in.
joshua__ has quit [Quit: Page closed]
< kris1>
Thanks zoq. I will also get some sleep. I will most probably finish the Xavier init tomorrow.
< zoq>
kris1: hm, maybe you could focus on some of the models and propose to implement the others if there is time left?
< zoq>
kris1: I think implementing all of them is probably too much work.
< kris1>
okay i will take that into account.
< zoq>
kris1: okay, sounds good
< zoq>
kris1: Don't underestimate the time to write some really good tests.
nish21 has quit [Quit: Page closed]
< kris1>
Yes, I was thinking about those. Maybe I will discuss that with you tomorrow, if you have some time.
kris1 has left #mlpack []
< rcurtin>
ok, great! slake.mlpack.org now has matlab + toolboxes
< rcurtin>
let me deploy that to the other four systems...
trapz has joined #mlpack
< rcurtin>
zoq: I am working with the reuters dataset from the benchmark-data repo, but I can't seem to find the labels for the training set
< rcurtin>
there are reuters_train.csv, reuters_test.csv, and reuters_labels.csv (the labels for the test set); do you have the training labels anywhere?
< zoq>
rcurtin: hm, it's not the last row of the training set?
< rcurtin>
oh!
< rcurtin>
yep, exactly, sorry about that
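In code, the split zoq describes might look like the sketch below. The filename comes from the conversation; note that mlpack::data::Load() transposes CSVs by default, so this assumes, per zoq, that the labels end up as the last row of the loaded matrix:

    #include <mlpack/core.hpp>

    int main()
    {
      // Load the combined training file (mlpack stores points as columns).
      arma::mat train;
      mlpack::data::Load("reuters_train.csv", train, true /* fatal */);

      // Peel off the labels, then drop that row from the features.
      arma::Row<size_t> labels =
          arma::conv_to<arma::Row<size_t>>::from(train.row(train.n_rows - 1));
      train.shed_row(train.n_rows - 1);
    }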
trapz has quit [Quit: trapz]
trapz has joined #mlpack
< zoq>
If I clone the GitHub repo and configure a job that uses the CMake plugin (choosing no build type), it builds with "-g"?
< zoq>
I thought DEBUG=ON was the default setting for the master branch, so: set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g -O0 -ftemplate-backtrace-limit=0")
< zoq>
ah, it's not
< rcurtin>
this was changed some time back to default to DEBUG=OFF, after someone wrote a paper comparing against mlpack compiled with -g -O0 because they didn't know better :)
< rcurtin>
I can't remember the PR, maybe I can find it...