verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
kdkw has joined #mlpack
< kdkw>
Hi. Although it seems pretty late now, I have a question regarding GSoC.
< kdkw>
Can we propose an idea of our own?
< rcurtin>
kdkw: sure, you are welcome to do that
< rcurtin>
amoudgl: sorry for the slow response; I'm not sure what the issue is
< kdkw>
So, I was thinking: why not provide an API for GANs (Generative Adversarial Networks)?
< kdkw>
GANs are a really hot topic of research in DL, and have pretty cool applications
< rcurtin>
I thought this was already a project?
< rcurtin>
I have certainly seen some mailing list discussion
< kdkw>
Uh oh
< rcurtin>
I certainly agree that GANs are a hot topic and could be useful to have implemented
< rcurtin>
hah :) yeah, I have not followed the discussion particularly closely
< rcurtin>
but I did see that it was being discussed
< kdkw>
So, over the past year, I have been involved in research on GANs
< rcurtin>
you can see the archives at lists.mlpack.org/pipermail/mlpack if I remembered the URL right
< kdkw>
That's why I thought it might be a good fit.
< kdkw>
I will check the mailing list discussion. Thanks @rcurtin for pointing that out :)
< rcurtin>
yeah, that sounds reasonable, that would definitely be a useful thing to discuss in the proposal
< rcurtin>
sure, I am in the car on a phone so I can't get too deep in discussion
kris1 has quit [Quit: Leaving.]
< kdkw>
Haha thank you so much!
vinayakvivek has quit [Quit: Connection closed for inactivity]
< kdkw>
Btw, I noticed that Marcus Edel is in charge of that project. Can I reach out to him directly over email?
< kdkw>
Or, I am thinking that sending an email to the mailing list might be better
amoudgl has quit [Quit: Connection closed for inactivity]
mikeling has joined #mlpack
< zoq>
kdkw: Yeah, please use the mailing list.
richukuttan has joined #mlpack
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#2203 (master - ae93d74 : Marcus Edel): The build was fixed.
< sagarbhathwar>
Uh oh! I synced my fork with mlpack and Travis is updating the build details here! How do I stop it?
ironstark has quit [Ping timeout: 260 seconds]
ironstark has joined #mlpack
ironstark has quit [Ping timeout: 240 seconds]
ironstark has joined #mlpack
bharath_ has joined #mlpack
sagarbhathwar has quit [Ping timeout: 260 seconds]
< bharath_>
Hello, I have an issue with running a test program with mlpack. I installed mlpack (2.0.1.0) with MacPorts, which installed all its dependencies. When I run a test program with g++, I get an error that the library for -larmadillo was not found. How can I include the Armadillo libraries when compiling my programs?
govg has quit [Ping timeout: 268 seconds]
vivekp has quit [Ping timeout: 260 seconds]
vivekp has joined #mlpack
govg has joined #mlpack
bharath_ has quit [Remote host closed the connection]
bharath_ has joined #mlpack
kdkw has quit [Quit: Page closed]
bharath_ has quit [Ping timeout: 268 seconds]
bharath_ has joined #mlpack
govg has quit [Remote host closed the connection]
govg has joined #mlpack
diehumblex has joined #mlpack
naxalpha has joined #mlpack
saurabh_ has joined #mlpack
< saurabh_>
Need guidance with this project: "Build testing with Docker and VMs". Can you elaborate a bit more on the implementation part of this project? I understand that we have to make different Docker containers to test the builds with different compilers and on different architectures; I have to set up an environment for automated building of Docker images (Dockerfiles); and we need a testing methodology for Windows and OS X.
bharath_ has quit [Remote host closed the connection]
bharath has joined #mlpack
vss has joined #mlpack
bharath has quit [Ping timeout: 256 seconds]
darkknight__ has joined #mlpack
trapz has joined #mlpack
naxalpha has quit []
srbh1 has joined #mlpack
saurabh_ has quit [Ping timeout: 260 seconds]
srbh1 has quit [Read error: Connection reset by peer]
govg has quit [Ping timeout: 268 seconds]
< cult->
your question doesn't make sense. read and learn more before you ask.
bharath has joined #mlpack
< zoq>
sagarbhathw: Either remove the .travis.yml file from your fork, or, if you want Travis to build your fork but not send notifications, remove the 'notifications:' section from the .travis.yml file.
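For context, zoq's second option might look like the following .travis.yml fragment (a minimal sketch, assuming a standard Travis CI config; the rest of the file stays as-is):

```yaml
# Keep Travis building the fork, but silence build notifications:
# either delete the whole 'notifications:' section, or disable email.
notifications:
  email: false
```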
< zoq>
cult: To whom are you referring?
< zoq>
bharath: If you want to run the tests, I would recommend using cmake & make instead of a handcrafted g++ command line; take a look at: http://www.mlpack.org/docs/mlpack-git/doxygen.php?doc=build.html for more information about how to build the test suite.
< zoq>
vss: Of course you can, once we come to a conclusion about how to do it, but we haven't figured that out yet. Maybe you'd like to add your thoughts to the issue?
< bharath>
zoq: Thanks for your response. I will try to build mlpack using cmake and make. But hasn't MacPorts already done that job by installing it? Do you think that method is not reliable?
< zoq>
bharath: I just glanced over the MacPorts file, and I think what it does is install the library/header files and the executables, but not the executable for the test suite. Unless you have something that is called mlpack_test?
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#2204 (master - 81e6957 : Ryan Curtin): The build passed.
< vss>
zoq: I saw the comments, and how some of them suggest creating an exception base class and then creating modules which inherit from that base class. I don't understand what's wrong with the current C++ exception handling library.
< zoq>
vss: That is a good question that you could ask in the issue, so that it can be addressed and discussed.
vinayakvivek has joined #mlpack
< zoq>
rcurtin: Realized that we first have to log in to GitHub to access Jenkins, so you can't see the GSoC blog, and the badge generation also failed. Not sure there is a non-hacky solution.
Trion has joined #mlpack
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#2205 (master - 2e1fe2d : Ryan Curtin): The build passed.
< rcurtin>
zoq: I am not a huge fan of Jenkins permissions, let me see what I can figure out...
< rcurtin>
ah, there we go: "grant read permissions for anonymous users"
< zoq>
rcurtin: great, works again
shikhar has joined #mlpack
bharath has quit [Remote host closed the connection]
bharath has joined #mlpack
bharath has quit [Ping timeout: 246 seconds]
sagarbhathwar has joined #mlpack
richukuttan has joined #mlpack
< richukuttan>
Hi, I am having a problem with the mlpack mailing list. I tried to send a mail via the list (to the address mlpack@cc.gatech.edu). However, the gmail daemon got the following response: 550 #5.1.0 Address rejected. Please help.
< rcurtin>
richukuttan: are you subscribed to the mailing list?
< richukuttan>
Yes, I subscribed today.
< sagarbhathwar>
I think the mailing list id is mlpack@lists.mlpack.org
< richukuttan>
I'll try mailing there. In the meantime, however, I'd like to point out that the mailing address at https://mailman.cc.gatech.edu/mailman/listinfo/mlpack and in the welcome mail to the list both mention the address above.
Trion1 has joined #mlpack
Trion has quit [Ping timeout: 256 seconds]
Trion1 is now known as Trion
< richukuttan>
@sagarbhathwar Is this the place where I subscribe for the mailing list: https://mailman.cc.gatech.edu/mailman/listinfo/mlpack? Because mlpack@lists.mlpack.org accepted my mail, but replied that I am not subscribed.
< shikhar>
richukuttan: Check the "Promotions" tab in your Inbox, if using gmail
darkknight__ has quit [Ping timeout: 260 seconds]
Trion has quit [Quit: Have to go, see ya!]
sagarbhathwar has quit [Ping timeout: 260 seconds]
govg has joined #mlpack
bharath has joined #mlpack
< rcurtin>
richukuttan: I made a note on the cc.gatech.edu list, though I don't control it anymore
< rcurtin>
I'll ask those administrators to delete that list
< rcurtin>
I checked the welcome message of the mlpack@lists.mlpack.org list, I didn't see anything wrong there
< richukuttan>
rcurtin: No, at that point in time, I was still under the impression that cc.gatech.edu was the primary mailing list. I was talking about the welcome message of that list.
< rcurtin>
yep, I don't think I can change that list
< richukuttan>
rcurtin: Thanks, and sorry for the trouble.
< rcurtin>
no problem! it's good to know that list is still active so that I can fix it
shikhar has quit [Quit: Page closed]
< vss>
rcurtin: found a code snippet which modifies the backtrace and prints the filenames with the line numbers
< vss>
should suffice
< rcurtin>
vss: did you look at the existing backtrace code?
< vss>
rcurtin: nope. But if I have a look at it, we can use it to get the function line numbers (hopefully)
< rcurtin>
vss: take a look at the existing backtrace code first
< vss>
rcurtin: looking at it now
vss has left #mlpack []
mikeling has quit [Quit: Connection closed for inactivity]
richukuttan has quit [Quit: Page closed]
bharath has quit [Remote host closed the connection]
bharath has joined #mlpack
bharath has quit [Ping timeout: 240 seconds]
vinayakvivek has quit [Quit: Connection closed for inactivity]
richukuttan has joined #mlpack
travis-ci has joined #mlpack
< travis-ci>
mlpack/mlpack#2207 (master - 86602fd : Marcus Edel): The build passed.
< richukuttan>
Hi, I have sent a mail to the mailing list, proposing a new topic for GSoC. Please read through it and give some feedback when you are free. Thank you.
< richukuttan>
zoq: That's funny, did you receive the mail I sent about 5 hours back, about the edit rights? Because I can't seem to find that one there either.
< richukuttan>
zoq: The mailing address is mlpack@lists.mlpack.org, correct?
< zoq>
richukuttan: Right, you need to subscribe first.
< richukuttan>
Ah, I forgot to confirm my subscription. Sorry. Will send the mail once more.
< richukuttan>
Have sent the mail. Please look through it.
< zoq>
richukuttan: I really like the Neural Programmer-Interpreter idea, because it is able to learn sub-procedures. Unfortunately, it requires strong supervision in the form of execution traces, so the number of "datasets" is kinda limited.
< richukuttan>
zoq: from what I understood, the execution traces are only the output. For example, if we have a program which does 1 digit addition, and we need a program that does multiple digit addition, the only input required is a dataset with 2 variables and its sum (and maybe the carry), at least from what I understood. It would try to use the initial program and come up with an execution trace of its own. Am I wrong? In such a case, do w
< richukuttan>
zoq: Because it is almost impossible to get hundreds of different execution traces for the program to "learn" from it. Then again, an execution trace is pretty static, i.e, if addition is taken as a training example, it makes no sense to use it as a testing example too. It saves all traces, (read the part where adding a new program is discussed), so we cannot expect a less than perfect output if the same example is used for testi
< richukuttan>
And this would be the first neural network I have seen that saves its training data (if execution trace is taken as training data)
< zoq>
richukuttan: Your last messages got cut off.
< richukuttan>
zoq: Because it is almost impossible to get hundreds of different execution traces for the program to "learn" from it. Then again, an execution trace is pretty static.
< richukuttan>
i.e., if addition is taken as a training example, it makes no sense to use it as a testing example too. It saves all traces (read the part where adding a new program is discussed), so we cannot expect a less than perfect output if the same example is used for testing and training.
< richukuttan>
And this would be the first neural network I have seen that saves its training data (if execution trace is taken as training data)
< richukuttan>
And I have seen a Python implementation of the program: https://github.com/mokemokechicken/keras_npi Though I have not analyzed it yet, from what I understood, create_training_data.py is quite a simple program.
< richukuttan>
The cut off message : Am I wrong? In such a case, do we need strong supervision?
< zoq>
I agree, it's simple for some problems, but as I said, you have to create some synthetic execution traces to train the model (strong supervision). Take a look at the bubble sort example. The HAM, NTM, or DNC models are different, since you can train them on simple input-output pairs.
< zoq>
If I remember right, there is another paper that adds recursion; I can't remember the name of the paper.
< richukuttan>
zoq: In the example of bubble sort, I will create training data for the bubble and reset commands. I may also need to create simple C programs (e.g., swap a b) which act as the base programs. But again, the creation of synthetic traces as such does not enter the equation.
< richukuttan>
If you can find a way to induce recursion, it will be great, of course, but I think it is doable without it too.
< richukuttan>
Even for the car program, given basic action programs (like act(left) and so on), the network should be able to work without synthetic traces, I believe.
< richukuttan>
Creating a few "basic programs" should not be that difficult, I think.
< zoq>
As I said, writing some simple routines to generate synthetic execution traces isn't a problem if you can break down the problem into some subroutines. But it's easier if you don't have to do this, right? How would you train the model on a speech recognition task?
< richukuttan>
True, but I believe that the main aim of the paper is to create a program that can do work in a "human-like" process. For example, we can create programs that do speech recognition even now. But the output of an NPI would be a human-readable trace, unlike a normal neural network, which outputs a set of unreadable weights.
< richukuttan>
For this, we inherently need subroutines. If the program can create subroutines by itself, it is great. Otherwise, we need to provide subroutines.
< richukuttan>
This is like how we would teach a human. For speech recognition, we would first train a human to understand letters (creating 26 new subroutines), then teach him to weave them into words. This is what an NPI tries to emulate.
< richukuttan>
Of course, to save time, I proposed creating the subroutines synthetically, but I believe that most subroutines can be created by training with subroutine-specific training sets.
< richukuttan>
The advantage of this is that once we create a set of basic programs, as we create more difficult programs, the NPI learns it "on top" of older subroutines, which means that catastrophic forgetting may be avoided.
< richukuttan>
For example, the same program that creates sentences out of words may be used for speech recognition and predicting the next word of an incomplete sentence (maybe).
< richukuttan>
Soon, we would have a wealth of smaller subprograms, and any new task can be learned much quicker.
< richukuttan>
zoq: Gimme a moment
richukuttan has left #mlpack []
richukuttan has joined #mlpack
< zoq>
Don't get me wrong, the NPI idea is great, but more from a theoretical perspective. Do you think someone would use the model if he could just use something like DNC? Where we don't have to come up with subroutines to solve some problem, where the model is able to figure out how to solve the problem on its own?
< richukuttan>
For example, we can make the multiplication program learn human-friendly multiplication (from a single digit multiplier program, and the adder program), then create an exponential program on top of it.
< richukuttan>
Yes, if we need only the single program, something like DNC may be the best. This can be used to create a new package, for example, from the bottom up. Theoretically, if this program knew all the basic C commands, it may be able to create any C program by itself, if I did not read it wrong.
< zoq>
Yes, that's right. Actually there is another paper that is related to the idea, I guess DeepCoder or something like that?
< richukuttan>
And this is where NPI becomes powerful: as the number of subprograms it knows increases, the potential to create new programs increases exponentially.
< richukuttan>
Let me read that.
< zoq>
Anyway, I like the idea, and if you like to switch your proposal in that direction, please go ahead and do that.
< richukuttan>
Thanks. Would you be able to remain the mentor for the new proposal, or should I find another one?
< richukuttan>
Also, if I feel it is related enough, can I tie this idea to the Deepcoder paper you suggested?
< richukuttan>
And finally, do you think it is realistically possible to finish such a project within the timeline of the GSoC?
< zoq>
I think I can mentor the project; if anybody else is interested in mentoring the project, let's collaborate.
< zoq>
I think so, yes.
< zoq>
Also if you go for the idea, make sure to take a look at: "Making Neural Programming Architecture Generalize Via Recursion"
< richukuttan>
Thanks, then by tomorrow, I will redo the proposal. Unless you would like me to send a new one for this idea?