verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
< zoq>
palashahuja: 3. 'Denoise' the weights: in line 94 you set weights randomly to zero, which is fine, but we have to fall back to the original weights once we have calculated the gradient.
< zoq>
There is another problem that has nothing to do with your code; it's more about the way the optimizer works.
< zoq>
The main problem is that the drop connect layer acts differently in training mode and in prediction mode. The optimizer doesn't know that, so it's always in training mode.
< zoq>
Here comes the problem: SGD or RMSprop starts by evaluating the complete dataset, which results in an empty weight matrix, because every time Forward is called we set some weights to zero, right? So when we actually start to train the network, we start with an empty weight matrix.
chick_ has quit [Quit: Connection closed for inactivity]
< palashahuja>
zoq, what is the proposed solution for the empty weight matrix?
< palashahuja>
Should we try manipulating it somehow using layer traits ?
< zoq>
palashahuja: I modified the RMSprop optimizer to test the code. That's pretty easy, because all networks implement a third parameter for the Evaluation function, to set the state. But we can't just modify all optimizers to set the state. Maybe there is another solution ... I'll need to think about that
< zoq>
palashahuja: If you like I can send you the code, so that you can test everything.
< palashahuja>
zoq, Yes please send the gist if you'd like ..
< aacr>
almost done but have problem compiling lua-gd
< aacr>
here's the error
< aacr>
?
< aacr>
why can't i copy and paste
kirizaki has joined #mlpack
< aacr>
can not be used when making a shared object
< aacr>
recompile with -fpic
< aacr>
/tmp/cc1IqKXb.o: error adding symbols: Bad value
< aacr>
thanks!
< aacr>
/usr/bin/ld: /tmp/cc1IqKXb.o: relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC /tmp/cc1IqKXb.o: error adding symbols: Bad value collect2: error: ld returned 1 exit status Makefile:70: recipe for target 'gd.so' failed make: *** [gd.so] Error 1
< ranjan1234>
Log::Info where is the logfile stored ?
< kirizaki>
Log::Info / Warn / Fatal / Assert just gives You a message inside the program
< kirizaki>
it's not writing any log to a file
< kirizaki>
to see Log::Info You have to run Your program with the -v flag
< kirizaki>
aacr: for better readability, copy Your log to www.pastebin.com
< ranjan1234>
ohhk . got it ! thanks :)
< kirizaki>
aacr: and You should wait for #zoq, he's gonna help You with this for sure
< kirizaki>
#ranjan1234: happy to help You ;)
< ranjan1234>
I am running a test, and the test is calling a function which has Log::Info; when I run the test program with --verbose it is not showing the expected message
< ranjan1234>
#kirizaki ping
< kirizaki>
ok
< kirizaki>
because the test is not taking the -v flag
< aacr>
ok. thanks
< kirizaki>
it doesn't have CLI::ParseCommandLine
< kirizaki>
better use Log::Warn
< kirizaki>
in this situation
< kirizaki>
it will always be shown
< kirizaki>
"Log::Warn is always shown, and Log::Fatal will throw a std::runtime_error exception, when a newline is sent to it"
< kirizaki>
and moreover, the doc which I forwarded to You by link
< kirizaki>
is not updated; Log::Fatal also prints some sort of backtrace
< kirizaki>
but only if You build mlpack with debugging symbols: cmake -DDEBUG=ON ../
niks has joined #mlpack
< niks>
Hi! I am Nikhil and I read your projects list for GSoC 2016. I know C/C++ and am good at data structures. Please guide me so that I can contribute.
< aacr>
Hi! I'm a GSoC student too. Perhaps you can first have a look at this page
< kirizaki>
and if You have any questions You can ask via the mailing list, GitHub, or here ;)
< kirizaki>
"We don't respond instantly... but we will respond. Give it a few minutes. Or hours"
niks has quit [Quit: Page closed]
ranjan1234 has quit [Ping timeout: 252 seconds]
aacr has quit [Quit: Page closed]
rohitpatwa has joined #mlpack
rohitpatwa has quit [Ping timeout: 252 seconds]
rohitpatwa has joined #mlpack
rohitpatwa has quit [Ping timeout: 246 seconds]
mrbean has joined #mlpack
McCathy has quit [Ping timeout: 240 seconds]
mrbean1 has quit [Ping timeout: 240 seconds]
McCathy has joined #mlpack
McCathy has quit [Quit: Leaving]
wasiq has quit [Ping timeout: 268 seconds]
Nilabhra has joined #mlpack
FRossi has joined #mlpack
rebeka has joined #mlpack
< rebeka>
Hi! Anyone online? :)
< FRossi>
Hi! I'm Federico
< FRossi>
I am a university student and I want to apply in this project for GSOC
< zoq>
rebeka: Hello, we are always online :)
< zoq>
FRossi: You are welcome to do so.
< rebeka>
Hi. I'm Rebeka, I'm also interested to apply to mlpack for GSoC.
< rebeka>
I have read the list of ideas on the website, some of which I have already studied and worked on before. I just wanted to know more about some of the projects to help me decide which one to go for?
< rcurtin>
hi Rebeka, you can take a look through the mailing list archive... there is a lot of extra information about the projects before
< rcurtin>
er
< rcurtin>
there is a lot of extra information about the projects that has already been written
< FRossi>
I'm in the same situation. I have studied and worked with some of the methods included in the project. Now I'm installing mlpack
< FRossi>
Thanks for help!
rohitpatwa has joined #mlpack
< FRossi>
I have a compile-time error with the isnan and isinf macros, and I was looking for info about it in the GitHub issues but I didn't find anything
< FRossi>
Anyone know anything about its causes?
< rcurtin>
FRossi: I seem to remember a github issue about this at one point...
< rcurtin>
are you using the git master branch?
< FRossi>
sorry, I found it
< FRossi>
I will try that solution
< rcurtin>
yeah, I think it's fixed in the latest master branch
pkgupta has joined #mlpack
archange_ has joined #mlpack
ank_95_ has quit [Quit: Connection closed for inactivity]
ank_95_ has joined #mlpack
LimeTheCoder has joined #mlpack
rebeka has quit [Ping timeout: 252 seconds]
rohitpatwa has quit [Ping timeout: 244 seconds]
LimeTheCoder has quit [Ping timeout: 252 seconds]
tafodinho has joined #mlpack
< tafodinho>
hello everyone, I would love to work on the project to implement tree types; please, how do I get started?
anveshi has joined #mlpack
christie has joined #mlpack
cache-nez has joined #mlpack
< christie>
hi, I downloaded and built mlpack from source today. After installing, I tried some command-line executables mlpack provides, but I'm not able to run any of them
< christie>
it shows this error "error while loading shared libraries: libmlpack.so"
< christie>
although the libraries are there in the lib folder (inside the build folder)
< christie>
i mean the so files
< christie>
can anybody please help ?
< christie>
@zoq ?
< kirizaki>
hi
< kirizaki>
did You try to add the flag -lmlpack while compiling Your program?
< zoq>
tafodinho: There's been a lot of interest in the project to implement different types of trees. One good way to get started with that project might be to take a look at this other mailing list reply: https://mailman.cc.gatech.edu/pipermail/mlpack/2016-March/000760.html
< tafodinho>
zoq: please, so which other project can you advise me to try?
< rcurtin>
tafodinho: what other projects are you interested in?
< rcurtin>
realistically we can't tell you what the best project for you is, because that depends on your interests
< tafodinho>
ok, then I will take a look at other projects, because I based my interest on the implementation of different types of trees
< rcurtin>
it might be useful to browse the mailing list archive for more information; there are lots of emails exchanged there:
< rcurtin>
Rishabh: it's not just you, I need to fix the CSS
< rcurtin>
every time the doxygen version changes the stylesheets change...
mizari has left #mlpack []
christie has quit [Quit: Page closed]
rohitpatwa has joined #mlpack
Nilabhra has quit [Remote host closed the connection]
tafodinho has quit [Ping timeout: 240 seconds]
kirizaki has quit [Ping timeout: 244 seconds]
kirizaki has joined #mlpack
yvtheja_ has quit [Quit: Leaving]
yvtheja has joined #mlpack
< kirizaki>
rcurtin: with the mlpack::backtrace PR I changed the docs about "mlpack Input and Output", but it's still not updated on the website; could You check whether it is proper?
ach_ has quit [Quit: Connection closed for inactivity]
kalingeri has joined #mlpack
ranjan123 has joined #mlpack
vineet has joined #mlpack
< kalingeri>
Hi, I find the project on neuroevolution algorithms to be extremely interesting. I set up the NES emulator and understood how it interfaces with mlpack. I also ran an FFN on Iris and Kaggle data. I am going through the papers on the topic. Since the tickets related to this are already fixed, are there any warm-up tasks I can do? I found it a little hard initially to set up the neural network component compared to other modules, would it be a good idea to
< rcurtin>
kalingeri: your message was too long, it got cut off after "would it be a good idea to" :)
< kalingeri>
Oh :). I just wanted to know if it would be a good idea to write a tutorial on it with example code ?
< rcurtin>
I think that eventually when the ANN code is stable a tutorial will get written and added to the list of tutorials, but that wiki page should be helpful for now
Eloitor has joined #mlpack
< kalingeri>
Yes, I used the same page to set up a simple network as well. I was thinking of an example-centric one, but it's better I wait then. Any other tasks I can get my hands on?
< rcurtin>
did you take a look at the github issues list?
< rcurtin>
I think there are some issues marked "easy" that relate to the ANN code
< kalingeri>
I will spend my time there then, thanks :)
< rcurtin>
sure, glad I could help
palashahuja has joined #mlpack
< palashahuja>
zoq, hi
< palashahuja>
for dropconnect we could simply transfer the attributes to a temporary layer ..
tsathoggua has joined #mlpack
Stellar_Mind2 has joined #mlpack
< Stellar_Mind2>
Hi! I am a 4th-year undergraduate student at BITS Pilani, Goa campus, pursuing Electrical and Electronics Engineering. I am mighty interested in the neuroevolution algorithm project idea. I had planned to implement this over the summer anyway; the fact that it is a part of mlpack makes it sweeter. I was previously referring to this video and the links in the description: https://www.youtube.com/watch?v=qv6UVOQ0F44.
< Stellar_Mind2>
I have previously developed a high speed neural network classifier for classification and regression of online sequential data with incremental classes as an Intern at NTU Singapore. The results have been submitted to IEEE IJCNN 2016. I would like to contribute to this project, can anyone guide me on the best way to showcase my proficiency?
< zoq>
Stellar_Mind2: Hello, that sounds great. A good start would be to compile mlpack and explore the source code, especially the neural network code and the code to communicate with the emulator. You can find the code used to communicate with the NES emulator here: https://github.com/zoq/nes. Also take a look at the mailing list archive to get more information: https://mailman.cc.gatech.edu/pipermail/mlpack/2016-March/thread.html
pkgupta_ has joined #mlpack
pkgupta has quit [Ping timeout: 244 seconds]
pkgupta_ is now known as pkgupta
palashahuja has joined #mlpack
Eloitor has quit [Ping timeout: 252 seconds]
< zoq>
palashahuja: I'm not sure what you mean.
< palashahuja>
What I meant was to think of dropconnect as a layer
< palashahuja>
So my idea is to assign the weights attribute to the dropconnect class itself and not to baselayer
< palashahuja>
and so on and so forth
< zoq>
palashahuja: I'm not sure why we should do that, the code I sent you yesterday works fine.
< zoq>
palashahuja: Maybe I missed something?
< palashahuja>
zoq, for the Adam optimizer, are there any papers that you could recommend?
< palashahuja>
never mind I found it
< zoq>
palashahuja: The problem is that there is no way we can solve the optimizer problem inside the layer. I'll have to think about the problem; there is definitely a solution. What you could do is open a pull request using the code I sent you yesterday and test it with the modified optimizer.
< zoq>
palashahuja: The paper "Adam: A Method for Stochastic Optimization" by D. Kingma is pretty good.
< palashahuja>
or the idea that I suggested earlier ..
< rcurtin>
I found a really cool feature of the Boost Unit Test Framework today:
< Stellar_Mind2>
Hi @Zoq. I am on it! That NES emulator is really cool. Are you by any chance sethbling on Youtube? (Just some trivia I wanted to confirm)
vineet has quit [Ping timeout: 246 seconds]
< zoq>
palashahuja: You could do that, but it wouldn't solve the optimizer issue.
< zoq>
Let me try to explain the issue: once we call the Train(..) function, an optimizer, e.g. SGD or RMSprop, is invoked. So the first step of the optimizer is to calculate the initial objective, by calling the Evaluation function for each sample of the dataset.
< zoq>
The network implements that Evaluation function and runs the forward step for each sample.
< zoq>
And here comes the problem, once the Forward function of the DropConnect layer is called, it doesn't know in what state it is. It could be in the training state or in the predicting state.
< zoq>
So let's say state = train. We randomly set weights to zero, and we do that for all samples, which ends up in a matrix filled with zeros (not necessarily, but in most cases).
< palashahuja>
hmm .. okay
< zoq>
The only solution is to tell DropConnect the state of the current optimization process: prediction mode for the first n samples, training mode for the next x samples, and prediction mode for the last m samples.
< zoq>
Stellar_Mind2: No, sorry!
< Stellar_Mind2>
Wow, there are so many interesting projects in mlpack! Is there any priority list for the project ideas?