verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
wenhao has joined #mlpack
robertohueso has left #mlpack []
wenhao has quit [Ping timeout: 260 seconds]
eriat has joined #mlpack
eriat has quit [Client Quit]
vivekp has quit [Ping timeout: 256 seconds]
vivekp has joined #mlpack
govg has joined #mlpack
Sayan98 has joined #mlpack
Sayan98 has quit [Client Quit]
Prabhat-IIT has quit [Ping timeout: 260 seconds]
< luffy1996> zoq: I think bandit algorithms should be implemented in mlpack. Bandit algorithms give first-timers a real taste of reinforcement learning, so they would be great for the crowd using mlpack for machine learning. Moreover, going through scikit-learn I saw that reinforcement learning is not supported there; it would be great if mlpack supported it, because that would easily benefit newcomers to the field. Hence I
< luffy1996> believe it should be added to mlpack. I think I can complete the implementation of bandit algorithms by the end of March. At present I am busy with my semester exams; I will be free in a week, after which I will implement a simple multi-armed bandit algorithm in mlpack. What are your ideas?
< luffy1996> Moreover, multi-armed bandits can easily be run on a CPU, so I think we should implement this :)
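For reference, a minimal standalone sketch of the epsilon-greedy strategy being proposed here. None of this is existing mlpack API; the arm count, reward distributions, epsilon, and all names are illustrative.

    #include <algorithm>
    #include <iostream>
    #include <random>
    #include <vector>

    int main()
    {
      const size_t arms = 5;
      const double epsilon = 0.1;  // Exploration rate (illustrative).
      const size_t steps = 10000;

      std::mt19937 rng(42);
      std::uniform_real_distribution<double> unif(0.0, 1.0);
      std::uniform_int_distribution<size_t> randomArm(0, arms - 1);
      std::normal_distribution<double> noise(0.0, 1.0);

      // Hidden true mean reward of each arm (unknown to the agent).
      const std::vector<double> trueMeans = { 0.1, 0.3, 0.5, 0.7, 0.9 };

      std::vector<double> estimates(arms, 0.0);  // Running mean reward per arm.
      std::vector<size_t> pulls(arms, 0);

      for (size_t t = 0; t < steps; ++t)
      {
        // Explore with probability epsilon; otherwise exploit the best estimate.
        size_t arm;
        if (unif(rng) < epsilon)
          arm = randomArm(rng);
        else
          arm = std::max_element(estimates.begin(), estimates.end()) -
              estimates.begin();

        const double reward = trueMeans[arm] + noise(rng);

        // Incremental sample-mean update for the pulled arm.
        ++pulls[arm];
        estimates[arm] += (reward - estimates[arm]) / pulls[arm];
      }

      for (size_t a = 0; a < arms; ++a)
        std::cout << "arm " << a << ": estimate " << estimates[a]
                  << " (" << pulls[a] << " pulls)" << std::endl;
    }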
ImQ009 has joined #mlpack
kaushik_ has joined #mlpack
Trion has joined #mlpack
Trion has quit [Ping timeout: 240 seconds]
Trion has joined #mlpack
Trion_ has joined #mlpack
Trion has quit [Ping timeout: 245 seconds]
Ankit has joined #mlpack
Ankit has quit [Quit: Page closed]
AD_ has joined #mlpack
< dk97[m]> Hi there! Dakshit again.
< dk97[m]> zoq: rcurtin I see that AdaMax and Nadam are not implemented in mlpack. Is it alright if I implement these optimizers?
< dk97[m]> https://arxiv.org/pdf/1412.6980v8.pdf See Section 7; it mentions AdaMax.
< dk97[m]> It will help in expanding the variety of optimizers available to the mlpack community.
AD_ has quit [Quit: Page closed]
AD_ has joined #mlpack
Trion_ has quit [Remote host closed the connection]
Trion has joined #mlpack
Trion has quit [Ping timeout: 252 seconds]
govg has quit [Ping timeout: 240 seconds]
govg has joined #mlpack
govg is now known as Guest97828
Guest97828 has quit [Quit: leaving]
govg_ has joined #mlpack
ShikharJ has joined #mlpack
daivik has joined #mlpack
ShikharJ_ has joined #mlpack
ShikharJ has quit [Ping timeout: 260 seconds]
ShikharJ_ is now known as ShikharJ
Trion has joined #mlpack
ShikharJ has quit [Ping timeout: 260 seconds]
ShikharJ has joined #mlpack
manthan has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> yashsharan/models#3 (master - 260811d : yoloman): The build has errored.
travis-ci has left #mlpack []
ShikharJ has quit [Ping timeout: 260 seconds]
travis-ci has joined #mlpack
< travis-ci> yashsharan/models#4 (master - f9fa9ae : yoloman): The build has errored.
travis-ci has left #mlpack []
< zoq> dk97[m]: AdaMax and Nadam are already implemented: http://www.mlpack.org/docs/mlpack-git/doxygen/namespacemlpack_1_1optimization.html
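For reference, a toy sketch of pointing one of those existing optimizers at an objective. The header path, the constructor argument order, and the NumFunctions()/Shuffle()/Evaluate()/Gradient() interface that mlpack's SGD-style optimizers expect are taken from the mlpack-git docs of this period and may differ between versions; treat this as a sketch, and ToyFunction is of course hypothetical.

    #include <iostream>
    #include <mlpack/core.hpp>
    #include <mlpack/core/optimizers/adam/adam.hpp>

    using namespace mlpack::optimization;

    // A toy separable objective f(x) = sum_i (x - t_i)^2, whose minimizer is
    // the mean of the targets, written against the decomposable-function API.
    class ToyFunction
    {
     public:
      ToyFunction() : targets("1 2 3 4 5") { }

      size_t NumFunctions() const { return targets.n_elem; }
      void Shuffle() { targets = arma::shuffle(targets); }

      double Evaluate(const arma::mat& x, const size_t begin,
                      const size_t batchSize)
      {
        double sum = 0.0;
        for (size_t i = begin; i < begin + batchSize; ++i)
          sum += std::pow(x(0) - targets(i), 2.0);
        return sum;
      }

      void Gradient(const arma::mat& x, const size_t begin, arma::mat& gradient,
                    const size_t batchSize)
      {
        gradient.zeros(x.n_rows, x.n_cols);
        for (size_t i = begin; i < begin + batchSize; ++i)
          gradient(0) += 2.0 * (x(0) - targets(i));
      }

     private:
      arma::vec targets;
    };

    int main()
    {
      ToyFunction f;
      // (stepSize, batchSize, beta1, beta2, eps, maxIterations, tolerance,
      //  shuffle) -- check the adam.hpp header for the exact signature.
      AdaMax opt(0.01, 1, 0.9, 0.999, 1e-8, 100000, 1e-9, true);

      arma::mat x("0.0");  // Start at x = 0; the minimizer is 3.
      opt.Optimize(f, x);
      std::cout << "minimum at " << x(0) << std::endl;
    }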
< caladrius[m]> zoq: Hi! Is there any functionality in mlpack which lets the user have independent biases for different strides of convolution filters?
< zoq> caladrius[m]: Currently, only the standard convolution operator is implemented; I can't see a simple way to add this with a single line of code. So we would have to e.g. implement it as another layer, or integrate it into the convolution class.
< zoq> luffy1996: Do you have a specific bandit algorithm in mind? I think the selected algorithm should fill some niche, e.g. be faster than existing RL methods or more accurate. Best of luck with your exams.
Trion has quit [Remote host closed the connection]
pc14 has joined #mlpack
< caladrius[m]> zoq: There's a FlexibleReLU activation function that tackles this problem to some extent. Here's the paper: https://arxiv.org/pdf/1706.08098.pdf . It adds a learnable bias to the rectifier function and improves the performance of convolution layers.
< zoq> caladrius[m]: Oh nice, that would be easy to add.
< caladrius[m]> Yeah. Could I try it out then?
< zoq> Sure, please feel free.
< caladrius[m]> Thanks!
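For reference, a rough sketch of the math a FlexibleReLU layer needs, as described in that paper: f(x) = max(0, x) + b, with b learnable. This is not mlpack's actual layer interface; the class and method names are only illustrative.

    #include <algorithm>
    #include <mlpack/core.hpp>

    // Sketch of the FReLU computations: the forward pass adds a learnable
    // shift b to the rectifier, the backward pass masks the upstream gradient,
    // and the gradient of b is the sum of the upstream deltas.
    class FlexibleReLUSketch
    {
     public:
      explicit FlexibleReLUSketch(const double b = 0.0) : b(b) { }

      void Forward(const arma::mat& input, arma::mat& output) const
      {
        output = input;
        output.transform([this](double x) { return std::max(x, 0.0) + b; });
      }

      void Backward(const arma::mat& input, const arma::mat& gy,
                    arma::mat& g) const
      {
        // df/dx = 1 where x > 0, else 0.
        g = gy % arma::conv_to<arma::mat>::from(input > 0);
      }

      double Gradient(const arma::mat& gy) const
      {
        // df/db = 1 for every element, so the deltas just sum up.
        return arma::accu(gy);
      }

     private:
      double b;  // The learnable bias from the paper.
    };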
< pc14> Hi people, I am a prospective GSoC student. I was wondering if someone could point me to possible warm-up tasks. Thanks (https://drive.google.com/file/d/18oh0ssYriN3k9dWR3NAyuqq993EgoO4D/view)
pc14 has quit [Quit: Page closed]
< dk97[m]> thanks zoq, sorry I missed that
< dk97[m]> could you have a look at the PR I submitted? If the implementation is alright, I can write a few tests for it.
< zoq> dk97[m]: I'll take a look at the PR once I get a chance, but please be patient; there are a bunch of PRs open, so this might take some time, but we will respond as fast as possible.
< dk97[m]> sure no problem! :)
govg_ has quit [Quit: leaving]
ashish has joined #mlpack
Trion has joined #mlpack
Trion has quit []
govg has joined #mlpack
< manthan> I subscribed to the mlpack mailing list but didn't get the confirmation mail. Approximately how long will that take? Also, is there any other medium to introduce ourselves to the community?
daivik has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
daivik has joined #mlpack
daivik has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
daivik has joined #mlpack
daivik has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
daivik has joined #mlpack
daivik has quit [Client Quit]
daivik has joined #mlpack
daivik has quit [Client Quit]
ashish has quit [Ping timeout: 260 seconds]
govg has quit [Quit: leaving]
navabhi has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> yashsharan/models#6 (master - b715877 : yoloman): The build has errored.
travis-ci has left #mlpack []
moksh has joined #mlpack
< moksh> Hey @zoq, I am quite new to reinforcement learning, so as you suggested I was thinking of implementing the SARSA algorithm, since it is quite similar to Q-learning. Will that be fine?
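For reference, the core update being discussed, as a minimal tabular sketch; the Q table and the alpha/gamma parameters here are illustrative, not part of mlpack's RL interface.

    #include <armadillo>

    // Tabular SARSA update: on-policy, so it bootstraps from the action a2
    // actually taken in the next state s2. Q-learning would instead bootstrap
    // from the greedy action, i.e. use arma::max(Q.row(s2)) -- that is the
    // off-policy variant.
    void SarsaUpdate(arma::mat& Q, const size_t s, const size_t a,
                     const double r, const size_t s2, const size_t a2,
                     const double alpha, const double gamma)
    {
      Q(s, a) += alpha * (r + gamma * Q(s2, a2) - Q(s, a));
    }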
manthan has quit [Ping timeout: 260 seconds]
< dk97[m]> zoq: there is another layer proposed in the Self-Normalizing Neural Networks paper: alpha-dropout, a variant of dropout that they use. Is it alright if I implement that as a follow-up to my PR? Then a full SNN implementation would be possible.
< dk97[m]> rcurtin:
< luffy1996> zoq: When it comes to the implementation of bandits, I feel it will be fast since it will be coded in C++. The benefit I see is that it will make life easier for people trying to implement these algorithms using reinforcement learning. Apart from this, I see that various algorithms like SARSA, TD(lambda) (eligibility traces), and Monte Carlo methods are yet to be implemented in mlpack. Coming from a
< luffy1996> multi-agent reinforcement learning background, I feel that implementations of IQL and DDPG should also be present. The number of games defined under RL should also increase, to support diversity in playing games. One should also not forget the classic Bellman equations for solving MDPs. There is a lot in the RL literature that could be implemented in mlpack; the question is how we should prioritize it.
< luffy1996> Having said that, I do understand this requires time and effort. It would be great to know your thoughts on implementing any of these. My current plans for March include implementing bandit algorithms using RL and adding one or two games to the kitty. This will help users test algorithms in diverse environments. Thanks for wishing me luck; hope the semester goes well
< luffy1996> :)
< rcurtin> luffy1996: my take on implementing bandits as a project is that this isn't traditionally functionality that mlpack has supported
< rcurtin> it would be okay to add bandit algorithms
< rcurtin> but the key to any proposal would be that at the end of the summer we have a fully finished product that users can easily use
< rcurtin> this has been done successfully in the past, with e.g. the CF code
< rcurtin> but I would say that it is very important for us to provide complete functionality, so I might suggest that you spend your time planning your proposal carefully
< rcurtin> as opposed to quickly implementing bandit algorithms to be merged into mlpack
< rcurtin> dk97[m]: I don't have a problem with that so long as the alpha-dropout layer is useful outside the context of SNNs
witness has joined #mlpack
< dk97[m]> alpha-dropout was designed to be used with the SELU activation function, since it drops activations to a different default value (SELU's negative saturation value) so that the low mean and variance are preserved
< dk97[m]> rcurtin:
< dk97[m]> Although, since it is a variant of dropout, its usage outside of SNNs should not be a problem.
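For reference, a sketch of the alpha-dropout forward pass as described in that paper; this is not mlpack code, and the function name and default keep probability are illustrative. Dropped units are set to SELU's negative saturation value alphaPrime = -lambda * alpha, and an affine transform restores zero mean and unit variance.

    #include <cmath>
    #include <mlpack/core.hpp>

    // Alpha-dropout forward pass (training time), following the formulas in
    // the Self-Normalizing Neural Networks paper.
    void AlphaDropoutForward(const arma::mat& input, arma::mat& output,
                             const double keepProb = 0.95)
    {
      // SELU's negative saturation value, -lambda * alpha.
      const double alphaPrime = -1.0507009873554805 * 1.6732632423543772;

      // Affine correction a * x + b so mean and variance are preserved.
      const double a = std::pow(keepProb + alphaPrime * alphaPrime *
          keepProb * (1.0 - keepProb), -0.5);
      const double b = -a * (1.0 - keepProb) * alphaPrime;

      // Bernoulli keep-mask: 1 keeps the unit, 0 drops it to alphaPrime.
      const arma::mat mask = arma::conv_to<arma::mat>::from(
          arma::randu<arma::mat>(arma::size(input)) < keepProb);

      output = a * (input % mask + alphaPrime * (1.0 - mask)) + b;
    }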
daivik has joined #mlpack
< navabhi> Hi everyone !! I cloned the latest mlpack github repo and compiled it on my system in debug mode.
< navabhi> I was just going through the tutorials for different log levels in mlpack from http://www.mlpack.org/docs/mlpack-git/doxygen.php?doc=iodoc.html .
< navabhi> but on running the example, I did not see any Debug or Info message on my terminal.
daivik has quit [Client Quit]
< navabhi> I configured mlpack with -DDEBUG=ON, and compiled the above file as "g++ example.cpp -std=c++11 -lmlpack -g -rdynamic --verbose". Am I missing anything here?
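For anyone following along, a program along the lines of the iodoc tutorial example being discussed (the exact tutorial text differs). As comes out later in this conversation: Log::Info only prints when the binary is run with --verbose at runtime (a g++ --verbose flag only makes the compiler verbose), and Log::Debug only prints when the mlpack library itself was built with -DDEBUG=ON; otherwise Log::Debug is a NullOutStream.

    #include <mlpack/core.hpp>

    using namespace mlpack;

    int main(int argc, char** argv)
    {
      // Parses --verbose (among others); without this call, Log::Info stays
      // silent. (As noted below, this line can error without the surrounding
      // program boilerplate; see the iodoc tutorial.)
      CLI::ParseCommandLine(argc, argv);

      Log::Debug << "Only printed if mlpack was built with -DDEBUG=ON." << std::endl;
      Log::Info << "Only printed when run with --verbose." << std::endl;
      Log::Warn << "Always printed." << std::endl;
    }

    // Compile:  g++ example.cpp -std=c++11 -lmlpack -lboost_program_options
    // Run:      ./a.out --verbose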
navabhi has quit [Ping timeout: 260 seconds]
daivik has joined #mlpack
manthan has joined #mlpack
moksh has quit [Ping timeout: 260 seconds]
navabhi has joined #mlpack
manthan has quit [Ping timeout: 260 seconds]
< daivik> navabhi: after building, did you install mlpack by running make install?
< navabhi> yes, I did that too.
< daivik> Strange, there were no errors during the build? Are you sure you configured using -DDEBUG=ON and then ran both make and make install?
tarun has joined #mlpack
tarun has quit [Ping timeout: 260 seconds]
< navabhi> Yes, there were no errors. A couple of warnings though.
< navabhi> The CMake output from running 'cmake -DDEBUG=ON ../' can be read here: https://pastebin.com/jcHq5Git
< daivik> you are running the program with the --verbose flag?
< daivik> also, you should have got an error on the CLI::ParseCommandLine(argc, argv) line -- what did you do about that?
< navabhi> just commented it out; also tried --verbose flag but got the same output
< daivik> https://thepasteb.in/p/oYhlGqpjjpBHZ -- > try running this
< navabhi> I tried running your script, but still got the same output. (I also had to link boost to run the code)
< daivik> yes, did you run with the --verbose flag?
< daivik> you should at least get the INFO outputs
daivik has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
daivik has joined #mlpack
bob___ has joined #mlpack
< navabhi> yes, I used this command: 'g++ test_log.cpp -std=c++11 -lmlpack -lboost_program_options --verbose'
< navabhi> I had to pass the --verbose flag at runtime; sorry for the mistake. I am now getting the INFO logs
bob___ has quit [Client Quit]
< navabhi> The log of the make command can be accessed from https://pastebin.com/ybTHw4y4, if it helps
< daivik> I'm also not getting any debug outputs -- which is strange because I get debug outputs when I run tests.
< daivik> also, when I try something like mlpack::Log::Debug.ignoreInput = true; it tells me that Debug is a NullOutStream, which means that DEBUG was not set while compiling. This is probably an issue. rcurtin, zoq - could you help us out please?
< navabhi> well, I don't get debug outputs when running tests either, which is why I was trying this example program. I do get info outputs when setting log_level to all.
< daivik> I compiled with -DDEBUG=ON and -DTEST_VERBOSE=ON and I see DEBUG outputs for tests. Maybe you can try -DTEST_VERBOSE=ON; that might help
< daivik> sorry, but I have to go now... I'll look into this more tomorrow. Will let you know if I find something :)
daivik has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
ImQ009 has quit [Read error: Connection reset by peer]
kaushik_ has quit [Quit: Connection closed for inactivity]
navabhi has quit [Quit: Page closed]