verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
ray_li has quit [Remote host closed the connection]
ray_li has joined #mlpack
ray_li has quit [Remote host closed the connection]
aashay has joined #mlpack
ray_li has joined #mlpack
rajeshdm9 has joined #mlpack
ray_li has quit [Client Quit]
ray_li has joined #mlpack
ray_li has quit [Client Quit]
rajeshdm9 has quit [Ping timeout: 260 seconds]
ray_li has joined #mlpack
ray_li has quit [Client Quit]
ray_li has joined #mlpack
ray_li has quit [Client Quit]
daivik has joined #mlpack
Ashutosh77755 has joined #mlpack
Ashutosh77755 has quit [Client Quit]
manan has joined #mlpack
aashay has quit [Quit: Connection closed for inactivity]
ImQ009 has joined #mlpack
witness has joined #mlpack
rehas has quit [Ping timeout: 240 seconds]
alsc has joined #mlpack
nav_ has joined #mlpack
nav_ has quit [Quit: Page closed]
manan has quit [Ping timeout: 260 seconds]
avantikasingh has joined #mlpack
< avantikasingh>
Hey everyone! I am new to GSoC and interested in the Reinforcement Learning project. Can I get further details on this project? Or maybe some bugs or issues to solve, anything to start with!
avantikasingh has quit [Quit: Page closed]
avantikasingh has joined #mlpack
< zoq>
avantikasingh: Hello, the first step would be to look into the existing code in src/mlpack/methods/reinforcement_learning/. Shangtong wrote weekly updates that can be found here: http://www.mlpack.org/gsocblog/ShangtongZhangPage.html; those should be helpful in the process. Also, please take a look at the tests rl_components_test.cpp and q_learning_test.cpp; you can run each with 'bin/mlpack_test -t
< zoq>
RLComponentsTest' and 'bin/mlpack_test -t QLearningTest'.
< zoq>
avantikasingh: If you like, you can work on (stochastic) policy gradients as a small project to get familiar with the codebase, but don't feel obligated.
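For reference, those test commands assume an out-of-source CMake build of the master branch; the rough sequence (directory names illustrative) is:

    cd mlpack
    mkdir build && cd build
    cmake ../
    make mlpack_test
    bin/mlpack_test -t RLComponentsTest
    bin/mlpack_test -t QLearningTest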
sskhrnwbie has joined #mlpack
< sskhrnwbie>
@zoq I re-implemented the tests for the variance scaling initializer, based on the Gaussian initialization and OIVS initialization tests.
< sskhrnwbie>
The build passed on my end, and Travis CI passed all tests except the python_bindings test.
< sskhrnwbie>
Feel free to review when you get the chance :-)
< zoq>
sskhrnwbie: Okay, great. I'll take a look at the PR once I have a chance and leave comments.
sskhrnwbie has quit [Ping timeout: 260 seconds]
alsc has quit [Quit: alsc]
avantikasingh has quit [Ping timeout: 260 seconds]
kaushik_ has joined #mlpack
ray_li has joined #mlpack
ray_li has quit [Client Quit]
ray_li has joined #mlpack
ray_li has quit [Client Quit]
rajeshdm9 has joined #mlpack
alsc has joined #mlpack
alsc has quit [Client Quit]
witness has quit [Quit: Connection closed for inactivity]
rgesgs has joined #mlpack
rgesgs has quit [Client Quit]
rehas has joined #mlpack
alsc has joined #mlpack
desai-aditya has joined #mlpack
< desai-aditya>
Hello everyone. I am new to open source and GSoC. I am quite passionate about ML and intend to contribute here in the long term too. I am interested in working on essential deep learning models. I also have a suggestion for a new algorithm that I wish to implement.
< zoq>
desai-aditya: Hello there, the models on the ideas page are just suggestions, so if you have something interesting in mind, please feel free to propose it.
< desai-aditya>
I recently went on an academic internship to NTU, Singapore, where I learnt about Extreme Learning Machines. Apparently they aren't very widely used, but have better accuracy than neural nets in general.
< desai-aditya>
As far as I know, they basically learn as they go, but only if any new info is present in the new data. They modify themselves (add new nodes or new layers) if they see new info, and at the same time ensure that they still function properly on the old data.
< desai-aditya>
I wish to implement this and am ready to learn whatever it takes. I may not know much right now, but I am a fast learner given the resources, which the internet obviously has.
< desai-aditya>
I am quite eager to meet new people and learn many new things in this journey (that of GSoC and beyond).
< zoq>
hm, "better accuracy than neural nets in general", on which task? I thought ELM's are related to SVM's, so single layer?
< desai-aditya>
Classification tasks, I presume. Also, they modify themselves, so not really single layer. Relation to SVMs: I'll have to check.
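For context, the basic (non-evolving) ELM is a single hidden layer whose input weights are random and never trained; only the output weights are fit, by least squares. A minimal Armadillo sketch (TrainELM is a hypothetical helper; names and signatures are illustrative, not mlpack API):

    #include <armadillo>

    // Basic ELM training: X is d x n (one column per sample), Y is k x n.
    // The hidden weights W and biases b are drawn at random and never
    // updated; the output weights are the least-squares solution of
    // beta * H = Y, obtained via the pseudoinverse of H.
    arma::mat TrainELM(const arma::mat& X, const arma::mat& Y,
                       const size_t hiddenSize, arma::mat& W, arma::vec& b)
    {
      W.randn(hiddenSize, X.n_rows);
      b.randn(hiddenSize);

      arma::mat H = W * X;             // Hidden pre-activations.
      H.each_col() += b;
      H = 1.0 / (1.0 + arma::exp(-H)); // Sigmoid activations.

      return Y * arma::pinv(H);        // Output weights beta (k x hiddenSize).
    }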
sshkhrnwbie has joined #mlpack
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 240 seconds]
sshkhrnwbie has quit [Client Quit]
< zoq>
I think they could be much faster than traditional gradient-based models, since they are much smaller; not sure.
< zoq>
I guess you are talking about evolutionary ELMs, right? I think the standard model does not evolve.
sshkhrnwbie has joined #mlpack
< zoq>
There is something similar for deep learning called Neuroevolution.
sumedhghaisas2 has quit [Ping timeout: 256 seconds]
sumedhghaisas has joined #mlpack
< desai-aditya>
As such, I do not know too much about ELMs. We had but a brief lecture on them. I asked around, and people didn't seem to use them much.
< zoq>
I see; I like the idea, and I guess that could be an interesting project. We should make sure there are tasks where ELMs outperform similar models (same number of parameters).
< desai-aditya>
Where do you suggest I start?
< desai-aditya>
For now, I am getting familiar with the code. I was hoping maybe you could guide me a bit.
< sshkhrnwbie>
Does mlpack support the variations of convolutions used in different architectures, like fractionally strided convolutions, dilated convolutions, and depthwise and spatially separable convolutions?
< zoq>
desai-aditya: Maybe we can find some papers that run experiments comparing ELMs against similar models.
< zoq>
desai-aditya: If you like, you can test some models on interesting datasets, but don't feel obligated.
< zoq>
sshkhrnwbie: Currently no support on that front.
sumedhghaisas has quit [Read error: Connection reset by peer]
< desai-aditya>
zoq: Thank you so much. I will get it done and get back asap.
sumedhghaisas has joined #mlpack
< zoq>
desai-aditya: Here to help; take all the time you need, there is plenty of time left.
sumedhghaisas2 has joined #mlpack
< sshkhrnwbie>
@zoq : Are these convolution variants desirable? If yes, I can open an issue to help attract contributors.
< sshkhrnwbie>
Also, I saw that some activations like ELU and hard tanh are in the layers part of the ANN module, while others like ReLU and tanh are in activation functions. Is this a purposeful design choice, in that derived activations are placed in layers?
< desai-aditya>
zoq: Time, however much, is always little, I believe, so I'll be quick and get it all done by tomorrow at the latest.
sumedhghaisas has quit [Ping timeout: 256 seconds]
< daivik>
rcurtin: Thanks for reviewing the PR I opened for the mlpack_hmm_train CLI binding. I do require some clarification on a few of the comments, so could you please check back on the PR when you have time? In the meantime, I want to go back and look at the serialization issue we were facing with Boost v1.58 (ref. IRC logs from 6th and 7th Feb; sorry it
< daivik>
took me a while to get to it, but I do want to solve it). You referred me to PR 1229 (the Boost serialization issue in v1.66, now merged); unfortunately, that did not solve the problem in v1.58 (running mlpack_test still gives 3 errors, all related to serialization). Will keep you posted on what I find.
alsc has quit [Quit: alsc]
sumedhghaisas has joined #mlpack
sumedhghaisas2 has quit [Ping timeout: 256 seconds]
< zoq>
sshkhrnwbie: Desirable, yes, but I don't think this is a good entrance task, since ideally we should end up with a fast implementation, which takes a lot of time. If someone would like to go for it that's fine, but I don't want to force it. What do you think?
< zoq>
sshkhrnwbie: The idea is to implement a specific layer whenever there is a need for an additional parameter, like the scale parameter of the ELU function.
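Roughly, the split zoq describes looks like this (a simplified sketch of the pattern, not the exact mlpack signatures): parameter-free activations are stateless function classes, while anything carrying a parameter becomes its own layer.

    #include <armadillo>
    #include <cmath>

    // Parameter-free activation: a stateless function class that a generic
    // base layer can wrap (the tanh/ReLU pattern).
    class TanhFunction
    {
     public:
      static double Fn(const double x) { return std::tanh(x); }
      static double Deriv(const double y) { return 1.0 - y * y; }
    };

    // Parameterized activation: implemented as its own layer so the extra
    // parameter (here the ELU scale alpha) has somewhere to live.
    class ELULayer
    {
     public:
      explicit ELULayer(const double alpha) : alpha(alpha) { }

      void Forward(const arma::mat& input, arma::mat& output)
      {
        output = input;
        output.transform([this](const double x)
            { return x >= 0.0 ? x : alpha * (std::exp(x) - 1.0); });
      }

     private:
      double alpha;
    };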
< sshkhrnwbie>
@zoq I agree that it's not a good entrance task. Maybe you can ask whoever works on the DL modules to have a look at it over the summer.
< zoq>
I think implementing a bunch of convolution methods could be a project on its own :)
< sshkhrnwbie>
Yes, it could indeed! Now that I think about it, the tasks are not going to be trivial at all.
< desai-aditya>
sshkhrnwbie: Are you talking about implementing CNNs?
< sshkhrnwbie>
@zoq : There are some rectifier variants like ReLU6, randomized ReLU, thresholded ReLU, scaled ELU, etc. implemented in PyTorch, TF, etc., which have shown good results in comparison papers. Seeing that they are used in most DL packages, I thought I would open an issue asking people to contribute (or contribute myself), with links to papers and other implementations.
< sshkhrnwbie>
@desai-aditya : I was more specifically referring to the different convolution approaches, like dilated, atrous, separable, etc. They are used for designing CNNs for different tasks like image segmentation, and also in modern classification architectures like Inception and Xception.
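As a rough illustration of the dilated variant mentioned above: dilation simply spaces out the kernel taps, enlarging the receptive field without adding parameters. A naive single-channel sketch (illustrative only, not mlpack's convolution API):

    #include <armadillo>

    // Naive valid-mode 2-D convolution with a dilation factor; dilation = 1
    // reduces to ordinary convolution.
    arma::mat DilatedConv2D(const arma::mat& input, const arma::mat& kernel,
                            const size_t dilation)
    {
      // Effective kernel extent once the taps are spaced out.
      const size_t effRows = (kernel.n_rows - 1) * dilation + 1;
      const size_t effCols = (kernel.n_cols - 1) * dilation + 1;
      arma::mat output(input.n_rows - effRows + 1,
                       input.n_cols - effCols + 1, arma::fill::zeros);

      for (size_t r = 0; r < output.n_rows; ++r)
        for (size_t c = 0; c < output.n_cols; ++c)
          for (size_t i = 0; i < kernel.n_rows; ++i)
            for (size_t j = 0; j < kernel.n_cols; ++j)
              output(r, c) += kernel(i, j) *
                  input(r + i * dilation, c + j * dilation);

      return output;
    }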
< desai-aditya>
@sshkhrnwbie : I see. I'll have to read up on it.
< zoq>
sshkhrnwbie: Scaled ELU, isn't that ELU * Const so ELU followed by a Const layer?
< zoq>
sshkhrnwbie: Randomized ReLU could be an interesting alternative to dropout; not sure, maybe it's equivalent.
< zoq>
sshkhrnwbie: Is ReLU6 really commonly used? I thought it's only useful for fixed-point inference?
< zoq>
sshkhrnwbie: I'm not against adding anything you have mentioned, but we should first clarify whether there is a benefit.
ray_li_ has joined #mlpack
ray_li_ has quit [Remote host closed the connection]
< sshkhrnwbie>
@zoq : Their immediate benefit wasn't apparent to me either. In particular, thresholded ReLU and ReLU6 are simple variants of ReLU, and SELU is exactly what you are saying. That's why I thought it would be best to ask over IRC.
< zoq>
sshkhrnwbie: If you like and have the time, I would say let's take a look at randomized ReLU.
< zoq>
sshkhrnwbie: It might be a good candidate to get started with.
< sshkhrnwbie>
@zoq : This paper, https://arxiv.org/pdf/1505.00853.pdf, reports randomized ReLU doing better than the other variants. Sure, I will have a look at it.
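From that paper, randomized (leaky) ReLU samples the negative-side slope per element during training and uses the fixed mean slope at test time. A rough Armadillo sketch of the forward pass (function name, signature, and defaults are illustrative, not mlpack code):

    #include <armadillo>

    // Randomized ReLU forward pass (after Xu et al., arXiv:1505.00853):
    // negative inputs are scaled by a slope drawn from U(lower, upper)
    // during training, and by the fixed mean (lower + upper) / 2 at test
    // time. The sampled slopes are kept for the backward pass.
    void RReLUForward(const arma::mat& input, arma::mat& output,
                      arma::mat& slopes, const bool training,
                      const double lower = 1.0 / 8.0,
                      const double upper = 1.0 / 3.0)
    {
      if (training)
      {
        slopes = lower + (upper - lower) *
            arma::randu<arma::mat>(arma::size(input));
      }
      else
      {
        slopes.set_size(arma::size(input));
        slopes.fill((lower + upper) / 2.0);
      }

      output = input;
      const arma::uvec negative = arma::find(input < 0.0);
      output.elem(negative) %= slopes.elem(negative);
    }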
< sshkhrnwbie>
Did you happen to get a chance to go over the variance scaling PR? I am not sure why the python_bindings test is failing.
< zoq>
The Python issue isn't related to your code; I'll take a closer look at the code later today or tomorrow.
sshkhrnwbie_ has joined #mlpack
< sshkhrnwbie_>
Oh okay, I wasn't sure about that. Thanks!
s1998 has quit [Read error: Connection reset by peer]
< zoq>
rcurtin: Oh okay, in this case I agree we should avoid ELMs.
< rcurtin>
daivik: sounds good, let me know if I can help out. and sure, I will respond to the PR hopefully later this morning
< rcurtin>
zoq: yeah, I am not sure; maybe aditya can implement ELMs and show good results, but at the very least we would need to test thoroughly before accepting them, to make sure we can easily reproduce the results shown in ELM papers
< zoq>
rcurtin: Agreed; I'm not sure the results will hold up, and I don't think they are better in general.
< rajeshdm9>
@zoq Hey, I went through Shangtong's blog posts, and they gave a very good idea of what is expected from a GSoC student over the summer. Also, I've been going through the codebase to get a better understanding of the whole structure.
< rajeshdm9>
I was trying to run the tests you had suggested, but ran into some problems. The latest stable version, 2.2.5, does not seem to have the reinforcement learning module included, so I cloned the master repository to run the RL tests.
< desai-aditya>
I did not know about the controversy. It certainly discourages me from implementing ELMs, but one thought remains: are they actually better? I see no other way to find out than implementing them. But obviously it would be better if I invested time in something that is guaranteed to produce results. What do you think?
< rcurtin>
desai-aditya: sure, I agree, that is the best way to learn :)
< rajeshdm9>
@zoq But I am encountering the following error during installation:
make[3]: *** No rule to make target `/usr/lib/libarmadillo.so', needed by `lib/libmlpack.so.2.0'. Stop.
make[2]: *** [src/mlpack/CMakeFiles/mlpack.dir/all] Error 2
make[1]: *** [src/mlpack/tests/CMakeFiles/mlpack_test.dir/rule] Error 2
make: *** [mlpack_test] Error 2
< rcurtin>
if we can't get good results with ELMs though, it's likely that we should avoid merging them
< rajeshdm9>
Any pointers on where I could be going wrong?
< rcurtin>
rajeshdm9: did you remove/reinstall/replace Armadillo after you configured your build with cmake?
< rcurtin>
this typically happens when CMake had previously found the Armadillo library at /usr/lib/libarmadillo.so but now that file no longer exists
< rcurtin>
my suggestion would be to remove the build directory, make a new one, and reconfigure with CMake... that is likely to solve the issue
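In other words, something along these lines from the mlpack source root (directory names illustrative):

    rm -rf build
    mkdir build && cd build
    cmake ../
    make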
< rajeshdm9>
oh ok... I'll try that
< rajeshdm9>
thank you
< rcurtin>
sure, I hope it helps; let me know if not
< desai-aditya>
@rcurtin : Would it be a good idea to propose that (ELMs) for GSoC?
< sumedhghaisas>
Hi all. I have just added a new project for GSoC: Variational Autoencoders. The description is given on the ideas page. Feel free to ask any questions.
< rcurtin>
desai-aditya: you could propose that, but personally I would want to see proof that ELMs can perform well, so I think it would be a lot of work to prepare a good proposal like that...
< desai-aditya>
@rcurtin : I want to see my work being used by people all around the world. Maybe I could propose a different idea for GSoC and do the ELMs after that. I have a fairly decent grasp of DL (implemented a k-layer network from scratch, but that was in Python). I plan to go into ML research for my Masters, so I am ready to accept any kind of project. What kind of project do you think would be suitable for me?
< desai-aditya>
Currently reading Deep Learning by Ian Goodfellow, and Bishop's Pattern Recognition too.
nonaon has joined #mlpack
nonaon has quit [Client Quit]
< rcurtin>
desai-aditya: sorry for the slow response. if you're looking to do a master's focusing on research in machine learning, then I would suggest a project that has more of a research component to it
< rcurtin>
for instance, some of the projects could result in a short workshop paper at the end of the summer. even some of the projects like "accelerate a machine learning algorithm" could turn into something like that, and you would become very familiar with the algorithm you had chosen
< rcurtin>
on the other hand, something like the string processing utilities project sounds like it would not be a great fit for you
< rcurtin>
at the end of the day, in that whole list of projects, you will have to pick what is most interesting to you (I can only help so much with that)
< rcurtin>
or alternatively you could propose another project, like you did with ELMs. unfortunately with the ELMs there is the controversy though :)
< desai-aditya>
@rcurtin : Would it be fine if I don't know much right now about the project I select, such as 'accelerating an algorithm', as long as I am willing to learn it fairly quickly? I would need guidance (pointers to resources, more specifically) from people like you who have been in the field for a long time.
< rcurtin>
desai-aditya: of course that is okay; remember that applications for students aren't even open for another month :)
< rcurtin>
we are not choosing the students that we accept today :)
< rcurtin>
I'm sure you can see that there are a lot of requests for help from mentors, so there is probably a limit to the amount of help you can get, but for any algorithm the first place to start will be reading any relevant papers
< rcurtin>
so that you can become familiar with the algorithm itself
< rcurtin>
then after that, diving into the mlpack implementation and understanding it, and seeing if you can make a minor speed improvement (or plot out a plan to speed it up) is probably a good way to go
< rcurtin>
of course, accelerating a machine learning algorithm is not the only project so if that is not captivating to you, you could always pick something else :)
< desai-aditya>
@rcurtin : I want to go all in as soon as possible. It's only recently that I got acquainted with open source, and this community is probably the most aligned with whatever I want to learn. I do not have much time if I am going to be learning a lot of different things (algorithms and detailed C++ features).
sumedhghaisas2 has joined #mlpack
robhueso has joined #mlpack
robhueso has quit [Client Quit]
robhueso has joined #mlpack
sumedhghaisas has quit [Ping timeout: 252 seconds]
< kaushik_>
rcurtin: hi, I was going through https://www.tensorflow.org/tutorials/word2vec. Basically, it talks about word2vec and some related models, like vector space models, n-grams, and skip-grams.
< kaushik_>
I am asking in the context of the "String Processing Utilities" project. Could you let me know if I am going in the correct direction?
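For what it's worth, the skip-gram half of word2vec boils down to turning a token stream into (center, context) pairs within a window; a tiny sketch of that step (SkipGrams is a hypothetical helper, nothing like this exists in mlpack yet):

    #include <algorithm>
    #include <cstddef>
    #include <string>
    #include <utility>
    #include <vector>

    // Generate skip-gram (center, context) training pairs: each token is
    // paired with every neighbour within `window` positions of it.
    std::vector<std::pair<std::string, std::string>> SkipGrams(
        const std::vector<std::string>& tokens, const size_t window)
    {
      std::vector<std::pair<std::string, std::string>> pairs;
      for (size_t i = 0; i < tokens.size(); ++i)
      {
        const size_t begin = (i >= window) ? i - window : 0;
        const size_t end = std::min(tokens.size(), i + window + 1);
        for (size_t j = begin; j < end; ++j)
        {
          if (j != i)
            pairs.emplace_back(tokens[i], tokens[j]);
        }
      }
      return pairs;
    }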
< robhueso>
Hi, I'm interested in contributing to mlpack during GSoC 2018; where would you recommend I start? I'm familiar with C++, but not with Boost/Armadillo.
< Manish7294>
rcurtin: looks like the MVU implementation fell behind over time with respect to the current LRSDP implementation.
< Manish7294>
rcurtin: I think it needs a total remodeling.
desai-aditya has quit [Ping timeout: 260 seconds]
ShikharJ has quit [Quit: Page closed]
< rcurtin>
kaushik_: word2vec would be a nice thing to have implemented, definitely
< rcurtin>
Manish7294: yeah, so the long story with the MVU implementation goes like this...
< rcurtin>
from about 2007-2009 Nick Vasiloglou wrote a couple of papers on using MVU with LRSDP (and his variation of MVU, "maximum furthest neighbors unfolding" or MFNU)
< rcurtin>
I became interested in getting the LRSDP+MVU into mlpack, so I worked with it for some time and implemented an early version of the LRSDP support that you see now
< rcurtin>
however, it turned out to be very difficult to get the LRSDP+MVU implementation to converge, even for simple datasets like the swiss roll dataset
< rcurtin>
at some point I ran out of time, and had to give up, but I have always thought it would be nice to have that working correctly
< rcurtin>
you may be right that it may need to be totally redone
< Manish7294>
rcurtin: It may seem weird, but I couldn't resist laughing at the way your MVU journey started. Sorry for that :). But I totally love the way you elaborated. It would definitely be nice if at some point MVU becomes a full-fledged part of mlpack.
< Manish7294>
Could you mention some papers where the LRSDP form of MVU is clearly stated? It would be nice if there were some references on the GSoC wiki page too; it would definitely benefit the aspirants who would like to take that up.
rajeshdm9 has quit [Ping timeout: 260 seconds]
< rcurtin>
ah, sure, I will update that shortly
< rcurtin>
I am not sure how clear the papers are, but I can at least provide some links... :)
< rcurtin>
Manish7294: ok, added some links; if you refresh the page there should be a handful of papers
< Manish7294>
rcurtin: Thanks for that. It will surely be helpful.
< Manish7294>
I think the current implementation is from Nick's paper. I will also try to get my hands on this one after my exams ^_^
< rcurtin>
no, I actually redid the current implementation... the original implementation was written by Nick, and I adapted it
< rcurtin>
if I go back far enough in time I bet I can find it... hang on...
sumedhghaisas2 has quit [Ping timeout: 240 seconds]
desai-aditya has joined #mlpack
sumedhghaisas has joined #mlpack
< zoq>
rcurtin: I can't include SVRG and SGD at the same time, since both use a separate NoDecay class (sgd/decay_policies/no_decay.hpp and svrg/no_decay.hpp) in the same namespace. I could just rename the SVRG NoDecay class, or I could add the method of the SVRG NoDecay class to the SGD NoDecay class. I would go with the second option; what do you think? I think if we agree on one option there is no need to
< zoq>
open a PR?
desai-aditya has quit [Ping timeout: 260 seconds]
< rcurtin>
I think either works fine; if you want to add it to the SGD NoDecay class, you should at least add a comment mentioning that overload is used by SVRG
< rcurtin>
if you want to commit directly I think it's fine, otherwise if you open a PR I'll basically immediately approve it once we see travis builds it ok (but I guess some of the jobs will probably fail :))
< zoq>
okay, let's go with the protocol on this one.
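A sketch of what the merged class might look like, with the comment rcurtin asked for (class shape and signatures are guesses for illustration, not the actual mlpack code):

    #include <cstddef>

    // Combined step-size decay policy that leaves the step size unchanged.
    class NoDecay
    {
     public:
      // Standard SGD interface.
      template<typename MatType>
      void Update(MatType& /* iterate */,
                  double& /* stepSize */,
                  const MatType& /* gradient */)
      { /* Nothing to do: the step size stays constant. */ }

      // This overload is used by the SVRG optimizer, which additionally
      // tracks the full gradient of the current snapshot.
      template<typename MatType>
      void Update(MatType& /* iterate */,
                  MatType& /* fullGradient */,
                  const MatType& /* gradient */,
                  const size_t /* numBatches */,
                  double& /* stepSize */)
      { /* Nothing to do here either. */ }
    };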
daivik has joined #mlpack
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas2 has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
desai-aditya has joined #mlpack
sumedhghaisas2 has joined #mlpack
sumedhghaisas has quit [Ping timeout: 256 seconds]
< rajeshdm9>
Hey guys. I installed the mlpack version that I cloned from the current git repo, as the stable version does not have the RL component included.
< rajeshdm9>
I was able to run the tests "bin/mlpack_test -t RLComponentsTest" and "bin/mlpack_test -t QLearningTest" when I built just the testing part using make mlpack_test.
< rajeshdm9>
But I am not able to install the complete package properly. Though make and make install were successful, I am getting the error: error while loading shared libraries: libmlpack.so.2: cannot open shared object file: No such file or directory
< rajeshdm9>
Any pointers on whether the problem is because of a wrong build, or whether I have some permission/path issues?
sumedhghaisas has joined #mlpack
< daivik>
rajeshdm9: you need to set the LD_LIBRARY_PATH variable to where libmlpack.so is located
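e.g., something like this (the path is illustrative; point it at your own build tree):

    export LD_LIBRARY_PATH=/path/to/mlpack-master/build/lib:$LD_LIBRARY_PATH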
sumedhghaisas2 has quit [Read error: Connection reset by peer]
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
< rajeshdm9>
@daivik I did exactly that ... I can even see it when I do an echo
< daivik>
hm... that's strange, it should work with that
AlishDipani has left #mlpack []
< zoq>
rajeshdm9: daivik is right, are you sure libmlpack.so.2 is in the path you exported?
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
< rajeshdm9>
I have that file in the directory where I built mlpack (mlpack-master/build/lib/libmlpack.so.2)... I don't have the folder /usr/include/mlpack... That's why I was wondering if there is some problem with the installation...
travis-ci has joined #mlpack
< travis-ci>
PlantsAndBuildings/mlpack#2 (hmm-cli-tests - 2996a65 : daivik): The build has errored.
< rajeshdm9>
ok, it worked after I rebuilt mlpack... I think it was the same problem as before: I had changed the version of Armadillo after running cmake, which rcurtin mentioned...
sumedhghaisas has quit [Read error: Connection reset by peer]
sumedhghaisas has joined #mlpack
< rajeshdm9>
@zoq - I have also been going through the codebase in the meantime. Could you let me know what to do next to be able to contribute to the RL project?
< rajeshdm9>
ok, I will try that and get back to you if I have any queries :)
< zoq>
Sounds good.
rajeshdm9 has quit [Quit: Page closed]
< desai-aditya>
After building and testing successfully per the build guide at http://www.mlpack.org/docs/mlpack-git/doxygen.php?doc=build.html, I tried compiling the sample covariance matrix code with the command 'gcc -std=c++11 covariance.cpp -o covariance -lmlpack'. It says: /usr/bin/ld: /tmp/cccVKJvB.o: undefined reference to symbol 'wrapper_ddot_' //usr/lib/libarmadillo.so.6: error adding symbols: DSO missing
< zoq>
desai-aditya: link against Armadillo: add -larmadillo
< desai-aditya>
@zoq : then this error occurs - /usr/bin/ld: /tmp/cci5jbAS.o: undefined reference to symbol '_ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEaSEPKc@@GLIBCXX_3.4.21'
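For what it's worth, that second mangled symbol comes from libstdc++: driving the link with gcc compiles the .cpp file as C++ but does not link in the C++ standard library. The usual fix is to link with g++ instead, e.g.:

    g++ -std=c++11 covariance.cpp -o covariance -lmlpack -larmadillo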