verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
Stellar_Mind2 has quit [Ping timeout: 276 seconds]
Stellar_Mind has joined #mlpack
kirizaki has quit [Quit: Konversation terminated!]
agobin has quit [Quit: Connection closed for inactivity]
wasiq has quit [Ping timeout: 276 seconds]
Stellar_Mind has quit [Ping timeout: 246 seconds]
cache-nez has quit [Ping timeout: 248 seconds]
wasiq has joined #mlpack
dhna has joined #mlpack
Stellar_Mind has joined #mlpack
Nilabhra has joined #mlpack
rohitpatwa has joined #mlpack
Peng_Xu has joined #mlpack
Peng_Xu has quit [Client Quit]
Peng_Xu has joined #mlpack
Stellar_Mind has quit [Ping timeout: 240 seconds]
Nilabhra has quit [Remote host closed the connection]
Stellar_Mind has joined #mlpack
Stellar_Mind has quit [Ping timeout: 244 seconds]
tsathoggua has joined #mlpack
tsathoggua has quit [Quit: Konversation terminated!]
rohitpatwa has quit [Ping timeout: 252 seconds]
manumeral has joined #mlpack
manumeral has quit [Client Quit]
anveshi has joined #mlpack
govg has quit [Ping timeout: 244 seconds]
Rishabh has quit [Ping timeout: 250 seconds]
Peng_Xu has quit [Quit: This computer has gone to sleep]
Awcrr has joined #mlpack
Peng_Xu has joined #mlpack
cache-nez has joined #mlpack
rcurtin has quit [Ping timeout: 268 seconds]
rcurtin has joined #mlpack
wasiq has quit [Ping timeout: 244 seconds]
Stellar_Mind has joined #mlpack
chiragbhatia72 has joined #mlpack
Stellar_Mind has quit [Ping timeout: 244 seconds]
Awcrr has quit [Quit: (see you)]
anveshi has quit [Quit: Page closed]
Nilabhra has joined #mlpack
Peng_Xu has quit [Quit: This computer has gone to sleep]
cache-nez has quit [Ping timeout: 246 seconds]
vasanth has joined #mlpack
Awcrr has joined #mlpack
Awcrr has quit [Remote host closed the connection]
Awcrr has joined #mlpack
Awcrr has quit [Client Quit]
vasanth has quit [Quit: Bye]
ranjan123 has joined #mlpack
Rishabh has joined #mlpack
Rishabh has quit [Remote host closed the connection]
Rishabh has joined #mlpack
Rishabh has quit [Ping timeout: 260 seconds]
Rishabh has joined #mlpack
dnisarg13 has joined #mlpack
< dnisarg13> Hello, how do I get started developing for mlpack?
Stellar_Mind has joined #mlpack
cache-nez has joined #mlpack
Stellar_Mind has quit [Ping timeout: 264 seconds]
dnisarg13 has quit [Ping timeout: 252 seconds]
dhna has quit [Quit: Page closed]
ranjan123 has quit [Ping timeout: 252 seconds]
cache-nez has quit [Ping timeout: 240 seconds]
agobin has joined #mlpack
kirizaki has joined #mlpack
dnisarg13 has joined #mlpack
dnisarg13 has quit [Ping timeout: 252 seconds]
cache-nez has joined #mlpack
cache-nez has quit [Ping timeout: 276 seconds]
chris___ has joined #mlpack
< chris___> hi, I built the NES setup today and installed fceux and all the other dependencies that were specified in the GitHub link (for the NES system)
< chris___> when I started fceux and loaded the super_mario_bros.lua program, I'm getting the following errors
< chris___> Lua thread bombed out: ./server.lua:8: module 'socket' not found: no field package.preload['socket'] no file './socket.lua' no file '/usr/share/lua/5.1/socket.lua' no file '/usr/share/lua/5.1/socket/init.lua' no file '/usr/lib/lua/5.1/socket.lua' no file '/usr/lib/lua/5.1/socket/init.lua' no file './socket.so' no file '/usr/lib/lua/5.1/socket.so' no file '/usr/lib/lua/5.1/loadall.so'
< chris___> I have Lua 5.1 and Lua 5.3 installed; there seem to be no files in the /usr/lib/lua/5.1 folder
< chris___> so I've copied files from the 5.3 folder and it doesn't seem to work
< chris___> anybody got any ideas?
< chris___> *copied files from /usr/lib/lua/5.3 to /usr/lib/lua/5.1
kirizaki has quit [Ping timeout: 260 seconds]
< zoq> chris___: I guess you are using luarocks to install 'luasocket'? You have to build luarocks with Lua 5.1.
< zoq> chris___: You can also build fceux against Lua 5.1.
< chris___> I used yaourt to install luasocket as I couldn't get it with pacman
< zoq> chris___: hm, so I guess the easiest solution is to remove Lua 5.3 and reinstall luasocket etc. Is that an option for you?
na1taneja2821 has joined #mlpack
< na1taneja2821> @rcurtin Hi, I have commented on issue #553 on GitHub. Please take a look and confirm so that I can go ahead with the implementation.
lokesh has joined #mlpack
ana__ has joined #mlpack
agobin has quit [Quit: Connection closed for inactivity]
lokesh has quit []
kirizaki has joined #mlpack
na1taneja2821 has quit [Quit: Page closed]
ana__ has quit [Ping timeout: 244 seconds]
rohitpatwa has joined #mlpack
mtr_ has joined #mlpack
mtr_ has quit [Ping timeout: 252 seconds]
Awcrr has joined #mlpack
agobin has joined #mlpack
ranjan123 has joined #mlpack
< ranjan123> hello rcurtin! How do I validate the performance of parallel SGD?
chick_ has joined #mlpack
wasiq has joined #mlpack
rohitpatwa has quit [Ping timeout: 240 seconds]
rohitpatwa has joined #mlpack
Awcrr has quit [Ping timeout: 248 seconds]
Awcrr has joined #mlpack
Awcrr has quit []
< rcurtin> ranjan123: in what sense do you want to validate the performance?
< rcurtin> are you trying to test that it's correct?
< ranjan123_> yes
< rcurtin> or are you trying to compare the speed of the optimizer with other optimizers?
< rcurtin> okay
< ranjan123_> no.
< rcurtin> then it would probably be a good idea to take a look at the other optimizer tests (like sgd_test.cpp, lbfgs_test.cpp) and write a similar test
< rcurtin> you could also look at, e.g., logistic_regression_test.cpp, and use your parallel SGD implementation to train a logistic regression model
< rcurtin> and then check that the trained model is similar to a model trained with another optimizer
< ranjan123_> I want to compare the speed of the optimizer
< ranjan123_> ok
< ranjan123_> I think I have to check it for logistic regression.
< rcurtin> if you want to compare the speed of the optimizer, your best bet is probably to write a standalone program that trains a model on a large dataset using your optimizer
< rcurtin> and then that trains a model on the same dataset with another optimizer
< rcurtin> you should run this many times with different random seeds, since SGD has a randomized component and may converge differently with each training seed
< ranjan123_> for testing with the Generalized Rosenbrock function, the number of iterations is given as zero
< ranjan123_> SGD<GeneralizedRosenbrockFunction> s(f, 0.001, 0, 1e-15, true);
< rcurtin> yes, that runs an unlimited number of iterations and instead terminates when the given tolerance (1e-15) is reached
< ranjan123_> oh, ok
< ranjan123_> I think that is one of the drawbacks of my algorithm.
< ranjan123_> Still, I can try to modify it; otherwise I'll have to look at the Hogwild! paper.
< rcurtin> what is a drawback? I don't understand what you mean
< rcurtin> or, I guess, what I mean to say is, what is the drawback in your algorithm?
< ranjan123_> I have simply implemented the 2nd algo
< rcurtin> yeah, but what is the drawback? you can still give each individual SGD implementation a tolerance parameter
< ranjan123_> so it will always iterate T times
< ranjan123_> yes. That is what I was thinking
< ranjan123_> I will divide the work of each thread into a number of segments
Nilabhra has quit [Read error: Connection reset by peer]
< ranjan123_> and put a check on the tolerance parameter
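As a rough illustration of the idea ranjan123_ sketches above (each thread working through its own segment of the data, with a tolerance check added so the optimizer does not always run a fixed T iterations), here is a minimal OpenMP-style sketch. The function API mirrors mlpack's decomposable-function convention (NumFunctions()/Evaluate()/Gradient()), but the function itself and all names are hypothetical, not the actual parallel SGD implementation.

    // Hypothetical sketch only: Hogwild!-style lock-free updates inside an
    // epoch, with a sequential tolerance check between epochs so the
    // optimizer can stop early instead of always iterating T times.
    #include <armadillo>
    #include <cmath>
    #include <limits>

    template<typename FunctionType>
    double SegmentedParallelSGD(FunctionType& f,
                                arma::mat& iterate,
                                const double stepSize = 0.001,
                                const size_t maxEpochs = 1000,
                                const double tolerance = 1e-5)
    {
      double lastObjective = std::numeric_limits<double>::infinity();
      const size_t numFunctions = f.NumFunctions();

      for (size_t epoch = 0; epoch < maxEpochs; ++epoch)
      {
        // Threads update the shared iterate on disjoint segments of the
        // individual functions; races on 'iterate' are tolerated, which is
        // the Hogwild! assumption.
        #pragma omp parallel for
        for (size_t i = 0; i < numFunctions; ++i)
        {
          arma::mat gradient;
          f.Gradient(iterate, i, gradient);
          iterate -= stepSize * gradient;
        }

        // Tolerance check once per epoch, on a single thread.
        double objective = 0.0;
        for (size_t i = 0; i < numFunctions; ++i)
          objective += f.Evaluate(iterate, i);

        if (std::abs(lastObjective - objective) < tolerance)
          break;
        lastObjective = objective;
      }

      return lastObjective;
    }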
ankur has joined #mlpack
rohitpatwa has quit [Ping timeout: 260 seconds]
chris___ has quit [Ping timeout: 252 seconds]
cache-nez has joined #mlpack
ankur has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
cache-nez has quit [Ping timeout: 252 seconds]
ankur has joined #mlpack
< kirizaki> um guyz :P
< kirizaki> about this: http://pastebin.com/jMkY1q2p
travis-ci has joined #mlpack
< travis-ci> mlpack/mlpack#611 (master - f0e3318 : marcus): The build is still failing.
travis-ci has left #mlpack []
< kirizaki> it's a snippet from the Hoeffding tree
< kirizaki> could someone point me to a good source to get a little bit closer to policy-based template programming or metaprogramming?
< kirizaki> I already started some tutorials from the web, but maybe you have a good source ;)
< kirizaki> thanks in advance
< rcurtin> so a great book is "Modern C++ Design" by Alexandrescu but I don't know if you can easily find a PDF of that
< rcurtin> let me see if I can find a useful link or two
< rcurtin> this is a really long article but it might be useful: http://www.drdobbs.com/policy-based-design-in-the-real-world/184401861
< rcurtin> ah, I think this one would be better: https://www.intopalo.com/policy-based-design
< rcurtin> with the Hoeffding tree code, the key is that the user can use any FitnessFunction they want as long as the FitnessFunction class implements the Evaluate() method
< rcurtin> this could be done through inheritance, but if you do it by having the FitnessFunction as a template parameter, then there is no need for virtual methods, which can be slower than non-virtual methods
< kirizaki> ok, thanks
< rcurtin> if I can clarify anything let me know... you could always design your code to use inheritance and I can help you change it later, if the policy-based design idea doesn't make sense
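To make the policy-based design described above concrete, here is a minimal, self-contained sketch; the class names are made up for illustration and are not the actual Hoeffding tree code. The fitness function is supplied as a template parameter, so the call to Evaluate() is resolved at compile time and no virtual methods are needed.

    #include <armadillo>
    #include <cmath>

    // A policy class: any type with a matching static Evaluate() works.
    class GiniImpurityPolicy
    {
     public:
      // Gini impurity of a vector of class counts: 1 - sum_i p_i^2.
      static double Evaluate(const arma::Col<size_t>& counts)
      {
        const double total = (double) arma::accu(counts);
        if (total == 0.0)
          return 0.0;

        double sumSquares = 0.0;
        for (size_t i = 0; i < counts.n_elem; ++i)
          sumSquares += std::pow(counts[i] / total, 2.0);

        return 1.0 - sumSquares;
      }
    };

    // The split takes the fitness function as a template parameter; the call
    // to FitnessFunction::Evaluate() is bound at compile time, so there is no
    // virtual dispatch.
    template<typename FitnessFunction = GiniImpurityPolicy>
    class HypotheticalSplit
    {
     public:
      double SplitQuality(const arma::Col<size_t>& counts) const
      {
        return FitnessFunction::Evaluate(counts);
      }
    };

    // Usage: HypotheticalSplit<> a;  or  HypotheticalSplit<MyFitnessFunction> b;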
Peng_Xu has joined #mlpack
ranjan123_ has quit [Ping timeout: 252 seconds]
< kirizaki> nah, I think a better way will be if I try something myself for a start; later I'll ask you for help for sure ;)
< rcurtin> okay, sounds good :)
vasanth has joined #mlpack
< kirizaki> but I realize that without this knowledge and experience it's gonna be harder than I expected :P
< kirizaki> but I'm happy that this problem is gonna give me a lot of new skills xD
< kirizaki> thanks again, going into deep
< rcurtin> yeah, once you figure out templates they are nice to work with
< rcurtin> the syntax can be a little unwieldy
< rcurtin> and then there are weird things like "template template parameters" like in the Hoeffding tree code
< kirizaki> yes, but I can slightly see what is behind this
< rcurtin> for instance, here is an example of a template template parameter template parameter template parameter:
< kirizaki> it's a huge tool
< rcurtin> template<template<template<typename> class A> class B>
< rcurtin> ridiculous looking
< kirizaki> yup
< rcurtin> and absurd names
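For reference, here is a small, self-contained example of a (single-level) template template parameter, with made-up names; the nested form pasted above just applies the same idea one level deeper.

    #include <deque>
    #include <string>
    #include <vector>

    // 'ContainerType' is a template template parameter: the caller passes a
    // class template (such as std::vector), not a concrete type, and Stack
    // instantiates it with its own element type.
    template<typename T, template<typename...> class ContainerType = std::vector>
    class Stack
    {
     public:
      void Push(const T& value) { data.push_back(value); }
      T Pop() { T v = data.back(); data.pop_back(); return v; }
      bool Empty() const { return data.empty(); }

     private:
      ContainerType<T> data;
    };

    int main()
    {
      Stack<int> a;                      // std::vector<int> under the hood
      Stack<std::string, std::deque> b;  // swap in a different container template
      a.Push(3);
      b.Push("hi");
      return a.Empty() ? 1 : 0;
    }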
kev has joined #mlpack
< kev> Hey! So during the proposal phase, can I only apply for one idea on the ideas page? So that I know if I should focus on a single idea out of the two I'm interested in.
< rcurtin> you can apply for I think up to five ideas, but it's often better to focus on one idea only, instead of trying to split your time between multiple proposals
< rcurtin> there are definitely cases where students have applied for multiple GSoC projects in the past and gotten accepted to both
< rcurtin> so it's possible to write two good proposals, definitely, it's just time-consuming
< rcurtin> actually one of our students from 2014 had that happen... fortunately for us, he picked mlpack instead of the other project :)
< kev> No, I mean I'm interested in two of the mlpack ideas! :P
anveshi has joined #mlpack
< rcurtin> yeah, I know, you can submit two proposals to mlpack too
< rcurtin> it's the same type of situation
< rcurtin> all I was saying by bringing up that situation was pointing out that it's possible to submit two good proposals, it's just time-consuming, that's all
< kev> Ah. Understood. Thanks!
Peng_Xu has quit [Quit: This computer has gone to sleep]
anveshi has quit [Quit: Page closed]
Nilabhra has joined #mlpack
< kev> rcurtin, where would you say the dataset and experimentation tools project stands, as far as priorities for mlpack go?
< Nilabhra> rcurtin: Hi, I did some experiments on my own and it seems that the neighborhood-based implementation of CF in mlpack is already way better than weighted NN-based CF. I guess there is no point in implementing an inferior method. Instead I think I should work on implementing FM, FMM and SVD++, all of which are state of the art for rec-sys. Let me know what you think.
< Nilabhra> rcurtin: one more thing: since FMs can be used for purposes other than CF, I think they should be implemented as a separate method and CF can use it in the rec-sys setting. Is this approach okay?
anveshi has joined #mlpack
Divyam has joined #mlpack
ankur has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
ana__ has joined #mlpack
anveshi has quit [Ping timeout: 252 seconds]
wasiq has quit [Remote host closed the connection]
cache-nez has joined #mlpack
ana__ has quit [Ping timeout: 244 seconds]
ana__ has joined #mlpack
< rcurtin> kev: there is no priority for projects, we're looking for the best proposals overall
< rcurtin> Nilabhra: I will get back to you after lunch (1 hour or a bit more)
< Nilabhra> rcurtin: sure thing! enjoy your lunch :)
ankur has joined #mlpack
ana__ has quit [Ping timeout: 260 seconds]
ana__ has joined #mlpack
ana__ has quit [Ping timeout: 248 seconds]
kev has quit [Read error: Connection reset by peer]
Divyam has quit [Quit: Page closed]
kirizaki has quit [Quit: Konversation terminated!]
tsathoggua has joined #mlpack
archange_ has joined #mlpack
tsathoggua has quit [Quit: Konversation terminated!]
anveshi has joined #mlpack
bhargav has joined #mlpack
bhargav has quit [Ping timeout: 250 seconds]
vasanth has quit [Quit: Bye]
bhargav has joined #mlpack
bhargav is now known as Guest55475
random has joined #mlpack
random has quit [Client Quit]
Guest55475 has quit [Quit: leaving]
< Nilabhra> rcurtin: hey are you back?
< rcurtin> in a meeting, sorry
< rcurtin> my quick response is that I agree with having FM code separate from the CF class
< rcurtin> but it should still act like a FactorizerType class
< rcurtin> like AMF or RegularizedSVD
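A hypothetical skeleton of what "acting like a FactorizerType" could look like, just to show the policy shape (a plain class exposing an Apply() that produces the low-rank factors, no inheritance); the exact signature CF expects should be checked against the existing factorizers such as AMF and RegularizedSVD.

    #include <mlpack/core.hpp>

    // Hypothetical FM-style factorizer skeleton; not a real implementation.
    class FMFactorizerSketch
    {
     public:
      FMFactorizerSketch(const size_t maxIterations = 100) :
          maxIterations(maxIterations) { }

      // Decompose the rating data into low-rank factors W and H.
      void Apply(const arma::mat& data,
                 const size_t rank,
                 arma::mat& W,
                 arma::mat& H)
      {
        // Real factorization machine training would go here.  As a
        // placeholder, fill the factors randomly at the requested rank so the
        // sketch compiles on its own; a real factorizer would size W and H
        // from the numbers of users and items in the data.
        W.randu(data.n_rows, rank);
        H.randu(rank, data.n_cols);
      }

     private:
      size_t maxIterations;
    };

    // Hypothetical usage, assuming CF takes the factorizer as a template
    // parameter:  CF<FMFactorizerSketch> cf(ratingData);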
palashahuja has joined #mlpack
< palashahuja> zoq, hi
< palashahuja> any new bugs to solve ?
< palashahuja> (I meant implementation wise .. :))
< zoq> palashahuja: Hello, I fixed the problem in #f0e331889a8. So you could use the code you sent me to open a pull request.
< palashahuja> and just to be clear .. the new code for dropconnect that has the denoise implementation ..
< palashahuja> I will send the PR ..
palashahuja has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
archange_ has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
< zoq> palashahuja: yes, actually I thought about your idea to integrate the linear layer. Maybe we should do that, but we should also give the user the option to use another layer for dropconnect. So we'd merge both ideas into a single layer. We can save one copy operation if we integrate the linear layer.
< Nilabhra> rcurtin: Yes I agree. More on this tomorrow? cya for today
palashahuja has joined #mlpack
Nilabhra has quit [Read error: Connection reset by peer]
< palashahuja> zoq, So, should I send the PR ?
< zoq> palashahuja: yes, actually I thought about your idea to integrate the linear layer. Maybe we should do that, but we should also give the user the option to use another layer for dropconnect. So we'd merge both ideas into a single layer. We can save one copy operation if we integrate the linear layer.
< palashahuja> So there should be another flag variable that decides whether to copy the layer or not
< zoq> palashahuja: I guess we provide two constructors: DropConnectLayer() and DropConnectLayer(baseLayer). And then use a boolean 'ownsLayer' or something like that.
< palashahuja> got it ..
< palashahuja> but we could have any behavior for the layer when it's integrated ..
< palashahuja> Not necessarily a linear layer ..
< zoq> palashahuja: I'm not sure what you mean. Maybe I should draw the interface that's in my mind?
< palashahuja> No I do get what you are trying to say
< palashahuja> All I am saying is that the layer you are integrating with dropconnect need not necessarily be linear
< rcurtin> q!
< rcurtin> hello, this was the wrong window...
< zoq> I guess most users would use the dropconnect layer in combination with the linear layer. So I think it's probably the best choice.
< palashahuja> ok .. thank you ..
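A minimal sketch of the two-constructor / "ownsLayer" pattern discussed above, with hypothetical names and signatures rather than the actual mlpack DropConnectLayer interface: one constructor builds and owns a default base layer, the other wraps a user-supplied layer without taking ownership.

    #include <cstddef>

    template<typename BaseLayerType>
    class DropConnectSketch
    {
     public:
      // Default: create our own base layer (e.g. a linear layer) and delete
      // it when this layer is destroyed.
      DropConnectSketch(const size_t inSize, const size_t outSize) :
          baseLayer(new BaseLayerType(inSize, outSize)),
          ownsLayer(true)
      { }

      // Wrap an existing layer; the caller keeps ownership.
      explicit DropConnectSketch(BaseLayerType& layer) :
          baseLayer(&layer),
          ownsLayer(false)
      { }

      ~DropConnectSketch()
      {
        if (ownsLayer)
          delete baseLayer;
      }

     private:
      // The wrapped layer and whether we are responsible for freeing it.
      BaseLayerType* baseLayer;
      bool ownsLayer;
    };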
< zoq> wq!
agobin has quit [Quit: Connection closed for inactivity]
< rcurtin> haha
palashahuja has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
palashahuja has joined #mlpack
Peng_Xu has joined #mlpack
Peng_Xu has quit [Quit: 离开]
Peng_Xu has joined #mlpack
< Peng_Xu> When installing, CMake cannot find the Boost package. Can anyone help?
< palashahuja> same problem here .. :(
< Peng_Xu> I installed boost_1_55_0 in /usr/local/
< palashahuja> on windows ..
< palashahuja> It runs perfectly fine on Linux
< zoq> Peng_Xu: Maybe this helps: https://github.com/mlpack/mlpack/issues/563
< Peng_Xu> I also tried setting BOOST_ROOT, both in CMakeLists.txt and in FindBoost.cmake
< Peng_Xu> Thank you, I will check it
< zoq> Peng_Xu: Let me know if this works for you.
< Peng_Xu> My system is Ubuntu. The version of Boost installed by apt-get is too old, so I installed the new version and set the path, but it still says that the Boost version is 1.46.1
< rcurtin> what did you set BOOST_ROOT to?
< Peng_Xu> I tried both CMakeLists.txt and FindBoost.cmake
< Peng_Xu> set(BOOST_ROOT "/usr/local")
< Peng_Xu> set(BOOST_ROOT "/usr/local/")
< rcurtin> the suggestions there might be helpful
< rcurtin> you could try setting BOOST_INCLUDEDIR and BOOST_LIBRARYDIR also
palashahuja has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
< Peng_Xu> it's ok now!
< Peng_Xu> I just needed to delete the CMakeCache.txt before running cmake again
< rcurtin> yeah
< rcurtin> I thought maybe that might be the issue
< rcurtin> glad you got it fixed :)
abc has joined #mlpack
abc is now known as Guest81359
Guest81359 has quit [Client Quit]
chick_ has quit [Quit: Connection closed for inactivity]
ankur has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
awhitesong has joined #mlpack
awhitesong has left #mlpack []
awhitesong has joined #mlpack
awhitesong has left #mlpack []
Rishabh has quit [Ping timeout: 240 seconds]
awhitesong has joined #mlpack
awhitesong has left #mlpack []
< anveshi> rcurtin: hi, Ryan
< anveshi> Unlike the other tree implementations, the cosine_tree code doesn't follow the template metaprogramming approach
< anveshi> Is there any possibility of refactoring the code?
awhitesong has joined #mlpack
awhitesong has left #mlpack []
< Peng_Xu> I encountered the same error as in: http://www.mlpack.org/trac/ticket/287
< Peng_Xu> But setting BOOST_ROOT doesn't work for me
awhitesong has joined #mlpack
awhitesong has left #mlpack []
awhitesong has joined #mlpack
cache-nez has quit [Ping timeout: 244 seconds]
< rcurtin> Peng_Xu: can you run 'VERBOSE=1 make' and take a look at the linker command to make sure it is linking against the right version of Boost?
< Peng_Xu> Boost_LIBRARY_DIRS:FILEPATH=/usr/lib
< Peng_Xu> Boost_LIBRARY_DIRS in CMakeCache.txt is /usr/lib. I think it should be /usr/local/lib
< Peng_Xu> I tried to set BOOST_ROOT and BOOST_LIBRARYDIR as cmake options, but when I check the CMakeCache.txt, they don't change
anveshi has quit [Quit: Page closed]
anveshi has joined #mlpack
< rcurtin> Peng_Xu: you probably need to remove the CMakeCache.txt every time you reconfigure
< rcurtin> anveshi: the cosine tree code is not used for any of the dual-tree algorithms
< rcurtin> it might be interesting to implement cosine trees for dual-tree algorithms, but the implementation we have now is specifically for QUIC-SVD (src/mlpack/methods/quic_svd)
< rcurtin> it would not be easy to refactor it in a way that works for both the TreeType API and the QUIC-SVD class's needs
< rcurtin> if you have any ideas, I'm happy to help out, but this isn't a task that I have had any time to look into
< Peng_Xu> I removed the CMakeCache.txt every time, but it doesn't help
< rcurtin> hmm
< rcurtin> what is the CMake output? does it say that it is finding the correct version of Boost?
Stellar_Mind has joined #mlpack
< Peng_Xu> Yes, CMake finds the correct version, 1.55.0