verne.freenode.net changed the topic of #mlpack to: http://www.mlpack.org/ -- We don't respond instantly... but we will respond. Give it a few minutes. Or hours. -- Channel logs: http://www.mlpack.org/irc/
Stellar_Mind2 has quit [Ping timeout: 276 seconds]
Stellar_Mind has joined #mlpack
kirizaki has quit [Quit: Konversation terminated!]
agobin has quit [Quit: Connection closed for inactivity]
wasiq has quit [Ping timeout: 276 seconds]
Stellar_Mind has quit [Ping timeout: 246 seconds]
cache-nez has quit [Ping timeout: 248 seconds]
wasiq has joined #mlpack
dhna has joined #mlpack
Stellar_Mind has joined #mlpack
Nilabhra has joined #mlpack
rohitpatwa has joined #mlpack
Peng_Xu has joined #mlpack
Peng_Xu has quit [Client Quit]
Peng_Xu has joined #mlpack
Stellar_Mind has quit [Ping timeout: 240 seconds]
Nilabhra has quit [Remote host closed the connection]
Stellar_Mind has joined #mlpack
Stellar_Mind has quit [Ping timeout: 244 seconds]
tsathoggua has joined #mlpack
tsathoggua has quit [Quit: Konversation terminated!]
rohitpatwa has quit [Ping timeout: 252 seconds]
manumeral has joined #mlpack
manumeral has quit [Client Quit]
anveshi has joined #mlpack
govg has quit [Ping timeout: 244 seconds]
Rishabh has quit [Ping timeout: 250 seconds]
Peng_Xu has quit [Quit: This computer has gone to sleep]
Awcrr has joined #mlpack
Peng_Xu has joined #mlpack
cache-nez has joined #mlpack
rcurtin has quit [Ping timeout: 268 seconds]
rcurtin has joined #mlpack
wasiq has quit [Ping timeout: 244 seconds]
Stellar_Mind has joined #mlpack
chiragbhatia72 has joined #mlpack
Stellar_Mind has quit [Ping timeout: 244 seconds]
Awcrr has quit [Quit: (see you)]
anveshi has quit [Quit: Page closed]
Nilabhra has joined #mlpack
Peng_Xu has quit [Quit: This computer has gone to sleep]
cache-nez has quit [Ping timeout: 246 seconds]
vasanth has joined #mlpack
Awcrr has joined #mlpack
Awcrr has quit [Remote host closed the connection]
Awcrr has joined #mlpack
Awcrr has quit [Client Quit]
vasanth has quit [Quit: Bye]
ranjan123 has joined #mlpack
Rishabh has joined #mlpack
Rishabh has quit [Remote host closed the connection]
Rishabh has joined #mlpack
Rishabh has quit [Ping timeout: 260 seconds]
Rishabh has joined #mlpack
dnisarg13 has joined #mlpack
< dnisarg13> Hello, how do I get started developing for mlpack?
Stellar_Mind has joined #mlpack
cache-nez has joined #mlpack
Stellar_Mind has quit [Ping timeout: 264 seconds]
dnisarg13 has quit [Ping timeout: 252 seconds]
dhna has quit [Quit: Page closed]
ranjan123 has quit [Ping timeout: 252 seconds]
cache-nez has quit [Ping timeout: 240 seconds]
agobin has joined #mlpack
kirizaki has joined #mlpack
dnisarg13 has joined #mlpack
dnisarg13 has quit [Ping timeout: 252 seconds]
cache-nez has joined #mlpack
cache-nez has quit [Ping timeout: 276 seconds]
chris___ has joined #mlpack
< chris___> hi, I built the NES today and installed fceux and all the other dependencies that were specified in the GitHub link (for the NES system)
< chris___> when I started fceux and loaded the super_mario_bros.lua program, I got the following error
< chris___> Lua thread bombed out: ./server.lua:8: module 'socket' not found: no field package.preload['socket'] no file './socket.lua' no file '/usr/share/lua/5.1/socket.lua' no file '/usr/share/lua/5.1/socket/init.lua' no file '/usr/lib/lua/5.1/socket.lua' no file '/usr/lib/lua/5.1/socket/init.lua' no file './socket.so' no file '/usr/lib/lua/5.1/socket.so' no file '/usr/lib/lua/5.1/loadall.so'
< chris___> I have both Lua 5.1 and Lua 5.3 installed; there seem to be no files in the /usr/lib/lua/5.1 folder
< chris___> so I copied the files from the 5.3 folder, but that doesn't seem to work
< chris___> anybody got any ideas?
< chris___> *copied files from /usr/lib/lua/5.3 to /usr/lib/lua/5.1
kirizaki has quit [Ping timeout: 260 seconds]
< zoq> chris___: I guess you used LuaRocks to install 'luasocket'? You have to build LuaRocks with Lua 5.1.
< zoq> chris___: You can also build fceux against Lua 5.1.
< chris___> I used yaourt to install luasocket, as I couldn't get it via pacman
< zoq> chris___: hm, so I guess the easiest solution is to remove Lua 5.3 and reinstall luasocket etc. Is that an option for you?
na1taneja2821 has joined #mlpack
< na1taneja2821> @rcurtin Hi, I have commented on issue #553 on GitHub. Please take a look and confirm, so that I can go ahead with the implementation.
lokesh has joined #mlpack
ana__ has joined #mlpack
agobin has quit [Quit: Connection closed for inactivity]
lokesh has quit []
kirizaki has joined #mlpack
na1taneja2821 has quit [Quit: Page closed]
ana__ has quit [Ping timeout: 244 seconds]
rohitpatwa has joined #mlpack
mtr_ has joined #mlpack
mtr_ has quit [Ping timeout: 252 seconds]
Awcrr has joined #mlpack
agobin has joined #mlpack
ranjan123 has joined #mlpack
< ranjan123> hello rcurtin! how do I validate the performance of parallel SGD?
chick_ has joined #mlpack
wasiq has joined #mlpack
rohitpatwa has quit [Ping timeout: 240 seconds]
rohitpatwa has joined #mlpack
Awcrr has quit [Ping timeout: 248 seconds]
Awcrr has joined #mlpack
Awcrr has quit []
< rcurtin> ranjan123: in what sense do you want to validate the performance?
< rcurtin> are you trying to test that it's correct?
< ranjan123_> yes
< rcurtin> or are you trying to compare the speed of the optimizer with other optimizers?
< rcurtin> okay
< ranjan123_> no.
< rcurtin> then it would probably be a good idea to take a look at the other optimizer tests (like sgd_test.cpp, lbfgs_test.cpp) and write a similar test
< rcurtin> you could also look at, e.g., logistic_regression_test.cpp, and use your parallel SGD implementation to train a logistic regression model
< rcurtin> and then check that the trained model is similar to a model trained with another optimizer
< ranjan123_> I want to compare the speed of the optimizer
< ranjan123_> ok
< ranjan123_> I think I have to check it for logistic regression.
< rcurtin> if you want to compare the speed of the optimizer, your best bet is probably to write a standalone program that trains a model on a large dataset using your optimizer
< rcurtin> and then trains a model on the same dataset with another optimizer
< rcurtin> you should run this many times with different random seeds, since SGD has a randomized component and may converge differently with each seed
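A standalone benchmark along those lines might look like the following minimal sketch. It assumes the mlpack 2.x optimizer interface (optimizers templated on a FunctionType, with an Optimize() method on an iterate matrix); the data is synthetic and every parameter value is a placeholder, so headers and signatures may need adjusting for other versions:

    // A minimal sketch only: assumes the mlpack 2.x optimizer API;
    // headers and signatures may differ in other versions.
    #include <mlpack/core.hpp>
    #include <mlpack/methods/logistic_regression/logistic_regression_function.hpp>
    #include <mlpack/core/optimizers/sgd/sgd.hpp>
    #include <iostream>

    using namespace mlpack;

    int main()
    {
      // Synthetic stand-in for a large dataset: 10 dims, 50k points,
      // with labels that are learnable from the first dimension.
      arma::mat data = arma::randu<arma::mat>(10, 50000);
      arma::Row<size_t> responses(data.n_cols);
      for (size_t i = 0; i < responses.n_elem; ++i)
        responses[i] = (data(0, i) > 0.5) ? 1 : 0;

      regression::LogisticRegressionFunction<> f(data, responses, 0.0005);

      arma::wall_clock timer;
      for (size_t trial = 0; trial < 10; ++trial)
      {
        math::RandomSeed(trial); // a different random seed for every run
        arma::mat coordinates = f.GetInitialPoint();
        optimization::SGD<regression::LogisticRegressionFunction<>> sgd(f);

        timer.tic();
        const double objective = sgd.Optimize(coordinates);
        std::cout << "SGD trial " << trial << ": " << timer.toc()
                  << "s, objective " << objective << std::endl;
      }

      // Then repeat the same loop with the optimizer under test (e.g. the
      // parallel SGD implementation) and compare timings and objectives.
      return 0;
    }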
< ranjan123_> for testing with the generalized Rosenbrock function, the number of iterations is given as zero
< rcurtin> with the Hoeffding tree code, the key is that the user can use any FitnessFunction they want, as long as the FitnessFunction class implements the Evaluate() method
< rcurtin> this could be done through inheritance, but if you do it by having the FitnessFunction as a template parameter, then there is no need for virtual methods, which can be slower than non-virtual methods
< kirizaki> ok, thanks
< rcurtin> if I can clarify anything let me know... you could always design your code to use inheritance, and I can help you change it later if the policy-based design idea doesn't make sense
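A minimal sketch of the policy-based design rcurtin describes is below; the names are invented for the example and much simpler than the actual Hoeffding tree code:

    #include <armadillo>

    // One possible fitness policy: any class with a static Evaluate()
    // taking the class count matrix can be plugged in.
    class GiniImpurity
    {
     public:
      static double Evaluate(const arma::Mat<size_t>& counts)
      {
        // ... compute the split gain from the class counts ...
        return 0.0; // placeholder
      }
    };

    // FitnessFunction is a template parameter, so the call below is
    // resolved at compile time: no virtual dispatch, no inheritance.
    template<typename FitnessFunction = GiniImpurity>
    class HypotheticalSplit
    {
     public:
      double SplitGain(const arma::Mat<size_t>& counts) const
      {
        return FitnessFunction::Evaluate(counts);
      }
    };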
Peng_Xu has joined #mlpack
ranjan123_ has quit [Ping timeout: 252 seconds]
< kirizaki> nah, I think the better way will be if I try something myself first; later for sure I'll ask you for help ;)
< rcurtin> okay, sounds good :)
vasanth has joined #mlpack
< kirizaki> but I realize that without this knowledge and experience it's gonna be harder than I expected :P
< kirizaki> but I'm happy that this problem is gonna give me a lot of new skills xD
< kirizaki> thanks again, diving in deep
< rcurtin> yeah, once you figure out templates they are nice to work with
< rcurtin> the syntax can be a little unwieldy
< rcurtin> and then there are weird things like "template template parameters", like in the Hoeffding tree code
< kirizaki> yes, but I can slightly see what is behind this
< rcurtin> for instance, here is an example of a template template parameter template parameter template parameter:
< kirizaki> it's a huge tool
< rcurtin> template<template<template<typename> class A> class B>
< rcurtin> ridiculous looking
< kirizaki> yup
< rcurtin> and absurd names
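To make the idea concrete, a single level of template template parameter, in the same spirit as the Hoeffding tree code, looks like this (the names are invented for the example):

    template<typename T>
    class ExampleSplit { /* ... */ };

    // SplitType is a template template parameter: the class template
    // itself is passed in, and Tree instantiates it with whatever element
    // type it needs internally.
    template<template<typename> class SplitType>
    class Tree
    {
     public:
      SplitType<double> split;
    };

    Tree<ExampleSplit> t; // pass the template itself, not an instantiation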
kev has joined #mlpack
< kev> Hey! So during the proposal phase, can I apply for only one idea on the ideas page? I'm asking so that I know whether I should focus on a single idea out of the two I'm interested in.
< rcurtin> you can apply for, I think, up to five ideas, but it's often better to focus on one idea only, instead of trying to split your time between multiple proposals
< rcurtin> there are definitely cases where students have applied for multiple GSoC projects in the past and gotten accepted to both
< rcurtin> so it's possible to write two good proposals, definitely, it's just time-consuming
< rcurtin> actually one of our students from 2014 had that happen... fortunately for us, he picked mlpack instead of the other project :)
< kev> No, I mean I'm interested in two of the mlpack ideas! :P
anveshi has joined #mlpack
< rcurtin> yeah, I know; you can submit two proposals to mlpack too
< rcurtin> it's the same type of situation
< rcurtin> all I was saying by bringing up that situation is that it's possible to submit two good proposals; it's just time-consuming, that's all
< kev> Ah. Understood. Thanks!
Peng_Xu has quit [Quit: This computer has gone to sleep]
anveshi has quit [Quit: Page closed]
Nilabhra has joined #mlpack
< kev> rcurtin, where would you say the dataset and experimentation tools project sits, as far as priorities for mlpack go?
< Nilabhra> rcurtin: Hi, I did some experiments on my own, and it seems that the neighborhood-based implementation of CF in mlpack is already way better than weighted-NN-based CF. I guess there is no point in implementing an inferior method. Instead, I think I should work on implementing FM, FMM and SVD++, all of which are state of the art for rec-sys. Let me know what you think.
< Nilabhra> rcurtin: one more thing: since FMs can be used for purposes other than CF, I think they should be implemented as a separate method, and CF can use that in the rec-sys setting. Is this approach okay?
archange_ has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
< Nilabhra> rcurtin: Yes, I agree. More on this tomorrow? cya for today
palashahuja has joined #mlpack
Nilabhra has quit [Read error: Connection reset by peer]
< palashahuja> zoq, so should I send the PR?
< zoq> palashahuja: yes, actually I thought about your idea to integrate the linear layer. Maybe we should do that, but we should also give the user the option to use another layer for DropConnect; so, merge both ideas in a single layer. We can save one copy operation if we integrate the linear layer.
< palashahuja> So there should be another flag variable that decides whether to copy the layer or not
< zoq> palashahuja: I guess we provide two constructors: DropConnectLayer() and DropConnectLayer(baseLayer), and then use a boolean 'ownsLayer' or something like that.
< palashahuja> got it ..
< palashahuja> but we could have any behavior for the layer when integrated ..
< palashahuja> not necessarily a linear layer ..
< zoq> palashahuja: I'm not sure what you mean. Maybe I should draw the interface that's in my mind?
< palashahuja> No, I do get what you are trying to say
< palashahuja> all I am saying is that the layer you integrate with DropConnect need not necessarily be linear
< rcurtin> q!
< rcurtin> hello, this was the wrong window...
< zoq> I guess most users would use the DropConnect layer in combination with the linear layer, so I think it's probably the best choice.
< palashahuja> ok .. thank you ..
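A rough sketch of the interface zoq describes might look like the following; the names and signatures are hypothetical and heavily simplified, not the final mlpack implementation:

    // Hypothetical stand-in for the real linear layer.
    class LinearLayer { /* ... */ };

    template<typename BaseLayer = LinearLayer>
    class DropConnectLayer
    {
     public:
      // Default constructor: create and own a base layer (the common case
      // of DropConnect around a linear layer).
      DropConnectLayer() : baseLayer(new BaseLayer()), ownsLayer(true) { }

      // Wrap a user-supplied layer of any type, without taking ownership.
      DropConnectLayer(BaseLayer& layer) :
          baseLayer(&layer), ownsLayer(false) { }

      ~DropConnectLayer()
      {
        if (ownsLayer)
          delete baseLayer;
      }

     private:
      BaseLayer* baseLayer; // the wrapped layer
      bool ownsLayer;       // whether we must delete baseLayer ourselves
    };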
< zoq> wq!
agobin has quit [Quit: Connection closed for inactivity]
< Peng_Xu> I also tried to set BOOST_ROOT, both in CMakeLists.txt and FindBoost.cmake
< Peng_Xu> Thank you, I will check it
< zoq> Peng_Xu: Let me know if this works for you.
< Peng_Xu> My system is Ubuntu. The version of Boost installed by apt-get is too old, so I installed a newer version and set the path, but it still says that the Boost version is 1.46.1
< rcurtin> what did you set BOOST_ROOT to?
< Peng_Xu> I tried both CMakeLists.txt and FindBoost.cmake
< rcurtin> Peng_Xu: can you run 'VERBOSE=1 make' and take a look at the linker command, to make sure it is linking against the right version of Boost?
< Peng_Xu> Boost_LIBRARY_DIRS:FILEPATH=/usr/lib
< Peng_Xu> Boost_LIBRARY_DIRS in CMakeCache.txt is /usr/lib; I think it should be /usr/local/lib
< Peng_Xu> I tried to set BOOST_ROOT and BOOST_LIBRARYDIR via cmake options, but when I check CMakeCache.txt, they don't change
anveshi has quit [Quit: Page closed]
anveshi has joined #mlpack
< rcurtin> Peng_Xu: you probably need to remove the CMakeCache.txt every time you reconfigure
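For example, a clean reconfigure pointing CMake at a Boost installed from source under /usr/local might look like this (the install prefix is an assumption; BOOST_ROOT is a standard FindBoost hint variable):

    # run from the build directory; remove the stale cache first
    rm -f CMakeCache.txt
    cmake -D BOOST_ROOT=/usr/local ..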
< rcurtin> anveshi: the cosine tree code is not used for any of the dual-tree algorithms
< rcurtin> it might be interesting to implement cosine trees for dual-tree algorithms, but the implementation we have now is specifically for QUIC-SVD (src/mlpack/methods/quic_svd)
< rcurtin> it would not be easy to refactor it in a way that would work for both the TreeType API and the QUIC-SVD class's needs
< rcurtin> if you have any ideas, I'm happy to help out, but this isn't a task that I have had any time to look into
< Peng_Xu> I remove the CMakeCache.txt every time, but it doesn't help
< rcurtin> hmm
< rcurtin> what is the CMake output? does it say that it is finding the correct version of Boost?
Stellar_Mind has joined #mlpack
< Peng_Xu> Yes, CMake finds the correct version, 1.55.0