ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
pruvi007 has joined #mlpack
< pruvi007>
Hi Sir/Ma'am
< pruvi007>
I have implemented many classical ML algorithms in Python
< pruvi007>
and I am willing to implement them in C++
< pruvi007>
Which algorithms need to be implemented?
pruvi007 has quit [Quit: Page closed]
divyansh997 has joined #mlpack
huanyz0918 has joined #mlpack
< huanyz0918>
hi, can anyone tell me how to get started with this project? I'm very interested in GSoC this year, thanks!
huanyz0918 has quit [Quit: Page closed]
divyansh997 has quit [Ping timeout: 252 seconds]
divi has joined #mlpack
divi is now known as divyansh997
divyansh997 has quit [Quit: Going offline, see ya! (www.adiirc.com)]
i8hantanu has joined #mlpack
shubhangi has joined #mlpack
< shubhangi>
Hello
< shubhangi>
Everyone
< shubhangi>
I got stuck building mlpack at the 'make' step: it built to 77%, where it is compiling the neural network code. My laptop has 4 GB RAM and a 64-bit processor. What should I do now? Should I shift to a GPU, or is my RAM not enough for building?
shubhangi has quit [Ping timeout: 256 seconds]
Suryo has joined #mlpack
< Suryo>
zoq, rcurtin: I would like to discuss the API for constrained optimization problems in PSO with you
Suryo has quit [Client Quit]
roy_ has joined #mlpack
< roy_>
hello
roy_ has quit [Client Quit]
Viserion has joined #mlpack
< Viserion>
I am using python3.5 and I am able to import mlpack but when I run :output = mlpack.preprocess_split(input=dataset, input_labels=labels,test_ratio=0.3) it gives error: Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'mlpack' has no attribute 'preprocess_split'
i8hantanu has quit [Quit: Connection closed for inactivity]
Viserion has quit [Ping timeout: 256 seconds]
favre49 has joined #mlpack
favre49 has quit [Client Quit]
pd09041999 has joined #mlpack
Viserion has joined #mlpack
< Viserion>
I am using python3.5 and I am able to import mlpack but when I run :output = mlpack.preprocess_split(input=dataset, input_labels=labels,test_ratio=0.3) it gives error: Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'mlpack' has no attribute 'preprocess_split'
< rcurtin>
Viserion: how did you install mlpack?
< rcurtin>
Suryo: sure, I can do my best
< Viserion>
built from source as in the documentation
< rcurtin>
shubhangi: I'm really not sure what the issue there is. There's no way to use the GPU for building, and I've never seen the build get "stuck" like you say, so unfortunately I'm not sure I can help too much here
< rcurtin>
Viserion: ok, sounds good. did you do 'sudo make install' also?
< rcurtin>
Viserion: also, in your build directory, take a look in src/mlpack/bindings/python/mlpack/; there should be lots of .so files, including 'preprocess_split.so'; are those there?
< Viserion>
I can see many .hpp, .pxd, .pyx files none .so
tushar has joined #mlpack
Soonmok has joined #mlpack
< rcurtin>
Viserion: ok, I see. what happens if you type 'make python'? it'll be a lot of output, but if you can put it on pastebin it could help
tushar has left #mlpack []
< Viserion>
Running 'make python' gives: make: Nothing to be done for 'python'.
< rcurtin>
I see... try 'make clean && make python' (that will take a while)
< rcurtin>
and if you can capture the output it could be really helpful
pd09041999 has quit [Ping timeout: 240 seconds]
ayesdie has joined #mlpack
pd09041999 has joined #mlpack
< Viserion>
It shows: make: *** No rule to make target 'clean'. Stop.
< rcurtin>
what directory are you in?
< rcurtin>
this should be done from the main build directory
< rcurtin>
(sorry if I was unclear)
< rcurtin>
if you're not in the main build directory, just try 'make python' there and see what it does
< rcurtin>
no need to try 'make clean' yet
< rcurtin>
basically, to give a few more details, each of those .so files contains one of the mlpack Python methods
< rcurtin>
so when you see "module 'mlpack' has no attribute 'preprocess_split'", the first thing I wanted to check was whether those .so files were actually there
< rcurtin>
or if there was some build issue that caused them to not be generated
< rcurtin>
and so now that we know they don't exist we can try and figure out why :)
< Viserion>
Yes, I get that. I have an mlpack-3.0.4 dir and a build dir inside it
< Viserion>
when I run 'make python' in there I get: make: *** No rule to make target 'python'. Stop.
ayesdie has quit [Read error: Connection reset by peer]
< rcurtin>
okay, so the Python bindings were never built then?
ayesdie has joined #mlpack
< rcurtin>
let's try reconfiguring with CMake and seeing what the output is
< rcurtin>
can you 'rm -f CMakeCache.txt' and then redo the same cmake configuration command you used when you originally built the library?
< rcurtin>
and paste the output onto pastebin or something so I can look at it
Viserion has quit [Ping timeout: 256 seconds]
ayesdie has quit [Ping timeout: 245 seconds]
sreenik has joined #mlpack
Puranjay has joined #mlpack
< Puranjay>
Hi,
favre49 has joined #mlpack
< Puranjay>
I want to work on the mlpack-TensorFlow translator; please help me get started with the initial steps
< favre49>
rcurtin: Assuming you guys have the time for it, where would I submit a proposal for review?
favre49 has quit [Client Quit]
Puranjay has quit [Client Quit]
Yashwants19 has joined #mlpack
< Yashwants19>
Hi rcurtin, can you please help me with some unknown style issues and a conflicting file in PR #1717
Yashwants19 has quit [Client Quit]
ayesdie has joined #mlpack
ayesdie has quit [Remote host closed the connection]
< rcurtin>
Yashwants19: yeah, there is a bit of a bug right now, the style checker only shows the most recent build... so if you push a commit then check the style checks page quickly it should show the results
Yashwants19 has joined #mlpack
< Yashwants19>
I have checked the result. There is no style issue related to my commit
pd09041999 has quit [Ping timeout: 252 seconds]
Yashwants19 has quit [Quit: Page closed]
ayesdie has joined #mlpack
pd09041999 has joined #mlpack
pd09041999 has quit [Ping timeout: 252 seconds]
pd09041999 has joined #mlpack
kanishq24 has joined #mlpack
atulim has joined #mlpack
ayesdie has quit [Ping timeout: 255 seconds]
< atulim>
Sir, I earlier proposed designing Java bindings for mlpack. I came to know that Yasmine was designing the first Java bindings, but she switched to the Go bindings because of verbosity issues. I am going to use JNA without SWIG. Can you suggest anything? @ryan @zoq
ayesdie has joined #mlpack
ayesdie has quit [Remote host closed the connection]
< rcurtin>
atulim: that sounds great, and Yasmine did indeed switch to the Go bindings, but I don't have any particular suggestions
kanishq24_ has joined #mlpack
< rcurtin>
if you're looking for somewhere to contribute, perhaps picking up the Go bindings PR and addressing some of the issues in it would be a way?
< rcurtin>
Yashwants19: ok, I'll take a look when I have a chance
< atulim>
Okay. I already fixed some whitespace bugs, but I thought I could design the Java bindings. While designing the Java bindings I would look at the Go bindings PR. It was great hearing from you again @rcurtin.
< rcurtin>
sure, sounds good. :) you could also look at the mailing list archives where I had discussions with Yasmine; that could be helpful too
< sreenik>
Hello, I am not able to transfer compute over to the GPU for some reason, although nvblas is configured properly. My compile statement is: g++ DigitRecognizerCNN.cpp -std=c++11 -lboost_serialization -lboost_program_options -usr/local/cuda/lib64/lnvblas.so -usr/local/lib/libmkl_rt.so -fopenmp -larmadillo -lmlpack. Am I doing this correctly? I have tried linking libnvblas.so instead of lnvblas.so, but to no avail
< rcurtin>
sreenik: I think you need to make sure libarmadillo.so is linked against nvblas when it's built
< rcurtin>
you can check what libarmadillo.so is linked against with `ldd /path/to/libarmadillo.so`
< sreenik>
Okay
< rcurtin>
alternately, you can comment out "ARMA_USE_WRAPPER" in armadillo_bits/config.hpp, and then in your call to compile DigitRecognizerCNN.cpp, you can link directly with nvblas
< rcurtin>
I think either of those things can work
< sreenik>
Okay I am checking. Will be a big relief, the CNN is really taking a lot of time on CPU
< sreenik>
You are right, armadillo is not linked with nvblas. Which cmake variable do I modify to rebuild?
atulim has joined #mlpack
pd09041999 has quit [Quit: Leaving]
< atulim>
@rcurtin If I may ask, where can I find the mailing list archives?
< sreenik>
Thank you @rcurtin for your time. I am sorry I am bothering you so many times but commenting out ARMA_USE_WRAPPER and linking directly didn't give a better result. I guess building armadillo with nvblas is the remaining option. Running cmake -LA gives a long variable list but I can't find anything obvious related to nvblas. Or maybe I am wrong somewhere else, I have spent a lot of time on this but I can't really understand where :(
< rcurtin>
you can check what is linked against what with `ldd`, and that should be able to tell you if you're linked against nvblas or not
< rcurtin>
but the thing is, depending on the computation (and the GPU), nvblas may choose to *not* do it on the GPU, meaning that there will be no speedup
< rcurtin>
nvblas has to estimate that the performance of the operation on the GPU would be better, *including* the time it takes to move the data to the GPU
< sreenik>
Oh I get it. Thanks :)
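
As an aside, the following is a minimal, illustrative C++ sketch (not part of the conversation above) of the kind of check being discussed: a single large matrix product that is dispatched to whatever BLAS the program is linked against, and that nvblas may or may not offload to the GPU depending on its own cost estimate. It assumes Armadillo is installed and the program is compiled and linked against nvblas as described above.

    // Illustrative only: a large dgemm that nvblas *may* offload to the GPU,
    // depending on its estimate of transfer cost vs. compute time.
    #include <iostream>
    #include <armadillo>

    int main()
    {
      // A product this size is usually big enough for offloading to be
      // considered; small products typically stay on the CPU.
      arma::mat A(3000, 3000, arma::fill::randu);
      arma::mat B(3000, 3000, arma::fill::randu);

      arma::wall_clock timer;
      timer.tic();
      arma::mat C = A * B;  // dispatched to the linked BLAS as dgemm
      std::cout << "multiply took " << timer.toc() << "s, trace(C) = "
                << arma::trace(C) << std::endl;

      // Watching `nvidia-smi` in another terminal while this runs shows
      // whether the GPU is actually being used.
      return 0;
    }

If even this test never touches the GPU, the linking (which rcurtin suggests inspecting with `ldd`) is the place to look, rather than the mlpack program itself.
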
kanishq24 has quit [Remote host closed the connection]
kanishq24_ is now known as kanishq24
kanishq24b has joined #mlpack
kanishq24b has quit [Ping timeout: 255 seconds]
picklerick has joined #mlpack
sreenik has quit [Quit: Page closed]
< picklerick>
any changes we make in PROGRAM_INFO would be reflected in 'mlpack_algo --help', right??
< rcurtin>
picklerick: yep, exactly
< picklerick>
but they weren't, even after running 'make mlpack_test'
< picklerick>
everything else is fine, but considering we are changing options, it's important for them to show up there, right?
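
For reference, an mlpack 3.x command-line program's *_main.cpp looks roughly like the sketch below; the program name, parameters, and strings here are invented for illustration. The documentation passed to PROGRAM_INFO and the PARAM_* macros is what the corresponding mlpack_<algo> binary prints for --help, so after editing it, the mlpack_<algo> target itself has to be rebuilt; building mlpack_test alone does not regenerate the command-line programs.

    // Rough sketch of an mlpack 3.x *_main.cpp; all names are illustrative.
    // (The build system defines the binding type, e.g. a CLI binding, before
    // this file is compiled.)
    #include <mlpack/prereqs.hpp>
    #include <mlpack/core/util/cli.hpp>
    #include <mlpack/core/util/mlpack_main.hpp>

    // This text is what `mlpack_example --help` would print.
    PROGRAM_INFO("Example Program",
        "An example binding; edit this string and rebuild the mlpack_example "
        "target to see the change in the --help output.");

    PARAM_MATRIX_IN("input", "Input matrix.", "i");
    PARAM_INT_IN("option", "An example integer option.", "o", 0);

    static void mlpackMain()
    {
      // The program logic goes here, reading options with
      // CLI::GetParam<int>("option") and so on.
    }
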
< Suryo>
What I understand based on this is that the user would have to decide how each constraint is being handled.
< Suryo>
And on this page, the two methods that are described basically correspond to having only 'feasible solutions' and to using a 'penalty function' (although, at some level, these are equivalent)
< Suryo>
Now, in the ideas page, one of the tasks in focus is to program variants of PSO for constrained optimization
< Suryo>
However, if I am to preserve the current API style, then the way each constraint is handled would not totally depend on the algorithm, but on the user.
< Suryo>
What I've done to handle constrained problems in PSO is basically introduce the constraints into the objective and follow the usual approach for feasible solutions. For this, I did not actually have to write a different PSO algorithm altogether - I just had to make sure that the initial particles are feasible (as per the paper by Eberhart)
< Suryo>
I would like some input regarding the API that would be suitable. Would we like to program methods that can handle constraints in standard ways, or do we leave the constraint penalization to the users altogether, the way it is documented in the link that I just shared?
< Suryo>
Kindly let me know what you think. Thanks!
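
To make the design question concrete, here is a rough user-side sketch of the penalty-function approach Suryo describes: the constraint is folded into the objective before the function is handed to the optimizer. The class, constraint, and penalty weight below are made up for illustration and are not an existing ensmallen or mlpack API; the Evaluate() signature follows the usual ensmallen convention for arbitrary objective functions.

    // Illustrative only: a penalty wrapper a user could write around an
    // unconstrained objective before passing it to a PSO-style optimizer.
    #include <algorithm>
    #include <armadillo>

    class PenalizedSphere
    {
     public:
      explicit PenalizedSphere(const double penaltyWeight = 1e6) :
          penaltyWeight(penaltyWeight) { }

      // f(x) = ||x||^2, subject to the (made-up) constraint
      // g(x) = 1 - sum(x) <= 0, added to the objective as a quadratic penalty.
      double Evaluate(const arma::mat& x)
      {
        const double objective = arma::accu(x % x);  // sum of squares
        const double violation = std::max(0.0, 1.0 - arma::accu(x));
        return objective + penaltyWeight * violation * violation;
      }

     private:
      double penaltyWeight;
    };

The alternative discussed above (handling constraints inside the optimizer itself, e.g. by keeping only feasible particles as in the Eberhart approach) would move this kind of logic out of user code and into the PSO implementation, which is essentially the API decision being asked about.
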
ayesdie has quit [Quit: Connection closed for inactivity]
< Suryo>
If the constraint handling is left to the users, then the amount of work at our end, from a programming point of view, is a lot less.
< Suryo>
And if that's what you would want, then in my opinion, it would be wise to include other variants of PSO as a part of the GSoC end goal.
< Suryo>
There are other problems that I haven't taken care of. These are minor issues but would require testing. For example, constrained PSO relies on an initial set of feasible solutions. Now if the initial set is being sampled from an infeasible range itself, then the optimization will not be done. Though it sounds trivial, I believe that it's important to program timeouts for such things.
< Suryo>
And then, parallelization within ensmallen