ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
SinghKislay has quit [Ping timeout: 256 seconds]
ayesdie has joined #mlpack
huimin has joined #mlpack
huimin has quit [Client Quit]
sreenik has joined #mlpack
braceletboy has joined #mlpack
braceletboy has quit [Ping timeout: 245 seconds]
Scrot has joined #mlpack
Scrot has left #mlpack []
braceletboy has joined #mlpack
sreenik has quit [Ping timeout: 256 seconds]
ayesdie has quit [Quit: Connection closed for inactivity]
Habib has joined #mlpack
< Habib> Hello, anybody there?!
Habib has quit [Ping timeout: 256 seconds]
sreenik has joined #mlpack
< zoq> SinghKislay: Hey.
< zoq> Habib: Hello.
< zoq> Suryo: Thanks for the update, will take a look at the PR and comment there.
< zoq> favre49: See my comment to xain (IRC).
< zoq> sreenik: What you could also do is take the norm for each column; take a look at the conv net test for an example.
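(For reference, a minimal sketch of the column-wise normalization zoq suggests, assuming an Armadillo matrix with one sample per column; the file name is only a placeholder:)

    #include <mlpack/core.hpp>

    int main()
    {
      arma::mat dataset;
      // Placeholder dataset; mlpack stores one sample per column.
      mlpack::data::Load("dataset.csv", dataset, true);

      // Scale every column (sample) to unit L2 norm.
      for (size_t i = 0; i < dataset.n_cols; ++i)
      {
        const double colNorm = arma::norm(dataset.col(i), 2);
        if (colNorm > 0.0)
          dataset.col(i) /= colNorm;
      }
    }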
braceletboy has quit [Remote host closed the connection]
mulx10 has joined #mlpack
< mulx10> Hello
< mulx10> I need help regarding the memory checks on my PR.
< mulx10> I ran valgrind with the same arguments. It works on my PC; however, it fails in Jenkins.
mulx10 has quit [Client Quit]
andreim has joined #mlpack
Viserion has joined #mlpack
< Viserion> expected constructor, destructor, or type conversion before ‘(’ token mlpack::data::Load("/home/ans.csv", data, true); // The dataset itself.
< Viserion> how to resolve this?
Viserion has quit [Ping timeout: 256 seconds]
< sreenik> Viserion: Are you running the code globally, i.e., not inside any function?
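(That compiler error usually means the Load() call sits at namespace scope; a minimal sketch of the fix, with the call moved inside a function:)

    #include <mlpack/core.hpp>

    int main()
    {
      // Statements like this must live inside a function; at global scope
      // the compiler reports "expected constructor, destructor, or type
      // conversion before '(' token".
      arma::mat data;
      mlpack::data::Load("/home/ans.csv", data, true);  // The dataset itself.
      return 0;
    }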
favre49 has joined #mlpack
favre49 has quit [Client Quit]
favre49 has joined #mlpack
< favre49> zoq: Thanks, CoDeepNEAT looks really cool. I'll come back to you with some ideas soon.
favre49 has quit [Client Quit]
< sreenik> zoq: After normalising, I increased the step size 10x. That made the validation accuracy converge at around 97%. Normalising along the columns also gives more or less the same result. Adding layers or nodes, or dropout, doesn't increase it further either. I don't think the accuracy can be increased further without using CNNs, so I am trying to optimize the CNN version, which originally has ~80% accuracy :)
Subrajaa has joined #mlpack
< Subrajaa> I am a student from India. I would like to know whom to contact in the organisation so that I can interact with them. I am a beginner and would like to contribute sincerely. Please help.
Subrajaa has quit [Quit: Page closed]
sonu628 has joined #mlpack
< sonu628> hey everyone
< sonu628> when I compiled and ran programs in the folder where mlpack was installed
< sonu628> it worked successfully, however when I tried in another folder I got a [FATAL] error
< sonu628> I thought it must be an issue with armadillo, so I installed it from its source code
< sonu628> now I get this error
< sonu628> /usr/bin/ld: warning: libarmadillo.so.8, needed by /usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/libmlpack.so, may conflict with libarmadillo.so.9
< sonu628> any suggestions on what I should do?
sash has joined #mlpack
sash has quit [Client Quit]
kanishq244[m] has joined #mlpack
kanishq24 has joined #mlpack
< ShikharJ> sonu628: I guess this conflict is due to multiple versions of armadillo being installed. I think the better thing to do would be to remove one of the installations, and then try.
kanishq24 has quit [Quit: Leaving]
kanishq24 has joined #mlpack
shubham has joined #mlpack
shubham has left #mlpack []
manish__ has joined #mlpack
< sonu628> ShikharJ: yes, you were right, it was a collision of armadillo versions. I have removed all versions of armadillo and reinstalled through apt-get, ran cmake and the build again, and unfortunately got this error
< sonu628> warning: libhdf5.so.101, needed by /usr/lib/x86_64-linux-gnu/libarmadillo.so, not found (try using -rpath or -rpath-link)
manish__ has quit [Client Quit]
< sonu628> after this there were many undefined references to what I think are components needed from libhdf5.so.101
< sonu628> on running locate libhdf5.so.101 I got paths inside the anaconda installation
manish__ has joined #mlpack
< sonu628> I would be grateful if anybody could suggest a solution.
< rcurtin> link your program against hdf5 with -lhdf5?
< manish__> Hi! I have just joined this chat session. Is everyone here a GSoC 2019 student?
manish__ has left #mlpack []
kanishq24 has quit [Ping timeout: 245 seconds]
< sonu628> rcurtin: the error comes when I run the make command after cmake
< sonu628> I don't think make -lhdf5 would work
< rcurtin> sonu628: the CMake configuration should be automatically linking against all dependencies of Armadillo
< rcurtin> so it's strange that it isn't
< rcurtin> but you say you have hdf5 installed via anaconda and not the system package manager?
< sreenik> rcurtin: Today I was setting up mlpack on a gcp instance. I ran into the same hdf5 problem, but it didn't happen on my machine at home.
< sreenik> In the gcp instance I have hdf5 installed via the package manager apt-get
< sreenik> In the cmakefile, at the very end, there is something like (autoDetect = false) as far as I remember. If I can get a fix I will post an update here
< rcurtin> yeah, if you have built Armadillo by hand you can configure with 'cmake -D DETECT_HDF5=OFF .' instead of './configure' (which is just a wrapper script for cmake)
andreim has quit [Ping timeout: 256 seconds]
sumedhghaisas has joined #mlpack
< sreenik> I think setting DETECT_HDF5=OFF while building armadillo has worked. mlpack is currently in the build process and hasn't thrown any error till now :D
kanishq24 has joined #mlpack
manish__ has joined #mlpack
manish__ has quit [Client Quit]
sonu628 has quit [Ping timeout: 256 seconds]
shubhangi has joined #mlpack
< sreenik> Built successfully :)
< shubhangi> Hello Everyone,
< shubhangi> I wanted to contribute to the "Application of ANN Algorithms Implemented in mlpack" project.
< shubhangi> I started to build mlpack.
< shubhangi> I started to build mlpack on ubuntu machine 64 bit and it got stuck on running the `make -j4` command
< shubhangi> I ran it on the CPU and it is not progressing past 77%, where it is building the neural network. So, is there any issue with my machine, or should I shift to a GPU?
< rcurtin> shubhangi: hi there, is your system swapping?
< rcurtin> if it is running out of ram you can try with make -j2 instead
< rcurtin> and that may help
< shubhangi> At first I tried with -j4 and it stopped; then I tried with j and it is still not proceeding further
riaash04 has joined #mlpack
< riaash04> zoq: In exploring how NEAT could be used to optimise arbitrary functions, I went through the implementation of CNE since it also does Neuroevolution. So for optimising arbitrary functions CNE considers the parameters as weights and optimises accordingly while evaluating based on the Evaluate method (basically GA).
< riaash04> But we can't do this with NEAT
< riaash04> Since NEAT also changes the topology, extra weights will get added
< riaash04> So for using NEAT as an optimiser for arbitrary functions, we could consider the output values of the neural networks as the values with which to evaluate the functions (similar to sequential tasks).
< riaash04> The input to the networks would be the initial values of the parameters.
< riaash04> Since NEAT is good at optimising NNs, this could be a way. (Maybe I am just stating the obvious or am completely out of line)
< riaash04> This is somewhat similar to what they experimented with in "Is Meta-EA a viable optimisation method"
shubhangi has quit [Ping timeout: 256 seconds]
< riaash04> Although, this way of optimisation would not be very useful. We would still be able to use it as an optimiser for sequential tasks and as a meta-optimiser.
shubhangi has joined #mlpack
favre49 has joined #mlpack
< favre49> riaash04: The problem I see with the method you described is that it seems wasteful to use something like NEAT in that way when DE or CNE would suffice.
rajiv_ has joined #mlpack
< rajiv_> zoq: I have sent you an updated proposal idea... Please let me know what you think :)
rajiv_ has quit [Client Quit]
< riaash04> Yes, that is true. This is just a way of using NEAT as an arbitrary function optimiser along with sequential task optimisation and meta-optimisation, without having to add specialised methods.
< riaash04> favre49: sorry forgot to mention you in the previous message
Shady_ has joined #mlpack
Shady_ has quit [Client Quit]
< favre49> riaash04: You're right. To me, NeuroEvolutionary Meta-Optimizers seem like the most apt way to do this. There are problems with it though.
< favre49> For one, we need to give it a substrate, which would be problem specific. Also, the author stated it did not perform well on all functions.
< favre49> So, perhaps to revise my original statement, it doesn't seem as apt xD
favre49 has quit [Quit: Page closed]
riaash04 has quit [Quit: Page closed]
venom has joined #mlpack
venom has quit [Quit: Page closed]
kanishq24 has quit [Quit: Leaving]
sreenik has quit [Ping timeout: 256 seconds]
Viserion has joined #mlpack
< Viserion> After compiling with this command: g++ knn_example.cpp -o knn_example -O3 -std=c++11 -larmadillo -lmlpack -lboost_serialization I still get the following error: /tmp/ccQPHIsH.o: In function `main': knn_example.cpp:(.text.startup+0xd5): undefined reference to `bool mlpack::data::Load<double>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, arma::Mat<double>&, bool, bool)' collect2: error: ld returned
< rcurtin> Viserion: are you linking against the correct libmlpack.so? what have you tried to debug it?
< Viserion> I built mlpack from source and then used the above command. Honestly, I do not know how to check whether I am linking to the correct libmlpack.so. Can you help me out?
< Viserion> My c++ file : #include <./mlpack/core.hpp> int main() { arma::mat data; data::Load("/home/viserion/ans.csv", data, true); }
shubhangi has quit [Ping timeout: 256 seconds]
< rcurtin> Viserion: sorry, I stepped out for a while. did you install mlpack from source?
< rcurtin> and are there two versions of mlpack installed on your system?
< rcurtin> you've probably seen this page--- http://mlpack.org/docs/mlpack-3.0.4/doxygen/build.html
< rcurtin> in the "Simple Linux build instructions" section, it says "On many Linux systems, mlpack will install by default to /usr/local/lib and you may need to set the LD_LIBRARY_PATH environment variable"
< rcurtin> that also means that if you are linking against mlpack, you may need to specify -L/usr/local/lib on the compiler command line, so that gcc will link against /usr/local/lib/libmlpack.so
< rcurtin> (which is the one you built and installed)
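(Concretely, assuming the default /usr/local prefix, the earlier command would become something like
    g++ knn_example.cpp -o knn_example -O3 -std=c++11 -L/usr/local/lib -larmadillo -lmlpack -lboost_serialization
with LD_LIBRARY_PATH=/usr/local/lib exported before running the resulting binary; the exact path depends on where mlpack was actually installed.)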
< zoq> riaash04: You are on the right track, I agree that CNE might be faster at least in some cases; but that depends on the initial topology, for more complex tasks NEAT should be able to find a solution faster. As Ryan pointed out, usually a new optimizer is tested on a single task/random seed, which shows some advantages over method x, but there is no guarantee that it will work across different tasks/seeds. So
< zoq> it might be useful to run some experiments at the end to get some more insights.
riaash04 has joined #mlpack
< riaash04> zoq: Ok, thanks. So I will build on this more. :) Also, I was thinking CoDeepNEAT could also be part of the project, like a meta-optimiser application of NEAT to optimise DNNs; it would be able to utilise NEAT's population and speciation (as it uses the same process).
riaash04 has quit [Quit: Page closed]
< zoq> riaash04: It could be part of the project; for now the focus is on NEAT, including testing (big point) and documentation.
Viserion has quit [Ping timeout: 256 seconds]
donfreecss has joined #mlpack
< donfreecss> hi everyone,
< donfreecss> I'm looking for an example of a CMake file that can be used to build C++ projects that use mlpack; I've never used cmake before, unfortunately
< donfreecss> I've built mlpack successfully on my machine, but I've been working with it via command-line compilation, i.e. gcc -lmlpack -l.. -l.. etc
< zoq> donfreecss: You could checkout the models repo (https://github.com/mlpack/models) which uses cmake to find mlpack.
< zoq> donfreecss: https://github.com/zoq/nes is another project, the important file here is https://github.com/zoq/nes/blob/master/CMake/FindMlpack.cmake
< donfreecss> I took a quick look and didn't get much :) I think I need to read more about cmake and then come back and go through these examples
< donfreecss> Thank you
< zoq> sounds good, let me know if I should clarify anything
donfreecss_ has joined #mlpack
petris_ is now known as petris
< rcurtin> donfreecss_: if you're just writing simple applications that link against mlpack, CMake may be overkill
< rcurtin> but if you're trying to write something that you hope to distribute to other people's systems that they should be able to configure and install, then CMake is probably the right tool for that
< rcurtin> the key file in the models/ repo is probably going to be CMake/FindMLPACK.cmake and CMake/FindEnsmallen.cmake, which can be used to help find those libraries with CMake
< rcurtin> oops, "key files". also CMake/FindArmadillo.cmake and related files can be really helpful too
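(For a single-executable project, a minimal CMakeLists.txt sketch could look like the following; knn_example.cpp and the variable names are placeholders, and it assumes mlpack, Armadillo, and Boost are installed where CMake can find them:)

    cmake_minimum_required(VERSION 3.5)
    project(knn_example CXX)

    # Locate the mlpack headers and library; adjust hints if installed elsewhere.
    find_path(MLPACK_INCLUDE_DIR mlpack/core.hpp)
    find_library(MLPACK_LIBRARY mlpack)
    find_package(Armadillo REQUIRED)
    find_package(Boost REQUIRED COMPONENTS serialization)

    add_executable(knn_example knn_example.cpp)
    set_target_properties(knn_example PROPERTIES CXX_STANDARD 11)
    target_include_directories(knn_example PRIVATE
      ${MLPACK_INCLUDE_DIR} ${ARMADILLO_INCLUDE_DIRS} ${Boost_INCLUDE_DIRS})
    target_link_libraries(knn_example
      ${MLPACK_LIBRARY} ${ARMADILLO_LIBRARIES} Boost::serialization)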
donfreecss has quit [Ping timeout: 256 seconds]
donfreecss_ has quit [Ping timeout: 256 seconds]
donfreecss has joined #mlpack
donfreecss has quit [Client Quit]
donfreecss has joined #mlpack
donfreecss has quit [Client Quit]
msdey has joined #mlpack
msdey has quit [Ping timeout: 256 seconds]