ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
xiaohong has joined #mlpack
jeffin143 has joined #mlpack
< jeffin143> zoq: How do we use the properties of the LayerTraits class in src/methods/ann/layer/layer_traits.hpp? -- https://github.com/mlpack/mlpack/blob/b16cfd0653e1fd6ca59700bc57482472e5ea4e14/src/mlpack/methods/ann/layer/layer_traits.hpp#L35
< jeffin143> Did you override the values anywhere? Where is it used?
jeffin143 has quit [Ping timeout: 260 seconds]
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
favre49 has joined #mlpack
< favre49> zoq I made some changes, and added the ability to load a starting genome.
< favre49> I tried using the smallest possible solution given in the paper and only changed the weights, but it doesn't give good performance. I think it's either a problem with the network activation scheme or the environment's equations themselves; I'll look into that
< favre49> Oddly, while the max fitness is increasing, the mean of the population does not change at all. Not sure what that's indicative of, though
favre49 has quit [Remote host closed the connection]
xiaohong has quit [Read error: Connection timed out]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong_ has joined #mlpack
xiaohong has quit [Read error: Connection reset by peer]
xiaohong has joined #mlpack
xiaohong_ has quit [Ping timeout: 248 seconds]
favre49 has joined #mlpack
xiaohong_ has joined #mlpack
favre49 has quit [Remote host closed the connection]
xiaohong has quit [Ping timeout: 245 seconds]
xiaohong_ has quit [Ping timeout: 245 seconds]
xiaohong has joined #mlpack
favre49 has joined #mlpack
< favre49> I keep getting this error when running make install -> https://pastebin.com/jRzf1AZP
< favre49> If I run make clean, it no longer gives this error, but the recipe for 'all' fails
< favre49> and then the error comes back
favre49 has quit [Remote host closed the connection]
xiaohong_ has joined #mlpack
xiaohong has quit [Ping timeout: 246 seconds]
KimSangYeon-DGU has joined #mlpack
< lozhnikov> jeffin143: You should write one general template and a specialization for every policy type that overrides default values.
xiaohong has joined #mlpack
xiaohong_ has quit [Ping timeout: 272 seconds]
< ShikharJ> zoq: Could you take a look at the Highway networks PR? I think it is ready for merging, but I wanted your approval :)
< ShikharJ> sakshamB: I didn't see your blog post. Can you push one sometime soon?
< lozhnikov> jeffin143: https://pastebin.com/SJ6hp6Et
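(To illustrate lozhnikov's suggestion, here is a minimal sketch of the traits pattern, assuming illustrative layer names and a single trait member rather than mlpack's actual ones; see the pastebin above for his version.)

    #include <iostream>

    // Illustrative layer types; any class works as a template argument.
    class SigmoidLayer { };
    class SoftmaxLayer { };

    // General template: the default trait values every layer type gets.
    template<typename LayerType>
    class LayerTraits
    {
     public:
      static const bool IsOutputLayer = false;
    };

    // Specialization: overrides the default for SoftmaxLayer only.
    template<>
    class LayerTraits<SoftmaxLayer>
    {
     public:
      static const bool IsOutputLayer = true;
    };

    int main()
    {
      // The trait values are resolved at compile time, so they can drive
      // template dispatch (e.g. via std::enable_if or tag dispatch).
      std::cout << LayerTraits<SigmoidLayer>::IsOutputLayer << "\n"; // 0
      std::cout << LayerTraits<SoftmaxLayer>::IsOutputLayer << "\n"; // 1
    }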
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
< jenkins-mlpack2> Project docker mlpack nightly build build #381: STILL UNSTABLE in 3 hr 55 min: http://ci.mlpack.org/job/docker%20mlpack%20nightly%20build/381/
rcurtin has quit [Ping timeout: 268 seconds]
rcurtin has joined #mlpack
favre49 has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
favre49 has quit [Remote host closed the connection]
< zoq> favre49: Right, I can see the same issue; I'm sure we can figure this out. The rest looks good, so once it's fixed I think we have a pretty 'neat' solution.
< zoq> favre49: Does it work again if you comment out the serialization part?
< zoq> ShikharJ: Ahh, yeah, will take a look later today.
< sakshamB> ShikharJ: yes pushed :)
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> georgedouzas/mlpack#3 (master - 35e51af : georgedouzas): The build has errored.
travis-ci has left #mlpack []
< zoq> favre49: The discrete RL tasks work, right?
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
KimSangYeon-DGU has quit [Remote host closed the connection]
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
< zoq> favre49: For the CartPole task did you run the same experiment against the python version?
xiaohong has quit [Remote host closed the connection]
< zoq> favre49: Sometimes there is a genome that is able to solve the task in the first generation; maybe there is something wrong with the task evaluation. We should run the code against the gym env and take a recording.
travis-ci has joined #mlpack
< travis-ci> georgedouzas/mlpack#4 (dev - c3b3101 : georgedouzas): The build has errored.
< travis-ci> Change view : https://github.com/georgedouzas/mlpack/compare/87cb72eee08c^...c3b31011cdef
travis-ci has left #mlpack []
favre49 has joined #mlpack
< favre49> zoq I did run it against CartPole; I think it was able to last 195 steps or so, but then I focused more on the double pole cart experiment
< favre49> I did run it against the python version, and in comparison it performed very poorly, lasting only around 15 steps. I have made changes since then; I'm yet to test them against the python version
< favre49> I'll be back home tomorrow evening, I'll try to run it again then. I'll also post a blog for this week and the last together, since I haven't had the time to write one. I've had to do a lot of traveling and visiting relatives.
< favre49> The discrete RL task worked, last I tried, but I haven't tried it since we merged the PR that made changes to the codebase.
favre49 has quit [Remote host closed the connection]
toshal_ has joined #mlpack
travis-ci has joined #mlpack
< travis-ci> georgedouzas/mlpack#5 (master - b16cfd0 : Ryan Curtin): The build has errored.
travis-ci has left #mlpack []
KimSangYeon-DGU has joined #mlpack
toshal_ is now known as Toshal
sumedhghaisas has joined #mlpack
< zoq> favre49: Okay, thanks for the input; let's discuss once you are back.
< sumedhghaisas> KimSangYeon-DGU: I am ready now if you want to start early. :)
< KimSangYeon-DGU> sumedhghaisas: Hi Sumedh
< KimSangYeon-DGU> I'm ready :)
< KimSangYeon-DGU> I'm thinking about and writing code for stable computation of the covariance.
< sumedhghaisas> Hey.
< sumedhghaisas> I see.
< KimSangYeon-DGU> sumedhghaisas: I'm ready
< sumedhghaisas> I went through your videos
< sumedhghaisas> the results seem good though
< sumedhghaisas> sometimes the mean diverges
< KimSangYeon-DGU> Right
< sumedhghaisas> I wanted to make a couple of observations on the videos
< sumedhghaisas> in Video 8
< sumedhghaisas> one cluster converges but the other diverges
< sumedhghaisas> correct?
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> Blue mean diverges
< sumedhghaisas> Could you reproduce the same results but with NLL and Constraint reported separately?
< sumedhghaisas> I mean their individual values
< sumedhghaisas> for each cluster
< sumedhghaisas> I wanna understand why it is diverging
< sumedhghaisas> Am I making sense? :)
< KimSangYeon-DGU> Ah okay
< sumedhghaisas> same for Task 6
< sumedhghaisas> let's analyze what is happening with the optimization values
< sumedhghaisas> and try to understand what is causing this
< sumedhghaisas> I think we can figure it out
< sumedhghaisas> Also regarding your variance comment
< KimSangYeon-DGU> Yeah!
< sumedhghaisas> if you set the covariance to zero, does it still train?
< KimSangYeon-DGU> Wait a moment, let me check
< sumedhghaisas> Sure.
< KimSangYeon-DGU> I remember it doesn't
< sumedhghaisas> ahh... then the clusters diverge?
< sumedhghaisas> Also, it would be nice if we documented these experiments and the findings :)
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> Sumedh, when I set the covariance to zero, it can't do the Cholesky decomposition
< sumedhghaisas> ahh no I mean without cholesky
< KimSangYeon-DGU> Yeah
< sumedhghaisas> if the covariance is zero it is definitely positive semi-definite
< sumedhghaisas> we don't need Cholesky
< sumedhghaisas> just set the cluster variance to any diagonal matrix
< KimSangYeon-DGU> Yeah, a zero matrix can't work, because we can't get the inverse of the matrix
< KimSangYeon-DGU> Can we test with the identity matrix?
< KimSangYeon-DGU> for calculating the inverse matrix
< sumedhghaisas> ummm... sure... just keep the means different
< sumedhghaisas> I mean for initialization
< KimSangYeon-DGU> Yeah
< sumedhghaisas> initialize the means differently and initialize the variance as the identity matrix and see what happens
< KimSangYeon-DGU> Got it
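(A hedged aside on why the identity suggestion works: the identity matrix is symmetric positive definite, so its Cholesky factor and its inverse both exist, while a zero covariance is only positive semi-definite and has neither, which matches the failures described above. A minimal Armadillo sketch; the actual QGMM experiments here run in TensorFlow.)

    #include <armadillo>

    int main()
    {
      // Identity covariance: positive definite, so both operations succeed.
      arma::mat identityCov = arma::eye<arma::mat>(2, 2);
      arma::mat cholFactor = arma::chol(identityCov, "lower"); // fine
      arma::mat inverse = arma::inv_sympd(identityCov);        // fine

      // Zero covariance: only positive semi-definite; the Cholesky
      // factorization fails and no inverse exists.
      arma::mat zeroCov(2, 2, arma::fill::zeros);
      arma::mat cholZero;
      bool ok = arma::chol(cholZero, zeroCov, "lower"); // ok == false
      (void) ok;

      return 0;
    }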
< KimSangYeon-DGU> sumedhghaisas: Can you check it https://pastebin.com/E0kpD35d
< KimSangYeon-DGU> ?
< KimSangYeon-DGU> Also, I'll send the video
< KimSangYeon-DGU> The lambda * approximation constraint is almost zero.
< sumedhghaisas> So in the new video, is the variance held constant at the identity?
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> initially
< sumedhghaisas> ahh but then it can change right?
< sumedhghaisas> I mean it's updated?
< KimSangYeon-DGU> Let me check
< sumedhghaisas> make sure it's a tensorflow variable
< sumedhghaisas> I got confused because in the video there is no variance, that's why
< sumedhghaisas> also in the data that you sent I see NLL going negative even when the constraint is zero, that's weird
< sumedhghaisas> could you also generate NLL and constraint for a run that went smoothly, I mean where both clusters converged normally? I mean from your previous videos. If NLL is going negative we may have some problems
< KimSangYeon-DGU> Yeah
< KimSangYeon-DGU> I think it is my fault because I multiplied by negative 1 when printing it....
< KimSangYeon-DGU> I checked the covariance; the covariance is NULL
< KimSangYeon-DGU> it doesn't have any value
< KimSangYeon-DGU> I think there is some error in the initial value of the covariance.
< sumedhghaisas> ummm... after the update you mean? that's strange...
< KimSangYeon-DGU> When I tested the QGMM, the initial values were critical to the performance. I found some papers that say the EM mechanism is sensitive to the initial values of the parameters
< sumedhghaisas> okay I will be right back ...
< KimSangYeon-DGU> When I set the covariance to the identity, the problem disappears
< KimSangYeon-DGU> Yeah
< sumedhghaisas> okay, let's go step by step
< KimSangYeon-DGU> Yeah
< sumedhghaisas> In the videos that you sent
< sumedhghaisas> video 1 converges perfectly, right?
< sumedhghaisas> could we see NLL and constraint for that?
< KimSangYeon-DGU> Yeah
< sumedhghaisas> I wanna see if NLL goes negative or not
< KimSangYeon-DGU> Yeah, but first I'd have to find the initial values
< KimSangYeon-DGU> Let me try
< sumedhghaisas> ahh you mean for that specific run?
< KimSangYeon-DGU> The means in the videos were generated randomly
< KimSangYeon-DGU> So I'd have to estimate the values
< sumedhghaisas> ahh
< sumedhghaisas> let's try running it again randomly, and if you find a stable run let's document that in a file
< KimSangYeon-DGU> Got it
< sumedhghaisas> the start position
< KimSangYeon-DGU> Can we change the covariance?
< sumedhghaisas> the final and NLL and constraint graphs
< sumedhghaisas> what say?
< sumedhghaisas> then we will have some observations to work on
< KimSangYeon-DGU> I'm not sure "the final and NLL and constraint graphs"
< KimSangYeon-DGU> I'm not sure I understand the intention of "the start position the final and NLL and constraint graphs"
< KimSangYeon-DGU> Sorry
< KimSangYeon-DGU> sumedhghaisas: Would it be okay to set the covariance to stable values?
< sumedhghaisas> ahh I mean how NLL changes with training
< sumedhghaisas> and how constraint changes with training
< sumedhghaisas> graphs of that
< sumedhghaisas> yes stable values work
< KimSangYeon-DGU> sumedhghaisas: I sent the graphs
< KimSangYeon-DGU> And this is result logs: https://pastebin.com/vW6qRvPY
favre49 has joined #mlpack
< favre49> I recloned the repo, but I still get this error now -> https://pastebin.com/CHRi8R3F
favre49 has quit [Remote host closed the connection]
< zoq> favre49: Do you build with -DDEBUG=ON?
< KimSangYeon-DGU> sumedhghaisas: Hmm, even with only one cluster converged, the NLL converged
< KimSangYeon-DGU> And the mean diverged
favre49 has joined #mlpack
< favre49> zoq Yup
< sumedhghaisas> KimSangYeon-DGU: ohh, by graphs I mean you can plot the NLL values at each iteration
< sumedhghaisas> so the x-axis will be iterations
< sumedhghaisas> and the y-axis can be NLL
< sumedhghaisas> the same with the constraint
< sumedhghaisas> this way we can see how NLL behaves with iterations
< KimSangYeon-DGU> Ah okay
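(A minimal sketch of the per-iteration logging this implies, assuming a training loop that exposes the NLL and constraint values each step; the real experiments run in TensorFlow, and the names below are illustrative. The resulting CSV gives exactly the iterations-vs-NLL and iterations-vs-constraint series to plot.)

    #include <fstream>
    #include <string>
    #include <vector>

    // Hypothetical per-iteration metrics captured from a training run.
    struct IterationMetrics
    {
      double nll;        // negative log-likelihood at this iteration
      double constraint; // lambda * approximation constraint
    };

    // Write one row per iteration so any plotting tool can draw the
    // NLL-vs-iteration and constraint-vs-iteration curves.
    void WriteMetricsCsv(const std::vector<IterationMetrics>& history,
                         const std::string& filename)
    {
      std::ofstream out(filename);
      out << "iteration,nll,constraint\n";
      for (size_t i = 0; i < history.size(); ++i)
        out << i << "," << history[i].nll << "," << history[i].constraint << "\n";
    }

    int main()
    {
      // Dummy values purely for illustration.
      std::vector<IterationMetrics> history = { {1052.3, 0.9}, {874.1, 0.4}, {659.8, 0.1} };
      WriteMetricsCsv(history, "run01_metrics.csv");
    }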
< KimSangYeon-DGU> sumedhghaisas: Also, I have some questions, can you check the emails?
< sumedhghaisas> Sure. Can I come back to you in an hour?
< KimSangYeon-DGU> Okay
< sumedhghaisas> Thanks :)
< sumedhghaisas> But the results look amazing
< sumedhghaisas> the next task should be analyzing them in more depth
< KimSangYeon-DGU> Cool
< KimSangYeon-DGU> Yeah
< sumedhghaisas> Like plotting such graphs and seeing if NLL goes negative or not
< KimSangYeon-DGU> Yeah
< sumedhghaisas> KimSangYeon-DGU: Hey SangYeon. I just replied to the mail. :)
< sumedhghaisas> For the next task let's write down each aspect of the training
< sumedhghaisas> and make a training run for it
< sumedhghaisas> first would be a stable run and its graphs
< sumedhghaisas> Graphs will include the initial and final cluster positions
< sumedhghaisas> with NLL vs iterations
< sumedhghaisas> and constraint with iterations
KimSangYeon-DGU has quit [Ping timeout: 260 seconds]
< sumedhghaisas> Do you think there is something else we need to document?
vivekp has joined #mlpack
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
< favre49> zoq Thanks, that worked
< favre49> Is this an issue with my system or something I should open an issue for?
sumedhghaisas has quit [Ping timeout: 260 seconds]
favre49 has quit [Remote host closed the connection]
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
KimSangYeon-DGU has joined #mlpack
< KimSangYeon-DGU> sumedhghaisas: Oops... my connection was broken.
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
< KimSangYeon-DGU> sumedhghaisas: Your suggestion is good and I think it would be great to see the cos(phi) graph as well :)
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
vivekp has quit [Changing host]
vivekp has joined #mlpack
ImQ009 has joined #mlpack
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 248 seconds]
ImQ009 has quit [Quit: Leaving]
ShikharJ has joined #mlpack
< zoq> favre49: Sorry for the slow response; I think this is the first time I've seen this issue. Did you make any system updates that might have caused it?
< rcurtin> favre49: the existence of /usr/local/lib/libbfd.a seems a little strange to me, do you know how that file got there?
Abhi83 has joined #mlpack
< Abhi83> Hi, the cmake command is failing for mlpack. Can somebody help me?
travis-ci has joined #mlpack
< travis-ci> robertohueso/mlpack#29 (mc_kde_error_bounds - 45e4c68 : Roberto Hueso Gomez): The build is still failing.
travis-ci has left #mlpack []
< zoq> Abhi83: Hello, happy to help, can you provide the output, e.g. post it on pastebin?
< Abhi83> shall I post the contents of CMakeOutput.log?
< zoq> or the output of the 'cmake ..' command might give us enough information as well
< Abhi83> Here is the pastebin link. https://pastebin.com/KKrRCgrL
< Abhi83> Although I have boost installed, it says it is unable to find boost.
< zoq> Which version?
< zoq> perhaps 1.70.0?
< Abhi83> Let me check
< Abhi83> I am building mlpack from scratch. Please wait.
abernauer has joined #mlpack
< Abhi83> Here is the output of CMakeOutput.log: https://pastebin.com/HeXR2Xkc
< abernauer> rcurtin: Will I need to write a separate main function in C? The compiler is having issues with the order of the definitions of my linkage functions.
< Abhi83> Here is the output of CMakeError.log: https://pastebin.com/uhVCM1j8
< rcurtin> abernauer: you should be able to link against the mlpackMain() function that is provided by, e.g., pca_main.cpp
< Abhi83> Please help me find the issue, zoq
< rcurtin> Abhi83: try specifying CMAKE_CXX_FLAGS="-pthread" when you configure with cmake; that may help
< rcurtin> it seems like your system is using a compiler provided by anaconda? I don't understand why that is
< abernauer> Also, conceptually R is a C program, as most of the interpreted R code is implemented in C. That is a concern I think Dirk had.
< Abhi83> I am using the Anaconda distribution for Python. That may be the reason for it. Anyway, I will try your suggestion, rcurtin
< Abhi83> I am using the command cmake -D CMAKE_CXX_FLAGS=-pthread ../ Please tell me if this is correct.
< zoq> That is correct.
< zoq> or CMAKE_CXX_FLAGS="-pthread"
< rcurtin> abernauer: you might want to take a look at the PR for the Go bindings
< rcurtin> I remember that some work had to be done to declare mlpackMain() with extern C linkage
< rcurtin> but I don't remember exactly what was done
< Abhi83> Sorry, that didn't fix it.
< rcurtin> I know that those bindings generate a .h and .cpp file (I think the code that generates those is called generate_h.cpp and generate_cpp.cpp, or something like this)
< rcurtin> you might consider taking a look at those files to see how it is done there, and then try replicating the same thing for R
< zoq> rcurtin Abhi83: Not sure if adding pthread will fix the boost not found issue.
< rcurtin> Abhi83: right, if boost is still not being found, that will be a separate issue
< rcurtin> but if it is still a compiler issue, you could also try forcing the use of the system compiler
< rcurtin> assuming you have, e.g., /usr/bin/g++ on your system, you could do cmake -DCMAKE_CXX_COMPILER=/usr/bin/g++ -DCMAKE_C_COMPILER=/usr/bin/g++ ../
< rcurtin> and that should avoid using the anaconda-provided compiler
< Abhi83> The cmake command is giving this output in between: -- Successfully downloaded ensmallen into /home/abhishek/Documents/mlpack-3.1.1/build/deps/ensmallen-1.15.1/ CMake Error at /usr/share/cmake-3.13/Modules/FindBoost.cmake:2100 (message): Unable to find the requested Boost libraries. Unable to find the Boost header files. Please set BOOST_ROOT to the
< abernauer> Yeah, I will do that. I think we discussed that in April and May a bit. There are some R packages mentioned in CRAN's Writing R Extensions manual; I could look at their source code.
< Abhi83> root directory containing Boost or BOOST_INCLUDEDIR to the directory containing Boost's headers. Call Stack (most recent call first): CMakeLists.txt:390 (find_package)
< rcurtin> abernauer: sounds good. another idea is to check out the Go bindings branch, build the go bindings, then look in build/src/mlpack/bindings/go/ to find the generated .h and .cpp files
< rcurtin> abernauer: that may be easier than trying to read the source code that generates those .h and .cpp files
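(A minimal sketch of the extern "C" idea under discussion, assuming illustrative names: mlpackPCA is a made-up wrapper symbol, and the stub mlpackMain() stands in for the entry point that a binding's *_main.cpp, e.g. pca_main.cpp, actually provides. The generated Go-binding files do something along these lines, but this is not their actual contents.)

    #include <iostream>

    // Stub for the C++ entry point; in the real build this comes from
    // the binding's *_main.cpp translation unit.
    void mlpackMain()
    {
      std::cout << "running the C++ binding entry point\n";
    }

    // The generated header declares a C-linkage symbol so a C host
    // (cgo for Go, or R's C interface) can call it without C++ name
    // mangling or definition-order issues.
    extern "C" void mlpackPCA();

    // The generated .cpp defines the wrapper, forwarding to the C++
    // function.
    extern "C" void mlpackPCA()
    {
      mlpackMain();
    }

    int main()
    {
      mlpackPCA();
    }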
< rcurtin> Abhi83: can you tell us how you installed boost, and what the version of boost installed is?
< Abhi83> The dpkg -s libboost-dev | grep 'Version' command gives me the output: Version: 1.67.0.1
< Abhi83> I installed it through sudo apt-get install libboost-all-dev
< rcurtin> ok, great; maybe it is worth trying to install libboost-all-dev?
< rcurtin> ah
< rcurtin> ok, great
< rcurtin> what happens if you try with a system compiler instead of the anaconda compiler?
< rcurtin> I'm wondering if maybe the anaconda compiler has the wrong include search directories or something...
< Abhi83> I cannot figure that out by myself. Shall I show the contents of CMakeError.log and CMakeOutput.log from the CMAKE_CXX_FLAGS="-pthread" run?
< rcurtin> just the output directly from CMake is probably more useful in this case
< rcurtin> my suggestion would be to try running cmake and set CMAKE_CXX_COMPILER and CMAKE_C_COMPILER
< rcurtin> I see you're on Ubuntu or Debian, so you could always just install directly from apt, i.e., apt-get install libmlpack-dev mlpack-bin
< rcurtin> that will give you the C++ headers and the command-line programs
< rcurtin> (and if you're looking for the python bindings, 'pip install mlpack3' will get those installed in your current Python environment)
< Abhi83> Ok I will run apt-get install libmlpack-dev mlpack-bin
< Abhi83> and then pip install mlpack3
abernauer has quit [Remote host closed the connection]
< Abhi83> rcurtin can you tell me what values I should give to CMAKE_CXX_COMPILER and CMAKE_C_COMPILER?
< Abhi83> The above two commands were successful. (apt-get install libmlpack-dev mlpack-bin, pip install mlpack3)
< Abhi83> but not cmake
< rcurtin> Abhi83: set CMAKE_CXX_COMPILER and CMAKE_C_COMPILER to the path of your compiler (maybe /usr/bin/g++? you may have to poke around to find it)
< rcurtin> in any case, if you have installed mlpack via apt, there's not a need to build it from source anymore---you can now just use the command-line programs from the command line
< rcurtin> i.e. try "mlpack_kmeans --help"
< rcurtin> and, from python, you should be able to use the mlpack Python bindings
< rcurtin> i.e. 'import mlpack' should work just fine
< Abhi83> Actually I wanted to contribute to mlpack. That's why I am building from source.
< rcurtin> ah, ok, I see---in this case installing via apt or pip probably isn't helpful
< Abhi83> I am getting this error as mentioned. https://github.com/mlpack/mlpack/issues/1351
< Abhi83> Can you help me with this.
< rcurtin> are you using the system compiler like I suggested?
< Abhi83> I used this command
< Abhi83> cmake CMAKE_CXX_COMPILER=/usr/bin/g++ CMAKE_C_COMPILER=/usr/bin/gcc ../
< rcurtin> and did cmake properly configure to use the system compiler instead of the anaconda compiler? you can look at the output of cmake to see
< zoq> also I think this should be cmake -D CMAKE_CXX_COMPILER=
< Abhi83> cmake in the end says -- Configuring incomplete, errors occurred!
< rcurtin> okay, so if it did not configure correctly it will not build at all, which means that you can't have had the error in #1351 (no makefile will be generated)
< rcurtin> I have to step out for a while, sorry
< Abhi83> No Problem.
< zoq> can you add the -D on each and make sure the path to gcc and g++ is correct
< Abhi83> Yeah I ran cmake -D CMAKE_CXX_COMPILER=/usr/bin/g++ -D CMAKE_C_COMPILER=/usr/bin/gcc ../
< Abhi83> I fixed that; only one issue remains.
< Abhi83> CMake Error at /usr/share/cmake-3.13/Modules/FindBoost.cmake:2100 (message): Unable to find the requested Boost libraries. Unable to find the Boost header files. Please set BOOST_ROOT to the root directory containing Boost or BOOST_INCLUDEDIR to the directory containing Boost's headers. Call Stack (most recent call first): CMakeLists.txt:390 (find_package)
< Abhi83> I think it's unable to find the boost path, maybe.
< zoq> so let's set BOOST_INCLUDEDIR as mentioned above
< zoq> e.g. -DBOOST_INCLUDEDIR=/path/to/boost/includes/ -DBOOST_LIBRARYDIR=/path/to/boost/libs
< Abhi83> ok. Let me try.
< Abhi83> This may sound stupid but I am unable to find the path for BOOST_INCLUDEDIR. Can you help me with it?
< Abhi83> BOOST_LIBRARYDIR=/usr/bin. This I got.
< Abhi83> BOOST_LIBRARYDIR=/usr/lib, rather.
< zoq> The folder should contain the boost libs, e.g. libboost_*; a reasonable path might be /usr/lib/ or /usr/local/lib
< Abhi83> Yes, BOOST_LIBRARYDIR=/usr/lib, but I am not getting the BOOST_INCLUDEDIR path. What files is it supposed to contain?
< zoq> what about /usr/include is there a boost folder?
< Abhi83> No
< Abhi83> I think Boost probably got corrupted.
< zoq> what does 'whereis boost' return?
< Abhi83> boost:
< Abhi83> Nothing else
< zoq> I guess you are right about the boost installation.
< Abhi83> shall I uninstall and reinstall boost?
< zoq> might be an option
< Abhi83> ok. Let me try
k3nz0__ has quit [Remote host closed the connection]
Abhi83 has quit [Remote host closed the connection]
abernauer has joined #mlpack
< abernauer> rcurtin: Just to clarify, the Go bindings are in an open pull request, correct? Also, in my previous comment I was referring to other R packages that are interfaces to C++ libraries, e.g. gbm.
abernauer has left #mlpack []
< rcurtin> abernauer: right, the PR is still open