ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
< jeffin143>
Did you override the values anywhere? Where is it used?
jeffin143 has quit [Ping timeout: 260 seconds]
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
favre49 has joined #mlpack
< favre49>
zoq I made some changes, and added the ability to load a starting genome.
< favre49>
I tried using the smallest possible solution given in the paper and only changed the weights, but it doesn't give good performance. I think it's either a problem with the network activation scheme or the environment's equation itself; I'll look into that
< favre49>
Oddly, while the max fitness is increasing, the mean of the population does not change at all. Not sure what that's indicative of, though
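Since favre49 suspects "the environment's equation itself", the standard single-pole cart-pole dynamics (the classic Barto/Sutton/Anderson formulation also used by Gym's CartPole) are a useful reference to cross-check against. The sketch below is an illustration with the common benchmark constants; it is an assumption, not mlpack's actual environment code, and the double-pole variant adds a second pole term not shown here.

```python
import math

# Standard single-pole cart-pole dynamics, useful as a reference when checking
# an environment's equations. Constants are the usual benchmark values
# (assumptions, not necessarily what mlpack's implementation uses).
GRAVITY = 9.8          # m/s^2
MASS_CART = 1.0        # kg
MASS_POLE = 0.1        # kg
TOTAL_MASS = MASS_CART + MASS_POLE
LENGTH = 0.5           # half the pole's length, m
POLEMASS_LENGTH = MASS_POLE * LENGTH

def cartpole_accelerations(theta, theta_dot, force):
    """Return (x_ddot, theta_ddot) for the single-pole cart-pole system."""
    cos_t = math.cos(theta)
    sin_t = math.sin(theta)
    temp = (force + POLEMASS_LENGTH * theta_dot ** 2 * sin_t) / TOTAL_MASS
    theta_ddot = (GRAVITY * sin_t - cos_t * temp) / (
        LENGTH * (4.0 / 3.0 - MASS_POLE * cos_t ** 2 / TOTAL_MASS))
    x_ddot = temp - POLEMASS_LENGTH * theta_ddot * cos_t / TOTAL_MASS
    return x_ddot, theta_ddot

# At the upright equilibrium with no applied force, both accelerations are zero.
print(cartpole_accelerations(0.0, 0.0, 0.0))  # → (0.0, 0.0)
```

Comparing the signs and denominators of an implementation against this reference (e.g. the 4/3 term and the mass ratio) is a quick way to spot a transcription error in the dynamics.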
favre49 has quit [Remote host closed the connection]
xiaohong has quit [Read error: Connection timed out]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong_ has joined #mlpack
xiaohong has quit [Read error: Connection reset by peer]
xiaohong has joined #mlpack
xiaohong_ has quit [Ping timeout: 248 seconds]
favre49 has joined #mlpack
xiaohong_ has joined #mlpack
favre49 has quit [Remote host closed the connection]
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
< zoq>
favre49: Right, I can see the same issue; I'm sure we can figure this out. The rest looks good, so once it's fixed I think we have a pretty 'neat' solution.
< zoq>
favre49: Does it work again if you comment out the serialization part?
< zoq>
ShikharJ: Ahh, yeah, will take a look later today.
< sakshamB>
ShikharJ: yes pushed :)
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
travis-ci has joined #mlpack
< travis-ci>
georgedouzas/mlpack#3 (master - 35e51af : georgedouzas): The build has errored.
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
KimSangYeon-DGU has quit [Remote host closed the connection]
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
xiaohong has quit [Remote host closed the connection]
xiaohong has joined #mlpack
< zoq>
favre49: For the CartPole task did you run the same experiment against the python version?
xiaohong has quit [Remote host closed the connection]
< zoq>
favre49: Sometimes there is a genome that is able to solve the task in the first generation; maybe there is something wrong with the task evaluation. We should run the code against the gym env and take a recording.
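Running a single evolved genome against the environment and recording the episode, as zoq suggests, amounts to an evaluation loop like the one below. The stub environment is hypothetical (just enough to make the sketch self-contained); a real run would substitute an OpenAI Gym environment with the same reset()/step() interface plus a video-recording wrapper.

```python
# Sketch of an evaluate-and-record loop. StubCartPole is a hypothetical
# placeholder with Gym's reset()/step() interface; a real run would use an
# actual Gym environment and a recording wrapper instead.
class StubCartPole:
    def __init__(self):
        self.steps = 0
    def reset(self):
        self.steps = 0
        return [0.0, 0.0, 0.0, 0.0]          # initial observation
    def step(self, action):
        self.steps += 1
        done = self.steps >= 5               # this stub ends after 5 steps
        return [0.0] * 4, 1.0, done, {}      # obs, reward, done, info

def evaluate(env, policy, max_steps=200):
    """Run one episode, recording (observation, action) pairs for inspection."""
    trace, total_reward = [], 0.0
    obs, done = env.reset(), False
    while not done and len(trace) < max_steps:
        action = policy(obs)
        trace.append((obs, action))
        obs, reward, done, _ = env.step(action)
        total_reward += reward
    return total_reward, trace

reward, trace = evaluate(StubCartPole(), lambda obs: 0)
print(reward)  # → 5.0
```

Keeping the (observation, action) trace makes it easy to spot a task-evaluation bug, e.g. a genome "solving" the task on step one because the done flag or reward is computed wrongly.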
travis-ci has joined #mlpack
< travis-ci>
georgedouzas/mlpack#4 (dev - c3b3101 : georgedouzas): The build has errored.
< favre49>
zoq I did run it against CartPole; I think it was able to last 195 steps or so, but then I focused more on the double pole cart experiment
< favre49>
I did run it against the python version, and mine performed very poorly in comparison, lasting only about 15 steps. I have made changes since then; I'm yet to test them against the python version
< favre49>
I'll be back home tomorrow evening, I'll try to run it again then. I'll also post a blog for this week and the last together, since I haven't had the time to write one. I've had to do a lot of traveling and visiting relatives.
< favre49>
The discrete RL task worked, last I tried, but I haven't tried it since we merged that PR which made changes to the codebase.
favre49 has quit [Remote host closed the connection]
toshal_ has joined #mlpack
travis-ci has joined #mlpack
< travis-ci>
georgedouzas/mlpack#5 (master - b16cfd0 : Ryan Curtin): The build has errored.
< KimSangYeon-DGU>
The lambda * approximation constraint is almost zero.
< sumedhghaisas>
So in the new video is the variance set constant to identity?
< KimSangYeon-DGU>
Yeah
< KimSangYeon-DGU>
initially
< sumedhghaisas>
ahh but then it can change right?
< sumedhghaisas>
I mean, is it updated?
< KimSangYeon-DGU>
Let me check
< sumedhghaisas>
make sure it's a TensorFlow variable
< sumedhghaisas>
I got confused because in the video there is no variance, that's why
< sumedhghaisas>
also, in the data that you sent, I see NLL going negative even when the constraint is zero; that's weird
< sumedhghaisas>
could you also generate NLL and constraint for a run that went smoothly, i.e. where both clusters converged normally? I mean from your previous videos. If NLL is going negative we may have some problems
< KimSangYeon-DGU>
Yeah
< KimSangYeon-DGU>
I think it is my fault because I multiplied by negative 1 when printing it...
< KimSangYeon-DGU>
I checked the covariance; the covariance is NULL
< KimSangYeon-DGU>
it doesn't have any value
< KimSangYeon-DGU>
I think there is some error in the initial value of the covariance.
< sumedhghaisas>
ummm... after the update, you mean? that's strange...
< KimSangYeon-DGU>
When I tested the QGMM, the initial value was critical to the performance. I found a paper that says the EM mechanism is sensitive to the initial values of the parameters
< sumedhghaisas>
okay I will be right back ...
< KimSangYeon-DGU>
When I set the covariance to identity, it disappears
< KimSangYeon-DGU>
Yeah
< sumedhghaisas>
okay lets go step by step
< KimSangYeon-DGU>
Yeah
< sumedhghaisas>
In the videos that you sent
< sumedhghaisas>
video 1 converges perfectly right?
< sumedhghaisas>
could we see NLL and constraint for that?
< KimSangYeon-DGU>
Yeah
< sumedhghaisas>
I wanna see if NLL goes negative or not
< KimSangYeon-DGU>
Yeah, but I need to find the initial values
< KimSangYeon-DGU>
Let me try
< sumedhghaisas>
ahh you mean for that specific run?
< KimSangYeon-DGU>
The means in the videos were generated randomly
< KimSangYeon-DGU>
So I presume the values
< sumedhghaisas>
ahh
< KimSangYeon-DGU>
I should
< KimSangYeon-DGU>
*
< sumedhghaisas>
let's try running it again randomly, and if you find a stable run let's document that in a file
< KimSangYeon-DGU>
Got it
< sumedhghaisas>
the start position
< KimSangYeon-DGU>
Can we change the covariance?
< sumedhghaisas>
the final position, and the NLL and constraint graphs
< sumedhghaisas>
what do you say?
< sumedhghaisas>
then we will have some observations to work on
< KimSangYeon-DGU>
I'm not sure about "the final and NLL and constraint graphs"
< KimSangYeon-DGU>
I'm not sure I understand the intention of "the start position the final and NLL and constraint graphs"
< KimSangYeon-DGU>
Sorry
< KimSangYeon-DGU>
sumedhghaisas: Would it be okay to set the covariance to stable values?
< sumedhghaisas>
ahh I mean how NLL changes with training
< sumedhghaisas>
and how constraint changes with training
< sumedhghaisas>
graphs of that
< sumedhghaisas>
yes stable values work
< KimSangYeon-DGU>
sumedhghaisas: I sent the graphs
< sumedhghaisas>
KimSangYeon-DGU: ohh by graphs I mean you can plot NLL values with each iteration
< sumedhghaisas>
so x axis will be iterations
< sumedhghaisas>
and Y axis can be NLL
< sumedhghaisas>
the same with constraint
< sumedhghaisas>
this way we can see how NLL behaves with iterations
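As an aside on the negative-NLL worry: for continuous densities, NLL can legitimately go negative whenever density values exceed 1 (e.g. a very tight cluster), so a negative value is not automatically a bug, though it is still worth ruling out a sign error like the one mentioned above. A minimal pure-Python sketch, not tied to the QGMM code:

```python
import math

def gaussian_nll(data, mu, sigma):
    """Negative log-likelihood of 1-D data under N(mu, sigma^2)."""
    nll = 0.0
    for x in data:
        log_pdf = (-0.5 * ((x - mu) / sigma) ** 2
                   - math.log(sigma * math.sqrt(2.0 * math.pi)))
        nll -= log_pdf
    return nll

data = [0.01, -0.02, 0.005]
# A tight fit gives densities > 1, so NLL is legitimately negative:
print(gaussian_nll(data, 0.0, 0.05))   # negative
# A wide fit gives densities < 1, so NLL is positive:
print(gaussian_nll(data, 0.0, 10.0))   # positive
```

For the graphs sumedhghaisas describes, one would record the NLL (and constraint) value at each training iteration into a list and plot iterations on the x axis against the recorded values on the y axis, e.g. with matplotlib.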
< KimSangYeon-DGU>
Ah okay
< KimSangYeon-DGU>
sumedhghaisas: Also, I have some questions; can you check the emails?
< sumedhghaisas>
Sure. Can I come back to you in an hour?
< KimSangYeon-DGU>
Okay
< sumedhghaisas>
Thanks :)
< sumedhghaisas>
But the results look amazing
< sumedhghaisas>
the next task should be analyzing them in more depth
< KimSangYeon-DGU>
Cool
< KimSangYeon-DGU>
Yeah
< sumedhghaisas>
Like producing such graphs and seeing if NLL goes negative or not
< KimSangYeon-DGU>
Yeah
< sumedhghaisas>
KimSangYeon-DGU: Hey SangYeon. I just replied to the mail. :)
< sumedhghaisas>
For the next task let's write down each aspect of the training
< sumedhghaisas>
and make a training run for it
< sumedhghaisas>
first would be a stable run and its graphs
< sumedhghaisas>
Graphs will include the initial and final cluster positions
< sumedhghaisas>
with NLL vs iterations
< sumedhghaisas>
and constraint with iterations
KimSangYeon-DGU has quit [Ping timeout: 260 seconds]
< sumedhghaisas>
Do you think there is something else we need to document?
vivekp has joined #mlpack
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
< favre49>
zoq Thanks, that worked
< favre49>
Is this an issue with my system or something I should open an issue for?
sumedhghaisas has quit [Ping timeout: 260 seconds]
favre49 has quit [Remote host closed the connection]
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
KimSangYeon-DGU has joined #mlpack
< KimSangYeon-DGU>
sumedhghaisas: Oops... my connection was broken.
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
< KimSangYeon-DGU>
sumedhghaisas: Your suggestion is good and I think it would be great to see the cos(phi) graph as well :)
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
vivekp has quit [Changing host]
vivekp has joined #mlpack
ImQ009 has joined #mlpack
vivekp has quit [Read error: Connection reset by peer]
vivekp has joined #mlpack
vivekp has quit [Ping timeout: 248 seconds]
ImQ009 has quit [Quit: Leaving]
ShikharJ has joined #mlpack
< zoq>
favre49: Sorry for the slow response; I think this is the first time I've seen this issue. Did you make any system updates that might have caused it?
< rcurtin>
favre49: the existence of /usr/local/lib/libbfd.a seems a little strange to me, do you know how that file got there?
Abhi83 has joined #mlpack
< Abhi83>
Hi, the cmake command is failing for mlpack. Can somebody help me?
travis-ci has joined #mlpack
< travis-ci>
robertohueso/mlpack#29 (mc_kde_error_bounds - 45e4c68 : Roberto Hueso Gomez): The build is still failing.
< abernauer>
rcurtin: Will I need to write a separate main function in C? The compiler is having issues with order of the definitions of my linkage functions.
< rcurtin>
abernauer: you should be able to link against the mlpackMain() function that is provided by, e.g., pca_main.cpp
< Abhi83>
Please help me to find the issue, zoq
< rcurtin>
Abhi83: try specifying CMAKE_CXX_FLAGS="-pthread" when you configure with cmake; that may help
< rcurtin>
it seems like your system is using a compiler provided by anaconda? I don't understand why that is
< abernauer>
Also, conceptually R is a C program, as most of the interpreted R code is implemented in C, which is a concern I think Dirk had.
< Abhi83>
I am using the Anaconda distribution for Python. That may be the reason for it. Anyway, I will try your suggestion, rcurtin
< Abhi83>
I am using the command cmake -D CMAKE_CXX_FLAGS=-pthread ../ Please tell me if this is correct.
< zoq>
That is correct.
< zoq>
or CMAKE_CXX_FLAGS="-pthread"
< rcurtin>
abernauer: you might want to take a look at the PR for the Go bindings
< rcurtin>
I remember that some work had to be done to declare mlpackMain() with extern C linkage
< rcurtin>
but I don't remember exactly what was done
< Abhi83>
Sorry, that didn't fix it.
< rcurtin>
I know that those bindings generate a .h and .cpp file (I think the code that generates those is called generate_h.cpp and generate_cpp.cpp, or something like this)
< rcurtin>
you might consider taking a look at those files to see how it is done there, and then try replicating the same thing for R
< zoq>
rcurtin Abhi83: Not sure if adding pthread will fix the boost not found issue.
< rcurtin>
Abhi83: right, if boost is still not being found, that will be a separate issue
< rcurtin>
but if it is still a compiler issue, you could also try forcing the use of the system compiler
< rcurtin>
assuming you have, e.g., /usr/bin/g++ on your system, you could do cmake -DCMAKE_CXX_COMPILER=/usr/bin/g++ -DCMAKE_C_COMPILER=/usr/bin/gcc ../
< rcurtin>
and that should avoid using the anaconda-provided compiler
< Abhi83>
The cmake command is giving this output in between: -- Successfully downloaded ensmallen into /home/abhishek/Documents/mlpack-3.1.1/build/deps/ensmallen-1.15.1/ CMake Error at /usr/share/cmake-3.13/Modules/FindBoost.cmake:2100 (message): Unable to find the requested Boost libraries. Unable to find the Boost header files. Please set BOOST_ROOT to the
< abernauer>
Yeah, I will do that. I think we discussed that a bit in April and May. There are some R packages mentioned in CRAN's Writing R Extensions manual; I could look at their source code.
< Abhi83>
root directory containing Boost or BOOST_INCLUDEDIR to the directory containing Boost's headers. Call Stack (most recent call first): CMakeLists.txt:390 (find_package)
< rcurtin>
abernauer: sounds good. another idea is to check out the Go bindings branch, build the go bindings, then look in build/src/mlpack/bindings/go/ to find the generated .h and .cpp files
< rcurtin>
abernauer: that may be easier than trying to read the source code that generates those .h and .cpp files
< rcurtin>
Abhi83: can you tell us how you installed boost, and what the version of boost installed is?
< Abhi83>
dpkg -s libboost-dev | grep 'Version' command gives me the output - Version: 1.67.0.1
< Abhi83>
I installed through sudo apt-get install libboost-all-dev
< rcurtin>
ok, great; maybe it is worth trying to install libboost-all-dev?
< rcurtin>
ah
< rcurtin>
ok, great
< rcurtin>
what happens if you try with a system compiler instead of the anaconda compiler?
< rcurtin>
I'm wondering if maybe the anaconda compiler has the wrong include search directories or something...
< Abhi83>
I cannot figure that out by myself. Shall I show the contents of CMakeError.log and CMakeOutput.log from the CMAKE_CXX_FLAGS="-pthread" command?
< rcurtin>
just the output directly from CMake is probably more useful in this case
< rcurtin>
my suggestion would be to try running cmake and set CMAKE_CXX_COMPILER and CMAKE_C_COMPILER
< rcurtin>
I see you're on Ubuntu or Debian, so you could always just install directly from apt, i.e., apt-get install libmlpack-dev mlpack-bin
< rcurtin>
that will give you the C++ headers and the command-line programs
< rcurtin>
(and if you're looking for the python bindings, 'pip install mlpack3' will get those installed in your current Python environment)
< Abhi83>
Ok I will run apt-get install libmlpack-dev mlpack-bin
< Abhi83>
and then pip install mlpack3
abernauer has quit [Remote host closed the connection]
< Abhi83>
rcurtin can you tell me what values I should give to CMAKE_CXX_COMPILER and CMAKE_C_COMPILER?
< Abhi83>
The above two commands were successful. (apt-get install libmlpack-dev mlpack-bin , pip install mlpack3 )
< Abhi83>
but not cmake
< rcurtin>
Abhi83: set CMAKE_CXX_COMPILER and CMAKE_C_COMPILER to the path of your compiler (maybe /usr/bin/g++? you may have to poke around to find it)
< rcurtin>
in any case, if you have installed mlpack via apt, there's not a need to build it from source anymore---you can now just use the command-line programs from the command line
< rcurtin>
i.e. try "mlpack_kmeans --help"
< rcurtin>
and, from python, you should be able to use the mlpack Python bindings
< rcurtin>
i.e. 'import mlpack' should work just fine
< Abhi83>
Actually I wanted to contribute to mlpack. That's why I am building from source.
< rcurtin>
ah, ok, I see---in this case installing via apt or pip probably isn't helpful
< rcurtin>
and did cmake properly configure to use the system compiler instead of the anaconda compiler? you can look at the output of cmake to see
< zoq>
also I think this should be cmake -D CMAKE_CXX_COMPILER=
< Abhi83>
cmake in the end says -- Configuring incomplete, errors occurred!
< rcurtin>
okay, so if it did not configure correctly it will not build at all, which means that you can't have had the error in #1351 (no makefile will be generated)
< rcurtin>
I have to step out for a while, sorry
< Abhi83>
No Problem.
< zoq>
can you add the -D on each and make sure the paths to gcc and g++ are correct
< Abhi83>
Yeah I ran cmake -D CMAKE_CXX_COMPILER=/usr/bin/g++ -D CMAKE_C_COMPILER=/usr/bin/gcc ../
< Abhi83>
I fixed the issue; only one issue remains.
< Abhi83>
CMake Error at /usr/share/cmake-3.13/Modules/FindBoost.cmake:2100 (message): Unable to find the requested Boost libraries. Unable to find the Boost header files. Please set BOOST_ROOT to the root directory containing Boost or BOOST_INCLUDEDIR to the directory containing Boost's headers. Call Stack (most recent call first): CMakeLists.txt:390 (find_package)
< Abhi83>
I think it's unable to find the boost path, maybe.
< zoq>
so let's set BOOST_INCLUDEDIR as mentioned above
< zoq>
e.g. -DBOOST_INCLUDEDIR=/path/to/boost/includes/ -DBOOST_LIBRARYDIR=/path/to/boost/libs
< Abhi83>
ok. Let me try.
< Abhi83>
This may sound stupid, but I am unable to find the path for BOOST_INCLUDEDIR. Can you help me with it?
< Abhi83>
DBOOST_LIBRARYDIR=/usr/bin . This I got.
< Abhi83>
DBOOST_LIBRARYDIR=/usr/lib.
< zoq>
The folder should contain the boost libs, e.g. libboost_*; a reasonable path might be /usr/lib/ or /usr/local/lib
< Abhi83>
Yes, BOOST_LIBRARYDIR=/usr/lib, but I am not getting the BOOST_INCLUDEDIR path. What files is it supposed to contain?
< zoq>
what about /usr/include is there a boost folder?
< Abhi83>
No
< Abhi83>
I think Boost probably got corrupted.
< zoq>
what does 'whereis boost' return?
< Abhi83>
boost:
< Abhi83>
Nothing else
< zoq>
I guess you are right about the boost installation.
< Abhi83>
shall I uninstall and reinstall boost?
< zoq>
might be an option
< Abhi83>
ok. Let me try
k3nz0__ has quit [Remote host closed the connection]
Abhi83 has quit [Remote host closed the connection]
abernauer has joined #mlpack
< abernauer>
rcurtin: Just to clarify, the Go bindings are in an open pull request, correct? Also, in my previous comment I was referring to other R packages that are interfaces to C++ libraries, e.g. gbm.