ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
< chopper_inbound[> zoq: Ok! Looking into it. Thanks.
ImQ009 has joined #mlpack
< chopper_inbound[> I'm again facing an issue while trying to build the master branch. All RAM and swap memory get used up. For configuring CMake, I used `cmake -D BUILD_PYTHON_BINDINGS=OFF -D BUILD_CLI_EXECUTABLES=OFF -D BUILD_JULIA_BINDINGS=OFF -D TEST_VERBOSE=ON ../`. Somehow I managed to get a screenshot (it took approximately a minute to process 😂) https://pasteboard.co/JdvlTCT.png
< rcurtin> hmm, how many cores are you using to build?
< rcurtin> 8GB RAM should definitely be enough
< chopper_inbound[> rcurtin: 2
< rcurtin> you could try using only one
< rcurtin> but it looks like, at the start of that graph, RAM usage is already 80%
< rcurtin> maybe there are some other processes running on your system that are eating up the RAM?
< rcurtin> I'd say, if you have 5-6GB of RAM free before you start building, 2 cores *should* work
< rcurtin> but definitely the RAM footprint of mlpack while compiling is not the best
< rcurtin> that's one of the dangers of templates :-D
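If memory rather than build time is the constraint, building serially with `make -j1`, or building only the test target with `make -j1 mlpack_test`, should keep the peak footprint closer to one translation unit at a time; the `mlpack_test` target name is from memory here and may differ depending on the build configuration.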
< chopper_inbound[> apart from system monitor, nothing was running. And it worked just 2 days back!
< chopper_inbound[> let me check if something is eating my RAM in background
< chopper_inbound[> rcurtin: https://pasteboard.co/JdvtTCO.png after closing all programs, it is just 1.7 GB of RAM usage. And I don't think there is anything unnecessary in the background.
< rcurtin> chopper_inbound[: ok, that definitely looks like a good starting point---what happens if you try building with 2 cores now?
< chopper_inbound[> let me check that...
< chopper_inbound[> @rcurtin:matrix.org: it's again the same. All RAM and swap memory used up at 31% (async_learning_test)
< zoq> chopper_inbound[: What is the memory consumption before you start building?
< rcurtin> chopper_inbound[: huh; out of curiosity, are you currently working on async_learning_test?
< rcurtin> or are you just building the current master branch?
< zoq> chopper_inbound[: What you could do as a quick workaround is comment out the tests that you don't need in https://github.com/mlpack/mlpack/blob/master/src/mlpack/tests/CMakeLists.txt
< rcurtin> that could work too, it would be a lot quicker just to build the tests for sure :)
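If I recall the layout of that CMakeLists.txt correctly, the test sources are listed in the add_executable() call for the mlpack_test target, so commenting out the async_learning_test.cpp entry there should be enough to skip compiling that test.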
< chopper_inbound[> When the build process was up to 20%, RAM usage was 1.8GB.
< chopper_inbound[> I am not working on async_learning 😁
< rcurtin> awesome, sounds like maybe a quick workaround is just to comment that test out :)
< chopper_inbound[> Yes, I was building current master branch
< zoq> wondering if there is some memory leak; a restart would help, I guess
< chopper_inbound[> That happens to some other tests as well (feed_forward, gan, dcgan)
< chopper_inbound[> zoq: restarting... :)
< chopper_inbound[> I mean, if somehow async_learning passes, it might get stuck at feed_forward and so on...
< rcurtin> chopper_inbound[: if that's the case, you could always just build with one core?
< rcurtin> also, which compiler are you using?
< chopper_inbound[> gcc version 7.5.0
< rcurtin> hmm, yeah, the results you are seeing are strange to me; I haven't seen that much RAM usage before with gcc
< rcurtin> are there any special CXXFLAGS or anything set?
< chopper_inbound[> I'm not aware ... I need to check that.
< chopper_inbound[> Can it be related to the recently merged Go bindings?
< rcurtin> I don't think it should, if you are only building the tests, the Go bindings shouldn't be compiled when doing that
< rcurtin> (you can also specify -DBUILD_GO_BINDINGS=OFF if you like)
< chopper_inbound[> Wow... it is working after the restart. Seems like there was a memory leak. Thanks zoq and rcurtin for the help...
< rcurtin> huh, strange! glad that it's worked out now though
< chopper_inbound[> 😅
< jeffin143[m]> rcurtin (@freenode_rcurtin:matrix.org): maybe it's time for a mail for tomorrow's meetup
< jeffin143[m]> Third Thursday right
< rcurtin> yes, thanks for reminding me... actually, do you want to send the email? :-D
< jeffin143[m]> Or is it second
< rcurtin> if not I can do it, I just haven't set up any cron job :)
< jeffin143[m]> I will write a script and give it to you, just add it as a cron job
< jeffin143[m]> Maybe that would work :)
< rcurtin> sure, that can work :)
< rcurtin> I'll send the reminder email today though, after this meeting...
< jeffin143[m]> Sure :)
< jeffin143[m]> I will copy the content from one of your mails :)
< jeffin143[m]> If that's ok
< rcurtin> yeah I think that's just fine :)
ImQ009 has quit [Ping timeout: 272 seconds]
ImQ009 has joined #mlpack
< rcurtin> jeffin143[m]: sent the video meetup reminder email :)
< jeffin143[m]> Just saw it :)
< jeffin143[m]> rcurtin (@freenode_rcurtin:matrix.org): what about a release during a meetup
< jeffin143[m]> Hi @walragatver:matrix.org : are you there
< rcurtin> jeffin143[m]: definitely it takes more than one hour to do the release, but I suppose it's as good a time as any to actually start one
< rcurtin> I've still been digging out from emails from last week...
< rcurtin> ahhhh, crap. there is another meeting that I can't miss at 1800UTC tomorrow :(
< rcurtin> sorry
< rcurtin> unless it gets cancelled at the last minute I'll have to sit it out
< jeffin143[m]> No issues enjoy :)
< RyanBirminghamGi> jeffin143: do we have a scheduled mlboard meeting now?
< jeffin143[m]> Umm @walragatver:matrix.org isn't here
< jeffin143[m]> And also I don't have any agenda at hand
< RyanBirminghamGi> Ok! At a glance, things look ok. I'll review your images PR in more detail soon. :)
< jeffin143[m]> Ryan Birmingham (Gitter): thanks will look forward to it
< shrit[m]1> @rcurtin I was able to serialize a raw pointer. However, I had to use make_unique to be able to free the memory correctly
< shrit[m]1> std::unique_ptr<T> smartPointer = std::make_unique<T>(*this->localPointer);
< shrit[m]1> I am not sure of the cost of this, compared to the copy constructor, but the copy constructor is not great here anyway
< rcurtin> shrit[m]1: cool! so that works for serialization I guess?
< rcurtin> what about deserialization though? we could deserialize a unique_ptr and then try to get the value out of it to a raw pointer
< rcurtin> (also, I think make_unique<> is basically zero-cost, so no computational problem there)
< rcurtin> ohh, maybe we can just use unique_ptr.release() :)
< shrit[m]1> Considering the deserialization process, it is trickier
< shrit[m]1> Since there is no bug or segfault, the problem is related to the parsing of the XML
< shrit[m]1> But I am sure they are related
< shrit[m]1> This is the error I face, for example: XML Parsing failed - provided NVP (value) not found
< shrit[m]1> This error is resolved. Actually, we need to keep the same name for the smart pointer in save and load
< rcurtin> yeah, I think usually the NVP (name-value pair) is generated just using the name of the variable
< rcurtin> typically the name is only stored for, e.g., JSON or XML though
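A tiny self-contained illustration of that point, assuming cereal as the serialization library (which the error message above suggests); this is my own example, not mlpack code. The NVP name is what the XML archive keys on, so it has to match between saving and loading, while a binary archive would simply ignore it:

#include <sstream>
#include <cereal/archives/xml.hpp>
#include <cereal/cereal.hpp>

int main()
{
  std::stringstream ss;
  int value = 3;
  {
    cereal::XMLOutputArchive ar(ss);
    ar(cereal::make_nvp("value", value));  // stored under the name "value"
  }
  int loaded = 0;
  {
    cereal::XMLInputArchive ar(ss);
    // Asking for a different name here would throw an error like
    // "XML Parsing failed - provided NVP (...) not found".
    ar(cereal::make_nvp("value", loaded));
  }
  return 0;
}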
< shrit[m]1> Actually, the issue now is how to recover the address of the pointer out of the serialization class
< shrit[m]1> Loading is fine, but I am not sure I have a good idea for recovering the address
< rcurtin> maybe release() can be helpful for that?
< shrit[m]1> unless I make the pointer a reference
< shrit[m]1> Sure, release() is good for a raw pointer inside the class
< rcurtin> no, I mean, deserialize a unique_ptr<>, then just use release() to get the raw pointer
< rcurtin> I don't know the code well enough, so my suggestion may not be useful :)
< rcurtin> it is just a guess though
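To make that suggestion concrete, here is a minimal sketch of the pattern under discussion; it is my own example rather than mlpack's actual code, assumes cereal as the archive backend and a copy- and default-constructible T, and uses placeholder names (Holder, localPointer). Save by copying the pointee into a temporary std::unique_ptr; load back into a unique_ptr and hand the address to the raw pointer with release():

#include <memory>
#include <cereal/cereal.hpp>
#include <cereal/types/memory.hpp>

template<typename T>
class Holder
{
 public:
  Holder() : localPointer(new T()) { }
  ~Holder() { delete localPointer; }

  template<typename Archive>
  void save(Archive& ar) const
  {
    // Copy the pointee into a unique_ptr so the archive can serialize it.
    std::unique_ptr<T> smartPointer = std::make_unique<T>(*localPointer);
    // The NVP name must be identical in save() and load().
    ar(cereal::make_nvp("pointer", smartPointer));
  }

  template<typename Archive>
  void load(Archive& ar)
  {
    std::unique_ptr<T> smartPointer;
    ar(cereal::make_nvp("pointer", smartPointer));
    // Take ownership back from the unique_ptr with release().
    delete localPointer;
    localPointer = smartPointer.release();
  }

 private:
  T* localPointer;
};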
< shrit[m]1> I am going to make a push on github, it will be easier to discuss
< rcurtin> sounds good, if you want to start a discussion there I can try to answer it tonight (probably good to tag roberto too, he might have some input)
ImQ009 has quit [Quit: Leaving]
k3nz0__ has joined #mlpack
< KimSangYeon-DGU[> Saksham[m] , kartikdutt18 : I'm going to take a flight tonight; it will take about 13 hours, plus 3 hours by shuttle bus. I'll check the Darknet model PR after I'm back. Have a great day!
kyrre has quit [Ping timeout: 272 seconds]
kyrre has joined #mlpack
< zoq> This doesn't look like a minor style update to me: http://data.kurg.org/github-design-update.png
< HimanshuPathakGi> > `zoq on Freenode` This doesn't look like a minor style update to me: http://data.kurg.org/github-design-update.png
< HimanshuPathakGi> This is not looking good, the current design is much better :(
< zoq> HimanshuPathakGi: Not sure if I like it or not, it's different for sure.
< HimanshuPathakGi> > `zoq on Freenode` Himanshu Pathak (Gitter): Not sure if I like it or not, it's different for sure.
< HimanshuPathakGi> Yup, they are very different, it will be like working on a new platform @zoq
< walragatver[m]1> jeffin143: birm: Sorry I forgot about the meet today. Totally slipped my mind.
< walragatver[m]1> Let's meet again on Sunday
< rcurtin> KimSangYeon-DGU[: have a good flight! are you leaving California? (I think that's where you still are) if so, sorry we never had a chance to meet up! all my travel out there has stopped for now :(
< KimSangYeon-DGU[> <rcurtin "KimSangYeon-DGU: have a good fli"> rcurtin : Yes, internship was finished two days ago, so I'm leaving California tonight. Right, it's so sorry we had a chance to meet... I didn't expect this weird situation...
< rcurtin> maybe another time :)
< rcurtin> hope the internship went well and that you enjoyed California!
< rcurtin> ...even though you were probably stuck inside for most of it :(
< KimSangYeon-DGU[> Yeah :) I enjoyed California
< KimSangYeon-DGU[> Very nice weather :)
< rcurtin> definitely :)
< KimSangYeon-DGU[> Yeah, the internship went well and it was a great time to improve my abilities and network :)
< rcurtin> awesome to hear :)
< KimSangYeon-DGU[> :)
< KimSangYeon-DGU[> Oops, I found a typo. Let me fix it.
< KimSangYeon-DGU[> > It's so sorry we had *not* a chance to meet...
< rcurtin> no worries, I understood what you meant :)
< KimSangYeon-DGU[> :)