ChanServ changed the topic of #mlpack to: "mlpack: a fast, flexible machine learning library :: We don't always respond instantly, but we will respond; please be patient :: Logs at http://www.mlpack.org/irc/"
< chopper_inbound[>
zoq: Ok! Looking into it. Thanks.
ImQ009 has joined #mlpack
< chopper_inbound[>
I'm again facing an issue while trying to build the master branch. All RAM and swap memory get used up. For configuring cmake, I used `cmake -D BUILD_PYTHON_BINDINGS=OFF -D BUILD_CLI_EXECUTABLES=OFF -D BUILD_JULIA_BINDINGS=OFF -D TEST_VERBOSE=ON ../`. Somehow I managed to get a screenshot (it took approximately a minute to process 😂) https://pasteboard.co/JdvlTCT.png
< rcurtin>
hmm, how many cores are you using to build?
< rcurtin>
8GB RAM should definitely be enough
< chopper_inbound[>
rcurtin: 2
< rcurtin>
you could try using only one
< rcurtin>
but it looks like, at the start of that graph, RAM usage is already 80%
< rcurtin>
maybe there are some other processes running on your system that are eating up the RAM?
< rcurtin>
I'd say, if you have 5-6GB of RAM free before you start building, 2 cores *should* work
< rcurtin>
but definitely the RAM footprint of mlpack while compiling is not the best
< rcurtin>
that's one of the dangers of templates :-D
< chopper_inbound[>
Apart from the system monitor, nothing was running. And it worked just 2 days ago!
< chopper_inbound[>
Let me check if something is eating my RAM in the background.
< chopper_inbound[>
rcurtin: https://pasteboard.co/JdvtTCO.png After closing all programs, it's just 1.7 GB of RAM usage. And I don't think there is anything unnecessary running in the background.
< rcurtin>
chopper_inbound[: ok, that definitely looks like a good starting point---what happens if you try building with 2 cores now?
< chopper_inbound[>
let me check that...
< chopper_inbound[>
@rcurtin:matrix.org: it's the same again. All RAM and swap memory get used up at 31% (async_learning_test).
< zoq>
chopper_inbound[: What is the memory consumption before you start building?
< rcurtin>
chopper_inbound[: huh; out of curiosity, are you currently working on async_learning_test?
< rcurtin>
or are you just building the current master branch?
< shrit[m]1>
I am not sure of the cost of this compared to the copy constructor, but the copy constructor is not great here anyway.
< rcurtin>
shrit[m]1: cool! so that works for serialization I guess?
< rcurtin>
what about deserialization though? we could deserialize a unique_ptr and then try to get the value out of it to a raw pointer
< rcurtin>
(also, I think make_unique<> is basically zero-cost, so no computational problem there)
< rcurtin>
ohh, maybe we can just use unique_ptr.release() :)
< shrit[m]1>
Considering the deserialization process, it is trickier.
< shrit[m]1>
Since there is no bug or segfault, the problem is related to the parsing of the XML.
< shrit[m]1>
But I am sure they are related
< shrit[m]1>
This is the error I face, for example: XML Parsing failed - provided NVP (value) not found
< shrit[m]1>
This error is resolved. Actually, we need to keep the same name for the smart pointer in save and load.
< rcurtin>
yeah, I think usually the NVP (name-value pair) is generated just using the name of the variable
< rcurtin>
typically the name is only stored for, e.g., JSON or XML though
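A minimal sketch of that name-matching behavior, assuming cereal's XML archives (the names `value` and `other` are just illustrative):

```cpp
// The name an output archive stores a value under must match the name the
// input archive asks for; otherwise cereal's XML archive throws the error
// quoted above.
#include <cereal/archives/xml.hpp>
#include <sstream>

int main()
{
  std::stringstream stream;
  {
    cereal::XMLOutputArchive ar(stream);
    int x = 3;
    ar(cereal::make_nvp("value", x));  // stored as <value>3</value>
  }
  {
    cereal::XMLInputArchive ar(stream);
    int y = 0;
    ar(cereal::make_nvp("value", y));  // same name, so this loads fine
    // Asking for cereal::make_nvp("other", y) here would instead throw:
    //   "XML Parsing failed - provided NVP (other) not found"
  }
}
```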
< shrit[m]1>
Actually, the issue now is how to recover the address of the pointer out of the serialization class.
< shrit[m]1>
Loading is fine, but I am not sure I have a good idea for recovering the address.
< rcurtin>
maybe release() can be helpful for that?
< shrit[m]1>
Unless I make the pointer a reference.
< shrit[m]1>
Sure, release() is good for the raw pointer inside the class.
< rcurtin>
no, I mean, deserialize a unique_ptr<>, then just use release() to get the raw pointer
< rcurtin>
I don't know the code well enough, so my suggestion may not be useful :)
< rcurtin>
it is just a guess though
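For reference, a rough sketch of that release() idea, assuming cereal and a hypothetical `Network` class that owns a raw `Layer*` (a guess at the pattern being discussed, not the actual mlpack code):

```cpp
#include <cereal/types/memory.hpp>  // cereal's std::unique_ptr support
#include <memory>

struct Layer
{
  template<typename Archive>
  void serialize(Archive& /* ar */) { }
};

struct Network
{
  Layer* layer = nullptr;  // raw pointer owned by Network (assumption)

  template<typename Archive>
  void save(Archive& ar) const
  {
    // Wrap the raw pointer just long enough to serialize the pointee;
    // release() afterwards so the wrapper doesn't delete our object.
    std::unique_ptr<Layer> wrapper(layer);
    ar(cereal::make_nvp("layer", wrapper));
    wrapper.release();
  }

  template<typename Archive>
  void load(Archive& ar)
  {
    std::unique_ptr<Layer> wrapper;
    ar(cereal::make_nvp("layer", wrapper));  // same NVP name as in save()
    delete layer;               // drop any previously held object
    layer = wrapper.release();  // recover the raw pointer's address
  }
};
```

The save() side is the fragile part of this sketch: if the archive throws mid-serialization, the wrapper would delete an object it doesn't really own.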
< shrit[m]1>
I am going to push to GitHub; it will be easier to discuss there.
< rcurtin>
sounds good, if you want to start a discussion there I can try to answer it tonight (probably good to tag roberto too, he might have some input)
ImQ009 has quit [Quit: Leaving]
k3nz0__ has joined #mlpack
< KimSangYeon-DGU[>
Saksham[m], kartikdutt18: I'm taking a plane tonight; the flight will take about 13 hours, plus about 3 hours by shuttle bus. I'll check the Darknet model PR after I'm back. Have a great day!
< HimanshuPathakGi>
This is not looking good; the current design is much better :(
< zoq>
HimanshuPathakGi: Not sure if I like it or not; it's different for sure.
< HimanshuPathakGi>
Yup, they are very different; it will be like working on a new platform @zoq
< walragatver[m]1>
jeffin143: birm: Sorry, I forgot about the meet today. Totally slipped my mind.
< walragatver[m]1>
Let's meet again on Sunday
< rcurtin>
KimSangYeon-DGU[: have a good flight! are you leaving California? (I think that's where you still are) if so, sorry we never had a chance to meet up! all my travel out there has stopped for now :(
< KimSangYeon-DGU[>
<rcurtin "KimSangYeon-DGU: have a good fli"> rcurtin : Yes, internship was finished two days ago, so I'm leaving California tonight. Right, it's so sorry we had a chance to meet... I didn't expect this weird situation...
< rcurtin>
maybe another time :)
< rcurtin>
hope the internship went well and that you enjoyed California!
< rcurtin>
...even though you were probably stuck inside for most of it :(
< KimSangYeon-DGU[>
Yeah :) I enjoyed California
< KimSangYeon-DGU[>
Very nice weather :)
< rcurtin>
definitely :)
< KimSangYeon-DGU[>
Yeah, the internship went well and it was a great time to improve my abilities and network :)
< rcurtin>
awesome to hear :)
< KimSangYeon-DGU[>
:)
< KimSangYeon-DGU[>
Oops, I found a typo. Let me fix it.
< KimSangYeon-DGU[>
> It's so sorry we had *not* a chance to meet...
< rcurtin>
no worries, I understood what you meant :)