<kintel>
J24k: Do you have any other text torture examples? The Manifold folks fixed the performance issue and are working on fixing the zebra issue. I just want to be ready to test even worse examples :)
<InPhase>
Okay, no change in runtime on a fresh build.
<kintel>
that's the one
<InPhase>
Well the preview sure slowed down.
<InPhase>
1:12 on preview, and 1.9GB resident in RAM.
<InPhase>
Wow, that Manifold RAM consumption...
<InPhase>
Hmm... It hit 55.9GB RAM usage, then the program closed.
<InPhase>
I should try again... It should not have crashed from running out of RAM at that level.
<InPhase>
I have 64GB of RAM and 64GB of NVMe swap set up.
<InPhase>
This time I will close firefox and chrome first.
<InPhase>
And slack and thunderbird and syncthing...
<InPhase>
Okay... It went over 128GB and died...
little_blossom has quit [Ping timeout: 255 seconds]
<InPhase>
I'll just whip up another 384GB swap file... That'll bring it up to 512GB of RAM capacity, and then I'll try again. The flowers must render!
<kintel>
Hehe, anything to not reduce N below 500!
<InPhase>
There's a principle at stake!
<InPhase>
Actually I'm morbidly curious at this point how far it actually wants to take this.
<InPhase>
I assume the triangulator bug fix will wildly reduce this RAM consumption?
<InPhase>
Okay, running again.
<InPhase>
It took a bit to even generate a swap file that size...
<InPhase>
Oh. I see the threading is going wild in Manifold.
<InPhase>
It was running at 1600%.
<InPhase>
Hmm...
<InPhase>
It crashed out again, but I missed seeing the swap consumption number grow.
<InPhase>
I've only been logging resident memory consumption, because the virtual memory consumption number is usually meaninglessly inflated on Linux.
<InPhase>
Hmm. This is an instability crash.
<InPhase>
It got to 155GB of total RAM consumption, then it gradually dropped back down, THEN it crashed.
<InPhase>
It was not actually RAM limited.
<InPhase>
Hmm.. I checked syslog. The early runs were OOM-killed. Then I increased the RAM with swap to 512GB, and it did NOT trigger the OOM; instead the kernel logged a segfault: kernel: [21899.642920] QThread[59809]: segfault at 10 ip 000076c1f6de7894 sp 000076c1da8b5268 error 4 in libtbb.so.12.5[76c1f6dce000+20000] likely on CPU 4 (core 2, socket 0)
<InPhase>
Same thing on the first run with 512GB: kernel: [21447.313374] QThread[59659]: segfault at 10 ip 0000716b61ded894 sp 0000716a881b9518 error 4 in libtbb.so.12.5[716b61dd4000+20000] likely on CPU 12 (core 6, socket 0)
<InPhase>
kintel: What do you think, is that a real problem? It does not segfault for me with lower N values.
<InPhase>
That certainly does not make for a good bug report testcase...
<kintel>
It's not unheard of that multi-threaded software may have race conditions : /
<InPhase>
"TBB is a library that helps you leverage multi-core processor performance without having to be a threading expert."
<kintel>
I don't think the new triangulator optimization is very relevant here either; it would run way before any of the CSG ops.
<InPhase>
I would uh, hope a package with that sales pitch does not have race conditions. ;)
<kintel>
Well, you could run the same thing with debug symbols and see what the stack trace says..
<InPhase>
Somebody was trying to run a large bioinformatics job, and tbb flaked out, but nobody else could reproduce it.
<InPhase>
It's late though, time for bed.
kintel has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<J24k97>
kintel: I only came across that one by accident, but I assume the same behavior happens with SVG or polygons - at least it did until it was fixed.
<J24k97>
InPhase: from your wonderful text ball example - is there a reason for _ = rands(0,1,0,seed); ?
<J24k97>
There is a Zebra and a Tree in that font that use an unexpected number of points - even previewing the font (in Windows) slows down drastically.
J24k97 has quit [Quit: Client closed]
J24k has joined #openscad
little_blossom has joined #openscad
erectus has quit [Remote host closed the connection]
erectus has joined #openscad
<InPhase>
J24k: Reproducibility.
<J24k>
what?
<InPhase>
J24k: The seeds are sticky between rands calls.
<InPhase>
J24k: Slightly unintuitive, but it really makes a lot of things easier.
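A minimal sketch of the idiom J24k asked about, based on the sticky-seed behavior described here (the particular values and counts are illustrative, not from the original text ball script):

    seed = 42;
    _ = rands(0, 1, 0, seed);  // count of 0: returns [], but sets the RNG seed
    a = rands(0, 1, 5);        // unseeded call continues the seeded stream
    b = rands(0, 1, 5);        // as does this one: both reproducible per run
    echo(a, b);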
<J24k>
you could just add the seed in the first rands call...
<J24k>
I think that 3144 is terrible, as you can't use unseeded rands anymore once any rands call is seeded .. but I worked around that.
<InPhase>
The lack of it made seeded procedural generation almost impossible to write.
<J24k>
it is super easy, you just create a seed list
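A sketch of the seed-list approach J24k describes, with every consumer given an explicit sub-seed (the names and counts here are made up for illustration, not taken from anyone's actual design):

    master = 7;
    seeds = rands(0, 1e6, 10, master);  // derive a list of sub-seeds from one master seed
    petals = rands(0, 1, 4, seeds[0]);  // each call is explicitly seeded
    leaves = rands(0, 1, 4, seeds[1]);  // no reliance on sticky global state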
<InPhase>
And increment a global iterator? ;)
<InPhase>
It's a functional language, so procedural generation goes deeply recursive, leaving no sensible way to distribute things to get generation which is both random and seeded.
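A sketch of the recursive case InPhase means: with sticky seeds, one seeding at the top suffices, and no seed or counter has to be threaded through the recursion (branch() is a hypothetical function for illustration):

    _ = rands(0, 1, 0, 42);  // seed once at the top level
    function branch(depth) =
        depth == 0 ? [] :
        let (r = rands(0, 1, 2))  // draws from the already-seeded stream
        concat([r], branch(depth - 1));
    echo(branch(5));  // the same tree on every run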
<J24k>
I know you did this because of your extreme Christmas tree - but now simple things are impossible, as an undef seed is now defined.
<InPhase>
I'm not sure what you mean by an undef seed is defined.
<J24k>
you can't define a seed as undef to make it random again - it will now always be sticky
<J24k>
if only this applied within the scope and below - but even a seed below will stick a rands above
<InPhase>
Oh, you mean the between runs problem. I think some other change induced that later, but I never tracked down what it was.
<InPhase>
I remember seeing that behavioral difference appear at some point later.
<J24k>
I just never use unseeded rands anymore cuz it is literally impossible now.
<InPhase>
Didn't we have an issue where I suggested we should be random seeding at the start of each run?
<InPhase>
There's no conflict between this and the original change at 3144.
<J24k>
unreproducible results are also questionable to have .. but I did that a lot before
<InPhase>
So did I. Including with the Christmas tree in question, which is how I know it worked fine with 3144. :)
<J24k>
still hate this sticky thing - it should at least have an option to deactivate it
<J24k>
InPhase: besides, https://bpa.st/ZLAA - on the first run it doesn't stick, only after the second (flushing the cache doesn't change this)
<J24k>
So you design it .. refresh and see a change .. think you are good .. but NOOOO it sticks after the second refresh forever
peepsalot has quit [Ping timeout: 260 seconds]
peepsalot has joined #openscad
<InPhase>
The thing you dislike is not the behavior that was changed.
<InPhase>
They're separate things, involving the same bit of code.
<InPhase>
Like, fully separable things.
<J24k>
in my example, except for the first run, I always get the same 4 numbers .. I expect a random number if no seed is given (or at least the option to give an undef/random seed)
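The reported surprise boils down to something like this (a reconstruction for illustration, not the actual bpa.st paste):

    r = rands(0, 1, 4);  // no explicit seed anywhere in the file
    echo(r);             // reported behavior: fresh values on the first
                         // preview, then the same 4 numbers on every
                         // subsequent F5 (flushing the cache does not help)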
<J24k>
InPhase: how would you make it so that you are using seeds, but for one parameter you get a different result every time you hit F5/F6?
<J24k>
this is new - Ü, kind of cheating to change the source.. thanks!
<InPhase>
If someone desires they could add in an undef parameter following that same template, which would undo a prior seed by having the logic be "when undef received as seed, call initialize_rng". The template is there.
<J24k>
at the moment undef as seed results in a warning
<InPhase>
The only tricky part was sorting out the right logic for reseeding when someone is hammering the preview button like a crazy person. But the initialize_rng I wrote handles that scenario fine, so it would also work for a bunch of undef calls in rapid succession.
<J24k>
rands() parameter could not be converted: argument 3: expected number, found undefined (undef)
<J24k>
seems seeds are never undef but predefined by some random system seed
<InPhase>
But, it's late, so I'm not going to try to figure out how one would go about parsing undef there. This at least gives access to the mix of seeded and unseeded, and is logic others could use.
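What that template might look like from the script side if undef were accepted as a seed (hypothetical; as noted above, current builds warn on an undef seed instead):

    _ = rands(0, 1, 0, 42);     // reproducible section
    fixed = rands(0, 1, 4);     // same values on every run
    _ = rands(0, 1, 0, undef);  // hypothetical: an undef seed would call initialize_rng
    fresh = rands(0, 1, 4);     // different on every preview again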
<InPhase>
Night.
<J24k>
nighty
peepsalot has quit [Ping timeout: 268 seconds]
peepsalot has joined #openscad
ooxoo has joined #openscad
Junxter has quit [Read error: Connection reset by peer]
teepee_ has joined #openscad
teepee has quit [Ping timeout: 260 seconds]
teepee_ is now known as teepee
ccox_ has quit [Ping timeout: 252 seconds]
ccox has joined #openscad
r2ro has joined #openscad
R2robot has quit [Ping timeout: 260 seconds]
mmu_man has joined #openscad
J24k has quit [Quit: Client closed]
J24k has joined #openscad
snaked has quit [Quit: Leaving]
GNUmoon2 has quit [Ping timeout: 260 seconds]
GNUmoon2 has joined #openscad
SamantazFox_ is now known as SamantazFox
<pca006132>
InPhase: interesting, did you try to build openscad and manifold with tbb disabled?
<pca006132>
we know that tbb will do something weird... and memory usage with parallelization enabled may be a bit higher as well
hyperair has joined #openscad
<InPhase>
pca006132: I didn't know that was an option.
<InPhase>
Although it was a little grueling to run that extreme test. :)
J24k has quit [Quit: Client closed]
J24k has joined #openscad
TheAssassin has quit [Remote host closed the connection]
TheAssassin has joined #openscad
TheAssassin has quit [Remote host closed the connection]
TheAssassin has joined #openscad
<pca006132>
it will probably become the recommended configuration
<pca006132>
tbb makes things about 4-5x faster, if your computer is beefy enough
<pca006132>
but on some systems (most systems, currently), onetbb combined with pstl will have a memory leak
<pca006132>
but we still want to use pstl to replace thrust, because we don't like nvidia :)
<pca006132>
so users have to be careful about their system configuration in order to enable tbb, and we recommend that users not enable tbb when they are not sure about it
<pca006132>
you either need an older version of tbb, or onetbb + GCC 13 or newer (llvm with libc++ works, but this is not an option on Linux AFAIK; e.g. cpython is not happy about bindings linking with libc++)
<pca006132>
I like stress testing the library, and I wish we could do more about memory consumption, but it is a hard problem.