<peepsalot>
InPhase: the first 3 lines there are very interesting, basically 24.1GB of temporarily allocated values in sizes of 8, 32, and 48 bytes
<peepsalot>
apparently very short lived since the peak never goes above 100MB for any of those buckets
<peepsalot>
oh, and a lot of 20k allocs: normal 41: 15.9 mb 8.9 gb 8.9 gb 80.3 kb 20.0 kb 467.1 k not all freed!
snakedLX has quit [Ping timeout: 245 seconds]
Jack21 has quit [Ping timeout: 256 seconds]
snakedLX has joined #openscad
snakedGT has joined #openscad
LordOfBikes has quit [Ping timeout: 245 seconds]
snakedLX has quit [Ping timeout: 245 seconds]
LordOfBikes has joined #openscad
<InPhase>
peepsalot: Well 32B could be the value variants?
<InPhase>
peepsalot: Maybe 48B is something in CGAL.
<InPhase>
Or wait, didn't your work get rid of those shared_ptr's?
<InPhase>
I don't know... 356 million is sure a lot of something.
<InPhase>
It's a fractal design, but not THAT fractally. 4550 vertices, 9351 edges, and 3961 facets resulting, with maybe another 30% shaved off.
<InPhase>
Or half shaved I guess.
<InPhase>
Still, not in the ballpark of 356 million.
<InPhase>
I wonder if the CGAL project even has somebody who looks at these sorts of performance aspects? They do seem to be more math-focused people.
<peepsalot>
sorry, stepped out to grab some food. yeah, there are no shared_ptrs for most of the values. that scad isn't really expression heavy either. pretty sure it's all just insane numbers of gmp temporary values during cgal ops
<InPhase>
I was trying to find CGAL discussions about trying to use doubles. But these discussions don't seem to exist under any variety of terms I can think of.
<peepsalot>
oh, the quotes got removed when i used it
<InPhase>
The only rationale given is, "Like with all number types with finite precision representation which are used as approximations to the infinite ranges of integers or real numbers, the built-in number types are inherently potentially inexact. Be aware of this if you decide to use the efficient built-in number types: you have to cope with numerical problems. For example, you can compute the intersection
<InPhase>
point of two lines and then check whether this point lies on the two lines. With floating point arithmetic, roundoff errors may cause the answer of the check to be false."
<InPhase>
I mean yeah, you wouldn't do it that way using floating points. But you can still do it.
<InPhase>
Turns out intersections are evaluated all the time with floats.
<InPhase>
I don't see how they'd be getting that exact under rotation anyway. Bignum rationals don't represent an infinite series well.
<InPhase>
You'd need a symbolic algebra engine with an extensive knowledge of trig rules, and even then it would probably still fail sometimes, given the mixed reliability I've seen from unguided symbolic algebra engines.
<InPhase>
Alternatively you could calculate appropriate threshold values to leave the user over a dozen useful orders of magnitude, but still manage all conceivable floating point errors.
<peepsalot>
the example i recall, which makes a bit more sense to me, is: if you have an axis-aligned square face, then rotate it some arbitrary amount and then check for planarity, it would likely fail
<peepsalot>
the transformations are all multmatrix by the time they get to CGAL anyways, so it's always going to be double approximations from those trig functions, but the (questionably) more important thing is that THAT matrix can be applied consistently to all points.
<peepsalot>
that's how i reason about it in my head anyways. don't know what the actual consequences of such inexact calculations would be in practice
<InPhase>
I'm more a physicist than a mathematician... But the precision a double reaches under calculation is about equal to one hydrogen atom radius out of alignment across a 1km long rod. The real world never meets these thresholds, and the real world doesn't fall apart. :)
<peepsalot>
yeah i get that. but i think the issue is if vertices somehow end up slightly off, then you can end up with non-watertight mesh, where the graph connectedness becomes all screwy
<InPhase>
Well we have to convert them all to doubles anyway when they get out into the real world.
<InPhase>
One should simply not check for planarity exactly, just like one does not check float equality exactly.
<InPhase>
IsPrettyDamnPlanar()
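A minimal sketch of the kind of tolerance-based planarity check being joked about here, in C++; the IsPrettyDamnPlanar name and the eps value are illustrative only, not anything CGAL or OpenSCAD actually provide:

    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };

    static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }
    static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // "Pretty damn planar": every vertex lies within eps of the plane spanned by the
    // first three vertices, instead of demanding exact coplanarity.
    bool IsPrettyDamnPlanar(const std::vector<Vec3>& pts, double eps = 1e-9) {
        if (pts.size() < 4) return true;
        Vec3 n = cross(sub(pts[1], pts[0]), sub(pts[2], pts[0]));
        double len = std::sqrt(dot(n, n));
        if (len == 0.0) return true;  // degenerate basis; a robust version would pick another vertex triple
        for (const Vec3& p : pts) {
            if (std::fabs(dot(sub(p, pts[0]), n)) / len > eps)  // point-to-plane distance
                return false;
        }
        return true;
    }

In practice eps would be scaled to the size of the geometry rather than fixed, but the point stands: the test is a threshold, not an exact predicate.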
<peepsalot>
yep, interval arithmetic makes a lot more sense to me conceptually for this sort of stuff
<InPhase>
Physics thinking would never think to write down "exactly flat" for anything, because reality does not conform to this.
<peepsalot>
where you calculate an error bound alongside your actual result. and you make a higher precision calculation until error bounds are within needed range
<peepsalot>
which, CGAL does apparently also support... but it's MPFI, which is again based on MPFR -> GMP
<InPhase>
Sometimes you can cap it and skip the propagation, but yeah, there are also solid propagation procedures.
<InPhase>
One obvious propagation method, when all else fails, is: for double d, do all calculations on d+eps and d-eps.
<InPhase>
Most binary ops also only require calculating one pair of these.
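A bare-bones sketch of that d+eps / d-eps bookkeeping (just the idea, not CGAL's actual interval type): carry a pessimistic [lo, hi] pair through each operation.

    #include <algorithm>

    struct Interval {
        double lo, hi;
        static Interval around(double d, double eps) { return {d - eps, d + eps}; }
    };

    // Addition is monotonic in both arguments, so one pair of endpoints is enough.
    Interval add(Interval a, Interval b) { return {a.lo + b.lo, a.hi + b.hi}; }

    // Multiplication needs all four endpoint products in general; if both operands
    // are known to be positive, lo*lo and hi*hi alone would do.
    Interval mul(Interval a, Interval b) {
        double p[4] = {a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi};
        return {*std::min_element(p, p + 4), *std::max_element(p, p + 4)};
    }

(Directed rounding of lo and hi, which is what the CGAL docs fuss about below, is ignored here; the endpoints are simply assumed to carry enough slack.)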
<InPhase>
Hmm. I'm trying to ascertain how much of what we do can be done with this Interval type, and I see, "The problem is that we have to change the rounding mode very often"... But why?
<peepsalot>
i guess proper handling would mean you should never round down an error bound, but maybe want to normally round the approx result
<InPhase>
"we are forced to use a workaround which slows down the code, but is only useful when the intervals can overflow or underflow" So slow everything down for additional accuracy in case values exceed the 10^-308 and 10^308 range?
<InPhase>
This is obsessive level. If we do this interval thing, we should also turn THAT off.
<InPhase>
How you round an error bound doesn't matter. The error bound is literally your fuzz factor, and is fuzzy on purpose.
<InPhase>
It just needs to stay in the ballpark, because you picked it with a safe margin to begin with.
<InPhase>
I think they've missed the core concept of this.
<InPhase>
Perhaps they were using this initially for math theory journal articles, and latched onto exact-everything as a core design goal. But this is not an important real world calculation goal.
<peepsalot>
yeah idk math people be crazy
<InPhase>
Looks like this will not function.
<InPhase>
Their definitions of the comparison functions are nuts.
<InPhase>
Basically, no calculated intervals will ever compare equal, and so trying to check if intervals are equal will throw an exception.
<InPhase>
There is a perfectly logical approach from a statistician mindset of declaring them equal if they overlap, less than if they are exclusively non-overlapping and in that direction, and so on.
<InPhase>
Then they would actually do what intervals are supposed to do, and solve the problems that they say won't work with floats. But it looks like they didn't do that on their type, so it won't work in algorithms like the intersecting line test, if I'm understanding these docs correctly.
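A sketch of what that statistician-style comparison could look like (not what CGAL's interval type actually does, per the docs discussed above):

    struct Interval { double lo, hi; };

    // "Equal" means the intervals overlap at all; strict ordering only applies when
    // they are disjoint. Nothing throws, at the cost of equality not being transitive.
    bool fuzzy_equal(Interval a, Interval b)   { return a.lo <= b.hi && b.lo <= a.hi; }
    bool fuzzy_less(Interval a, Interval b)    { return a.hi < b.lo; }
    bool fuzzy_greater(Interval a, Interval b) { return fuzzy_less(b, a); }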
<InPhase>
I'm willing to have it proven that I've misunderstood these docs though. It would be a massive speed boost I think if we could do all calculations with this sort of thing.
<InPhase>
Well there is a do_overlap check... Maybe if they use that instead of the comparisons for the logic?
gunnbr has joined #openscad
<peepsalot>
going back to mimalloc for a sec, i got an overall 20% speedup over the existing test suite, using: time ctest -C All -R cgal &> output.log
<peepsalot>
so just cgal specific tests, and including "All" so that "Heavy" tests were in there too
<peepsalot>
4m22.586s vs 5m29.035s
Scopeuk has quit [Quit: Ping timeout (120 seconds)]
Scopeuk has joined #openscad
ur5us has joined #openscad
ur5us has quit [Client Quit]
ferdna has joined #openscad
snakedLX has joined #openscad
snakedGT has quit [Ping timeout: 252 seconds]
gunnbr has quit [Ping timeout: 252 seconds]
arebil has joined #openscad
Non-ICE has joined #openscad
ferdna has quit [Quit: Leaving]
arebil has quit [Quit: My keyboard has gone to sleep. ZZZzzz…]
lastrodamo has joined #openscad
Jack21 has joined #openscad
aiyion has quit [Remote host closed the connection]
aiyion has joined #openscad
qeed_ has quit [Read error: Connection reset by peer]
qeed_ has joined #openscad
Jack21 has quit [Ping timeout: 256 seconds]
arebil has joined #openscad
teepee_ has joined #openscad
teepee has quit [Ping timeout: 276 seconds]
teepee_ is now known as teepee
arebil has quit [Quit: My keyboard has gone to sleep. ZZZzzz…]
alban771 has joined #openscad
ali1234 has joined #openscad
alban771 has quit [Ping timeout: 245 seconds]
alban771 has joined #openscad
Jack21 has joined #openscad
<Jack21>
about this high precision thing - now i need to repeat the first point of a calculated polygon also as the last to ensure they are the same - i got the impression the old version had a higher precision for calculations than was later used for rendering
<Jack21>
now both seem to be the same, which ends in these super small gaps
<Jack21>
as an example: i calculate a gear and everything is fine, but when i make an offset for clearance i get a slit, but only for certain numbers of teeth, as there's a floating point issue of ~1e-15 which is amplified by the offset
<Jack21>
so i would suggest that all modules run with a lower precision than the math modules.
<InPhase>
Jack21: Can you give an example of what went wrong creating a "slit"?
<Jack21>
simplified, it seems that sin(0) is not the same point as sin(360) - but the number is calculated, so for some teeth numbers there is only 359.9999999...
<Jack21>
i could solve this by giving the paths for that polygon, so if one path isn't closed it doesn't connect to the second path
alban771 has quit [Ping timeout: 245 seconds]
<peepsalot>
echo(sin(0) == sin(360)); // true
<Jack21>
peepsalot: yes it should but not if you calculate the 360 as number of teeth×degreeperteeth
<Jack21>
the annoying thing is if i echo(points) and then copy them to make a polygon - that slit is gone because that little difference is not in the echo output
<peepsalot>
it would help a lot if you could narrow down your problem to a small reproducible (self-contained) script and pastebin it.
snakedGT has joined #openscad
snakedLX has quit [Ping timeout: 252 seconds]
gunnbr has joined #openscad
<InPhase>
Jack21: It is a design flaw to expect teeth*degreeperteeth to yield exactly 360 in a way that the design depends on this. It's officially the "wrong way" to approach floating point calculations. But we can't advise well on correcting it without a specific test case for it.
<InPhase>
All I can tell you at this point, from having designed around this sort of issue for many many decades, is that a solution certainly exists for doing it differently such that this will not arise.
<InPhase>
And it's no shame to hit these problems. I learned the ways around them from hitting them, figuring out and studying the alternative mindsets for designing numerical algorithms to avoid them, then using those until it sinks in and they are my first thought when it comes time to implement.
<InPhase>
It requires tripping over it a bunch of times.
<InPhase>
I will note also that almost everyone goes through a phase of thinking these are system flaws and trying to design systems level ways to avoid it. But ultimately the systems level avoidances only ever solve specific cases. In general it's an intrinsic problem of finite representational math with no general solution other than careful algorithmic design.
gunnbr has quit [Ping timeout: 265 seconds]
<InPhase>
One random example, rather than i*360/teeth_per_loop, if the alignment needs to be exact, you calculate a teeth_per_loop sized vector and index it with [i%teeth_per_loop].
snakedGT has quit [Ping timeout: 260 seconds]
<InPhase>
Or, equivalently, (i%teeth_per_loop)*360/teeth_per_loop, which will consistently loop by capping the integer values to be cyclic.
<InPhase>
When you need the same value out, you need to switch "this should be mathematically the same" to "these are exactly the same input values in the same equation", or, store the value.
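For illustration, a small C++ check of why only certain tooth counts bite (the same arithmetic happens in the .scad, where step1 = 360/fn2 and the last point uses fn2*step1):

    #include <cstdio>

    int main() {
        // For some n, n * (360.0/n) rounds back to exactly 360; for others it lands a
        // few ulps away, so the "closing" vertex is not bit-identical to the first one.
        for (int n = 3; n <= 60; ++n) {
            double step = 360.0 / n;
            if (n * step != 360.0)
                std::printf("n = %d: n*step = %.17g\n", n, n * step);
        }
        // The (i % n) wrap sidesteps the comparison entirely: the wrapped index becomes
        // i = 0 again, so the first and last points come from identical expressions.
        return 0;
    }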
snakedGT has joined #openscad
<Jack21>
great now this seems to be a caching error
<Jack21>
how annoying that a list of points could be different in the cache
<Jack21>
when rendering i get the slit in both - however, as both paths within a polygon are closed it should not happen - now * disable the first
<Jack21>
the "pointsECHO" still show the slit - now delete cache. And it is gone
pie_ has quit [Quit: pie_]
pie_ has joined #openscad
<Jack21>
now enable the "Allpoints" polygon again - and the slit is gone there too. now delete cache again and it is there in both
<Jack21>
you can also move the first with the "t" variable - the slit vanished only by moving it
<InPhase>
I don't see a slit rendering this, so the first step is not happening reproducibly. (Unless there is a special trick to seeing it?)
snakedLX has joined #openscad
<Jack21>
if i open this in version 21.01 i don't have the slit, but with 21.10.04 i do
snakedGT has quit [Ping timeout: 245 seconds]
<Jack21>
and yes, i can prevent this when i ensure the step size is not a float - but then i have to adapt this to the number of teeth (also, defining paths prevents this)
arebil has joined #openscad
<Jack21>
so with the floating point problem i can not make two gears with different teeth but the same resolution per tooth
<InPhase>
Where is the non-integer step size appearing?
<InPhase>
I see only integer step sizes in the version you pasted.
<InPhase>
(Which is not showing a slit for me.)
<peepsalot>
Jack21: if you are modeling a polygon with a hole in it, you should specify the path indices separately: polygon(points=Allpoints, paths=[ [for(i=[0:1:len(cPoints)-1]) i], [for(i=[len(cPoints):1:len(Allpoints)-1]) i] ]);
<Jack21>
step1=360/fn2 - but fn2 is the resolution per tooth × 2 × number_of_teeth
<Jack21>
peepsalot: yes as i said before this solves the problem - but it wasn't a problem in earlier versions
<InPhase>
Jack21: Ah, I see it now. And does it resolve if you make that cos((i%fn2)*step1), sin((i%fn2)*step1), and sin((i%fn2)*(step2+step1)) and for the other cos?
<Jack21>
InPhase: with z=10 and fn=5, step1 is 3.6
<Jack21>
inphase: but i and fn2 are both integers, how should this help?
<InPhase>
Jack21: That's WHY it helps. :)
<InPhase>
It means i=fn2 turns to i=0, which means it's equal.
<Jack21>
ah ok
<InPhase>
If you prefer you can do: pointsEX=[for(i=[0:fn2])let(i=i%fn2)[(r+r2)*cos(i*step1)-r2*cos(i*(step2+step1)),(r+r2)*sin(i*step1)-r2*sin(i*(step2+step1))]];
<InPhase>
I'd prefer that with some spaces for readability, but keeping your formatting. ;) There's a let in there.
<InPhase>
If you need both, maybe make that an iw for iwrapped.
gunnbr has joined #openscad
<InPhase>
And use iw in your trigs.
<InPhase>
This is probably visibly cleaner to use iw, so nobody gets confused.
<InPhase>
If I see i and i is the loop variable, I might not notice it's changed. The ability to change those and not break the loop is OpenSCAD specific.
<Jack21>
.. deleting cache ... yes that also solved it
<InPhase>
Jack21: Here's an example of how to make these more readable, to help your debugging: https://bpa.st/JWCA
<InPhase>
I messed up...
<InPhase>
But I can see it now that I expanded it.
<InPhase>
There are a few varieties in how to line them up, but something along those lines.
<InPhase>
This also lets you put a ton of stuff inside the let, like a long calculation sequence, and it stays readable.
<Jack21>
i'll try - normally after it works i think "i'll never have to read this"
<InPhase>
Until it doesn't again! :)
<Jack21>
yeah total pain in the ass when i open old scad files i wrote some years ago
<InPhase>
Formatting up front though makes it easier to make in the first place, as you can iteratively create it in stages.
<Jack21>
somehow i think it is faster if i keep it as short as possible, but probably that isn't true
<Jack21>
InPhase - still it is strange that the cache takes the values if they are equal to the 5th decimal and doesn't care if there are changes later
<InPhase>
Jack21: The cache is using a text representation of the values... This is a problem for your specific case.
<InPhase>
Someone made the decision that this was reasonable because we output as ascii stl anyway at the same precision used in the cache.
<InPhase>
Of course we don't output as ascii stl anymore, and it can make bugs like the one you hit in the scad design intermittent.
<Jack21>
and also doing it the other way around: if i calculate x=1.000005 and then have a vector [x,5], then a vector [1,5] is stored as [1.000005,5] in cache
<InPhase>
The correct resolution is not obvious. One option is to cache with fully expanded text representation of doubles in the keys, although this would significantly enlarge those cache keys.
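A toy illustration of the failure mode (hypothetical key functions, not OpenSCAD's actual cache code, and the rounded precision is just an assumption for the demo): keying on a rounded text form makes nearly-equal values collide, while a full-precision form keeps them distinct.

    #include <cstdio>
    #include <string>

    std::string key_rounded(double v) { char b[64]; std::snprintf(b, sizeof b, "%.5f", v);  return b; }
    std::string key_full(double v)    { char b[64]; std::snprintf(b, sizeof b, "%.17g", v); return b; }

    int main() {
        double a = 1.0, b = 1.000000001;                          // distinct vertex coordinates...
        std::printf("%d\n", key_rounded(a) == key_rounded(b));    // 1: they collide under a rounded key
        std::printf("%d\n", key_full(a) == key_full(b));          // 0: they stay distinct at full precision
        return 0;
    }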
<Jack21>
i was already hit by the change with the customizer - that all variables are fixed when you make only a checkbox change in the customizer - so you can't work with both simultaneously anymore
rogeliodh has quit [Quit: Ping timeout (120 seconds)]
sinned6915 has quit [Quit: Ping timeout (120 seconds)]
sinned6915 has joined #openscad
rogeliodh has joined #openscad
rogeliodh has quit [Quit: Ping timeout (120 seconds)]
<InPhase>
Jack21: There, your issue turned into a test case. ^^
<InPhase>
I fed it a perfect storm example of this problem.
<InPhase>
Probably 999 times out of 1000, our current caching approach should not be an issue.
<peepsalot>
i don't think openscad explicitly claims to support weakly simple polygons like that, it just happens to work out ok (but it puts an added requirement on the script writer to get the arithmetic exact)
pie_ has quit [Quit: No Ping reply in 180 seconds.]
pie_ has joined #openscad
<peepsalot>
InPhase: btw I'm looking into possibly using homogeneous kernels instead of cartesian. if i understand correctly, it should have an advantage of D+1 allocations per point (each coord component can be stored as a single arb precision integer) as opposed to 2*D (each component being a rational)
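A sketch of the two kernel choices being compared, using standard CGAL types (whether these exact typedefs are what OpenSCAD would end up with is the open question here; building this needs CGAL and GMP):

    #include <CGAL/Cartesian.h>
    #include <CGAL/Homogeneous.h>
    #include <CGAL/Gmpq.h>
    #include <CGAL/Gmpz.h>

    // Cartesian over rationals: each of the D coordinates is a Gmpq, i.e. two
    // arbitrary-precision integers (numerator + denominator) -> 2*D per point.
    typedef CGAL::Cartesian<CGAL::Gmpq>   CartKernel;

    // Homogeneous over integers: D numerators plus one shared denominator hw,
    // each a single arbitrary-precision integer -> D+1 per point.
    typedef CGAL::Homogeneous<CGAL::Gmpz> HomKernel;

    int main() {
        CartKernel::Point_3 pc(CGAL::Gmpq(1, 3), CGAL::Gmpq(2, 3), CGAL::Gmpq(1, 1));
        HomKernel::Point_3  ph(CGAL::Gmpz(1), CGAL::Gmpz(2), CGAL::Gmpz(3), CGAL::Gmpz(3));  // same point: (1/3, 2/3, 1)
        (void)pc; (void)ph;
        return 0;
    }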
<Jack21>
peepsalot: yes, you are right - maybe we shouldn't allow polygons with holes without paths in future, as with the floating point precision change it seems you need sophisticated knowledge to prevent issues with those weakly simple polygons
<InPhase>
peepsalot: This sounds like something important to understand but I don't get it yet from your description.
<peepsalot>
another thing is that Polyhedron_3 (not Nef) i think has a lot looser requirements, where maybe plain double would be fine for the places we use that
<peepsalot>
i am trying to work out if there's a way to reference types that would work for either CGAL_Kernel2 typedef
<peepsalot>
i spent many hours slowly making a little more sense of the kernel/numeric types in CGAL late last night. it's tough with the way requirements and explanations are scattered around the docs
<InPhase>
This is giving me flashbacks to some of the poorly arranged math books I had... I'm looking at you, Springer Linear Algebra book.
<InPhase>
It's very difficult to make sense of what each statement means for use, because you don't know 100% of the implementation internals.
<InPhase>
e.g., "Since the homogeneous representation does not use divisions, the number type associated with a homogeneous representation class must be a model for the weaker concept RingNumberType only." Spectacular random factoid to memorize... And that means one can and cannot use it for what exactly?
<peepsalot>
i think they are just trying to say the model is less restrictive, but it's very oddly worded
<peepsalot>
i basically read that part as "the number type associated with a homogeneous representation class only requires a model for the weaker concept RingNumberType." if that makes more sense?
<peepsalot>
My "favorite" nonsense circular definition is "This extended geometry concept serves two purposes. It offers functionality for changing between standard affine and extended geometry. At the same time it provides extensible geometric primitives on the extended geometric objects."
<peepsalot>
2 paragraphs before that "Let R be an infinimaximal number..." my brain refuses to parse basically any of that
<peepsalot>
entire paragraph
arebil has quit [Quit: My keyboard has gone to sleep. ZZZzzz…]
<InPhase>
"infinimaximal"?
<InPhase>
This is a fake word...
<InPhase>
Yep, sure enough google points to only CGAL.
<InPhase>
In one subset of the docs they define it as "A finite but very large number."
<InPhase>
There was no need to make up a comic book quality term for this.
<InPhase>
Size check:
<InPhase>
infinimaximal
<InPhase>
very large
<InPhase>
Yep, very large would have been shorter.
ali1234 has quit [Remote host closed the connection]
ali1234 has joined #openscad
<InPhase>
Does it run any faster?
<Jack21>
i just wonder why the customizer is showing 7 decimals .. well only for certain values like test=1.005;
<peepsalot>
InPhase: i was thinking maybe a simple conversion would be to make the D+1 component something like 2^63, and multiply the other components by that
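A plain-C++ sketch of that scaling conversion: pick a power-of-two denominator w = 2^k, round each cartesian coordinate times w to an integer, and use w as the shared homogeneous component. Here k = 40 so the result fits an int64 for the illustration; the actual idea would use GMP integers and a larger scale such as 2^63.

    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    struct HomPoint2 { std::int64_t hx, hy, hw; };

    HomPoint2 to_homogeneous(double x, double y, int k = 40) {
        return { std::llround(std::ldexp(x, k)),   // round(x * 2^k)
                 std::llround(std::ldexp(y, k)),   // round(y * 2^k)
                 std::int64_t(1) << k };           // shared denominator 2^k
    }

    int main() {
        HomPoint2 p = to_homogeneous(0.1, -2.5);
        std::printf("%lld %lld %lld\n", (long long)p.hx, (long long)p.hy, (long long)p.hw);
        return 0;
    }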
<Jack21>
it is adding 0000 for no reason
<peepsalot>
InPhase: it might be a little faster, hard to say since so many tests fail currently
<InPhase>
Jack21: If I recall correctly a Qt issue was being resolved where it wouldn't let you edit or alter past a certain precision if you set it to display in a more truncated form.
<InPhase>
Jack21: Functionality, elegance, or Qt. Choose 2.
<peepsalot>
InPhase: 6m13.562s vs 7m2.217s user time, on "time ctest -j12" for me
<Jack21>
FUBAR
<InPhase>
peepsalot: How about a select test?
<peepsalot>
give me one
<InPhase>
peepsalot: Like your favorite example024?
<InPhase>
Unless that failed...
<peepsalot>
would need to be a test with exclusively integer only coordinates
<InPhase>
lol, ok.
<Jack21>
just a large grid of cubes?
<InPhase>
Yeah, working on it.
<peepsalot>
there seems to be some API for converting cartesian to homogeneous, but it's just not automagically used, and not sure what would be needed to implement it across the whole project
<InPhase>
From 29.2s to 26.0s, with an additional boost over the old rotate removal from having also pushed it to integers. So I guess keeping all your coordinates integers speeds up CGAL a little bit too.
<Jack21>
26s .. i need 1:18.2 for that to render
<InPhase>
I invested a bit in this laptop, and just got it this year. Things have been going a little faster since I did.
<peepsalot>
InPhase: cartesian: user 0m23.062s, homogeneous: user 0m22.727s
<InPhase>
So close to negligible difference.
<Jack21>
but how - it is still running on one core and your clock isn't 10GHz
<InPhase>
peepsalot: 1.5%
<InPhase>
Jack21: Cache lines maybe? The processor is at 4.3GHz when it's running.
<Jack21>
probably all about memory speed
<InPhase>
"64 GB (2x 32GB) 3200MHz CL22 Samsung" The RAM in it.