<teepee>
I have one now, that's probably plenty for quite a while
<InPhase>
teepee: What's the surface material? Metal or a polymer?
<InPhase>
teepee: While contemplating this a bit ago, I also thought about the longevity of those fine-detail surfaces under use.
<InPhase>
Maybe a glass one would last a while. They last very long for optical purposes, but it feels like a different thing printing onto it and ripping the print off.
<teepee>
they call it PEY but no explanation of what that is
<JordanBrown_>
Might want to zoom out a little, and maybe pan a little left. And I need to fully comment it.
<JordanBrown_>
And I think the group of three in the pink teacup is spinning it too fast.
<JordanBrown_>
Hmm. I just noticed that switching to the dark scheme (DarkOcean?) also made the default object color white. The people's heads and arms are default-color; I figured that if that was the usual yellow then nobody would give me trouble about being too lily-white. But if they're all white, that
<JordanBrown_>
might be a problem. Might need to explicitly switch them back to the yellow.
Non-ICE has quit [Ping timeout: 264 seconds]
Non-ICE has joined #openscad
<JordanBrown_>
#f9d72c, if anybody was wondering. ColorMap.cc line 43.
<JordanBrown_>
It seems slightly unfortunate that Cornfield isn't present in the color-scheme directory, at least for documentation purposes. Best might be if it's normally read from there, falling back to the compiled-in default only if the selected scheme can't be found in the directory.
<pca006132>
there were substantial improvements to other models (up to 2x improvement I think), but the most surprising result is that minkowski-of-minkowski-difference.scad, which previously took ~43s, now takes ~1s
<pca006132>
I guess a ton of time is spent on the Reindexer and conversion between different internal representations
abff has quit [Quit: everybody gets one]
abff has joined #openscad
<pca006132>
not sure if there are similar improvements to other cases using minkowski
pca006132 has quit [Remote host closed the connection]
misterfish has joined #openscad
teepee has quit [Remote host closed the connection]
teepee has joined #openscad
misterfish has quit [Ping timeout: 276 seconds]
misterfish has joined #openscad
guso78k has joined #openscad
guso78k has quit [Client Quit]
guso78k has joined #openscad
<guso78k>
pca006132 this is impressive. did not consider this effect when proposing the new data storage. my intent was only better quality geometry data.
<JordanBrown_>
Substantial improvements in minkowski performance will make a big difference in how people build rounded models. Excellent!
<JordanBrown_>
Are you sure you didn't accidentally use cached results?
misterfish has quit [Ping timeout: 255 seconds]
pca006132 has joined #openscad
<pca006132>
I am pretty sure, I measured the time in my terminal
<pca006132>
I don't think we have persistence cache for now
<JordanBrown_>
But you measured independent CLI runs, rather than in the GUI?
<pca006132>
yes
<JordanBrown_>
And no, there's no caching from one CLI run to the next, only from one run to the next inside the GUI. (And, I suspect, when animating from the CLI.)
<pca006132>
we don't have that much improvement for other cases though
<pca006132>
I think this minkowski-of-minkowski-difference is just something that does many conversions and makes the previous PolySet unhappy
<pca006132>
with the tracy integration, I see that a majority of time is spent on CGAL doing the convex decomposition, and the slowness probably comes from the Epeck kernel which is exact
<pca006132>
not sure if it is easy to change to Epick, I think that code is quite complicated
<JordanBrown_>
No clue. That part is all black magic to me :-)
<pca006132>
I think for the *normal* minkowski usages, there will only be slight performance improvement
<JordanBrown_>
The people are dirt simple, and the teacups are pretty simple. Positioning and rotating is just simple math and stacked rotations and translations.
<JordanBrown_>
There's not even any trig in the program. (Of course there's trig in rotate().)
<pca006132>
First, 0.01 should be fine. Second, I don't think mesh simplification should produce this step like behavior, it should merge the faces and get a smooth surface
<JordanBrown_>
But J23k43 that's a really cool image.
<guso78k>
JordanBrown_ yes, the rotation and translation and the people are also easy for me, but I am impressed by the idea; that also takes an artist - not my strength ...
<JordanBrown_>
InPhase had said something about doing a ballerina dancing, and was concerned with the complexity of managing state. I pointed out that if the pattern was geometrical, like Disneyland teacups, you didn't have to manage state - you could just calculate where everything should be at a particular time.
<JordanBrown_>
So that led to playing with modeling the teacups, and they came out better than I expected.
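The stateless-animation idea JordanBrown_ describes (compute every pose purely from the time parameter instead of carrying state between frames, as with `$t` in OpenSCAD) can be sketched in Python; all names and the ring/spin rates here are hypothetical, not taken from the actual model:

```python
import math

def teacup_pose(t, cup_index, n_cups=6, ring_radius=40.0,
                ring_rpm=2.0, spin_rpm=10.0):
    """Pose of one teacup at time t (seconds), computed from t alone --
    no state is carried between frames."""
    # Position on the ring: steady rotation plus this cup's fixed offset.
    base = 360.0 * t * ring_rpm / 60.0 + 360.0 * cup_index / n_cups
    x = ring_radius * math.cos(math.radians(base))
    y = ring_radius * math.sin(math.radians(base))
    # The cup also spins about its own axis.
    spin = (360.0 * t * spin_rpm / 60.0) % 360.0
    return (x, y, spin)
```

Because each frame is a pure function of `t`, rendering any frame in any order gives consistent results, which is exactly why no state management is needed.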
<J23k43>
JordanBrown_ thanks, I tried to make cell noise but this wasn't the right approach for it
misterfish has joined #openscad
<JordanBrown_>
I spent some time looking at Disneyland teacup pictures and video to get the timing and some color/design stuff. I did *not* go so far as to paint designs on the sides of the teacups like they do. (I could, but it would be really tedious.)
<othx>
JordanBrown_ linked to "My House, with furniture by jordanbrown" on thingiverse => 4 IRC mentions
<JordanBrown_>
One thing that I did not do with the teacups was to try to make them be at any particular scale. I think I started out with the radius of a teacup being 10, and everything else was based on that.
<JordanBrown_>
Anyhow, bedtime.
misterfish has quit [Ping timeout: 246 seconds]
teepee_ has joined #openscad
ccox has quit [Ping timeout: 276 seconds]
teepee has quit [Ping timeout: 240 seconds]
teepee_ is now known as teepee
ccox has joined #openscad
cart_ has joined #openscad
snaked has quit [Remote host closed the connection]
snaked has joined #openscad
misterfish has joined #openscad
guso78k has quit [Quit: Client closed]
guso78k has joined #openscad
teepee has quit [Remote host closed the connection]
teepee has joined #openscad
misterfish has quit [Ping timeout: 268 seconds]
mmu_man has joined #openscad
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #openscad
misterfish has joined #openscad
Non-ICE has joined #openscad
snaked has quit [Quit: Leaving]
<guso78k>
@kintel, i am interested in getting the debug modifiers working in python (!, #, %), however when parsing python code I learn about the modifier AFTER the primitive/expression was already built. Now i am trying to alter the tags in ModuleInstantiation but AbstractNode defines them as const.
<guso78k>
when trying to remove the const keyword from node.h I literally have to remove most of the const keywords in all the openscad code. is there another approach?
<teepee>
why remove it? more const is better :)
<guso78k>
teepee what are my options to alter the tags in ModuleInstantiation AFTER the primitive was already created ?
<teepee>
it is possible to exclude some properties from the general const but that's obviously not the best solution
<teepee>
so there's no way of getting that information before creating the moduleinstantiation?
<guso78k>
of course it's possible to pass the debug information like a = cube([1,2,3], debug=1) but this is not the same as writing: a = !cube([1,2,3]) e.g.
<guso78k>
for the 1st approach I would know ahead of time
<pca006132>
maybe save the csg tree in the python object, construct the csg tree when you call output?
<pca006132>
i.e. you don't immediately construct the openscad cube object when you call cube in python
<pca006132>
they are kind of lazy evaluated
<pca006132>
this is actually what we do in manifold, but for the purpose of performance optimization
<guso78k>
pca006132 yes, this is an option, but i would have to handle each of the primitives twice and keep them in sync ...
<guso78k>
this is a very painful solution to overcome a single const in the code '=(
<pca006132>
you can do it in python
<pca006132>
no need to do it in c++
<guso78k>
python decorator ?
<pca006132>
yeah, decorator may help, I don't have the concrete solution for now though
<guso78k>
i like the decorator approach, it will turn the preceding debug character into a function parameter
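A minimal Python sketch of the lazy-node-plus-decorator idea being discussed: primitives return a small description object, so the modifier is known before any real OpenSCAD node is built. `Node`, `primitive`, and the `modifier` keyword are hypothetical illustrations, not OpenSCAD's actual Python API:

```python
class Node:
    """Lazy CSG description; the real node is only built at output time."""
    def __init__(self, name, args, modifier=""):
        self.name, self.args, self.modifier = name, args, modifier

def primitive(fn):
    """Decorator: lift a primitive into a lazy Node and accept an
    optional modifier keyword ('!', '#', '%', '*')."""
    def wrapper(*args, modifier="", **kwargs):
        return Node(fn.__name__, (args, kwargs), modifier)
    return wrapper

@primitive
def cube(size, center=False):
    pass  # real construction is deferred until output time

c = cube([1, 2, 3], modifier="!")
```

The decorator records the modifier alongside the arguments, so when the tree is finally handed to OpenSCAD the tags can be set at construction time and the const members never need to change.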
<InPhase>
guso78k: I haven't looked at the exact spot in question, but the solution to const data is to not modify it, but to construct a replacement and swap it into place atomically.
<InPhase>
guso78k: This will behave better overall when we finally get proper threading in OpenSCAD, as swapping const data that is write-once read-often is the way to get performant multithreaded code without deadlocks and blocking.
<InPhase>
guso78k: If this requires too much knowledge for the section of code in question, generalize it and move the routine that should do the modification closer to where that data lives.
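InPhase's construct-a-replacement-and-swap pattern can be illustrated in Python, with a frozen dataclass standing in for the const C++ object (all names here are hypothetical):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)          # analogous to const members in C++
class ModuleInstantiation:
    name: str
    tags: frozenset

inst = ModuleInstantiation("cube", frozenset())

# Instead of mutating the immutable object, build a modified copy
# and swap the reference to it into place.
inst = replace(inst, tags=inst.tags | {"root"})
```

The write-once read-often shape of this pattern is also what makes it friendly to multithreading: readers never observe a half-modified object, only the old value or the new one.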
<pca006132>
but the problem is that there is not a lot of places to do multithreading in OpenSCAD?
<pca006132>
the evaluation of the scad script can be multithreaded, but you will need to store every output into a temporary buffer and then reorder them to match the single-threaded semantics
<pca006132>
but as soon as you add python (or any imperative stuff) into the mix, you cannot do this
<InPhase>
pca006132: Well Manifold already supports multithreading. Our compute process is actually highly eligible for multithreading because of the declarative nature of the language (provided we don't throw that out to fix that use<> reevaluation), and most importantly, the gui needs proper multithreading so we can have modern features like a working "stop evaluating" button...
<InPhase>
CGAL would have never cooperated with us to get this working, which led me to previously speculate we needed interprocess communication to achieve it. But I suspect we can get this going via cooperation with Manifold.
<InPhase>
It was the impossibility of doing this with CGAL that demotivated me from even starting working on threading improvements the last few times I seriously thought about it.
<pca006132>
yeah but multithreaded evaluation will mess up side effects
<InPhase>
What side-effects?
<pca006132>
echo, or call to python modules
<InPhase>
You mean echo?
<pca006132>
echo can be fixed, but calls to python modules cannot...
<InPhase>
echo would be easy to handle. Python modules is not something that we should let get in the way of other progress.
<InPhase>
If using Python makes parallelized Python elements out of order, that's for the users of Python to deal with I think. It's not something we should offer guarantees on at the cost of core performance.
<pca006132>
yes, this should be documented
<InPhase>
Python introduces all sorts of issues previously discussed, chief among them being a whole world of security hazards that the scad community is not accustomed to.
<pca006132>
like the order of evaluation is undefined...
<InPhase>
And hence the need for warnings/confirmation at Python execution.
<pca006132>
I feel that before thinking about parallel evaluation of scad code, we should really refactor the current interpreter structure
<InPhase>
I do not object. :)
<pca006132>
the architecture makes it slow...
<pca006132>
I am thinking about implementing a scad interpreter for manifold, so I guess I can use it to experiment for possible performance things before doing a real refactoring here
<pca006132>
the major hurdle to implementing scad interpreter for manifold is minkowski, there is no way of implementing it efficiently without spending a lot of time dealing with convex decomposition in manifold...
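The reason convex decomposition dominates the cost: for convex pieces, the Minkowski sum reduces to the convex hull of all pairwise vertex sums, so once a shape is decomposed into convex parts the sum itself is cheap. A 2D Python sketch of that fact (monotone-chain hull, integer coordinates for simplicity):

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a counterclockwise turn
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices counterclockwise."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def minkowski_convex(P, Q):
    """Minkowski sum of two convex polygons:
    the convex hull of all pairwise vertex sums."""
    return convex_hull([(p[0]+q[0], p[1]+q[1]) for p in P for q in Q])
```

For non-convex inputs this shortcut does not apply directly, which is exactly why the decomposition step (and its exact-kernel Nef machinery in CGAL) ends up being the expensive part.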
<InPhase>
First priority for threading in my view would be to insert a core threading architecture (I have done this many times elsewhere) that the rest of the code can use, get the gui using it properly, and then get render termination working. Then merge that. Performance improvements using threading are probably something that would come in multiple stages. But actual responsiveness of the interface using an
<InPhase>
extensible threading architecture is sort of the foundation of other stuff.
<pca006132>
even if the time for convex decomposition is ignored, constructing the Nef polyhedron from manifold output is just too slow
<guso78k>
what I really like in openscad is the fast option of switching between F5 (Goldfeather) for a fast preview and F6 for rendering. this is my absolute reason to stay with openscad
<InPhase>
pca006132: Did the fast-csg work use the exact kernel for minkowski?
<InPhase>
pca006132: That exact kernel stuff was a major part of the performance overheads.
<pca006132>
you must use Nef polyhedron for convex decomposition, and Nef polyhedron must use the exact kernel
<pca006132>
there is no choice here
<InPhase>
pca006132: With the library you mean?
<InPhase>
pca006132: I am of the belief that math continues to work without fractions. :)
<pca006132>
yes, with the library
<InPhase>
I am a physicist, whereas CGAL was written primarily by mathematicians who adhered to the view that if you don't use fractions the answers are "wrong".
<pca006132>
not necessarily continues to work, for the case of polyhedron the points may no longer be coplanar if you don't work with triangular mesh exclusively
<InPhase>
None of our final results are strictly coplanar anyway, so that ship has sailed.
<pca006132>
iirc ochafik tried implementing all the necessary interfaces for inexact kernel, the Nef polyhedron thing just exploded
<InPhase>
I took advantage of the predictability of non-coplanar results for one of the examples in the manual of why the overlap rule is fundamentally required.
<InPhase>
Thus it is very wasteful for us to do calculations trying to preserve coplanarity that will then disappear.
<pca006132>
yes
<pca006132>
the main problem with those algorithms is that they are not designed to handle inaccuracies... this is something the manifold library emphasizes
<InPhase>
I haven't spent much time thinking about how to actually do minkowski optimally. But maybe someone in the literature has already thought about a more performant approach to this.
<pca006132>
we have a long issue about minkowski in manifold, and we read some papers about this
<InPhase>
I sort of stopped thinking about minkowski and just banned it from most of my code because it had been too slow. ;) But we do need to support it as a core feature.
<pca006132>
we think 3D offset does not need minkowski
<InPhase>
Ah, you're also contributing to Manifold?
<pca006132>
yes
<InPhase>
*thumbsup*
<pca006132>
actually I contribute to openscad because I contribute to manifold
<InPhase>
Good, I like the cross-coordination that we get with Manifold. :) It's a welcome improvement to our process, and it has been such a boost in development progress.
<teepee>
from the github mails pca006132 is like 78.34% of the manifold dev team lately :)
<pca006132>
this is just because I generate a lot of noise ;)
<pca006132>
I know very little about those geometry algorithms
<guso78k>
pca006132 being the "manifold guy" its hard to believe your last statement
<teepee>
all the debug and performance work is pretty impressive
<pca006132>
if you look at the PRs you will know it is true
<pca006132>
I mainly do refactoring to make things faster
<guso78k>
yes, but to do refactoring you need to have a deep technical knowledge base ..
<pca006132>
I know the basics, I am not majoring in math
<pca006132>
my research direction is actually static analysis, type systems, and algorithms for speeding up those analyses
<pca006132>
so I kind of view myself as a high level optimizer when I do some refactoring
<pca006132>
with that you need minimal technical knowledge base to do refactoring
<InPhase>
pca006132: I've been meaning to ask... Can you explain the handle "pca006132"? ;)
<pca006132>
yeah everyone who knows me will eventually ask me this question
<InPhase>
Or as I like to call you in my head, pca[tab]
<pca006132>
it is just my student ID back when I was a primary school kid
<InPhase>
lol, ok.
<pca006132>
yes, pca is fine
<pca006132>
this ID is nice because there is no collision
<pca006132>
just like UUID
<InPhase>
pca are your initials?
<pca006132>
no, I have no idea what pca means as well
<InPhase>
Oh. :)
<pca006132>
just used this for 10+ years, so no plan to change it
<pca006132>
naming is hard
<pca006132>
and I am Chinese, we have many name collisions...
<InPhase>
It is. And at least you are distinctive and marked clearly as some sort of technical person.
<InPhase>
pca006132: I think you should slow down your Manifold contributions that teepee reports just a very small amount, and try to become 61.32% of the contributions.
<pca006132>
hard to get that precise :)
<InPhase>
Biofeedback.
rawgreaze has quit [Ping timeout: 252 seconds]
<InPhase>
pca006132: And it sounds like given your background, you might be well suited for refactoring the compute evaluation system as you were mentioning.
<pca006132>
yeah, I was thinking about that
rawgreaze has joined #openscad
<InPhase>
pca006132: By background I'm a CS/physics undergrad and physics PhD, and do a lot of numerical algorithm development and work on trying to make robust software architectures. Like, I currently write code to electrically stimulate human brains, which is code that should very much not do the wrong things, and I also make up new algorithmic approaches for analyzing brain data. These are the sorts of
<InPhase>
things I try to wedge into my OpenSCAD contributions as time permits.
<pca006132>
I think something like bytecode or some other forms of IR is much better for optimization
<pca006132>
indeed, but I always wonder about how people write robust code for numerical algorithms
<InPhase>
Well, those are two separate tasks really. :) So you just have to bring them together into one.
<pca006132>
numerical things are just too hard to test (typical coverage metrics are not very useful), and often have many implicit assumptions about the data
<InPhase>
Well, you achieve it with layered validation, like with other code.
<pca006132>
and there are also numerical stability concerns
<pca006132>
makes sense
<InPhase>
Numerical stability is best handled at the algorithm design stage. Don't rely on stability, but assume the real world data is full of uncertainties, and design around that.
<InPhase>
My current primary project is trying to rescue a massive amount of brain data that was acquired with an incorrect electrical setup, resulting in horrendously noisy data. But it just happens to have the slightest sliver of signal amidst the noise, that needs to be extracted and utilized to do the recovery and match it up to other data. So sometimes you have to assume the data is VERY unstable. :)
<InPhase>
I also have some statistical background, but I have found no OpenSCAD applications for it so far. :)
<pca006132>
this reminds me of Kalman filter and such
<pca006132>
re. statistics, I actually thought about running some extensive benchmark to figure out some better thresholds to determine when to switch between parallel and single threaded algorithms for manifold
<pca006132>
currently we just use a fixed threshold of number of items for everything
<pca006132>
but it is really far from optimal
<InPhase>
That is a tricky determination when near the threshold.
<pca006132>
we don't really need the optimal value; the optimal value is likely dependent on the scheduling behavior, the actual machine, and such
<pca006132>
but I feel like some tweaking can get us >10% improvement for larger workloads
<InPhase>
It is probably optimal though to err on the side of single threaded and keep a little away from the threshold when it is close, as this will work better on systems with heat dissipation issues and ones with other tasks running.
<InPhase>
Dev systems are probably significantly more optimal at handling multithreaded code than real user systems.
<InPhase>
Real users are going to be impeded by layers of cat hair in the fans, and Chrome sessions with 3000 tabs running.
<InPhase>
So, I'd say abandon that 10% in favor of single, and the real world performance will average out better.
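The threshold dispatch under discussion can be sketched in Python, with InPhase's suggestion folded in as a margin factor that biases the cutoff toward the serial path (all names and numbers here are hypothetical, not manifold's actual values):

```python
def run(items, serial, parallel, threshold=10_000, margin=1.25):
    """Dispatch between a serial and a parallel implementation.
    The margin inflates the effective threshold, erring toward serial
    near the cutoff: a little peak speed is traded for better behavior
    on loaded or thermally constrained user machines."""
    if len(items) < threshold * margin:
        return serial(items)
    return parallel(items)
```

Usage is just `run(data, serial_sum, parallel_sum)`; below the inflated threshold the serial path is always taken, so the worst case near the boundary is a small slowdown rather than thread-spawn overhead on a busy system.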
<pca006132>
and also I discovered that manifold is a bit slower on Windows compared to macOS and Linux
<pca006132>
but I don't have a physical machine running windows, and no one complained about it, so I will just leave it for now
<InPhase>
For a long time I had issues with trig functions in Windows being ridiculously slow. I haven't benchmarked these in a while though. It's possible they still haven't fixed it.
<pca006132>
interesting
<InPhase>
I assume you use a lot of them. :)
<pca006132>
yes
<peeps>
funny thing is WSL often runs faster than native in my experience
<pca006132>
I thought trig functions usually compile to hardware instructions
<InPhase>
I had some tight loops of trig calculations that were like a factor of 10 laggier on Windows. I had to rewrite a whole bunch of code when I figured that out, just to do fewer trig calculations.
<pca006132>
iirc there are x86 instructions for them
<InPhase>
Yes, well, you would think.
<InPhase>
Maybe platform-specific compiler settings would have fixed it. I didn't look into it at the time.
<peeps>
there are x86 ops, but iirc the accuracy is lacking in many cases, no idea how often they get used directly by compilers
<InPhase>
Hmm. Maybe this is why JordanBrown_ gets slower renders than me with Manifold, rather than his computer being a 386 like I tease him about.
<InPhase>
pca006132: I suppose maybe do some benchmarking or something if you can get someone with dual boot to assist. Worst case you could probably insert an alternate implementation of the trig functions from one of the Linux libraries that actually does it optimally.
<pca006132>
yeah I think I need to profile this on windows
<pca006132>
manifold supports the tracy profiler, https://github.com/wolfpld/tracy; you can just compile manifold with tracy support enabled
<pca006132>
and it will automagically collect the trace when you run manifold tests
<pca006132>
will just wait for someone to complain and send me the traces, and pretend the problem does not exist in the meantime
<pca006132>
debugging windows stuff is painful
<InPhase>
pca006132: Also, I don't know what system the author of that blog post tested on, but none of my sine functions have those sorts of inaccuracies near pi. I would certainly have noticed that, given how much trig work I have done over recent decades.
<InPhase>
pca006132: And, I just retested in C++, Python, and numpy.
<pca006132>
maybe your code is not directly using that hardware instruction?
<InPhase>
The author reports, "glibc is just a thin wrappers around fsin and this instruction is painfully inaccurate when the input is near pi"
<InPhase>
But I've primarily used glibc all this time, and never experienced that.
<pca006132>
interesting
<InPhase>
Although I haven't used the 32-bit version of glibc in a long time I guess.
<InPhase>
I did used to use it however.
<InPhase>
I don't remember ever seeing something that far off.
<pca006132>
> I ran my tests on the 2.19 version of glibc (Ubuntu 14.04) and found that it no longer uses fsin. Clearly the glibc developers are aware of the problems with fsin and have stopped using it.
<InPhase>
32-bit floats with 64-bit glibc test fine as well.
<InPhase>
Oh. I missed that.
<InPhase>
Was it just an Intel issue? I'm also using AMD. But I had plenty of Intel processors around that era, and I do not recall seeing this issue.
<pca006132>
seems so
<peeps>
also, openscad does its own wrapping of trig functions for exact zero values etc., since we input degrees, which exactly represent 90, 180, etc.
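A Python sketch of the kind of degree-based trig wrapping peeps describes, returning exact values at multiples of 90 where the radian conversion would otherwise introduce tiny errors (illustrative only, not OpenSCAD's actual implementation):

```python
import math

def sin_deg(x):
    """Degree-based sine with exact results at multiples of 90.
    sin(radians(180)) is ~1.2e-16 rather than 0 because 180 degrees
    cannot be represented exactly in radians; special-casing avoids that."""
    x = x % 360.0
    if x % 90.0 == 0.0:
        return [0.0, 1.0, 0.0, -1.0][int(x // 90.0) % 4]
    return math.sin(math.radians(x))
```

The same table trick extends to cosine (shift the index by one) and to tangent (special-case the poles at 90 and 270).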
<teepee>
well, at least it's not a mesh geometry kernel :)
rawgreaze has quit [Ping timeout: 252 seconds]
rawgreaze has joined #openscad
<pca006132>
manifold is indeed a common name, too mathematical
<InPhase>
teepee: Be right back, working on a Manlyfold project to add structural supports to models.
<InPhase>
I have to finish that soon so I can start on my Minifold project.
<JordanBrown_>
Isn't your day job working on Mindfold?
<peeps>
just let me know when moneyfold comes out
<InPhase>
peeps: Now accepting bitcoin donations for our kickstarter.
<pca006132>
oops, accidentally optimized your money away
<JordanBrown_>
teepee do you want a .webp for the teacups, or do you want to create that locally? And do you want two, one for the calendar and one for the popup?
mmu_man has quit [Ping timeout: 256 seconds]
guso78k71 has joined #openscad
mknst has joined #openscad
mknst has quit [Client Quit]
guso78k has quit [Ping timeout: 250 seconds]
mknst has joined #openscad
Non-ICE has quit [Ping timeout: 252 seconds]
mknst has quit [Client Quit]
<JordanBrown_>
guso78k: one answer to your debug-modifier-const problem might be to have a debug modifier wrap the object in a new CSG node, and put the modifier on that new node.
mmu_man has joined #openscad
<guso78k71>
yeahh, also a very good solution.
guso78k71 has quit [Quit: Client closed]
L29Ah has quit [Ping timeout: 256 seconds]
mmu_man has quit [Ping timeout: 256 seconds]
<JordanBrown_>
guso78k71 the more I think about it, the more I think that that's the correct CSG-ish answer. CSG operations never change the underlying objects; they just wrap operations around those objects.
<JordanBrown_>
teepee I just pushed the teacups. Didn't hook them into the JavaScript.
rockingpenny4 has joined #openscad
rockingpenny4 has quit [Client Quit]
mmu_man has joined #openscad
<teepee>
JordanBrown_: currently only a single image/webp is supported by the web stuff