<d1b2>
<azonenberg> And here we go, Y axis label layout code ported over from glscopeclient
<d1b2>
<azonenberg> There's definitely things I want to tweak but it's coming along nicely
<d1b2>
<azonenberg> it looks and feels enough like glscopeclient that i find myself starting to click UI elements or right click and expect them to do things
<d1b2>
<azonenberg> the grid in the plot areas was such a simple thing to draw but makes it feel so much closer to completion lol
<d1b2>
<azonenberg> Still getting 300-400 FPS with vsync off
<azonenberg>
Not the 500 - 1K FPS we were getting with a trivial empty window, but still plenty of room to add more stuff :p
<d1b2>
<louis> Looking sweet !
<azonenberg>
Working on making Y axis dragging give nice-looking results
<_whitenotifier-7>
[scopehal-apps] azonenberg e251c91 - Initial version of ChannelPropertiesDialog
<azonenberg>
ehntoo: how's your refactoring of that PR going? ready for me to look at it yet?
<d1b2>
<ehntoo> Not yet, I'm afraid, things got busy yesterday. I'll be wrapping things up tonight, though.
<d1b2>
<ehntoo> All the changes I've made so far have been pretty straightforward, just need to find another half hour or so for another review and for docs.
<azonenberg>
Ok just checking in to make sure i'm not forgetting anything :)
<azonenberg>
on that note
<azonenberg>
lain: how's the renderer work coming?
massi has quit [Remote host closed the connection]
<lain>
steady progress, should have something buildable soon
<azonenberg>
Excellent. I've got the channel properties dialog coming along and can do the most critical things like setting channel attenuation/coupling, gain, and offset
<azonenberg>
going to add some help text to the dialog then probably move on to actual waveform acquisition
<azonenberg>
by end of today or tomorrow i should be able to actually acquire waveforms and throw them into the void because there's no way to render them yet :p
<_whitenotifier-7>
[scopehal] azonenberg closed issue #690: VulkanInit: detect valid memory types for textures and only allocate textures from those types - https://github.com/glscopeclient/scopehal/issues/690
<d1b2>
<louis> How far out are we from getting the Vulkan rendering rewrite landed?
<d1b2>
<louis> I would like to add support for rendering sparse waveforms w/o interpolation and w/o assuming samples end at the beginning of the next, but don't want to hack around on that if it's about to be replaced
<azonenberg>
louis: days at most
<azonenberg>
the shaders are building under vulkan already
<azonenberg>
i'm hoping we'll have rendering by end of week if not sooner
<azonenberg>
and by the time we have it working in glscopeclient, i expect to have the supporting infrastructure in ngscopeclient done
<azonenberg>
so we should get waveform rendering in ngscopeclient within the next day or two after it lands in glscopeclient
<azonenberg>
i'm going to be working on triggering and history management tonight
<d1b2>
<louis> Sounds good. I will stay away from making any more changes.
<d1b2>
<louis> I will shortly PR my changes re: no-interpolate flag
<azonenberg>
hold off on even that for the moment
<azonenberg>
just apply them to the new renderer
<azonenberg>
i dont want to merge anything to the old renderer at this point
<d1b2>
<louis> OK
<azonenberg>
Anyway, stepping back a bit, one of the things i have to think about is the pipeline of waveform / filter processing. in particular, new waveforms come off instruments and are pushed into a queue
<azonenberg>
then we pop them off the queue into the instrument and run the filter graph on the current waveforms
<azonenberg>
then we run the compute shader on the final updated waveforms to turn them into a texture we can display in the compositor
<azonenberg>
so the question is, what happens if this takes >1 frame?
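[editor's note: the acquire → queue → filter graph → rasterize pipeline described above could be sketched roughly as below. All names here (Waveform, WaveformQueue) are illustrative placeholders, not the actual scopehal API.]

```cpp
#include <mutex>
#include <queue>
#include <utility>
#include <vector>

// Hypothetical placeholder for one acquired waveform
struct Waveform
{
	std::vector<float> samples;
};

// Thread-safe handoff between the acquisition thread (producer) and the
// filter-graph thread (consumer); waveforms off the instrument get pushed
// here, then popped into the instrument's active waveform set for processing
class WaveformQueue
{
public:
	void Push(Waveform w)
	{
		std::lock_guard<std::mutex> lock(m_mutex);
		m_queue.push(std::move(w));
	}

	// Returns false if nothing is pending
	bool Pop(Waveform& out)
	{
		std::lock_guard<std::mutex> lock(m_mutex);
		if(m_queue.empty())
			return false;
		out = std::move(m_queue.front());
		m_queue.pop();
		return true;
	}

private:
	std::mutex m_mutex;
	std::queue<Waveform> m_queue;
};
```

The open question in the chat is what happens when the consumer side (filter graph + compute shaders) takes more than one frame to drain this.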
<azonenberg>
given that we will also have things like cursor overlays that update live
<azonenberg>
i think it might be possible to get desynchronized, in a state where different parts of the display are referring to different waveforms
<azonenberg>
this may or may not be a problem if it only lasts a frame or two before the compute shaders catch up
<azonenberg>
essentially, the UI elements like cursors that work live on waveform data in a very low overhead fashion might update faster than the compute shaders which on a large waveform could take several frames
<azonenberg>
the other question of course is how the compute shaders competing for GPU time will interfere with rendering on other vulkan queues
<azonenberg>
i dont know if a single shader invocation can time share with other stuff or not
<azonenberg>
So we may want to break up large render calls into a few successive invocations if that is the case
<azonenberg>
anyway, things to think about in a couple days
<azonenberg>
the other thing to consider is if i can do anything funky with u/v coordinates
<azonenberg>
if we are zooming vs acquiring a new waveform
<azonenberg>
can i render a stretched version of the old waveform until the shader finishes, then update to the actual full res image?
<azonenberg>
Lots of fun LOD possibilities there
<azonenberg>
This of course assumes we're drawing a huge waveform that takes >1 frame at 60fps to render
<azonenberg>
with really small waveforms from a fast instrument, we might have the opposite problem
<azonenberg>
waveforms coming in more than once per video frame
<azonenberg>
Which is not something we currently handle at all
<d1b2>
<louis> For my use, I think I always want the displayed waveforms to be products of the same capture from the scope / same execution of the filter graph.
<d1b2>
<louis> re: more than one waveform per frame, I think I'm happy to have them dropped but always want the most recent one displayed (modulo filter graph processing time)
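[editor's note: the drop-all-but-latest policy louis describes could be sketched as a single-slot mailbox; publishing overwrites any waveform the consumer hasn't taken yet. Names are illustrative, not the actual scopehal API.]

```cpp
#include <cstdint>
#include <mutex>
#include <optional>
#include <utility>
#include <vector>

// Hypothetical placeholder for one acquired waveform
struct Waveform
{
	std::vector<float> samples;
	uint64_t sequence;
};

// Keeps only the most recent waveform; anything unconsumed when a newer
// one arrives is silently dropped (fine for >1 waveform per video frame)
class LatestWaveformSlot
{
public:
	void Publish(Waveform w)
	{
		std::lock_guard<std::mutex> lock(m_mutex);
		m_pending = std::move(w);	// overwrites any unconsumed waveform
	}

	std::optional<Waveform> Take()
	{
		std::lock_guard<std::mutex> lock(m_mutex);
		std::optional<Waveform> out = std::move(m_pending);
		m_pending.reset();
		return out;
	}

private:
	std::mutex m_mutex;
	std::optional<Waveform> m_pending;
};
```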
<azonenberg>
and i'm not talking about displayed waveforms
<azonenberg>
so, there's a couple of different entities involved
<d1b2>
<louis> I'm not sure what the use case would be for the other possible behaviors there. Seems only useful if you're capturing a periodic signal
<azonenberg>
there's the abstract sampled data of the waveform off the scope
<azonenberg>
there's the abstract sampled data of each filter's output
<azonenberg>
there's the grayscale density map of each waveform
<d1b2>
<louis> similar consideration applies for cursor; i don't know why I would want the cursor to refer to anything but the displayed waveform I perceive myself to be placing it on
<azonenberg>
and there's the RGBA pixel values I display
<azonenberg>
This isn't a question of what's best from a UX perspective, it's practicalities
<azonenberg>
Let's say it's frame zero, the system is static
<azonenberg>
everything is in sync
<azonenberg>
I begin dragging a cursor, now it's frame 2. the cursor has moved slightly. overlay displays the value of the same waveform at that point
<azonenberg>
frame 3, cursor is still moving. scope triggers. New waveform is put into the queue. display still fully reflects the old waveform and is consistent
<azonenberg>
between frames 3 and 4: background thread for the filter graph sees a new frame is ready to process. it pops the queue into the active waveform set on the instrument
<azonenberg>
then begins evaluating the filter graph in the background to avoid bogging down the display
<azonenberg>
none of the compute shaders for rendering have been called because the filter graph is still running, say it's some complex protocol decode
<azonenberg>
frame 4: cursor is still moving, but the current waveform on the scope is now the new waveform
<azonenberg>
visualized plots show the old waveform still, but when we ask the scope for its current value we get the value of the new waveform
<azonenberg>
between frames 4 and 5: filter graph completes running, we kick off the compute shaders
<azonenberg>
frame 5: cursor moves more, visualized plot still shows old waveform, cursor is again displaying values from the new waveform
<azonenberg>
between 5 and 6: compute shaders for rendering finish
<azonenberg>
frame 6: we run the quick color mapping shader to convert the new waveform density map to RGBA, render that plus the cursors, display is once again consistent
<azonenberg>
this is a somewhat contrived example, *hopefully* most of these steps would take <1 frame and we don't get these lags
<azonenberg>
but it can definitely happen especially w/ deep memory
<azonenberg>
one possible way around this is for the GUI to keep a pointer to the *displayed* waveform around, even if it's no longer the current waveform being processed
<azonenberg>
the challenge there is, now we have to worry about object lifetime and avoid a use-after-free. also, to avoid excessive memory allocations most filters will reuse their output buffer for consecutive invocations
<azonenberg>
so the old data may simply no longer exist
<azonenberg>
We can, i suppose, cache the displayed cursor value during this transition period and stop pulling data from the hardware
<azonenberg>
but this then adds the problem that as you are dragging the cursor the value is no longer displayed / is displayed from a previous X coordinate
<azonenberg>
We could also consider prefetching values a few pixels left/right of the cursor during each frame we render, and if the waveform changes and the cursor moves slightly render the cached prefetch values
<azonenberg>
essentially a local copy of the window of the waveform right around the cursor assuming that we'll finish rendering the new waveform before we get off the edge of that window
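[editor's note: the prefetch idea above amounts to caching a small window of samples around the cursor each frame, so cursor readout can keep working from the cache while the new waveform renders. A minimal sketch, with an illustrative function name:]

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Copy a window of samples centered on the cursor position, clamped to the
// waveform bounds; the cached window keeps cursor readout working while the
// compute shaders are still rasterizing a newly arrived waveform
std::vector<float> PrefetchWindow(
	const std::vector<float>& samples, size_t cursor, size_t radius)
{
	size_t start = (cursor > radius) ? (cursor - radius) : 0;
	size_t stop = std::min(samples.size(), cursor + radius + 1);
	return std::vector<float>(samples.begin() + start, samples.begin() + stop);
}
```

The assumption baked in, as stated in the chat, is that the new waveform finishes rendering before the cursor drifts off the edge of the cached window.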
<azonenberg>
You see how this becomes a difficult problem?
<azonenberg>
i'm open to solutions but it's not trivial
<d1b2>
<louis> Yes, that certainly grows involved
<azonenberg>
also consider the case of spawning a cursor during this transition region. do we simply delay drawing it for the first time until everything syncs up?
<azonenberg>
hard numbers: it takes something like 30ms iirc to render a single 128M point waveform, fully zoomed out, on my 2080 Ti last time i benchmarked
<azonenberg>
So if you have two channels that's 60ms for the two compute shaders to execute
<d1b2>
<louis> My immediate thoughts are (1) at the least we should draw an icon or something to indicate this case if we think it's significant
<azonenberg>
And at 60 FPS you're at 16ms per frame
<azonenberg>
so it's very possible to have a delay of about 4 frames just from waveform showing up until the rendered pixels being ready
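[editor's note: sanity-checking the latency arithmetic above; the 30ms per 128M-point channel is azonenberg's 2080 Ti benchmark figure. Function name is illustrative.]

```cpp
#include <cmath>

// Whole video frames of latency for a given amount of GPU work:
// e.g. two channels at 30ms each = 60ms, at 60 FPS (~16.7ms/frame)
// that rounds up to 4 frames before rendered pixels are ready
int FramesOfLatency(double workMs, double framePeriodMs)
{
	return (int)std::ceil(workMs / framePeriodMs);
}
```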
<d1b2>
<louis> (2) am I missing something or isn't this solved with a (rather-complex) double-buffering scheme for all waveforms that are being displayed?
<azonenberg>
So, we already sort of double buffer for the grayscale vs color buffer
<azonenberg>
the compute shaders draw waveforms into the grayscale buffer, then the second shader tone maps that to RGBA
<azonenberg>
it's trivial to delay the second shader invocation until all of the rasterization is done
<d1b2>
<louis> where each filter has two associated output buffers that get swapped between by the filter graph code once a complete execution has finished, and the alternate buffer is rendered + cursor-value'd from
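[editor's note: the two-output-buffer scheme louis sketches here, with a swap after each complete filter graph execution, could look like this. Illustrative names only; as azonenberg notes next, the real cost is doubling waveform memory, not the code.]

```cpp
#include <vector>

// Double-buffered filter output: the filter graph writes the back buffer
// while the renderer and cursor readout consume the front; swap only once
// a complete graph execution has finished, so readers always see a
// consistent waveform
class DoubleBufferedOutput
{
public:
	std::vector<float>& Back()
	{ return m_buffers[m_back]; }

	const std::vector<float>& Front() const
	{ return m_buffers[1 - m_back]; }

	void Swap()
	{ m_back = 1 - m_back; }

private:
	std::vector<float> m_buffers[2];
	int m_back = 0;
};
```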
<azonenberg>
so we can update all views simultaneously once the waveforms and filter graph are in sync
<azonenberg>
So, double output buffers is possible
<azonenberg>
but that comes at a huge cost of doubling the ram used by the actual waveform data
<azonenberg>
video ram is a fairly scarce resource
<azonenberg>
256M points at fp32 is already a gigabyte per channel
<azonenberg>
with four channels that's 4G of data. double that for no good reason and you're at 8GB
<azonenberg>
then add in some more for filter channels and very soon the 11GB of RAM on my 2080 Ti is all gone
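[editor's note: the VRAM arithmetic above checks out: 256M points at 4 bytes/sample is ~1GB per channel, ~4GB for four channels, ~8GB double-buffered, against the 11GB on a 2080 Ti. Function name is illustrative.]

```cpp
#include <cstdint>

// Bytes of GPU sample storage:
// points per channel * bytes per sample * channel count * buffer copies
uint64_t WaveformBytes(
	uint64_t points, uint64_t bytesPerSample, uint64_t channels, uint64_t buffers)
{
	return points * bytesPerSample * channels * buffers;
}
```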
<azonenberg>
and you blew past the capabilities of laptop GPUs a long time ago
<d1b2>
<louis> Another thought is drastically downsample the back buffer or keep only that section that is viewport-visible (since that's all you could place a cursor into)
<azonenberg>
well, waveform rendering is only slow when you're drawing hundreds of megapoints all on screen at once
<azonenberg>
if you're zoomed in and only seeing 1e6 or fewer points per plot, rendering is a millisecond or less
<azonenberg>
and this lag is no longer worth worrying about
<d1b2>
<louis> Yeah, that's very thorny.
<azonenberg>
Honestly i think having the old text rendered for a couple of frames is no big deal
<azonenberg>
you can't read that fast anyway
<d1b2>
<louis> For the immediate-term if we're worried about it in ngscopeclient I think having an icon indicating the filter graph is refreshing + not displaying cursor value when out of sync with rendered waveform would be a major improvement. But TBH this is not an issue I run into a lot currently since I am usually not acquiring giant waveforms AND acquiring them continuously as fast as possible
<azonenberg>
Exactly
<azonenberg>
for offline processing it's a non issue
<d1b2>
<louis> I have to go AFK now, but I'll keep this issue in mind
<azonenberg>
this would only occur during the few frames immediately after a huge waveform arrives
<azonenberg>
if you are actively moving a cursor during that gap
<azonenberg>
The point is, as we move to a more asynchronous UI flow this is the kind of thing we have to be cognizant of
<azonenberg>
We may have to make architectural changes to deal with it
<azonenberg>
in glscopeclient we trigger rendering when events happen
<azonenberg>
in ngscopeclient we render at a fixed 60FPS for the most part
<azonenberg>
so we have to consider the scenario of what happens if we render midway between events when things are in an inconsistent state
<azonenberg>
also consider that with most scopes these giant waveforms take a long time to show up
<azonenberg>
we're not pushing a 256M point waveform at 20 WFM/s
<azonenberg>
That would be... 40 Gbps for a single channel of data updating at that rate, assuming 8 bit ADC codes
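[editor's note: checking the bandwidth figure: 256M points x 20 WFM/s x 8 bits is ~41 Gbps, consistent with the "40 Gbps" estimate above. Function name is illustrative.]

```cpp
#include <cstdint>

// Sustained acquisition bandwidth in bits per second for one channel:
// points per waveform * waveforms per second * bits per sample
uint64_t BitsPerSecond(
	uint64_t pointsPerWaveform, uint64_t waveformsPerSecond, uint64_t bitsPerSample)
{
	return pointsPerWaveform * waveformsPerSecond * bitsPerSample;
}
```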
<d1b2>
<ehntoo> Skimming through the scrollback, double+ buffering could be interesting on the Apple silicon machines where you have unified memory.
<azonenberg>
ehntoo: Yes. The other issue being that it would need refactoring of a ton of code throughout the whole project that assumes the concept of a "current waveform" being displayed
<azonenberg>
we'd have to explicitly pass around waveform indexes, manage lifetimes of buffers
<azonenberg>
as waveforms go into history we might reallocate them or make the same object refer to a different waveform (we already recycle waveform objects to avoid expensive gpu memory allocations)
<azonenberg>
i'm not saying it can't happen but it would likely be a refactoring of the same complexity, or more, than the data model revamp we just did
<azonenberg>
For the near term i plan to go with eventual consistency
<azonenberg>
where you might have a frame or two of things desyncing, but after you stop the trigger everything is guaranteed to refer to the same waveform eventually
<azonenberg>
i think with realistic workloads where we aren't using ultra deep waveforms updating at super high rates, this will not lead to problems. and will be much more user friendly than hanging the UI during filter graph updates
<azonenberg>
we can add a refresh-in-progress icon somewhere
<azonenberg>
We can look into this more but my gut feeling is that full double buffering would be an unacceptable overhead in memory usage *especially* for the large waveforms where filter graph + rendering won't complete in a single frame
<d1b2>
<ehntoo> Considering how poky the scopes I've tried with glscopeclient have been about fetching waveform data I think it's mostly academic for the moment and with our assortment of instruments you'll be the very first to find it a pain point if anyone does. ;-)
<azonenberg>
Lol
<azonenberg>
I mean, with any scope you would still get this if memory is deep enough
<azonenberg>
e.g. my LeCroy WaveRunner might take 10+ seconds to download a 128M point waveform
<azonenberg>
then you'd have say 3 frames of inconsistency between cursors and displayed waveforms
<azonenberg>
then 10 sec later you get a new waveform and 3 more inconsistent frames
<azonenberg>
But that is a small enough window i am not concerned about it impairing overall UX
<azonenberg>
the PicoScope could likely hit this more frequently. Once we get waveform rendering working in ngscopeclient i'll do some benchmarking and find out
bvernoux has quit [Read error: Connection reset by peer]
<azonenberg>
In other news, I finally found time to populate the new PT5 prototype
<azonenberg>
this is the same board rev I tested earlier, but with a different (lower cutoff frequency) filter which will hopefully improve flatness on the high end of the operating band
<azonenberg>
Board is still cooling off in the oven and I need to attach the tip resistor, ground, and mounting foot after that
<azonenberg>
probably won't have time to fully characterize until after work