azonenberg changed the topic of #scopehal to: libscopehal, libscopeprotocols, and glscopeclient development and testing | https://github.com/glscopeclient/scopehal-apps | Logs: https://libera.irclog.whitequark.org/scopehal
Degi has quit [Ping timeout: 252 seconds]
Degi has joined #scopehal
azonenberg has quit [Ping timeout: 244 seconds]
azonenberg has joined #scopehal
bvernoux has joined #scopehal
tiltmesenpai has quit [Quit: Ping timeout (120 seconds)]
tiltmesenpai has joined #scopehal
bvernoux1 has joined #scopehal
bvernoux has quit [Ping timeout: 244 seconds]
massi has joined #scopehal
<d1b2> <ehntoo> azonenberg - haven't had the time to finish up my GPD-3303S power supply PRs yet, got busy with some schedule-critical board bringup at work and haven't had the energy at the end of the day to do much extra programming. Should get back to it this weekend, though!
<azonenberg> ah ok no worries
<_whitenotifier-7> [scopehal-apps] azonenberg pushed 2 commits to master [+2/-0/±8] https://github.com/glscopeclient/scopehal-apps/compare/49a0c073399f...80e5d3dff366
<_whitenotifier-7> [scopehal-apps] azonenberg a028d31 - Initial skeleton of timebase properties dialog. Doesn't actually do anything useful yet.
<_whitenotifier-7> [scopehal-apps] azonenberg 80e5d3d - Finished initial work on timebase properties dialog
<_whitenotifier-7> [scopehal-apps] azonenberg pushed 1 commit to master [+0/-0/±6] https://github.com/glscopeclient/scopehal-apps/compare/80e5d3dff366...db1002473a50
<_whitenotifier-7> [scopehal-apps] azonenberg db10024 - Added color picker to channels properties dialog, various help text improvements to timebase properties dialog
bvernoux1 has quit [Quit: Leaving]
bvernoux has joined #scopehal
<_whitenotifier-7> [scopehal-apps] azonenberg pushed 1 commit to master [+0/-0/±5] https://github.com/glscopeclient/scopehal-apps/compare/db1002473a50...6dc13fab4c10
<_whitenotifier-7> [scopehal-apps] azonenberg 6dc13fa - Initial quick and dirty immediate mode waveform renderer for testing. Does not scale to more than a few kpoints.
<_whitenotifier-7> [scopehal-apps] azonenberg pushed 1 commit to master [+0/-0/±3] https://github.com/glscopeclient/scopehal-apps/compare/6dc13fab4c10...69da43551177
<_whitenotifier-7> [scopehal-apps] azonenberg 69da435 - Fixed channel properties dialog calling expensive instrument APIs every frame. Fixed MainWindow sometimes closing the wrong waveform group.
massi has quit [Remote host closed the connection]
<azonenberg> what do you know, we have a bug in the pico bridge that i never noticed until now lol
<_whitenotifier-7> [scopehal-pico-bridge] azonenberg pushed 1 commit to master [+0/-0/±1] https://github.com/glscopeclient/scopehal-pico-bridge/compare/c9a5817cfb08...35a54feb42b7
<_whitenotifier-7> [scopehal-pico-bridge] azonenberg 35a54fe - Fixed bug causing memory depth queries to fail when 10/12 bit ADC depth is enabled
<_whitenotifier-7> [scopehal-apps] azonenberg pushed 1 commit to master [+0/-0/±4] https://github.com/glscopeclient/scopehal-apps/compare/69da43551177...1b8e65d9aace
<_whitenotifier-7> [scopehal-apps] azonenberg 1b8e65d - Fixed a bunch of issues around waveform drag and drop
<d1b2> <azonenberg> So I wanted to make more progress on UI dev and decided not to wait for lain :p
<d1b2> <azonenberg> I threw together a quick and dirty immediate mode waveform renderer in ten lines of code
<d1b2> <azonenberg> No alpha blending, horribly slow since all of the geometry transformation is done on the CPU every frame, and crashes with a divide-by-zero error if you throw too many points at it or something (not sure on the exact conditions)
<d1b2> <azonenberg> but it gives me something to look at while debugging other stuff lol
<azonenberg> ehntoo: very interested if you can get this working on mac
<azonenberg> it should work out of the box with the demo scope. the display is going to be ugly because this is *not* by any means a final renderer
<d1b2> <ehntoo> ooooh. I'll give it a go.
<azonenberg> nor is it intended to be
<azonenberg> there are artifacts at some zoom levels and it crashes with divide-by-zero given long input
<azonenberg> this is "enough of a mockup i can develop the rest of the UI around it"
<azonenberg> to for example make sure cursors are acting as intended
<d1b2> <johnsel> looks good buddy
<d1b2> <johnsel> I wasn't sure if the blue default theme would be ugly as fuck, but like this it actually looks not bad at all
<d1b2> <ehntoo> Haven't had much luck. The demo scope gets me an impenetrable exception from "Unknown/Just-in-Time compiled code", and a quick attempt with my MSO5074 gave me the stack trace attached. Trying some different setup got me a SIGABRT from a stack check failure in RigolOscilloscope::AcquireData(), but that one's probably a separate issue.
<d1b2> <johnsel> m1?
<d1b2> <ehntoo> yeah. I'll try and debug a little further tonight. 🙂
<d1b2> <johnsel> I'd suspect the unified memory is the culprit here
<d1b2> <johnsel> probably some weird memory-referencing issue, because it points to the same memory from both the CPU and GPU contexts
<d1b2> <johnsel> but that's my fairly uneducated guess
<azonenberg> So right now we have the opposite problem
<azonenberg> we ignore unified memory and allocate two blocks, one CPU side and one GPU side
<azonenberg> and copy between them
<azonenberg> if they happen to be the same address space, the copy is faster
<azonenberg> but we haven't yet added logic to recognize that scenario and eliminate the redundant allocations
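A minimal sketch of what that detection could look like, assuming all we want is to know whether the Vulkan device exposes a memory type that is both device-local and host-visible; the function name is hypothetical, not existing scopehal code:

```cpp
#include <vulkan/vulkan.h>

// Hypothetical check: does this physical device have at least one memory type
// that is both DEVICE_LOCAL and HOST_VISIBLE? If so, a single allocation could
// in principle serve as both the "CPU side" and "GPU side" buffer.
bool HasUnifiedMemoryType(VkPhysicalDevice phys)
{
	VkPhysicalDeviceMemoryProperties props;
	vkGetPhysicalDeviceMemoryProperties(phys, &props);

	const VkMemoryPropertyFlags wanted =
		VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;

	for(uint32_t i = 0; i < props.memoryTypeCount; i++)
	{
		if((props.memoryTypes[i].propertyFlags & wanted) == wanted)
			return true;
	}
	return false;
}
```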
<azonenberg> ehntoo: your crash seems to be the divide by zero i saw before
<azonenberg> i believe it's caused by creating a segment in a polyline that rounds to zero pixels in length
<azonenberg> it's trying to find a normal or something and chokes
<azonenberg> This is not something i intend to put any time into debugging because the new renderer should be landing any day now
<d1b2> <johnsel> EXC_BAD_ACCESS is an exception raised as a result of accessing bad memory
<d1b2> <johnsel> doesn't seem like a divide by 0 to me
<azonenberg> it's doing 1.0 / sqrt(x)
<azonenberg> if x == 0 this will die
<azonenberg> i cant see why that would display as an access violation
<d1b2> <johnsel> exactly
<d1b2> <johnsel> that's my point
<azonenberg> let me see what the crash does for me on linux...
<azonenberg> (with asan)
<d1b2> <ehntoo> in my (admittedly somewhat limited experience), EXC_BAD_ACCESS can be a little misleading on macOS. I mostly do embedded work, but on the rare occasion I've been writing stuff for macs I've seen it crop up in unusual places.
<d1b2> <johnsel> When that block of memory is no longer mapped for your application or, put differently, that block of memory isn't used for what you think it's used for, it's no longer possible to access that chunk of memory. When this happens, the kernel sends an exception (EXC), indicating that your application cannot access that block of memory (BAD ACCESS).
<d1b2> <johnsel> as per some random tutorial on it
<d1b2> <johnsel> it's just a general "this memory can't be accessed" error, which in the context of accessing x (which is the only place it can come from) seems to indicate memory corruption from something under the hood (perhaps Vulkan optimizes something in a way it shouldn't)
<d1b2> <johnsel> it'd be very helpful to have a linux vm on m1 too when I think about it
<azonenberg> hmmmm
<d1b2> <ehntoo> I have several, but there's no vulkan passthrough. Best you can do for GPU acceleration at this moment is a pretty barebones OpenGL context last I was looking.
<azonenberg> ok so h/o this might not be a divide by zero
<d1b2> <johnsel> yep good point
<azonenberg> asan shows stack overflow
<d1b2> <johnsel> see, I'm not that dumb
<d1b2> <johnsel> I can read 🙂
<azonenberg> ... oh
<azonenberg> lol
<d1b2> <david.rysk> I can't recall if violating W^X will cause EXC_BAD_ACCESS on macOS
<azonenberg> ok i see whats going on
<azonenberg> so there's two different problems
<azonenberg> one is the divide by zero i pointed out before
<azonenberg> here's the other
<azonenberg> ImVec2* temp_normals = (ImVec2*)alloca(points_count * ((use_texture || !thick_line) ? 3 : 5) * sizeof(ImVec2)); //-V630
<azonenberg> it's trying to alloca() and when you give it enough points in a polyline it runs out of stack space
<azonenberg> i think i can fix both by drawing discrete lines vs a polyline
<d1b2> <ehntoo> that would certainly do it, lol
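A rough sketch of the "discrete lines instead of a polyline" idea, which azonenberg expects to address both the stack blowup and the divide-by-zero (and is the approach the later WaveformArea commit takes); the function and parameter names are placeholders, not actual ngscopeclient code, and the caller is assumed to already have the transformed screen-space points:

```cpp
#include "imgui.h"

// Draw the waveform as N-1 independent segments rather than one AddPolyline()
// call, so ImGui never alloca()'s a temp_normals array sized by the full
// point count.
void DrawWaveformAsLines(ImDrawList* draw, const ImVec2* points, int count, ImU32 color)
{
	for(int i = 0; i + 1 < count; i++)
		draw->AddLine(points[i], points[i + 1], color, 1.0f);
}
```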
<d1b2> <louis> As a general design question, is there a reason that we don't have a way to get access to the samples buffer of a waveform w/o dynamic_casting it to a concrete type?
<d1b2> <louis> There are filters that act only pointwise and so don't care if it's sparse or dense in theory. E.g. subtract, clip, window
<azonenberg> louis: Yes, there is a reason. The type of the sample data is not known a priori
<azonenberg> it might be boolean, analog, or protocol type
<d1b2> <louis> Yes, but with a ->GetSampleBuffer() and ->GetSampleWidth() you can still act on them if all you're doing is moving them around
<d1b2> <louis> I guess I don't know if that's actually ever the case except in window and offset filters
<d1b2> <louis> Anyway, viz.: uint8_t* out = GetAlignedSamplesPointer(cap); uint8_t* a = GetAlignedSamplesPointer(in); memcpy(&out[0], &a[start_sample * in->GetSampleSize()], (end_sample - start_sample) * in->GetSampleSize());
<azonenberg> so the problem there is also, you have cpu and gpu buffers in AcceleratorBuffer
<azonenberg> and AcceleratorBuffer is itself a templated class
<azonenberg> you can't have a base AcceleratorBuffer without a type, at least in the current object model
<azonenberg> It might be possible to get what you're asking for with a giant mess of multiple inheritance
<d1b2> <louis> making it available to the cpu is already solved with the pure virtual PrepareForCpuAccess on WaveformBase*
<azonenberg> but more realistically, you can just create an AcceleratorBuffer<float>& samples = udata ? udata->m_samples : sdata->m_samples
<azonenberg> the dynamic_cast is still needed once at the top level but i don't see that being a problem since it's outside the inner loop
<d1b2> <louis> (This is actual code I wrote for the window filter after adding a generic ->GetSamplesBuffer() and ->GetSampleSize().)
<azonenberg> and filters that ignore sample timestamps are comparatively rare so i don't see the need to optimize for it
<d1b2> <louis> But I'm wondering if it's actually going to be used anywhere other than in a window and offset filter
<azonenberg> yeah i think not
<azonenberg> maybe a few vertical measurements
<azonenberg> what might make more sense is to add this as a helper method to the Filter class
<azonenberg> with a precondition that the input is a uniform or sparse analog waveform
<azonenberg> But it's still doing the cast under the hood, you just don't have to put it in each derived filter
<d1b2> <louis> this = static float* GetSamplesPointer(WaveformBase*)?
<azonenberg> i would still prefer to return an AcceleratorBuffer<float>& or something
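Something along these lines, presumably: a hypothetical Filter helper (the name and error handling are illustrative, not actual scopehal API) that keeps the dynamic_cast in one place and hands back the underlying sample buffer by reference:

```cpp
#include <cstdlib>
// UniformAnalogWaveform, SparseAnalogWaveform and AcceleratorBuffer come from
// the scopehal headers; member name m_samples is as used in the discussion above.

// Hypothetical helper on Filter: precondition is that wfm is a uniform or
// sparse *analog* waveform.
static AcceleratorBuffer<float>& GetAnalogSamples(WaveformBase* wfm)
{
	if(auto u = dynamic_cast<UniformAnalogWaveform*>(wfm))
		return u->m_samples;
	auto s = dynamic_cast<SparseAnalogWaveform*>(wfm);
	if(!s)
		abort();	// precondition violated: not an analog waveform
	return s->m_samples;
}
```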
<_whitenotifier-7> [scopehal] azonenberg pushed 1 commit to master [+0/-0/±2] https://github.com/glscopeclient/scopehal/compare/d4102829bd60...8318e5904267
<_whitenotifier-7> [scopehal] azonenberg 8318e59 - PicoOscilloscope: added CanInterleave()
<_whitenotifier-7> [scopehal-apps] azonenberg pushed 1 commit to master [+0/-0/±2] https://github.com/glscopeclient/scopehal-apps/compare/1b8e65d9aace...dc58e7777b82
<_whitenotifier-7> [scopehal-apps] azonenberg dc58e77 - WaveformArea: use lines vs polylines for rendering to avoid imgui stack size limitation (https://github.com/ocornut/imgui/issues/5704)
<d1b2> <louis> I think generally the udata ? udata->m_samples : sdata->m_samples and related if (auto uaw = dynamic_cast<UniformAnalogWaveform*>(in)) DoSomething(uaw); else if (auto saw = dynamic_cast<SparseAnalogWaveform*>(in)) DoSomething(saw); seems like line noise to me when the functionality could be pushed into a pure virtual on WaveformBase*, but I don't have a huge C++ background so I don't know if that's a bad pattern.
<azonenberg> So the problem is specifically that WaveformBase is not a template on the data type
<azonenberg> we'd need to add a derived class WaveformBase<T>
<azonenberg> then we run into the problem that SparseWaveformBase and UniformWaveformBase are *not* templates and would somehow have to derive from it
<d1b2> <louis> Yes, that sounds like a cluster
<azonenberg> unless we wanted to have completely separate inheritance chains for the samples and the metadata
<azonenberg> it could be done but i dont think the overhead and engineering time justifies it
<d1b2> <louis> For taking a window though you only need a samples buffer pointer and size for doing that memcpy.
<d1b2> <louis> But I think you're right that that's not a common use-case
<azonenberg> yes. and again, ideally you dont want the windowing to be CPU based memcpy only
<azonenberg> you'd want to window the GPU memory with a GPU-to-GPU copy and the CPU memory with a memcpy
<d1b2> <louis> Yes
<azonenberg> and not have to cross pcie except for the pointers
<azonenberg> That's an optimization for the future, but we should architect with that in mind
<azonenberg> not the quick and dirty non-accelerated initial implementation
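For reference, the GPU half of that windowing would presumably boil down to a plain buffer-to-buffer copy recorded into a command buffer, something like this sketch (the handles and byte offsets are placeholders, not the actual AcceleratorBuffer internals):

```cpp
#include <vulkan/vulkan.h>

// Sketch: copy a window of samples from one device-local buffer to another
// without touching host memory or crossing PCIe. Offsets/sizes are in bytes.
void RecordWindowCopy(
	VkCommandBuffer cmd, VkBuffer src, VkBuffer dst,
	VkDeviceSize srcOffsetBytes, VkDeviceSize lengthBytes)
{
	VkBufferCopy region = {};
	region.srcOffset = srcOffsetBytes;
	region.dstOffset = 0;
	region.size = lengthBytes;
	vkCmdCopyBuffer(cmd, src, dst, 1, &region);
}
```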
<azonenberg> ehntoo: crash should be fixed btw
<azonenberg> its going to be very slow with more than a few thousand samples
<azonenberg> consider this a debug visualization not a production renderer
<bvernoux> Amazing, I have built the latest ngscopeclient
<bvernoux> it is so fast
<bvernoux> even in full screen
<bvernoux> in demo
<bvernoux> with glscopeclient it was something like 16fps with laggy menu on my PC
<bvernoux> here it is rock stable at 20fps (limited by the demo waveform generator I think)
<bvernoux> also the GUI has a much faster response
<bvernoux> like when pausing ...
<bvernoux> even with 5 pending waveforms it is amazingly fast and fluid
<azonenberg> Yes the demo is cpu bound and not super fast
<azonenberg> i will need to optimize/rewrite it using a faster PRNG etc to hit higher waveform rates to be a proper stress test
<azonenberg> with high waveform depths the current proof-of-concept renderer will be very slow
<azonenberg> we're still waiting on lain to finish the proper accelerated shader
<bvernoux> the demo is already a lot faster than what we have with glscopeclient
<bvernoux> it is ultra smooth on my PC
<bvernoux> and full screen works perfectly ;)
<bvernoux> My PC GFX card makes a lot less noise too
<azonenberg> Well, like i said this is still with the temporary renderer
<azonenberg> the new renderer will be more GPU heavy and use almost no CPU
<bvernoux> yes but it is already a big success
<bvernoux> I'm impatient to see it with filters ;)
<bvernoux> Do you need something specific to add filters, or do you just need to write the menu and link everything?
<azonenberg> I need to write the menu, hook up that graph editor widget you found and try it out
<azonenberg> and do the rendering for all of the different types of waveform
<bvernoux> ha yes the rendering was different on glscopeclient
<bvernoux> anyway ngscopeclient has already taken a big step forward and it is visually very impressive even on an old PC
<azonenberg> Well hopefully the real intensity graded renderer isnt much slower
<bvernoux> I'm pretty sure it can even work on my integrated HD4000 GFX card ;)
<azonenberg> long term, i want to build multiple renderers
<azonenberg> optimized for speed vs graphical quality on different generations of GPU
<bvernoux> ha yes good idea
<azonenberg> for example, one with no intensity grading for lower end hardware
<azonenberg> and another with full antialiasing for when you want things to be pretty
<bvernoux> what is not clear is how to group waveforms back together
<bvernoux> with drag & drop
<bvernoux> it is very clear how to split into different waveform groups, but I do not know how to group some waveforms back together
<bvernoux> I imagine it is WIP
<bvernoux> we can even superimpose waveforms, woo
<azonenberg> Re-grouping is not currently implemented
<azonenberg> you can split, and you can put waveforms onto a single plot
<azonenberg> you cannot currently reorder plots, or create a new plot within a group
<azonenberg> That's still pending
<azonenberg> It will happen, i just have to work out the detailed UX
<azonenberg> does the splitting UI work well?
<azonenberg> I tried to mirror the imgui docking workflow as much as i could
<bvernoux> the splitting works fine and it is very smooth and logical
<azonenberg> despite the fact that waveforms are not actually imgui windows, i wanted them to act as much like it as possible
<bvernoux> even superimposing a waveform on another waveform, very nice
<azonenberg> Yep
<bvernoux> it was not possible to do that on glscopeclient
<bvernoux> and it is very interesting
<azonenberg> Correct. glscopeclient had the concept of a primary waveform in an area
<azonenberg> that had to be analog, and there could only be one
<azonenberg> that no longer exists
<azonenberg> one thing i still have to account for is when you move waveforms to an area with a significantly different vertical scale
<azonenberg> there needs to be a warning or something
<azonenberg> because you might suddenly change v/div by a huge amount
<azonenberg> and that could damage instrument inputs
<azonenberg> (all channels in one plot have the same vertical scale)
<d1b2> <ehntoo> as you were saying, it's not going to win any speed competitions, but it does seem to work. 😄
<azonenberg> Awesome :D
<azonenberg> Very much looking forward to getting the final shader deployed
<d1b2> <ehntoo> small bug - I can't seem to double-click and bring up the timebase or vertical properties dialogs once the waveform is displayed. I wonder if it's related to a lower framerate and getting multiple events per frame or something
<azonenberg> hmmm interesting. i'll look into that. imgui should queue up events across frames to ensure that things like click and release of a button are not perceived to happen simultaneously
<azonenberg> even at lower fps
<azonenberg> but that is an optional setting and idk if the default on mac is different or something?
<bvernoux> yes events are queued
<azonenberg> anyway, something to keep an eye out for and see if it still happens in the future
<bvernoux> so you do not lose anything like in the old imgui
<bvernoux> but IIRC the queue has a limit
<azonenberg> anyway the gui is still very much in flux and i expect major changes to persist for the next few weeks
<d1b2> <ehntoo> Was playing with it a little more and my Mac crashed hard, so that's fun. 😅
<d1b2> <ehntoo> I'm going to take that as a sign I should install the OS updates that I've been ignoring
<azonenberg> i mean low level GPU code always has potential to trigger bugs, but i make heavy use of the vulkan validation layers which should catch gross issues
<azonenberg> So i expect stability to be much improved vs glscopeclient, which we have no way to validate
<azonenberg> and i know glscopeclient makes occasional invalid opengl calls because i get GL_INVALID_OPERATION every so often
<azonenberg> and i have no idea what i am doing to trigger it
<azonenberg> because GL doesnt have the kind of error feedback vulkan does
<bvernoux> yes GL is an old crap ;)
<bvernoux> Vulkan is pretty amazing with all the detailed API checks
<bvernoux> what is funny is that Vulkan is supported on Linux on my HD4000, but it is not supported on Windows 10
<azonenberg> yeah opencl at least has oclgrind
<azonenberg> GL has ~nothing lol
<bvernoux> Vulkan is also clearly at a lower level than OpenGL
<bvernoux> Lots of things are pretty hard to understand in Vulkan, but they are very well documented
<azonenberg> Coming from the GPU compute world with CUDA etc i'm quite comfortable
<azonenberg> i could see it being harder for someone used to the GL 2.x API especially
<bvernoux> On my side I have no knowledge of anything 3D, which explains why I do not understand all the concepts behind Vulkan (or even OpenGL)
<bvernoux> I was doing CUDA compute, but just running compiled C code to get something fast (without branches...) on the GPU; it was clearly not related to any GPU/3D things, it was a pure C algorithm ;)
<bvernoux> it was fun anyway, as I was pushing some data into texture memory to speed up some stuff ;)
<bvernoux> and it was not to display anything of course, just to do pure computation not related to GFX/3D
bvernoux has quit [Read error: Connection reset by peer]
<d1b2> <ehntoo> I think perhaps draining the socket on connection may be needed as a robustness improvement. Looks like my MSO5074 kept on spewing some previous waveform data at a new instance of ngscopeclient after a previous crash, and scopehal was not amused by it as a response to *IDN?
<azonenberg> it's a nice idea, but i'm not sure how to do that reliably
<azonenberg> i have seen some other scopes do that
<azonenberg> the tricky bit is, how long do you wait to be sure that it's done spamming you
<azonenberg> this will directly slow down startup
<d1b2> <ehntoo> yeah... I'll do some experimentation on it along with some other Rigol 5000 series robustness work once I've wrapped up my power supply driver, see if I can come up with something reasonable.
<azonenberg> (that said, we should also not crash on a bad IDN response, if we see garbage we should gracefully terminate or something)
<azonenberg> or ideally disconnect cleanly
<azonenberg> that will take more work to do though
<azonenberg> in general i want to work on better robustness and handle things like the scope vanishing midway through a session gracefully
<azonenberg> (as well as allowing you to seamlessly go on/offline)
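As a rough illustration of the drain-on-connect idea ehntoo raised above, a plain POSIX-socket sketch with a short idle timeout (not scopehal code; the startup-delay tradeoff azonenberg mentions applies to whatever timeout is chosen):

```cpp
#include <sys/select.h>
#include <sys/socket.h>

// Read and discard anything already sitting in the socket buffer, giving up
// once nothing new arrives within 'idleMs' milliseconds.
void DrainStaleData(int sock, int idleMs)
{
	char buf[4096];
	while(true)
	{
		fd_set readfds;
		FD_ZERO(&readfds);
		FD_SET(sock, &readfds);

		timeval tv;
		tv.tv_sec = idleMs / 1000;
		tv.tv_usec = (idleMs % 1000) * 1000;

		// Wait briefly for data; if nothing shows up, assume we're in sync
		if(select(sock + 1, &readfds, nullptr, nullptr, &tv) <= 0)
			return;
		if(recv(sock, buf, sizeof(buf), 0) <= 0)
			return;
	}
}
```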
<d1b2> <zyp> I'd argue that if you get garbage in response to IDN, you should discard it and receive again
<azonenberg> zyp: soooo
<azonenberg> that has its own set of problems
<d1b2> <zyp> probably; I'm not super familiar with SCPI, how are responses framed?
<d1b2> <zyp> are they delimited at all?
<azonenberg> its a line based ascii protocol but there are no tags to match request to response (unless there is something like that provided higher up in a framing protocol like VICP or LXI... raw SCPI does not)
<azonenberg> Which means if you send *IDN? five times
<azonenberg> then you get a valid reply
<azonenberg> you might have four more ID strings coming your way
<azonenberg> Or the device might have been busy and ignored the first four, then replied to the fifth
<azonenberg> or anything in between
<azonenberg> there's a lot of ways things can go sideways when things desync
<azonenberg> *especially* if the device fails to correctly clear state when a socket connects and reconnects
<azonenberg> Which is a surprisingly common failure mode
<azonenberg> i.e. if you disconnect midway through downloading a waveform then reconnect you might get the other half of the data
<d1b2> <zyp> indeed, but a mitigation that doesn't handle all potential states is still better than nothing
<d1b2> <zyp> I'd argue that in most situations where there's still data in the pipe, that's likely not IDN responses, so if you send an IDN and then discard received data until you get a valid IDN response, you've handled most of the issue without introducing a connection delay, since you can still receive a valid IDN response immediately
<azonenberg> yes. the issue is that some devices (Tek MSO4/5/6 off the top of my head, as well as many Siglent) will completely ignore any command you send them if they're "busy"
<azonenberg> (and there is often no way to determine the "busy" state externally)
<azonenberg> the scpi official way is to send *OPC? but that doesnt always work
<azonenberg> So if you send IDN and discard data until you get a reply, you might wait forever
<azonenberg> the other issue is that this requires you be able to predict what a valid IDN reply is
<d1b2> <zyp> I figured you'd have a timeout as well
<azonenberg> i mean you can use some heuristics, like it has to be no more than X bytes, have a bunch of comma separated fields, and consist entirely of printable ascii characters
<azonenberg> but i could see that false triggering on a few MB of binary waveform data fuzzing it
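For illustration, the heuristic described above might look something like this sketch (the length bound and field count are arbitrary choices, not anything scopehal actually implements):

```cpp
#include <cctype>
#include <string>

// Sketch: does 'reply' plausibly look like a *IDN? response?
// Checks: bounded length, printable ASCII only, and at least three commas
// (i.e. four comma-separated fields).
bool LooksLikeIdnReply(const std::string& reply)
{
	if(reply.empty() || reply.size() > 256)
		return false;

	size_t commas = 0;
	for(unsigned char c : reply)
	{
		if(c == ',')
			commas++;
		else if(!isprint(c) && c != '\r' && c != '\n')
			return false;
	}
	return commas >= 3;
}
```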
<d1b2> <zyp> are you using this to autodetect which kind of device you're connected to?
<d1b2> <zyp> I figured each supported device would have an expected pattern
<azonenberg> We do not autodetect driver based on IDN string currently
<azonenberg> Most if not all drivers currently do not validate the vendor name, there is actually a bug filed for iirc a crash when using the R&S driver to connect to a rigol scope or vice versa
<azonenberg> Improving robustness of the detection so it will complain when you connect to a "wrong" instrument would be good to have
<d1b2> <zyp> so if you know what device you're connecting to, you can know how the IDN response should look
<azonenberg> assuming you have a sufficiently large corpus of hardware or pcaps
<azonenberg> yes
<azonenberg> you can get at least a rough idea
<d1b2> <zyp> and that might in the future also tie into autodetecting it
<d1b2> <zyp> until you run into overlapping patterns 🙂
<azonenberg> the biggest challenge would be in retrofitting it
<azonenberg> unless you have a bunch of scopes by, say, siglent to test against
<azonenberg> you dont necessarily know that you aren't alienating older firmware or something
<d1b2> <zyp> true enough
<azonenberg> So any changes that result in the driver rejecting a connection need to be made very carefully
<d1b2> <zyp> but you could treat the missing IDN response/timeout as a warning condition, not fatal error
<azonenberg> adding new features that you don't use unless it's X model is much safer and lower risk
<azonenberg> and yes you could
<azonenberg> In general this is all "here be dragons" territory
<azonenberg> we had somebody in here a while back where things broke with a recent siglent firmware update
<d1b2> <zyp> what's IDN currently used for if you don't know how the response is gonna look?
<azonenberg> i don't know what broke, the pcaps looked fine to me, but he said he stopped seeing waveforms
<azonenberg> The response is defined to be comma separated vendor, model, and i think firmware and hardware revisions
<azonenberg> the content of each field is not specified in the standard
<azonenberg> each vendor can format them however they want in a given product
<azonenberg> I store them in members that are used for display
<azonenberg> and additionally, some but not all drivers compare the model against known lists of devices to implement quirks or tables of valid memory depths/sample rates etc
<azonenberg> these are generally not exact matches, they're things like "model starts with SDS2" etc
<azonenberg> So we cannot trivially enumerate all legal model names/numbers this way
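For reference, the parsing and quirk matching described above usually reduces to something like this sketch (the field layout is the vendor/model/revision split azonenberg describes; function names are illustrative, not actual driver code):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split a comma-separated *IDN? response (vendor, model, and the firmware /
// hardware revision fields mentioned above) into its fields.
std::vector<std::string> SplitIdn(const std::string& idn)
{
	std::vector<std::string> fields;
	std::stringstream ss(idn);
	std::string field;
	while(std::getline(ss, field, ','))
		fields.push_back(field);
	return fields;
}

// Prefix-style quirk check in the spirit of "model starts with SDS2"
bool ModelStartsWith(const std::string& model, const std::string& prefix)
{
	return model.rfind(prefix, 0) == 0;
}
```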