azonenberg changed the topic of #scopehal to: libscopehal, libscopeprotocols, and glscopeclient development and testing | https://github.com/glscopeclient/scopehal-apps | Logs: https://libera.irclog.whitequark.org/scopehal
<_whitenotifier> [scopehal] azonenberg pushed 1 commit to master [+0/-0/±1] https://github.com/glscopeclient/scopehal/compare/a77bcd3bb517...4dbc318e8913
<_whitenotifier> [scopehal] azonenberg 4dbc318 - LeCroyOscilloscope: fixed bug where UART trigger with zero length pattern would hang waiting for a reply from the scope that was not going to come
<_whitenotifier> [scopehal-apps] azonenberg pushed 1 commit to master [+0/-0/±6] https://github.com/glscopeclient/scopehal-apps/compare/dc8a3788377f...5a30762a3f0e
<_whitenotifier> [scopehal-apps] azonenberg 5a30762 - Added support for configuring trigger parameters. See #513.
<_whitenotifier> [scopehal] tech2077 opened pull request #740: Fix SampleOnRisingEdgesBase to use SampleOnRisingEdges - https://github.com/glscopeclient/scopehal/pull/740
<_whitenotifier> [scopehal] azonenberg closed pull request #740: Fix SampleOnRisingEdgesBase to use SampleOnRisingEdges - https://github.com/glscopeclient/scopehal/pull/740
<_whitenotifier> [scopehal] azonenberg pushed 2 commits to master [+0/-0/±2] https://github.com/glscopeclient/scopehal/compare/4dbc318e8913...91aee034c159
<_whitenotifier> [scopehal] tech2077 f454a95 - Fix SampleOnRisingEdgesBase to use SampleOnRisingEdges
<_whitenotifier> [scopehal] azonenberg 91aee03 - Merge pull request #740 from tech2077/rising-edge-fix Fix SampleOnRisingEdgesBase to use SampleOnRisingEdges
Degi_ has joined #scopehal
Degi has quit [Ping timeout: 256 seconds]
Degi_ is now known as Degi
chris_99 has joined #scopehal
<chris_99> Hey, i'm wondering if anyone has any suggestions for supported devices, that might be good for side channel analysis, ideally something that supports streaming, either via usb3/4, or ethernet. I've got a rigol scope at the moment, but that seems to have only a v. small amount of memory, even when using triggering and deep memory
<azonenberg> chris_99: PicoScope 3000 or 6000 series would be an excellent choice for that
<azonenberg> they have very deep memory and high performance streaming
<azonenberg> the 6000E series can go up to gigapoints of memory (on a single channel I think it can actually exceed the 1 Gpoint architectural limit of scopehal right now)
<chris_99> cool, will have a look at those then, ta
<azonenberg> and in live tests we've pushed ~2.5 Gbps of triggered waveform data
<azonenberg> Continuous gap-free streaming is supported by Pico's API but not in our software yet
<azonenberg> but you can do normal triggered waveform acquisition at pretty high rates
<chris_99> ah, so at the moment it would be a 'batch' type process?
<chris_99> capturing data from the memory of the scope , rather than screaming
<azonenberg> Yes, but the download is pretty fast so there's not a huge gap between waveforms as long as you aren't maxing out bandwidth somewhere in the pipe and backing up
<chris_99> *streaming even
<chris_99> gotcha
<azonenberg> If you are trying to maximize performance, our experimental next-gen GUI (ngscopeclient) may be worth trying out. It has significantly less overhead than glscopeclient and works with the same backend
<azonenberg> but currently lacks file load/save functionality so not suitable for general use (it's still an incomplete WIP)
<chris_99> gotcha, cool. Are all decoders written in C++ out of interest, or do you use a scripting language too?
<azonenberg> We currently top out at 60 WFM/s (or whatever your display update rate is) so to move as much data as possible you'll want to adjust memory depth to not go over that
<azonenberg> the reason being that we run waveform acquisition into a FIFO that's popped by the rendering thread which is locked to vsync and (currently) only pops one waveform at a time
<azonenberg> we plan to extend that with some alpha blending so you can have waveforms exceed the vsync rate but the picoscope and thunderscope are so far the only instruments that can push data fast enough for this to be a problem
<azonenberg> so it hasn't been our top dev priority
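The handoff described above (acquisition thread pushing into a FIFO, vsync-locked render thread popping one waveform per frame) can be sketched roughly like this. This is a minimal illustration with invented names, not the actual scopehal code:

```cpp
// Sketch of an acquisition->render waveform FIFO (hypothetical, not the
// real scopehal implementation). The capture thread calls Push(); the
// render thread, locked to vsync, calls PopOne() once per frame.
#include <deque>
#include <mutex>
#include <optional>
#include <vector>

struct Waveform
{
    std::vector<float> samples;
};

class WaveformFifo
{
public:
    // Called from the acquisition thread after each triggered capture
    void Push(Waveform w)
    {
        std::lock_guard<std::mutex> lk(m_mutex);
        m_queue.push_back(std::move(w));
    }

    // Called from the render thread: pops at most ONE waveform per frame,
    // which is why throughput currently tops out at the display rate
    std::optional<Waveform> PopOne()
    {
        std::lock_guard<std::mutex> lk(m_mutex);
        if (m_queue.empty())
            return std::nullopt;
        Waveform w = std::move(m_queue.front());
        m_queue.pop_front();
        return w;
    }

private:
    std::mutex m_mutex;
    std::deque<Waveform> m_queue;
};
```

With one pop per vsync, anything pushed faster than the display rate simply queues up, which is the motivation for the alpha-blended multi-waveform rendering mentioned above.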
<chris_99> you're using opengl iirc for display stuff?
<azonenberg> No
<azonenberg> We've transitioned to Vulkan
<chris_99> oh
<azonenberg> there is a small amount of opengl in glscopeclient that was hard to remove and is one of the reasons we're phasing it out as soon as ngscopeclient is finished (they'll share an upward compatible file format so all work done in glscopeclient can move over)
<azonenberg> ngscopeclient is pure vulkan
<azonenberg> anyway, as far as decodes and filter blocks go, there is no scripting language. Not really practical with our emphasis on maximum performance
<azonenberg> they're about 98% C++ with the remainder being GLSL for GPU accelerated implementations of a few key blocks like FIR filtering
<azonenberg> FFT, de-embed, channel emulation, equalizers, etc are all GPU accelerated number crunching and we want that list to grow
<azonenberg> We also use AVX vector intrinsics for acceleration in some cases but a lot of that is being replaced with GPU implementations
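For context, the inner loop being accelerated in a block like FIR filtering is a simple convolution. A scalar C++ sketch (the real blocks run equivalents of this on the GPU via GLSL/Vulkan compute, or vectorized with AVX):

```cpp
// Scalar reference implementation of FIR filtering (illustration only).
// out[i] = sum over j of in[i+j] * taps[j], i.e. a sliding dot product.
#include <cstddef>
#include <vector>

std::vector<float> FirFilter(const std::vector<float>& in,
                             const std::vector<float>& taps)
{
    if (in.size() < taps.size())
        return {};

    // "Valid" convolution: only positions where the taps fully overlap
    std::vector<float> out(in.size() - taps.size() + 1);
    for (size_t i = 0; i < out.size(); i++)
    {
        float acc = 0;
        for (size_t j = 0; j < taps.size(); j++)
            acc += in[i + j] * taps[j];
        out[i] = acc;
    }
    return out;
}
```

Every output sample is independent, which is what makes this kind of block such a good fit for GPU parallelism.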
<chris_99> oh very fancy, didn't know it would support GPU acceleration for decoding type stuff
<chris_99> re. GPU acceleration is that using CUDA/OpenCL?
<azonenberg> No it's vulkan compute shaders
<azonenberg> So far it's mostly for numerically intensive analog processing vs actual protocol work
<azonenberg> because that tends to be more parallel
<azonenberg> but e.g. the eye pattern filter is one i'd like to GPU-ify eventually
<azonenberg> We removed all of the other GPU APIs and went pure vulkan for everything since that was the best way to support apple platforms
<azonenberg> (through moltenvk which translates it to Metal)
<azonenberg> this wasn't a priority for me but some of the folks funding development had it as a priority internally and wanted to throw money at it
<azonenberg> so we found someone who wanted to pick up the contract and made it happen
<chris_99> nice
<azonenberg> It happened at the right time because i wanted to leave GTK for performance reasons at the same time
<azonenberg> so it was a good excuse for a major frontend rewrite/refactoring
<monochroma> chris_99: tl;dr OpenGL/OpenCL are getting less and less useable/industry support
<azonenberg> and i was also moving away from our OpenCL FFT library that was buggy and unreliable
<azonenberg> Yeah. CUDA is great if you stay in nvidia land
<azonenberg> but we don't have the engineering resources to maintain multiple acceleration stacks
<azonenberg> and right now vulkan is the most universally supported
<azonenberg> my one complaint is that the maximum single memory allocation possible is 4GB with current APIs
<azonenberg> which means that you cannot have an analog waveform more than 1 billion 32-bit floating point samples in size (digital samples are 1 byte each so digital waveforms can be up to 4G points)
<chris_99> is that due to 32 bit pointers or something?
<azonenberg> 32 bit size field in the allocator i think. you can allocate >4GB of GPU memory
<azonenberg> it just has to be in multiple chunks
<azonenberg> i.e. they may not be consecutive virtual addresses
<azonenberg> This can in principle be worked around by chaining multiple buffers in some kind of linked list but that means every access to sample memory you now have to figure out what block it's in first, which adds overhead
<azonenberg> so far it hasn't been a problem, a gigapoint is a LOT of sample data
<azonenberg> most scopes don't even let you go that high
<azonenberg> i know exactly two (picoscope 6000e and lecroy wavepro HD) that do
<azonenberg> And we're holding off because i fully expect some extension to come around that allows >4GB allocations eventually
<azonenberg> and that will in all probability be before we need it urgently :p
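The chained-buffer workaround described above (splitting sample memory into chunks so the total can exceed the 4GB single-allocation limit, at the cost of a per-access lookup) could look roughly like this. Hypothetical sketch only; each chunk stands in for a separate GPU allocation:

```cpp
// Illustration of chunked sample storage (not real scopehal code): total
// capacity can exceed any single allocation limit, but every access must
// first resolve which chunk the sample lives in -- the overhead mentioned.
#include <cstddef>
#include <vector>

template<typename T>
class ChunkedBuffer
{
public:
    explicit ChunkedBuffer(size_t chunkSamples)
        : m_chunkSamples(chunkSamples)
    {}

    void PushBack(const T& v)
    {
        // Start a new chunk when the current one is full
        // (in the real scenario, each chunk is its own GPU allocation)
        if (m_size % m_chunkSamples == 0)
            m_chunks.emplace_back();
        m_chunks.back().push_back(v);
        m_size++;
    }

    // Every random access pays for the chunk lookup (divide + modulo)
    T& operator[](size_t i)
    {
        return m_chunks[i / m_chunkSamples][i % m_chunkSamples];
    }

    size_t size() const
    { return m_size; }

private:
    size_t m_chunkSamples;
    size_t m_size = 0;
    std::vector<std::vector<T>> m_chunks;
};
```

The divide/modulo per access is cheap on a CPU but adds up in a GPU shader inner loop, which is why a single flat allocation is preferred until an extension lifts the limit.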
<azonenberg> chris_99: for background, wrt GPU acceleration
<azonenberg> one of my personal goals is a fully open hardware/software stack (scope, driver layer, and PC GUI) capable of saturating 10G Ethernet with sample data
<azonenberg> So far the closest we've got is 7.1 Gbps of realtime data coming off a prototype of the ThunderScope over Thunderbolt PCIe
<azonenberg> ngscopeclient was able to keep up with that, rendering fully intensity graded waveforms on the GPU live
<chris_99> cool, hadn't heard of ThunderScope before, just looking it up
<azonenberg> yeah we've worked closely with them because they're quite open source friendly, i forget if some details like mechanical enclosure design are open
<azonenberg> (but i havent looked)
<azonenberg> the project is substantially all open source
<azonenberg> anything not published probably they havent got around to pushing yet :p
<azonenberg> i've had a prototype on my desk for a while and did some signal integrity testing on the ddr3 interface
<d1b2> <Aleksorsist> Yup! We're trying to be as transparent as possible, enclosure is a Hammond 1455L1201BK (I don't think that's been posted anywhere yet, my bad)
<d1b2> <Aleksorsist> Also new FPGA module for you soon!
<azonenberg> Awesome :D
<azonenberg> i've got 2 weeks off $dayjob at the end of the month for holidays
<azonenberg> so if you can get it to me by then i'll be able to spend a fair bit of time playing with it
<d1b2> <Aleksorsist> Sweet!! And a more on topic question to scopeclient, I want to demo USB decode and FFT at the same time (USB midi and a synthesizer audio output) would I be able to do that on current ngscopeclient?
<azonenberg> Yes. You can arbitrarily chain filters and have as many running as your CPU can keep up with. What would actually be better for that, but is not supported in ngscopeclient yet
<azonenberg> (glscopeclient can do it fine)
<azonenberg> is the spectrogram filter
<azonenberg> which gives you a 2D plot with frequency along the Y axis, time along X, and intensity/color denoting amplitude
<azonenberg> so you can see horizontal lines as tones in the audio output come and go
<azonenberg> This filter block can be instantiated in ngscopeclient, but we haven't ported the shader to actually draw the plot
<azonenberg> so it's pretty useless :p
<azonenberg> Also while we can decode USB up to the packet layer we do not currently have a decode for the MIDI device class
<azonenberg> But that would not be hard to write if someone wanted to put an hour or two into it
<d1b2> <Aleksorsist> Oh that's really cool! I'll do the demo in glscopeclient for now then. As for USB, my understanding is two passive probes -> subtract filter -> threshold -> USB decode?
<azonenberg> i think that sounds right but i havent checked the code in a little bit. USB is actually a family of decoders
<azonenberg> for various layers
<azonenberg> so you can see the individual J/K symbols etc as well as higher level
<azonenberg> i think it might actually work directly on the two analog legs
<azonenberg> since USB isn't purely differential
<azonenberg> (usb 1.x / 2.x that is)
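The first two stages of the chain discussed above (two single-ended probe captures, subtracted to recover the differential signal, then thresholded into a digital stream for the protocol decode) are simple per-sample operations. A sketch with invented function names, not the actual scopehal filter API:

```cpp
// Illustration of the subtract -> threshold stages (hypothetical code).
// The resulting digital stream is what a protocol decode would consume.
#include <cstddef>
#include <vector>

// Subtract filter: recovers (D+ - D-) from two single-ended captures
std::vector<float> Subtract(const std::vector<float>& a,
                            const std::vector<float>& b)
{
    std::vector<float> out(a.size());
    for (size_t i = 0; i < a.size(); i++)
        out[i] = a[i] - b[i];
    return out;
}

// Threshold filter: converts the analog waveform to digital samples
std::vector<bool> Threshold(const std::vector<float>& in, float level)
{
    std::vector<bool> out(in.size());
    for (size_t i = 0; i < in.size(); i++)
        out[i] = in[i] > level;
    return out;
}
```

As noted above, since USB 1.x/2.x isn't purely differential (single-ended states like SE0 matter), a decoder working directly on the two analog legs can recover more than the subtracted signal alone.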
<d1b2> <Aleksorsist> Ah, I'd be using an isolator to force USB into 1.1 full speed mode
<d1b2> <Aleksorsist> I'll give it a try and maybe take a look at the midi decode if it's easy enough to build
<d1b2> <Aleksorsist> I'm not software savvy so if it looks like a week's adventure I'll settle for packet level
<azonenberg> If we already had a MIDI waveform type defined for raw digital MIDI then writing a block to convert a stream of USB packets to that would be easy
<azonenberg> but we don't
<azonenberg> There's an open feature request for it
<_whitenotifier> [scopehal] miek opened pull request #741: USB2PacketDecoder: fix DecodeSetup behaviour when the ACK/NAK packet is missing - https://github.com/glscopeclient/scopehal/pull/741
<_whitenotifier> [scopehal] miek commented on issue #723: Protocol Analyzer: doesn't list all USB SETUP transactions - https://github.com/glscopeclient/scopehal/issues/723#issuecomment-1337616608
bvernoux has joined #scopehal
<chris_99> Aleksorsist , thunderscope is your project ? just wondering if there's a rough cost at the moment, looks very nice (and my laptop has some version of thunderbolt)