azonenberg changed the topic of #scopehal to: libscopehal, libscopeprotocols, and glscopeclient development and testing | https://github.com/glscopeclient/scopehal-apps | Logs: https://libera.irclog.whitequark.org/scopehal
Degi_ has joined #scopehal
Degi has quit [Ping timeout: 260 seconds]
Degi_ is now known as Degi
<_whitenotifier> [scopehal-docs] azonenberg pushed 1 commit to master [+0/-0/±1] https://github.com/glscopeclient/scopehal-docs/compare/5e3de4ccc3f8...b5080183deb7
<_whitenotifier> [scopehal-docs] azonenberg b508018 - Split IQ squelch vs squelch filter documentation
<d1b2> <azonenberg> note that glscopeclient and ngscopeclient can import VCD and CSV waveforms (among others)
<azonenberg> So you can grab data from a simulation or an unsupported LA and then use all of our analysis tools to postprocess it
<d1b2> <abhishek> Cool
<azonenberg> But if you want to build an LA on the BeagleBone, it would be pretty easy to interface to scopehal. What I'd suggest is using the Ethernet interface and running two socket servers: one that does control operations like trigger setup and memory depth, the other that carries waveform data
<azonenberg> look at the RemoteBridgeOscilloscope class and some of the derived drivers like the Pico and Digilent drivers
<azonenberg> most of our USB attached scopes we interface to via socket servers that provide network transparency
<azonenberg> the idea is, we don't want you to have to install pico's driver and SDK layer if you don't own a picoscope
<azonenberg> So you'd install the base scopehal package which knows how to talk over a TCP socket to our pico server
<azonenberg> and you'd then install that server (which depends on pico's libs) if you needed it
<azonenberg> We used a common base protocol for all of those bridges, then added instrument specific features on top
<azonenberg> if you were building your own instrument, you could probably reuse a lot of both the server and driver side code
<azonenberg> (this was our intent)
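For a sense of what that two-socket split looks like in practice, here is a minimal sketch of such a bridge server in C++: one TCP port for SCPI-style control, a second that pushes waveform data per trigger. The port numbers and the AcquireWaveform() stub are placeholders, not the actual scopehal-pico-bridge or BeagleLogic interfaces.

```cpp
// Minimal two-socket bridge sketch in the spirit of the scopehal "twinlan"
// bridges: a control plane (SCPI-like text commands) and a push-based data
// plane on separate TCP ports. Ports and the capture stub are placeholders.
#include <cstdint>
#include <string>
#include <thread>
#include <vector>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

// Listen on a port and return the fd of the first accepted connection
static int Listen(uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(port);
    bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(fd, 1);
    return accept(fd, nullptr, nullptr);
}

// Placeholder for hardware capture; a real bridge would block until a trigger
static std::vector<uint8_t> AcquireWaveform()
{
    return std::vector<uint8_t>(65536, 0);
}

int main()
{
    // Control plane: low rate, request/response (trigger setup, memory depth, etc.)
    std::thread control([] {
        int fd = Listen(5025);
        char buf[256];
        ssize_t n;
        while((n = read(fd, buf, sizeof(buf) - 1)) > 0)
        {
            buf[n] = '\0';
            // Parse SCPI-style commands like "DEPTH 65536" here
            std::string reply = "OK\n";
            write(fd, reply.c_str(), reply.size());
        }
    });

    // Data plane: on every trigger, write a header plus raw samples, then re-arm
    std::thread data([] {
        int fd = Listen(5026);
        while(true)
        {
            auto samples = AcquireWaveform();
            uint64_t len = samples.size();
            write(fd, &len, sizeof(len));               // simple length-prefixed header
            write(fd, samples.data(), samples.size());  // raw sample data
        }
    });

    control.join();
    data.join();
}
```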
<d1b2> <abhishek> azonenberg - check out github.com/abhishek-kakkar/BeagleLogic
<d1b2> <abhishek> I do have a TCP frontend written in Go, but the TCP interface/protocol probably needs a bit of overhaul to be more robust
<_whitenotifier> [scopehal] azonenberg pushed 2 commits to master [+2/-0/±6] https://github.com/glscopeclient/scopehal/compare/7c63e7e44ffd...c4ff70829bd7
<_whitenotifier> [scopehal] azonenberg d339a3e - Renamed existing squelch filter to IQ Squelch. Added new scalar squelch filter.
<_whitenotifier> [scopehal] azonenberg c4ff708 - ClockRecoveryFilter: improve handling of gating/squelch
<d1b2> <abhishek> And yes, I agree that a two socket control/data plane is better. Currently it's both over 1 socket and I have seen some mix-ups.
<_whitenotifier> [scopehal] azonenberg pushed 1 commit to master [+0/-0/±1] https://github.com/glscopeclient/scopehal/compare/c4ff70829bd7...0856535e38b3
<_whitenotifier> [scopehal] azonenberg 0856535 - Removed some debug prints left in by accident
<d1b2> <abhishek> Also, how's Thunderscope dev coming along? I think it's probably the future of how high speed instruments are going to be connected to computers.
<azonenberg> Personally I'm rooting for 10G - or higher - Ethernet (although there is talk of making an alternate PC interface for thunderscope that will have this instead of thunderbolt)
<azonenberg> But it seems to be proceeding nicely, we've been collaborating with them
<d1b2> <azonenberg> Last I heard a new hardware rev was supposed to be heading my way to play with soon
<d1b2> <abhishek> Nice. I mostly follow your Twitter for updates around the high speed analog things that you do - probes and all.
<d1b2> <azonenberg> (You should probably follow my mastodon then as i've moved almost entirely there)
<d1b2> <azonenberg> here's the previous thunderscope hardware rev on my bench
<d1b2> <azonenberg> checking pcie signal quality
<d1b2> <abhishek> Also, heads up - I discovered DRA821 as an aarch64 processor that'd do 2.5G Eth, 4-lane PCIe Gen3 and 2xCortex-A72 in a single package
<d1b2> <abhishek> This is a TI part
<d1b2> <abhishek> Could be a nice starting point to put a device like Thunderscope on the network
<_whitenotifier> [scopehal-apps] azonenberg pushed 1 commit to master [+0/-0/±2] https://github.com/glscopeclient/scopehal-apps/compare/0230f4d52323...41278ea9912f
<_whitenotifier> [scopehal-apps] azonenberg 41278ea - Updated submodules
<d1b2> <azonenberg> oh cool. But too slow for something like a thunderscope
<d1b2> <azonenberg> any ethernet interface for them would need to be pure FPGA
<d1b2> <azonenberg> just an extra SERDES lane right to a SFP+
<d1b2> <abhishek> I see
<d1b2> <abhishek> Trying to jump from 4-layer MCU-based designs to 6-layer and possibly larger, more complex designs involving FPGAs, processors like these, and DDR. Lots to learn
<d1b2> <abhishek> The last one I designed in 2017 was this - https://theembeddedkitchen.net/announcing-beaglelogic-standalone/694 . It's technically an SBC, but I used a SiP so I didn't have to care about DDR routing back then. But it's something I would not like to skip now
<d1b2> <azonenberg> ah one of the octavo parts
<d1b2> <abhishek> Yep
<d1b2> <azonenberg> So far my trick has been to use a full sodimm. it makes a lot of the termination layout etc easier since all of that is done on the dimm
<d1b2> <azonenberg> and no 0.8mm bga fanout. just a nice simple socket
<d1b2> <azonenberg> assuming of course you can afford the area and fpga pins / need the bandwidth to justify a full 64 bit memory bus
<d1b2> <abhishek> Yep, it's cheap for the capacity you get vs single ram chips; plus what you just said
<d1b2> <azonenberg> And yes, cost is the other big thing
<d1b2> <azonenberg> dimms get economies of scale. single ddr chips on digikey are $$$$
<d1b2> <abhishek> More so in a country like India, where you have to add a 50% markup on digikey prices to account for tariffs.
<d1b2> <azonenberg> ouch. and i thought the... 25%? tariff on parts coming from china was bad
<d1b2> <azonenberg> (to the US)
<azonenberg> lain, johnsel: when you get a chance can one of you please look at the macos CI build failures i've had the last day or so?
<d1b2> <azonenberg> (@johnsel )
<d1b2> <abhishek> It's something like this last time I ordered - 10% of invoice value (parts + shipping) + 28% customs (this can vary depending on the part) + some more + some $7-8 Fedex/DHL charge to file the paperwork on your behalf.
<d1b2> <azonenberg> anyway, so to give you a bit of background on our larger scale roadmap for scopes
<d1b2> <abhishek> Yep
<d1b2> <azonenberg> thunderscope basically replaces an entry level scope called BLONDEL that I had on the original roadmap, but axed because I didn't feel it was worth the effort
<d1b2> <azonenberg> in particular i realized we couldn't compete with rigol, siglent, etc on price for the low end
<d1b2> <abhishek> Yeah, economies of scale hit really hard when you need low cost
<d1b2> <azonenberg> Then the second model, which I may or may not ever end up building, is a ~250 MHz scope called DUDDELL that uses the same ADC as the thunderscope, rigol 1000z, etc. But one dedicated per channel
<d1b2> <azonenberg> So you get 1 Gsps no matter how many channels are active
<d1b2> <azonenberg> (vs having one adc shared by up to 4 channels)
<d1b2> <azonenberg> Then we get to the first one I am currently seriously interested in, ZENNECK. It's 5-6 Gsps (the ADC can do 6, but unsure which FPGA I will pair it with so it might be cut back to 5) at 12 bits, aiming at 1 GHz bandwidth
<d1b2> <abhishek> The reason I wasn't really able to get BeagleLogic off the ground: I needed to get the MSRP below $50, so I was competing directly with SBCs (some with higher performance) that simply lacked the hardware to support logic analysis that the TI SoCs in the Beagle series of boards have.
<d1b2> <azonenberg> I have one of the ADCs (AD9213 in the 6G speed grade) but the FPGA I planned to pair it with is backordered until 2024 according to current estimates, so it may be a while before I can do more work on it. There is more R&D to do on the frontend in the meantime but i kinda de-prioritized it due to the component shortage
<d1b2> <azonenberg> Makes sense. This is why I'm targeting the high end
<d1b2> <abhishek> Which FPGA btw?
<d1b2> <azonenberg> XCAU25P
<d1b2> <azonenberg> (Per channel)
<d1b2> <azonenberg> and probably 1-2 sodimms of ddr4 per channel as capture buffer
<d1b2> <abhishek> Ah, Artix UltraScale
<d1b2> <azonenberg> As you can imagine it won't be cheap
<d1b2> <azonenberg> but i'm not interested in being the f/oss competition to rigol and siglent
<d1b2> <azonenberg> i want to play in the big boy pool :p
<d1b2> <azonenberg> so that's something that would be competing with lecroy waverunner HD, tek mso5, etc
<d1b2> <azonenberg> And then its sibling, VOLLUM, using the full speed 10 Gsps AD9213 with a 2 GHz frontend
<d1b2> <azonenberg> The digital subsystem for both would likely have a lot in common
<d1b2> <azonenberg> but the frontend may or may not be the same
<d1b2> <azonenberg> And then finally, the big scope that i am dreaming of but will probably never build: MURDOCK. Four interleaved AD9213s per channel giving 40 Gsps @ 12 bits, and as much bandwidth as I can manage to fit
<d1b2> <azonenberg> this would probably need a kintex/virtex ultrascale+ per channel and a ton of ddr4 or even HBM to keep up with the 480 Gbps of sample data
<d1b2> <azonenberg> Anyway, right now these are roadmaps and I have some prototypes of an early generation frontend
<d1b2> <azonenberg> But no actual engineering has happened on that. All of the serious work has been on probes and the software side
<d1b2> <abhishek> But how would you be attaching these to the computers?
<d1b2> <abhishek> I assume, PCIe Gen 4x16 or PCIe Gen5x16 cards, or NICs?
<d1b2> <abhishek> Interestingly I am also interested in playing around with the Artix UltraScale+ FPGAs but on the PCIe front
<d1b2> <abhishek> I was in touch with India Xilinx reps
<d1b2> <abhishek> They're both based on XCAU25P, so if you can get your hands on one of these boards, that should be helpful
<d1b2> <abhishek> BTW, following you on Mastodon now.
<d1b2> <azonenberg> Back
<d1b2> <azonenberg> as far as PC interface goes, I was planning to use 10 or 40G Ethernet. Maybe 100G as an option in the future depending on which FPGA I'm using
<d1b2> <azonenberg> Or 25G. Right now none of my gear is 25/100 capable other than my VCU118
<d1b2> <azonenberg> switching and endpoints are all 10G except for a single 40G link
<d1b2> <azonenberg> In general network transparency is a key requirement for me because i'm so often in a different room from the device i'm debugging
<d1b2> <abhishek> Yeah, and with remote work this trend is only gonna increase
<d1b2> <abhishek> The only thing is, if you have an interface like USB/PCIe/TBT, it's point-to-point and therefore easy to configure. Networking gets a bit tricky, but only during setup (you have to go through a few steps before it's all fully working)
<d1b2> <azonenberg> Exactly. this is one of the key reasons that ngscopeclient put a lot of work into being even more nonblocking than glscopeclient
<d1b2> <azonenberg> I want to support use cases like "I'm at my desk near Seattle debugging a production line in Shenzhen"
<d1b2> <azonenberg> We want to support high latency low throughput WAN links with usable performance, but also - if the bandwidth is there
<d1b2> <azonenberg> have the performance to stream 10+ Gbps of waveform data from scope to GPU and do accelerated processing and rendering etc
<d1b2> <abhishek> You know, when I was building BeagleLogic, I envisioned the instrument exposing a web interface to itself and being fully functional that way. That'd be more plug-and-play than a native OS application
<d1b2> <azonenberg> We're not quite there yet. So far the record of 7.16 Gbps is held by ngscopeclient paired with a ThunderScope
<d1b2> <abhishek> Saleae is going in that direction with their Electron-based frontend for their logic
<d1b2> <azonenberg> I'm not opposed to other interfaces. But my focus here has always been performance, and it's something where we absolutely annihilate the competing solutions from just about any big vendor
<d1b2> <abhishek> Yeah, that's why I have been closely watching Thunderscope
<d1b2> <azonenberg> Yeah. They're so far the only scope that can really push ngscopeclient to its limit
<d1b2> <azonenberg> the PicoScope 6000e is a distant second place, the record is about 2.5 Gbps there
<d1b2> <abhishek> They might be using TCP sockets for now, if they'd try named pipes it should go even faster
<d1b2> <abhishek> assuming the PCIe is not the limiting factor
<d1b2> <azonenberg> Yeah, I haven't done a lot of profiling on the thunderscope yet because there's still more debug to do
<d1b2> <abhishek> Got it
<d1b2> <azonenberg> I run ngscopeclient under vtune and tweak it constantly though
<d1b2> <azonenberg> To give you an idea of where my brain is roadmap wise, there have been some discussions about partitioning/scheduling complex workloads across multiple GPUs for when one isn't enough
<d1b2> <azonenberg> imagine being able to run de-embedding, CDR PLL, thresholding, eye pattern, and line code processing for each of four pcie lanes on a separate gpu
<d1b2> <azonenberg> then push that up to the CPU or a single GPU to do upper layer protocol analysis
<d1b2> <azonenberg> We're nowhere near that point, but if we want to be able to keep up with >10 Gbps of streaming waveform data with complex filter graphs, i think we'll need to get there
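A purely conceptual sketch of that partitioning idea, assuming independent per-lane filter chains; all names are invented and nothing like this exists in scopehal yet. The point is only that each lane's chain (de-embed, CDR PLL, threshold, eye, line code) is independent, so branches can be assigned to GPUs and only the merged results need to come back for protocol-level analysis.

```cpp
// Conceptual sketch: assign independent per-lane filter pipelines to GPUs
// round-robin. Invented names, not scopehal code.
#include <cstddef>
#include <cstdio>
#include <vector>

struct LanePipeline
{
    size_t lane;   // which PCIe lane this chain of filters processes
};

// Round-robin assignment of per-lane pipelines to the available GPUs
std::vector<size_t> AssignToGPUs(const std::vector<LanePipeline>& pipelines, size_t numGPUs)
{
    std::vector<size_t> gpuForPipeline(pipelines.size());
    for(size_t i = 0; i < pipelines.size(); i++)
        gpuForPipeline[i] = i % numGPUs;
    return gpuForPipeline;
}

int main()
{
    std::vector<LanePipeline> lanes = { {0}, {1}, {2}, {3} };
    auto assignment = AssignToGPUs(lanes, 2);   // e.g. two GPUs available
    for(size_t i = 0; i < assignment.size(); i++)
        printf("lane %zu -> GPU %zu\n", lanes[i].lane, assignment[i]);
}
```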
<d1b2> <abhishek> One thing I notice is that most of the protocol (I skimmed through the scopeclient code) is pretty simple. Because I work on backend stuff most of the time during my workday, I started to wonder if RPCs could be used to abstract out the instrumentation protocol. I had been trying a few ideas using a TCP server/client model and gRPC in Rust/Tonic, and could saturate 1 Gbps Ethernet when transporting data
<d1b2> <abhishek> Yep, it'd be really awesome
<d1b2> <azonenberg> Yes, that is something people are looking into
<d1b2> <azonenberg> one major scope vendor is shipping a beta firmware that has a gRPC API to a couple of people
<d1b2> <azonenberg> I don't know a lot about it as it's pre-release firmware under NDA. but it's something people are looking at
<d1b2> <abhishek> Whoa
<d1b2> <azonenberg> And if/when it lands in release firmware we fully intend to support it as an alternate backend instead of scpi
<d1b2> <azonenberg> The "software defined oscilloscope" flow we're pushing is, I think, going to really change the T&M industry
<d1b2> <azonenberg> Just a question of how long it takes everyone to catch up and realize that having a good, documented, fast API for your instrument is an extreme selling point
<d1b2> <abhishek> Software will eat everything
<d1b2> <azonenberg> We have been pushing this concept heavily in all of our meetings with T&M vendors
<d1b2> <azonenberg> Pico, being a PC based scope vendor who doesn't charge for software options, has been on board from day one. My 6824E was a dev scope they sent me at no cost specifically to support driver development
<d1b2> <azonenberg> The bigger players have had more inertia, but we're working on it 🙂
<d1b2> <abhishek> Okay, so let me try and put up something about the schema I've been working with later this evening. Let me know what you think based on the protobuf schema I came up with.
<d1b2> <azonenberg> The concept I'm most a fan of is what I implemented in RemoteBridgeOscilloscope
<d1b2> <azonenberg> namely, a pub/sub model where you have a control plane that's basically SCPI, and a push based data plane on a separate socket
<d1b2> <abhishek> The TwinBridge?
<d1b2> <azonenberg> you mean twinlan transport? yes
<d1b2> <azonenberg> The control plane is slow, low bandwidth, a few Kbps max. basically setting up memory depth, sample rate, trigger config, channel gain/offset, etc
<d1b2> <abhishek> Yeah TwinLan, I was going through the code y’day
<d1b2> <azonenberg> then the data plane is push based
<d1b2> <azonenberg> every trigger the scope writes a header and a bunch of raw samples to the socket and immediately re-arms
<d1b2> <azonenberg> no polling
<d1b2> <azonenberg> This is hugely more efficient especially over a WAN
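For illustration, a minimal sketch of the client side of such a push-based data plane: the driver blocks on the data socket and turns each (header, samples) pair into a waveform, with no "are you triggered yet?" polling round trips. The header layout and the ReadExact() helper here are assumptions for the example, not the actual twinlan wire format.

```cpp
// Client-side data plane sketch: read a fixed header, then that many samples.
// Header layout is illustrative, not the real scopehal bridge protocol.
#include <cstddef>
#include <cstdint>
#include <vector>
#include <unistd.h>

// Read exactly 'len' bytes from a connected TCP socket
static bool ReadExact(int fd, void* buf, size_t len)
{
    auto p = static_cast<uint8_t*>(buf);
    while(len > 0)
    {
        ssize_t n = read(fd, p, len);
        if(n <= 0)
            return false;
        p += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

struct WaveformHeader
{
    uint16_t channel;
    uint64_t num_samples;
} __attribute__((packed));

void DataPlaneThread(int fd)
{
    while(true)
    {
        WaveformHeader hdr;
        if(!ReadExact(fd, &hdr, sizeof(hdr)))
            break;

        std::vector<int16_t> samples(hdr.num_samples);
        if(!ReadExact(fd, samples.data(), hdr.num_samples * sizeof(int16_t)))
            break;

        // Hand the samples off to the filter graph / UI here. The scope has
        // already re-armed by the time this runs, so WAN round-trip latency
        // never shows up in the waveform rate.
    }
}
```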
<d1b2> <azonenberg> I did a demo at LeCroy's office in October when I was VPN'd back into my lab in Seattle
<d1b2> <azonenberg> My WaveRunner 8404 was pushing maybe 2 WFM/s because it had to wait for a cross country polling round trip to download each waveform and re-arm the trigger
<d1b2> <azonenberg> the PicoScope behind the same vpn bridge was running at 60 FPS 😄
<d1b2> <abhishek> The thing about gRPC and HTTP/2 and HTTP/3 is that they have lower latency and fewer handshakes
<d1b2> <azonenberg> and sure it took 200ms from when you change gain/offset before the scope reacts. but your waveform rate didn't drop as long as you weren't moving more than about 15 Mbps of total data
<d1b2> <azonenberg> Even on a LAN i've seen huge performance benefits from this flow
<d1b2> <azonenberg> And yeah, I'm not choosy about the exact transport although we might have to refactor a few things that assume the transport is scpi
<d1b2> <azonenberg> maybe make a new base class and have SCPITransport and RPCTransport derive from it, or something, idk yet
<d1b2> <azonenberg> it's on our roadmap but until someone ships such an API we won't quite know how it will fit into our model
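One possible shape of that refactor, purely as a sketch and not the current scopehal class hierarchy: hoist the connection-level plumbing into a generic base class, then let the existing SCPITransport and a hypothetical RPCTransport specialize how commands are exchanged.

```cpp
// Sketch only: a hypothetical transport hierarchy. In today's scopehal,
// SCPITransport is the base; this shows one way an RPC-style transport
// could sit alongside it.
#include <string>

class InstrumentTransport
{
public:
    virtual ~InstrumentTransport() = default;
    virtual bool Connect(const std::string& connectionString) = 0;
    virtual bool IsConnected() const = 0;
};

// Text command / reply semantics, as the current SCPI path provides
class SCPITransport : public InstrumentTransport
{
public:
    virtual void SendCommand(const std::string& cmd) = 0;
    virtual std::string ReadReply() = 0;
};

// RPC-style semantics: typed request/response pairs, with waveform streaming
// handled by the driver on top of this
class RPCTransport : public InstrumentTransport
{
public:
    virtual bool Call(const std::string& method,
                      const std::string& requestBytes,
                      std::string& responseBytes) = 0;
};
```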
<d1b2> <abhishek> Let me try this then
<d1b2> <azonenberg> By all means do the research. In general we are big fans of questioning assumptions and not doing things X way just because everyone always has
<d1b2> <azonenberg> for example, you'll notice our software has no concept of divisions in x or y
<d1b2> <azonenberg> you set full scale voltage range, sample rate, and record length
<d1b2> <azonenberg> Because guess what, it's not the 1970s. you're not adjusting your scope timebase with a mechanical switch with a fixed number of detents and looking at waveforms on a CRT with an etched graticule
<d1b2> <azonenberg> So stop pretending to be one :p
<d1b2> <abhishek> 🙂
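To make the "no divisions" model concrete, a tiny sketch using an invented HypotheticalScope interface; scopehal's real Oscilloscope API is similar in spirit, but the exact method names and signatures here are illustrative only.

```cpp
// Configuration expressed as physical quantities rather than divisions.
// HypotheticalScope is invented for this example.
#include <cstdint>
#include <cstdio>

struct HypotheticalScope
{
    void SetSampleRate(uint64_t hz)        { printf("sample rate: %lu Sa/s\n", (unsigned long)hz); }
    void SetSampleDepth(uint64_t pts)      { printf("record length: %lu points\n", (unsigned long)pts); }
    void SetVoltageRange(int ch, float v)  { printf("ch%d full-scale range: %.2f V\n", ch, v); }
};

int main()
{
    HypotheticalScope scope;
    scope.SetSampleRate(1250000000UL);   // 1.25 Gsps, rather than "100 ns/div"
    scope.SetSampleDepth(10000000UL);    // 10 Mpoints of record length
    scope.SetVoltageRange(0, 2.0f);      // 2 V full scale, rather than "250 mV/div"
}
```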
<d1b2> <azonenberg> and sure if the conclusion of an experiment is that the way everyone does it is the best way, then by all means do it that way
<d1b2> <azonenberg> But we're not trying to just copy the industry, we want to innovate and push ahead
<d1b2> <abhishek> Okay, I have to jump off for now, but will get back in the evening
massi has joined #scopehal
bvernoux has joined #scopehal
Bird|ghosted has quit [Ping timeout: 268 seconds]
massi has quit [Remote host closed the connection]
<d1b2> <louis> Looking at upstreaming my RTO6 driver. Do we want to keep the existing RohdeSchwarzOscilloscope driver that according to the manual "appears to have bitrotted"?
<d1b2> <louis> (and somewhat more generally, I personally am not a huge fan of having drivers be by-vendor vs. by scope family... how do we want to deal with the likely near future of having e.g. a Tek MSO5/6 driver and a Tek MDO3 driver that are mostly different?)
<d1b2> <zyp> the obvious answer is that neither vendor nor family are necessarily good classifications when a single vendor can both have families that are mostly identical and families that are wildly different
<d1b2> <louis> Yeah
<d1b2> <louis> Is there an obvious reason that we don't support automatically dispatching to the appropriate driver based on the *IDN? query response?
<d1b2> <louis> at least in cases where the transport layer selection implies SCPI?
<d1b2> <azonenberg> This would be good to have, the challenge is in part where to put that logic
<azonenberg> since the driver object doesn't exist yet, you can't do it there
<azonenberg> And yes, the mapping of drivers 1:1 to vendor is not strictly necessary
<azonenberg> generally a driver should be for one protocol IMO
<azonenberg> i.e. if the command sets are nearly identical, one driver
<azonenberg> there's a bit of a line between quirks mode in one driver vs having completely new drivers and it really depends on how much code can be reused
<azonenberg> the intent has always been that we could have multiple drivers for a single vendor
<azonenberg> and IIRC the current R&S driver is for the older RTM3000 series and similar
<azonenberg> i thought someone sent some patches against it recently
<d1b2> <louis> OK
<d1b2> <louis> I am happy to leave it there, it's not hurting anything
<d1b2> <louis> I made a new RSRTO6Oscilloscope / rs.rto6 driver for the RTO
<d1b2> <louis> I'm imagining something where each driver class has a static bool RecognizeIDN(const string&) and we have a map of them like m_createprocs and iterate through until a driver claims to recognize a scope
<d1b2> <louis> would be a nice clean interface to present to the user that they just indicate IP:port for SCPI and we figure out the rest. we could even figure out if we need to open it as a twinlan or gRPC or whatever once we ID the scope.
<d1b2> <louis> though tbh that is one of those cases where I don't know how necessary/helpful that is vs. how much work, considering our target user probably already knows a fair bit about how they want to connect to their scope
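As a sketch of that dispatch idea (not existing scopehal code), each driver could register a recognizer alongside its creation proc, mirroring the m_createprocs registry pattern, and the connection logic would probe *IDN? once and hand the transport to whichever driver claims the string. The stub classes and the example IDN substring below are placeholders.

```cpp
// Hypothetical *IDN?-based driver dispatch; all names invented for illustration.
#include <functional>
#include <map>
#include <memory>
#include <string>

// Stand-ins for the scopehal base classes, to keep the sketch self-contained
class SCPITransport { };
class Oscilloscope
{
public:
    virtual ~Oscilloscope() = default;
};

struct DriverEntry
{
    std::function<bool(const std::string&)> recognizeIDN;                 // "is this mine?"
    std::function<std::unique_ptr<Oscilloscope>(SCPITransport*)> create;  // construct the driver
};

static std::map<std::string, DriverEntry> g_drivers;

// Ask the instrument for *IDN? first, then iterate the registry until a driver claims it
std::unique_ptr<Oscilloscope> AutoCreateDriver(SCPITransport* transport, const std::string& idn)
{
    for(auto& [name, entry] : g_drivers)
    {
        if(entry.recognizeIDN(idn))
            return entry.create(transport);
    }
    return nullptr;   // unknown instrument: fall back to asking the user for a driver
}

// Example registration for a hypothetical "rs.rto6" driver:
//   g_drivers["rs.rto6"] = {
//       [](const std::string& idn) { return idn.find("RTO6") != std::string::npos; },
//       [](SCPITransport* t) { return std::unique_ptr<Oscilloscope>(new RSRTO6Oscilloscope(t)); } };
```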
<azonenberg> yeah. I'm all for reducing friction, as well as things like automatically spawning servers when a picoscope or dscope is plugged into the local computer
<azonenberg> i just want to get the ngscopeclient transition done before we do any more of that
<azonenberg> Right now my focus is improving the CDR PLL to figure out why it's losing lock on some pathological signals