azonenberg changed the topic of #scopehal to: libscopehal, libscopeprotocols, and glscopeclient development and testing | https://github.com/glscopeclient/scopehal-apps | Logs: https://libera.irclog.whitequark.org/scopehal
Degi_ has joined #scopehal
Degi has quit [Ping timeout: 272 seconds]
Degi_ is now known as Degi
<sajattack[m]> would it be useful for me to enumerate the responses to `TRIG_SELECT?` on the SDS1000 (1104X-E hacked to be a 1204X-E) or is that too trivial to be helpful
<sajattack[m]> I'm not sure I can commit to the time required to fully implement the triggers but I want to help if I can
<azonenberg> sajattack[m]: i think that info is all available in the programming guide
<azonenberg> one of us can have a look at it later when we have time
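(Editor's note: a minimal sketch of what enumerating `TRIG_SELECT?` responses looks like over raw SCPI-over-TCP, bypassing the scopehal transport layer entirely. Port 5025 is the usual Siglent SCPI socket, and the scope IP is a placeholder; the exact reply grammar is in the Siglent programming guide, as noted above.)

```cpp
// Hedged sketch: query TRIG_SELECT? over a raw TCP socket.
// Not scopehal code; port 5025 and the IP address are assumptions.
#include <cstdio>
#include <cstring>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
	int sock = socket(AF_INET, SOCK_STREAM, 0);
	sockaddr_in addr = {};
	addr.sin_family = AF_INET;
	addr.sin_port = htons(5025);                        //typical SCPI socket port
	inet_pton(AF_INET, "192.168.1.50", &addr.sin_addr); //scope IP (placeholder)
	if(connect(sock, (sockaddr*)&addr, sizeof(addr)) != 0)
		return 1;

	//Ask the scope which trigger is currently selected
	const char* cmd = "TRIG_SELECT?\n";
	send(sock, cmd, strlen(cmd), 0);

	char reply[256] = {};
	ssize_t len = recv(sock, reply, sizeof(reply) - 1, 0);
	if(len > 0)
		printf("TRIG_SELECT? -> %s\n", reply);

	close(sock);
	return 0;
}
```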
<azonenberg> did you fix the crash at least?
<sajattack[m]> I can't recall
<sajattack[m]> whether it was still crashing or just logging a message
<azonenberg> it should not segfault with the current "bandaid" fix
<azonenberg> but there's still a null channel input that shouldn't be there
<azonenberg> which suggests something else is wrong
<sajattack[m]> I'm pretty sure that was coming from pulltrigger
<azonenberg> yeah
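(Editor's note: the "bandaid" fix is presumably a guard against dereferencing the null channel that PullTrigger produces. A self-contained sketch of that shape of guard follows; the type and member names are illustrative stubs, not the actual scopehal classes or fix.)

```cpp
// Hedged sketch of a null-channel guard (stub types, not actual scopehal code)
#include <cstdio>

struct Channel { const char* name; };
struct StreamDescriptor { Channel* m_channel = nullptr; };
struct Trigger
{
	StreamDescriptor input;
	StreamDescriptor GetInput(int) { return input; }
};

//Stand-in for applying a trigger config fetched from the scope; the bug under
//discussion is a trigger arriving with an input channel that was never resolved
void ApplyTrigger(Trigger* trig)
{
	if(!trig)
	{
		fprintf(stderr, "PullTrigger returned null, ignoring\n");
		return;
	}
	auto in = trig->GetInput(0);
	if(!in.m_channel)
	{
		//Bandaid: refuse to dereference, but log it so the root cause
		//(why the driver produced a null input at all) can still be chased
		fprintf(stderr, "trigger has null input channel, ignoring\n");
		return;
	}
	printf("trigger source: %s\n", in.m_channel->name);
}

int main()
{
	Trigger t;	//input channel left null to exercise the guard
	ApplyTrigger(&t);
	return 0;
}
```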
bvernoux has joined #scopehal
bvernoux has quit [Quit: Leaving]
<azonenberg> @louis: sooo fun api/ux question if you have ideas
<azonenberg> so, most "normal" scopes have plenty of ram bandwidth and do no preprocessing or compression on the data before storing it to ram
<azonenberg> Some of the stuff i'm working on, like the logic analyzer on my sniffer board, does not
<azonenberg> i'm trying to capture 80 Gbps of sample data and shove it into a ram that has 41.6 Gbps of theoretical bandwidth before address/refresh etc overhead
<azonenberg> So i compress it
<azonenberg> Which works well as long as the data compresses well
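(Editor's note: 80 / 41.6 ≈ 1.92, so the compressor has to sustain roughly a 2:1 ratio just to break even, before DDR address/refresh overhead. The log doesn't say which scheme the sniffer board uses; run-length encoding is a common choice for logic analyzer data, since idle buses compress enormously while rapidly toggling data can actually expand, which is exactly the failure mode discussed below.)

```cpp
// Hedged sketch of run-length encoding for logic analyzer samples, to
// illustrate why compressibility is data dependent. Not the actual scheme.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Run
{
	uint16_t value;		//sample word (e.g. 16 digital channels)
	uint32_t length;	//number of consecutive identical samples
};

std::vector<Run> RleCompress(const std::vector<uint16_t>& samples)
{
	std::vector<Run> runs;
	for(uint16_t s : samples)
	{
		if(!runs.empty() && runs.back().value == s)
			runs.back().length++;
		else
			runs.push_back({s, 1});
	}
	return runs;
}

int main()
{
	//Idle bus: one run, huge compression.
	std::vector<uint16_t> idle(1000, 0x0000);

	//Toggling every sample: one run per sample, i.e. expansion - the
	//pathological case that overflows the FIFOs feeding the DDR
	std::vector<uint16_t> noisy;
	for(int i = 0; i < 1000; i++)
		noisy.push_back(i & 1);

	printf("idle:  %zu samples -> %zu runs\n", idle.size(), RleCompress(idle).size());
	printf("noisy: %zu samples -> %zu runs\n", noisy.size(), RleCompress(noisy).size());
	return 0;
}
```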
<azonenberg> The question is, what do i do if it doesn't?
<azonenberg> if the data compresses poorly, eventually i will overflow one or more of the block ram based fifos leading into the memory arbiter and then the ddr
<azonenberg> This is unavoidable, not all data can compress
<azonenberg> but how do i handle this situation?
<azonenberg> there has to be some way to return an error to the user
<azonenberg> and the other challenge is, when the overflow happens do i simply terminate the acquisition? do i try to return partial data? with the arbitration etc, figuring out when every data stream ended would be nontrivial
<azonenberg> so the simplest solution IMO is to abort the acquisition and act like it never triggered
<azonenberg> but return an error saying you ran out of memory BW
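(Editor's note: one possible way to surface that abort-plus-error behavior through a driver API, purely as a design sketch; none of these names are actual scopehal API. The idea is that the trigger poll returns a distinct overflow status, so the client discards the aborted capture but can still tell the user why nothing arrived.)

```cpp
// Hypothetical sketch of reporting a compression-overflow abort to the client
#include <cstdio>

enum class TriggerStatus
{
	Idle,		//armed, nothing captured yet
	Triggered,	//full waveform available for download
	Overflow	//acquisition aborted: data didn't compress, FIFOs overflowed
};

//Stand-in for polling the capture hardware
TriggerStatus PollTrigger()
{
	return TriggerStatus::Overflow;		//simulate the pathological case
}

int main()
{
	switch(PollTrigger())
	{
		case TriggerStatus::Triggered:
			printf("downloading waveform\n");
			break;

		case TriggerStatus::Overflow:
			//Act as if the scope never triggered (no partial data),
			//but tell the user why the capture was dropped
			fprintf(stderr,
				"acquisition aborted: ran out of memory bandwidth, "
				"sample data did not compress enough\n");
			break;

		default:
			break;
	}
	return 0;
}
```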