<azonenberg>
Got the Aaronia specan in to play with
<azonenberg>
first complaint: apparently their software has to run as root
<monochroma>
>:(
<azonenberg>
ok so i have a spectrum off it from a test waveform
<azonenberg>
trying to figure out how to get a constellation etc just to sanity check
<Degi>
Can't be too bad if the FBI uses it
<Degi>
Ooh, it advertises "Over 20" measurement units, even though they all measure the same thing with different scale factors
<d1b2>
<louis> May want to avoid using absolute timestamps because that will require the systems to be in clock lockstep, which definitely isn't always the case. Maybe just ms-since-start-of-connection?
<azonenberg>
No, it doesn't need that
<azonenberg>
they're opaque handles clientside
<azonenberg>
you just ack each sequence number
<azonenberg>
you're comparing server time to server time
<azonenberg>
flow control is done server side comparing current time to timestamp of last ack
<d1b2>
<louis> oh, sure
<azonenberg>
the only thing the client does is loop them back, and upon connection setup sends a scpi command to specify the requested buffer depth
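(A minimal sketch of the scheme described above, with hypothetical names (FlowController, OnSend, OnAck, CanSend) rather than the actual scopehal-sigrok-bridge code: the server remembers its own send time per sequence number, the client just loops acks back, and the server stalls once it has run more than the configured window ahead of the last acked waveform, so server time is only ever compared to server time.)

    #include <chrono>
    #include <cstdint>
    #include <iterator>
    #include <map>

    using Clock = std::chrono::steady_clock;

    class FlowController
    {
    public:
        explicit FlowController(std::chrono::milliseconds window)
            : m_window(window) {}

        // Called when a waveform is sent: remember when (server clock) it left
        void OnSend(uint64_t seq)
        {
            m_inFlight[seq] = Clock::now();
        }

        // Called when the client loops an ACK back: record the send time of the
        // newest acked waveform, then forget everything up to and including it
        void OnAck(uint64_t seq)
        {
            auto it = m_inFlight.find(seq);
            if(it != m_inFlight.end())
            {
                m_lastAckedSendTime = it->second;
                m_inFlight.erase(m_inFlight.begin(), std::next(it));
            }
        }

        // Server may send another waveform only if it hasn't run more than
        // m_window ahead of the last ack (server time vs server time, so no
        // client/server clock sync is needed)
        bool CanSend() const
        {
            if(m_inFlight.empty())
                return true;
            return (Clock::now() - m_lastAckedSendTime) < m_window;
        }

    private:
        std::chrono::milliseconds m_window;
        std::map<uint64_t, Clock::time_point> m_inFlight;   // seq -> send time
        Clock::time_point m_lastAckedSendTime = Clock::now();
    };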
<d1b2>
<louis> OK, so I tried that and, unless I'm misunderstanding the flow control algorithm you proposed, it doesn't seem to work as well as the previous approach
<d1b2>
<louis> the observed issue being that with a maxQueue/window of 200ms, it is initially in sync, then the server runs ahead and emits a whole bunch of waveforms until it is 200ms ahead of the ack. during that 200ms it sends ~45 waveforms, which at ~40 FPS take the better part of a second to process, during which time it sends no waveforms. then the client catches up and ACKs something recent enough, at which point the server gets to
<d1b2>
freerun for Xms again
<d1b2>
<louis> and so it ends up pulsing instead of flowing evenly, since in Xms the server can produce more waveforms than the client can process in Xms
<d1b2>
<louis> I don't think there's a way around having to keep some kind of stateful rate information to smooth this; although the timestamps approach is cleaner since it can all be server-side
<_whitenotifier-e>
[scopehal-sigrok-bridge] 602p 8469b3a - Refactor to do flow control in terms of server timestamps
<azonenberg>
and yeah i want to keep the flow control server side to the extent possible
<azonenberg>
we also want to try and keep the trigger rate uniform
<azonenberg>
i.e. it's better to have 10 WFM/s be 10 waveforms 100ms apart
<azonenberg>
than 10 waveforms 10ms apart and a 900ms gap
<azonenberg>
because it gives more uniform coverage of the DUT
<d1b2>
<louis> Yes. I am thinking that it may be more effective to do flow control directly in terms of WFM/s
<d1b2>
<louis> and then drop waveforms client-side if accepting them would overfill the buffer (which should then only happen in the case where the user does something that makes rendered WFM/s drop)
<azonenberg>
Experiment
<d1b2>
<louis> 👍
<azonenberg>
This is an open problem and I don't have the answers :)
massi has joined #scopehal
bvernoux has joined #scopehal
massi has quit [Remote host closed the connection]
<d1b2>
<louis> An interesting result that is both related and unrelated to this flow control discussion: if I set it up to just send all the waveforms, and have the client cap m_pendingWaveforms at a max depth of 2 (discarding older waveforms as newer ones appear), it appears plenty fast to do that at hundreds of WFM/s from the DSCope, and this is plenty responsive too since there's no buffer bloat.
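(A rough sketch of that bounded pending-queue idea, using placeholder types rather than the real glscopeclient m_pendingWaveforms handling: the network thread pushes, the render thread pops, and pushing beyond a depth of 2 silently displaces the oldest entry.)

    #include <cstddef>
    #include <deque>
    #include <memory>
    #include <mutex>

    struct Waveform { /* samples, timebase, etc. */ };

    class PendingWaveformQueue
    {
    public:
        explicit PendingWaveformQueue(size_t maxDepth = 2)
            : m_maxDepth(maxDepth) {}

        // Network thread: push a newly downloaded waveform, dropping the
        // oldest one if the client has fallen behind
        void Push(std::shared_ptr<Waveform> wfm)
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            if(m_queue.size() >= m_maxDepth)
                m_queue.pop_front();    // stale data is worthless, drop it
            m_queue.push_back(std::move(wfm));
        }

        // Render thread: grab the next waveform to display, if any
        std::shared_ptr<Waveform> Pop()
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            if(m_queue.empty())
                return nullptr;
            auto wfm = m_queue.front();
            m_queue.pop_front();
            return wfm;
        }

    private:
        size_t m_maxDepth;
        std::deque<std::shared_ptr<Waveform>> m_queue;
        std::mutex m_mutex;
    };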
<azonenberg>
Hmm. That is indeed an option
<azonenberg>
Simple may be enough
<azonenberg>
we need *a* queue so that we can be receiving/downloading one waveform while displaying another
<azonenberg>
but i guess there's no reason to let it get too deep
<d1b2>
<louis> Well, and we need some kind of flow control if for no other reason than it's easy to request more data from the DSCope bridge than you could fit over an N-Mbps pipe, which will entail its own jamming-up.
<azonenberg>
Yes, some kind of flow control is definitely important, especially for WAN vs LAN use cases
<azonenberg>
imagine trying to access a scope over a slow vpn
<azonenberg>
and DoSing your whole corporate office by trying to push Gbps of waveforms out to the internet
<azonenberg>
lol
<d1b2>
<louis> But I think it could be as simple as the client reporting waveform pops to the server, and the server using that to calculate a WFM/s number X, and then aiming to send waveforms at X*1.05
<d1b2>
<louis> (*1.05 to bias towards picking up speed when possible; then the client drops (and so decreases its effective ack rate) any waveform that would cause its buffer depth to exceed 2 (or N))
<d1b2>
<louis> I'll play with that and see how it works (esp over thin pipe)
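(A hedged sketch of that rate-based pacing idea, with hypothetical names and a simple exponential moving average standing in for whatever rate estimator ends up being used: the server derives the client's rendered WFM/s from the pops it reports and paces its sends at roughly 1.05x that.)

    #include <chrono>
    #include <thread>

    using Clock = std::chrono::steady_clock;

    class RatePacer
    {
    public:
        // Called whenever the client reports that it popped/rendered a waveform
        void OnClientPop()
        {
            auto now = Clock::now();
            double dt = std::chrono::duration<double>(now - m_lastPop).count();
            m_lastPop = now;
            if(dt <= 0)
                return;
            // Exponential moving average of the client's consumption rate
            double instantRate = 1.0 / dt;
            m_clientRate = 0.9 * m_clientRate + 0.1 * instantRate;
        }

        // Sending loop: block until we're allowed to emit the next waveform
        void WaitForNextSendSlot()
        {
            double targetRate = m_clientRate * 1.05;    // 5% margin so it can creep back up
            auto interval = std::chrono::duration<double>(1.0 / targetRate);
            std::this_thread::sleep_until(
                m_lastSend + std::chrono::duration_cast<Clock::duration>(interval));
            m_lastSend = Clock::now();
        }

    private:
        double m_clientRate = 10.0;        // initial guess, WFM/s
        Clock::time_point m_lastPop = Clock::now();
        Clock::time_point m_lastSend = Clock::now();
    };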
<azonenberg>
yeah that makes sense
<azonenberg>
maybe 10-20% safety margin would be better
<azonenberg>
but yeah i agree in principle
<azonenberg>
(and yes we also need more optimization clientside, but there will always be a faster scope than a slow computer)
<_whitenotifier-e>
[scopehal-sigrok-bridge] 602p 557773e - Very messy, but seemingly effective interval/rate flow control
<d1b2>
<louis> That approach seems to work quite well. Heading to work and will test it on a faster PC and against a remote bridge.
<d1b2>
<louis> Ended up being a little abominable to implement on the DSCope because there isn't a way (that I've found) to disarm the trigger w/o stopping the acquisition loop, and stopping/starting the loop takes forever.
<d1b2>
<louis> Which means that the only flow control primitive available is dropping capture frames. Except the capture frames are more-or-less isochronous, and that rate isn't guaranteed to be a multiple of the wanted WFM/s rate (more importantly, doing it naïvely means there's not enough granularity for it to ever speed up)
<d1b2>
<louis> So it ends up having to have multiple counters with different periods to approximate the wanted WFM/s rate out of whatever actual rate of samples comes from the hardware.
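(One way the multiple-counter approximation could collapse into a single fractional accumulator, shown purely as an illustration (Bresenham-style decimation, not necessarily what the bridge actually does): each hardware frame adds the wanted/actual rate ratio to an accumulator, and a frame is forwarded whenever the accumulator crosses 1.)

    class FrameDecimator
    {
    public:
        FrameDecimator(double hardwareRate, double wantedRate)
            : m_ratio(wantedRate / hardwareRate)
        {}

        // Allow retuning as the flow-control loop updates the wanted rate
        void SetWantedRate(double hardwareRate, double wantedRate)
        {
            m_ratio = wantedRate / hardwareRate;
        }

        // Called once per incoming hardware frame.
        // Returns true if this frame should be forwarded to the client.
        bool ShouldForward()
        {
            m_accumulator += m_ratio;        // e.g. 0.37 wanted frames per hw frame
            if(m_accumulator >= 1.0)
            {
                m_accumulator -= 1.0;        // emit one frame, keep the remainder
                return true;
            }
            return false;                    // drop this frame
        }

    private:
        double m_ratio;            // wanted frames per hardware frame (normally <= 1)
        double m_accumulator = 0;
    };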
<azonenberg>
lol eew
<azonenberg>
at least arming/disarming the picoscope is fast
bvernoux has quit [Read error: Connection reset by peer]