azonenberg changed the topic of #scopehal to: libscopehal, libscopeprotocols, and glscopeclient development and testing | https://github.com/glscopeclient/scopehal-apps | Logs: https://libera.irclog.whitequark.org/scopehal
Degi_ has joined #scopehal
Degi has quit [Ping timeout: 268 seconds]
Degi_ is now known as Degi
<azonenberg> Debugging a truly puzzling issue
<azonenberg> I'm looking at some PCIe in ngscopeclient
<azonenberg> and i'm seeing phantom packets that come and go
<azonenberg> Acquire waveform, decode, see two DLLPs
<azonenberg> acquire second waveform, nothing interesting
<azonenberg> go back to first waveform, the packets are gone and i just see idles
<azonenberg> also a separate bug seems to be causing filters to not be destroyed when i close a session (or at least something is keeping pointers around; the filters may or may not still be there)
<azonenberg> but i'll address that later
<azonenberg> oho
<azonenberg> it seems like it's loading the old data but not re-running the filter graph when i click the old history waveform
<clever> azonenberg: random question, do you know the usb-msd protocol well?
<azonenberg> no
<azonenberg> i know HID a little bit, but have not ever worked with raw mass storage
<azonenberg> i've done a little bit of low level NVMe but not a ton
<clever> i was feeling a bit crazy today, and tried making a 1PB usb storage device
<azonenberg> lol
<azonenberg> um
<clever> but after debugging things, i discovered the block number seems to be limited to 32 bits
<azonenberg> i assume it just reported that size
<azonenberg> and wasn't actually backed by that much capacity :p
<clever> 512 byte blocks, mean a max-size of 2tb
<clever> 4k blocks, a max size of 16tb
<clever> go over that, and things break in weird ways
<azonenberg> yeah i believe that
<clever> it seems to just silently truncate the size to 32bits
<azonenberg> i expect large external drives in the future will likely be thunderbolt aka pcie
<clever> 20tb turns into 4tb
<azonenberg> and just run nvme
<clever> and 1pb, was evenly divisible by the limit, so it turned into 0 bytes
<azonenberg> lolol
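For reference, a quick worked example of the 32-bit block-count limit being described here. This is a sketch only: it assumes binary units (1 PiB = 2^50 bytes) and a plain modulo-2^32 truncation, which is an approximation of whatever the gadget code actually does, not the real implementation.

```python
# Illustration of a 32-bit block-count limit (assumption: plain modulo-2^32 truncation)
BLOCK_LIMIT = 1 << 32

def max_capacity(block_size):
    """Largest capacity addressable with a 32-bit block count."""
    return BLOCK_LIMIT * block_size

def truncated_capacity(total_bytes, block_size):
    """Size a device would report if the block count were silently truncated to 32 bits."""
    blocks = total_bytes // block_size
    return (blocks % BLOCK_LIMIT) * block_size

print(max_capacity(512))                 # 2 TiB with 512-byte blocks
print(max_capacity(4096))                # 16 TiB with 4 KiB blocks
print(truncated_capacity(1 << 50, 512))  # 1 PiB -> 0 bytes: 2**41 blocks is an exact multiple of 2**32
```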
<clever> but the extra crazy part of my idea, was how i was doing it
<clever> step 1, a zvol on zfs
<clever> step 2, iscsi it into an rpi-zero over wifi
<clever> step 3, usb-msd gadget it, into another machine
<clever> so it's basically a wifi-based usb->iscsi converter
<azonenberg> lol
<azonenberg> i'm happy with my actual ceph cluster
<monochroma> (iirc USB mass storage is a stripped down SCSI / t10.org spec transport over usb)
<clever> just plug it into any computer, and it magically gains a 15tb disk
<clever> monochroma: but the iscsi layer was perfectly happy to have a 1PB volume with 512 byte sectors
<clever> and that's just scsi over tcp
<azonenberg> https://www.antikernel.net/temp/akl-pt5-manual.pdf btw, if anyone has feedback i have an early draft of the PT5 manual
<clever> so what was MSD doing differently?
<azonenberg> only look at sections 1-4
<azonenberg> 5 and on are mostly copy pasted from the PT1/PT2 and are out of date or blank
<monochroma> clever: may be kernel implementation specific
<clever> monochroma: yeah, a limitation within this implementation of MSD
<clever> but i would first need an MSD that violates the above rules
<clever> a few months ago, i put a 1tb 512b disk on a usb sata adapter; it lied and claimed 4k sectors
<clever> the partition table was horribly confused and it kept throwing IO errors
<clever> [Sun Nov 13 19:58:31 2022] sd 11:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16).
<clever> dmesg does sometimes say this...
<clever> [Sun Nov 13 19:58:31 2022] sd 11:0:0:0: [sdd] Using 0xffffffff as device size
<clever> and this
<clever> but wireshark and usbmon showed read capacity(10) when it was working normally
<clever> looks like "read capacity (10)" can respond with a 32bit block count, and a 32bit block size, total of 8 bytes of answer
<clever> and if there is an overflow, the device is supposed to send all 1's, 0xffffffff
<clever> to indicate that the host should try the 16 variant
<clever> ah, as expected, "read capacity (16)" has a 64bit block-count, but still 32bit block-size
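A host-side view of that difference, as a hedged sketch: the function names and minimal parsing below are illustrative only (not any particular kernel's code); the field layouts follow the READ CAPACITY responses described above, with last LBA and block size as big-endian fields.

```python
import struct

def parse_read_capacity_10(data: bytes):
    """READ CAPACITY(10) returns 8 bytes: last LBA and block size, both 32-bit big-endian."""
    last_lba, block_size = struct.unpack(">II", data[:8])
    if last_lba == 0xFFFFFFFF:
        # Capacity overflows 32 bits: the device reports all 1's and the host
        # is expected to retry with READ CAPACITY(16).
        return None
    return (last_lba + 1) * block_size

def parse_read_capacity_16(data: bytes):
    """READ CAPACITY(16) starts with a 64-bit last LBA followed by a 32-bit block size."""
    last_lba, block_size = struct.unpack(">QI", data[:12])
    return (last_lba + 1) * block_size
```

So a host (or gadget) that only ever handles the 10-byte variant tops out at 2^32 blocks, i.e. 2 TiB with 512-byte sectors.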
<clever> and do_read_capacity_16 is implemented in the linux source...
<clever> aha, this is the fix, it's present in 5.17, but it's absent in 5.15!!
<_whitenotifier> [scopehal-apps] azonenberg pushed 1 commit to master [+0/-0/±4] https://github.com/glscopeclient/scopehal-apps/compare/45c21904657b...1b3b3393937e
<_whitenotifier> [scopehal-apps] azonenberg 1b3b339 - Fixed bug where navigating through history wouldn't refresh filter graph. Also made all filter graph refreshes triggered by GUI events asynchronous. See #541.
massi has joined #scopehal
bvernoux has joined #scopehal
jevinskie[m] has quit [Quit: You have been kicked for being idle]
fridtjof[m] has quit [Quit: You have been kicked for being idle]
d1b2 has quit [Remote host closed the connection]
d1b2 has joined #scopehal
massi has quit [Remote host closed the connection]
<bvernoux> @azonenberg, do you plan to add export to ngscopeclient?
<bvernoux> @azonenberg, also it would be nice to have sessions ;)
<azonenberg> bvernoux: yes. full feature parity with glscopeclient is planned
<azonenberg> i just have limited hours and more features to build than i have time for
<azonenberg> the export wizards need to be completely rewritten in imgui as the stuff in libscopeexports is all GTK based
<bvernoux> ha ok, so that will take time and it is probably not your priority either
<azonenberg> The bigger priority is getting scopesession loading - but not generation - working soonish
<azonenberg> since that will enable me to load filter graphs and waveform data from earlier glscopeclient setups
<azonenberg> and use it to test various gui features
<azonenberg> and before THAT finishing the protocol analyzer
<azonenberg> ngscopeclient is basically 100% me right now, if anybody else wants to help it will go faster :p
<azonenberg> I'm aiming for end of year / early 2023 being at close to feature parity but no idea how realistic that is
<bvernoux> yes very nice
bvernoux has quit [Quit: Leaving]
fridtjof[m] has joined #scopehal