<somlo>
geertu: thanks! I was about to give up and go with sector_count (underscore proliferation be damned :) but I'm seriously warming up to nsectors...
<geertu>
somlo: There are almost as many users of nsectors as sector_count in the Linux kernel sources
<somlo>
geertu: I should have thought of grep-ing through the Linux kernel sources :)
<somlo>
geertu: on a somewhat related topic: now that I have multi-sector DMA working in "bare metal" mode, I'm getting weird lock-ups when trying to umount an ext4 partition in Linux with the litesata driver, and I'm wondering about the `submit_bio` method of `block_device_operations` -- do I need to worry about multiple invocations of it interleaving (i.e., should I add locking so that only one DMA transfer can take place at any given time)?
<somlo>
It feels like that's the most likely reason things get messed up (given how the bare-metal "single-thread" tests seem solid)
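For reference, a minimal sketch of the `->submit_bio` hook under discussion, using hypothetical litesata-style names rather than the actual driver code (the hook's return type and some bio fields have varied across kernel versions). The block layer does not serialize calls to this hook, so nothing below stops two bios from being processed concurrently:

```c
/* Sketch only: hypothetical names, not the real litesata driver.
 * On kernels >= 5.16 ->submit_bio returns void; older kernels used
 * blk_qc_t.  bio->bi_bdev also replaced bio->bi_disk around 5.12. */
#include <linux/module.h>
#include <linux/blkdev.h>
#include <linux/bio.h>
#include <linux/mm.h>

struct litesata_dev;	/* driver-private state (assumed to exist) */

/* Placeholder: the real driver would program the DMA engine here and
 * sleep on a completion signalled by the IRQ handler. */
static int litesata_dma_xfer(struct litesata_dev *dev, sector_t sector,
			     void *buf, unsigned int nsectors, bool write)
{
	return 0;
}

static void litesata_submit_bio(struct bio *bio)
{
	struct litesata_dev *dev = bio->bi_bdev->bd_disk->private_data;
	struct bvec_iter iter;
	struct bio_vec bvec;
	sector_t sector = bio->bi_iter.bi_sector;

	/* The block layer may invoke this from several contexts at once;
	 * nothing here prevents two DMA transfers from overlapping. */
	bio_for_each_segment(bvec, bio, iter) {
		void *buf = page_address(bvec.bv_page) + bvec.bv_offset;
		unsigned int nsectors = bvec.bv_len >> SECTOR_SHIFT;

		if (litesata_dma_xfer(dev, sector, buf, nsectors,
				      op_is_write(bio_op(bio)))) {
			bio_io_error(bio);
			return;
		}
		sector += nsectors;
	}
	bio_endio(bio);
}

static const struct block_device_operations litesata_fops = {
	.owner		= THIS_MODULE,
	.submit_bio	= litesata_submit_bio,
};
```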
<geertu>
somlo: Documentation/filesystems/locking.rst does not seem to document all block_device_operations methods
<somlo>
I'll run some tests and try to "catch" it in the act, to see if multiple requests are being interleaved
<somlo>
of course I may still have a bug in the gateware -- guess I need to keep testing a bit longer
<geertu>
somlo: ps3vram_submit_bio() suggests it can be called while a bio is in progress
<geertu>
somlo: BTW, why is litesata a block device driver, and not an ata driver?
<geertu>
I know ps3disk is also a block device driver, but there was a good reason for that (which I forgot ;-)
<somlo>
geertu: it's a block device driver because its "sata-ness" is mostly encapsulated in the gateware layer
<somlo>
added `dev_info` calls before starting a DMA operation and right after the irq handler wakes up the `wait_for_completion` waiter at the end of the transfer
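The instrumentation would have been something along these lines (a sketch with hypothetical names; in the log that follows, s is presumably the starting sector, c the sector count, and a the buffer address):

```c
/* Sketch of the debug prints described above (hypothetical names). */
dev_info(dev, "start: s:%llu c:%u a:%08lx\n",
	 (unsigned long long)sector, nsectors, (unsigned long)buf_addr);

/* ... program the DMA engine, then block until the IRQ handler
 * signals the completion ... */
wait_for_completion(&litesata->dma_done);

dev_info(dev, "done: s:%llu c:%u a:%08lx\n",
	 (unsigned long long)sector, nsectors, (unsigned long)buf_addr);
```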
<somlo>
start: s:4212416 c:8 a:810c5000
done: s:4212416 c:8 a:810c5000

start: s:4212424 c:8 a:810c5000
start: s:243533824 c:8 a:82bae000
done: s:243533824 c:8 a:82bae000

start: s:4270576 c:8 a:820a9000
done: s:4270576 c:8 a:820a9000

done: s:4212424 c:8 a:810c5000

start: s:4212432 c:8 a:810c5000
start: s:4204912 c:8 a:820ac000
done: s:4212432 c:8 a:810c5000

done: s:4204912 c:8 a:820ac000

start: s:4212440 c:8 a:810c5000
done: s:4212440 c:8 a:810c5000
<somlo>
doesn't happen often, but it does happen, so yes, I think I need a mutex around each multi-block DMA xfer...
<somlo>
I'm a bit surprised I'm not getting errors from the gateware when this happens, though...
<geertu>
somlo: Just keep a list of bios, and queue a new request when still busy, cfr. ps3vram_submit_bio()?
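Roughly the pattern geertu is pointing at, paraphrased from drivers/block/ps3vram.c rather than copied verbatim (names here are illustrative): bios are appended to a lock-protected list, and only the submitter that finds the list empty drains it, so transfers end up serialized without every caller blocking on a mutex:

```c
/* Sketch of the ps3vram-style bio queuing pattern; illustrative names,
 * not the actual litesata or ps3vram code. */
#include <linux/blkdev.h>
#include <linux/bio.h>
#include <linux/spinlock.h>

struct litesata_dev {
	spinlock_t	lock;	/* protects 'list'; init in probe() */
	struct bio_list	list;	/* bios waiting to be processed */
	/* ... hardware state ... */
};

/* Process one bio, then return the next queued one, if any. */
static struct bio *litesata_do_bio(struct litesata_dev *dev, struct bio *bio)
{
	struct bio *next;

	/* ... perform the DMA transfer(s) for 'bio' ... */
	bio_endio(bio);

	spin_lock_irq(&dev->lock);
	bio_list_pop(&dev->list);		/* drop the bio just finished */
	next = bio_list_peek(&dev->list);	/* anything else queued? */
	spin_unlock_irq(&dev->lock);

	return next;
}

static void litesata_submit_bio(struct bio *bio)
{
	struct litesata_dev *dev = bio->bi_bdev->bd_disk->private_data;
	bool busy;

	spin_lock_irq(&dev->lock);
	busy = !bio_list_empty(&dev->list);
	bio_list_add(&dev->list, bio);
	spin_unlock_irq(&dev->lock);

	/* Whoever is already draining the list will pick this bio up;
	 * only the first submitter runs the processing loop. */
	if (busy)
		return;

	do {
		bio = litesata_do_bio(dev, bio);
	} while (bio);
}
```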
<somlo>
geertu: I just put a mutex around the function handling an individual DMA transaction for now, which seems to have fixed the problem from a *functional* standpoint
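What somlo describes probably amounts to something like the following sketch (hypothetical names, including the dma_mutex field): a per-device mutex held for the duration of each DMA transaction, so concurrent bios simply wait their turn. Since ->submit_bio is called in process context, sleeping here is allowed:

```c
/* Sketch of the mutex approach described above (hypothetical names). */
#include <linux/blkdev.h>
#include <linux/mutex.h>

static int litesata_dma_xfer(struct litesata_dev *dev, sector_t sector,
			     void *buf, unsigned int nsectors, bool write)
{
	int ret;

	/* Serialize: only one DMA transaction in flight at any time. */
	mutex_lock(&dev->dma_mutex);

	/* ... program the DMA engine, wait_for_completion(), check the
	 * gateware status registers ... */
	ret = 0;

	mutex_unlock(&dev->dma_mutex);
	return ret;
}
```

The ps3vram-style queue sketched earlier avoids making every submitter block on the mutex, which is presumably where the extra performance somlo mentions below would come from.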
<somlo>
I'm going to look into enqueueing requests the way ps3vram does it, maybe that's going to improve performance a bit more