<zyp[m]>
I managed to crash Sentinel by swapping out the `WBMemory` from the example with the `amaranth_soc.wishbone.sram.WishboneSRAM` that jfng added yesterday; it appears to be driving ACK wrong
<jfng[m]>
WB targets are allowed to deassert ACK later than CYC
<jfng[m]>
i think this is a sentinel bug
<zyp[m]>
it might be, I'm not completely sure what the wishbone rules are
<zyp[m]>
but the issue appears to be that ACK is still asserted when Sentinel starts fetching the next instruction, so it interprets it as the data being available already
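A minimal sketch of the guard being discussed, assuming nothing about Sentinel's real fetch logic: an initiator that only counts ACK as a completion while it is actively presenting CYC and STB, so an ACK left over from the previous cycle can't be read as "data already available". All names here are illustrative.

```python
from amaranth import Elaboratable, Module, Signal

class FetchGuard(Elaboratable):
    """Illustrative only: qualify ACK with this initiator's own request."""
    def __init__(self):
        self.cyc  = Signal()  # initiator outputs
        self.stb  = Signal()
        self.ack  = Signal()  # responder input
        self.done = Signal()  # completion strobe used by the core

    def elaborate(self, platform):
        m = Module()
        # A lingering ACK with no active strobe is ignored.
        m.d.comb += self.done.eq(self.cyc & self.stb & self.ack)
        return m
```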
<jfng[m]>
in the "Next instruction fetch", i think the bug comes from the fact that this is a classic WB cycle, so the initiator must deassert STB immediately after ACK, and wait at least one clock cycle before reasserting it agait
<jfng[m]>
ok, this is actually apparent in the figure zyp posted; STB is held high
<jfng[m]>
this is a WishboneSRAM bug
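One possible shape of the classic-cycle ACK behavior being debated, as a hedged sketch (signal names are illustrative, not WishboneSRAM's internals, and this is not the actual fix): ACK rises one cycle after a request and self-clears the next cycle, so it can never be left asserted into a following transfer.

```python
from amaranth import Elaboratable, Module, Signal

class ClassicAck(Elaboratable):
    """Illustrative classic-cycle responder; not the real WishboneSRAM."""
    def __init__(self):
        self.cyc = Signal()
        self.stb = Signal()
        self.ack = Signal()

    def elaborate(self, platform):
        m = Module()
        # ACK asserts the cycle after CYC & STB and self-clears, so
        # back-to-back classic cycles see at most one ACK per request.
        m.d.sync += self.ack.eq(self.cyc & self.stb & ~self.ack)
        return m
```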
<tpw_rules>
whitequark[cis]: i would love to see a reliable set of AXI primitives
<tpw_rules>
(and contribute to some degree)
<whitequark[cis]>
I think everyone would :p
<whitequark[cis]>
we'll get there
<whitequark[cis]>
no promises as to when
<tpw_rules>
of course
<tpw_rules>
i had assembled a couple for my project. i think next target will be an arbiter
<tpw_rules>
i need to upgrade that stuff to real streams
<cr1901_>
jfng[m]: Right now, there is exactly one case where Sentinel will not deassert CYC when ACK is received: when a memory store is immediately followed by an insn fetch
<cr1901_>
deassert CYC and STB*
<cr1901_>
This is a WB block cycle; every other mem access is a classic cycle
<jfng[m]>
cr1901_: this is the case for classic cycles in pipelined mode, where the initiator has to deassert STB upon receiving ACK, iirc
<jfng[m]>
or something like that... anyway, i'll reread the spec in detail and fix this
<cr1901_>
Yea, I can't find anything that suggests that ACK has to lower when CYC/STB lowers - just that ACK, RTY, ERR, or some other signal has to come up in response to CYC/STB
<cr1901_>
zyp[m]: wb_bus_we going low before cyc/stb goes low looks like a bug on my end tho
<cr1901_>
(Can I see your testbench/repo code as well?)
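For reference, a hedged sketch of how such a waveform could be reproduced with `amaranth.sim` (Amaranth 0.5+ testbench API assumed; `ClassicAck` is the illustrative responder sketched above, standing in for the real memory, not anyone's actual testbench):

```python
from amaranth.sim import Simulator

dut = ClassicAck()  # illustrative responder from the sketch above

async def bench(ctx):
    # Store beat: present a request and wait for ACK.
    ctx.set(dut.cyc, 1)
    ctx.set(dut.stb, 1)
    await ctx.tick()
    # Fetch follows immediately; CYC/STB held high (block-cycle style).
    await ctx.tick()
    # Drop the request and watch how long ACK persists in the VCD.
    ctx.set(dut.cyc, 0)
    ctx.set(dut.stb, 0)
    await ctx.tick()

sim = Simulator(dut)
sim.add_clock(1e-6)
sim.add_testbench(bench)
with sim.write_vcd("wb_ack.vcd"):
    sim.run()
```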
<tpw_rules>
hm how would you round robin AXI given that each stream is kinda independent
<whitequark[cis]>
there's many approaches with different tradeoffs
<whitequark[cis]>
AXI is harder to build interconnect for
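As a starting point, a generic rotating-priority grant in Amaranth might look like the sketch below; a real AXI arbiter also has to lock the grant for the duration of a burst and route responses back to the granted manager, which this deliberately omits. All names are illustrative.

```python
from amaranth import Elaboratable, Module, Signal

class RoundRobin(Elaboratable):
    """Illustrative rotating-priority grant; burst locking omitted."""
    def __init__(self, count):
        self.count    = count
        self.requests = Signal(count)
        self.grant    = Signal(range(count))

    def elaborate(self, platform):
        m = Module()
        with m.Switch(self.grant):
            for i in range(self.count):
                with m.Case(i):
                    # Scan requesters starting just after the current
                    # grant; the nearest one wins (the last If dominates).
                    for j in reversed(range(i + 1, i + self.count + 1)):
                        with m.If(self.requests[j % self.count]):
                            m.d.sync += self.grant.eq(j % self.count)
        return m
```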
<cr1901_>
(Actually, idk if what I'm doing is wrong. The WB spec says for block cycles "MASTER negates [STB_O] to introduce a wait state (-WSM-)", which past-me interpreted as "if the xfer initiator doesn't need wait states, then it doesn't need to deassert STB during a block write".)
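Under that reading, a hedged sketch of the initiator side: negating STB mid-cycle only serves to insert a wait state, so an initiator that always has its next beat ready may simply hold STB high (`beat_ready` is an illustrative signal, not anything in Sentinel).

```python
from amaranth import Elaboratable, Module, Signal

class BlockWrite(Elaboratable):
    """Illustrative block-cycle initiator; wait states only when needed."""
    def __init__(self):
        self.beat_ready = Signal()  # next data beat is available
        self.cyc = Signal()
        self.stb = Signal()

    def elaborate(self, platform):
        m = Module()
        # STB low while CYC stays high is the -WSM- wait state; with
        # beat_ready always high, STB never drops within the cycle.
        m.d.comb += self.stb.eq(self.cyc & self.beat_ready)
        return m
```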
<zyp[m]>
<cr1901_> "(Can I see your testbench/repo..." <- the code is pretty much the same as yesterday, just with `WishboneSRAM` replacing `WBMemory` and a `Decoder` thrown in to adapt the `addr_width` between the cpu and the mem
<cr1901_>
I wanted the pdm lock so I know which version of sentinel you're using (main or next)
<cr1901_>
pyproject.toml/pdm.lock*
<zyp[m]>
oh, whatever was main yesterday I guess, I haven't specified a particular ref
<cr1901_>
ahhh cool
<zyp[m]>
but does it even matter? at a glance I can't see any HDL differences between main and next
<cr1901_>
There might not be :P... I haven't worked on it much
<cr1901_>
(since the last release)
<cr1901_>
jfng[m]: Just for future me when I forget again, "PERMISSION 3.40" in WB spec 4 is what I'm (ab)using in Sentinel. Sentinel never generates wait states, so I tie CYC to STB. 1/2
<cr1901_>
I managed to confuse myself b/c I didn't think WB block xfers could get a throughput of 1 data xfer per clock cycle, but after rereading, I'm realizing "yes, it can, if the responder always has ACK ready in response to STB/CYC". Makes me wonder why pipelined mode exists at all
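A hedged sketch of that simplification (names are illustrative, not Sentinel's): with no wait states ever generated, CYC and STB can be driven from the same internal request, and a responder that always answers gives one data xfer per clock.

```python
from amaranth import Elaboratable, Module, Signal

class TiedInitiator(Elaboratable):
    """Illustrative: CYC == STB when the initiator never inserts waits."""
    def __init__(self):
        self.req = Signal()  # internal "transfer in progress" request
        self.cyc = Signal()
        self.stb = Signal()

    def elaborate(self, platform):
        m = Module()
        # No wait states, so both cycle and strobe follow the request;
        # with ACK returned every cycle, throughput is one xfer/clock.
        m.d.comb += [
            self.cyc.eq(self.req),
            self.stb.eq(self.req),
        ]
        return m
```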
<zyp[m]>
pipelined mode as in bursts? exists because it's not always feasible to have data returned in the same cycle it's requested
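A loose sketch of that point: a responder with a fixed one-cycle read latency can still sustain one transfer per clock in pipelined mode, because the initiator keeps issuing strobes while earlier ACKs are in flight (STALL, never asserted in this sketch, is how a slower target would throttle; names are illustrative).

```python
from amaranth import Elaboratable, Module, Signal

class PipelinedReader(Elaboratable):
    """Illustrative pipelined responder with one cycle of read latency."""
    def __init__(self):
        self.cyc   = Signal()
        self.stb   = Signal()
        self.stall = Signal()  # never asserted in this sketch
        self.ack   = Signal()

    def elaborate(self, platform):
        m = Module()
        # Each accepted strobe is acknowledged exactly one cycle later;
        # requests and responses overlap, so no throughput is lost.
        m.d.sync += self.ack.eq(self.cyc & self.stb & ~self.stall)
        return m
```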