<whitequark[cis]>
(but racy, as we've figured out)
<whitequark[cis]>
I have a potential solution for the races
<whitequark[cis]>
each time a signal is set or waited on, pause the simulation
<whitequark[cis]>
look at the comb input cone of every waited signal, and create a happens-before edge between every signal in it and any testbench that's setting it
<zyp[m]>
those shouldn't be racy since they're using add_process
<whitequark[cis]>
ah right
<zyp[m]>
they're not using add_sync_process because they're never doing sim.tick(), only ever awaiting signals in the pin interface
<whitequark[cis]>
yep makes sense
<zyp[m]>
seems to work well, used it to simulate a SPI ADC driver that uses DDR registers to run at 65MHz or so
<whitequark[cis]>
nice
<zyp[m]>
today I've experimented with multilane streams: https://paste.jvnv.net/view/DgdWH, I like how lane-awareness in the helpers means they'll just work with bytes objects when the lane shape is 8b
<zyp[m]>
much nicer to have a four-lane bytestream than having to manually deal with 32b conversion
<zyp[m]>
not sure yet how I like the first/last ergonomics, thinking about forcing first[1:] and/or last[:-1] to 0 for interfaces that require packets to start and/or end on a transfer boundary
<whitequark[cis]>
so I think I'm pretty sure that first and last should be a part of data
<whitequark[cis]>
brb
<zyp[m]>
I mostly only see disadvantages with that approach
<zyp[m]>
my helpers need those for framing, and don't care what shape data is
<zyp[m]>
the advantage is of course that any block that doesn't care about first/last doesn't need to special case for it, but I don't think that justifies the disadvantages
<whitequark[cis]>
I think we should have exactly one kind of stream signature and this will be my RFC when I get around to it
<whitequark[cis]>
this (in combination with having ready/valid be C(1) sometimes) will mean we can have actually generic stream infrastructure
<whitequark[cis]>
with your solution, you will need AsyncFIFO specialized for first/last
<whitequark[cis]>
I think that's a bad solution
<zyp[m]>
the way I see it, this is not something we need to agree on, there doesn't have to be only one stream signature implementation, my stream interfaces will be compatible with yours where they match
<zyp[m]>
since lib.wiring only cares about compatible signatures
<whitequark[cis]>
lib.wiring will probably care about metadata attached to signatures at some point
<whitequark[cis]>
so I wouldn't count on that
<whitequark[cis]>
it is true that for now it only cares about the members of the signature
<whitequark[cis]>
however, more importantly, there will be support for packetization in the standard library, and that will not be interoperable with yours
<whitequark[cis]>
I think the way to go for standard streams is to have a stack of "transformers", call it whatever you like, which say how to interpret data
<whitequark[cis]>
so you have multiple lanes, packetization, and transaction IDs, and these can be assembled in a bunch of different ways
<whitequark[cis]>
re metadata: because we cannot do things like "enforce that the signatures passed to connect are all mirror images of each other" and for other reasons (mainly it not being feasible to do a reasonable functional API around that), connect will probably eventually collect metadata (just some dict of primitives or something) from each signature and require that it all match
<whitequark[cis]>
which solves problems like "connecting two interfaces with different latencies that aren't expressed via members" or "connecting two streams with incompatible data payload format"
<whitequark[cis]>
zyp: actually it doesn't just affect first/last
<whitequark[cis]>
something as simple as a 4-lane stream with no first/last cannot be used with the generic AsyncFIFO with your implementation
<whitequark[cis]>
(because an 8x4 array data is incompatible with a 32x1 scalar data)
<zyp[m]>
true
<whitequark[cis]>
and I think this approach wouldn't scale to adding txids
<whitequark[cis]>
having to track all these optional features in a bunch of call sites that mostly don't care about them just doesn't work very well
<whitequark[cis]>
litex is trying to do that and it's not succeeding
<zyp[m]>
the worst part about litex streams is that they're not explicit about what exact features are supported on a specific endpoint
<whitequark[cis]>
I think that's a big part of it but not the only problem
<whitequark[cis]>
I think streams should be actively helping building infrastructure that mostly doesn't care about optional features, or we'll end up in a similar mess
<whitequark[cis]>
there should be some easy way to say "unwrap the outer layer of the stream if it's X [X = multilane/packet/...]" so you can care about it if you want and if the stream is right
<whitequark[cis]>
and ignore otherwise
<zyp[m]>
would txids be like the param field in litex streams? i.e. constant for the duration of a packet?
<whitequark[cis]>
txids should be constant for the duration of a packet, probably
<whitequark[cis]>
one thing I want to avoid is baking in not necessarily correct assumptions about which goes over what
<whitequark[cis]>
e.g. you baked in the assumption that streams can start and end anywhere within a word
<whitequark[cis]>
and I can see someone wanting to have a multilane stream that can start or end only at a word boundary
<zyp[m]>
I already proposed the solution to that
<whitequark[cis]>
that's one option. another is to not create this problem in the first place
<whitequark[cis]>
(by having first/last be scalars in this case)
<whitequark[cis]>
in my interpretation, this is basically "packetization in lanes" vs "packetization outside of lanes"
<whitequark[cis]>
something like stream.Signature(8, transformers=[Lanes(4), Packet]) vs stream.Signature(8, transformers=[Packet, Lanes(4)])
<whitequark[cis]>
or stream.Signature(Lanes(4, Packet(8))) vs stream.Signature(Packet(Lanes(4, 8)))
<zyp[m]>
ok, that's an interesting approach
<whitequark[cis]>
stream.Signature would interrogate the transformers and build some sort of representation for them that's queryable in a better way than just trying to inspect the data.Layout or something
<whitequark[cis]>
if lanes := str.unwrap(Lanes) or something
<whitequark[cis]>
you could even do multiple layers, like if you have very specific gateware that expects specifically packet-in-lanes, you would do
<whitequark[cis]>
if lanes, packet := str.unwrap(Lanes, Packet):
<whitequark[cis]>
pattern matching could work here, actually
<whitequark[cis]>
you get the idea
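The transformer-stack idea being discussed above can be sketched in plain Python. All names here (`Lanes`, `Packet`, `unwrap`) are hypothetical stand-ins for an API that doesn't exist yet, not actual Amaranth code:

```python
from dataclasses import dataclass

def width_of(shape):
    # A shape here is either a plain bit count or a nested transformer.
    return shape if isinstance(shape, int) else shape.width

@dataclass
class Packet:
    """Wraps an inner shape with first/last framing flags."""
    inner: object

    @property
    def width(self):
        return width_of(self.inner) + 2  # data + first + last

@dataclass
class Lanes:
    """Replicates an inner shape across n lanes."""
    n: int
    inner: object

    @property
    def width(self):
        return self.n * width_of(self.inner)

def unwrap(shape, *layers):
    """Peel off the outer transformer layers if they match, else None."""
    for layer in layers:
        if not isinstance(shape, layer):
            return None
        shape = shape.inner
    return shape

# "packetization in lanes": each 8-bit lane carries its own first/last
per_lane = Lanes(4, Packet(8))   # 4 * (8 + 2) = 40 bits
# "packetization outside of lanes": one first/last for the whole word
per_word = Packet(Lanes(4, 8))   # 4 * 8 + 2 = 34 bits

assert per_lane.width == 40
assert per_word.width == 34
assert unwrap(per_word, Packet, Lanes) == 8  # outer layers match in order
assert unwrap(per_word, Lanes) is None       # outer layer isn't Lanes
```

The two nesting orders produce genuinely different wire layouts, which is why a component would query the stack with something like `unwrap` rather than inspecting the raw data layout.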
<zyp[m]>
alternatively, the interface type could have a wrap method or property that bundles everything up, and any standard component that doesn't care about the stream payload could just use that
<whitequark[cis]>
if data is a single member, that is just inherent in how wiring works
<whitequark[cis]>
also, note that there are going to be a decent amount of components that care somewhat
<whitequark[cis]>
for example, upconverters care about multilane iff it's in the outer layer
<whitequark[cis]>
but you could still have packetization inside
<zyp[m]>
I did try using a StructLayout({'data': 8, 'last': 1}) first but writing self.input.data.data or whatever felt kinda silly
<whitequark[cis]>
if you make a sufficiently advanced wrap I think it is roughly equivalent in power to what I'm proposing, but the ergonomics are probably worse
<whitequark[cis]>
oh, call the outer member payload
<whitequark[cis]>
ready, valid, payload
<whitequark[cis]>
and alias payload as p
<whitequark[cis]>
self.input.p.data is okay
<whitequark[cis]>
that was actually always the plan with streams, going back to 2019
<zyp[m]>
I had it as payload first, renamed it to data at some point before I needed last
<whitequark[cis]>
self.input.p.lane[0].data still okay
<whitequark[cis]>
self.input.p.data.lane[0] also okay
<whitequark[cis]>
so we have a "stream payload (+ ready, valid)" and "packet data (+ first, last)"
<whitequark[cis]>
it's not the best situation but it's not that bad either, and it solves a really sticky problem of trying to get everyone to agree in which order things are wrapped, which is probably impossible
<whitequark[cis]>
there's also lane strobes in the mix
<zyp[m]>
yes
<zyp[m]>
I've thought about them, but not needed them yet and hence not added them yet
<whitequark[cis]>
I think what I'll do is to propose a minimal stream interface which just has payload, ready, valid, plus the options to set ready/valid to be always 1, and add middleware for FIFOs but that's it
<whitequark[cis]>
which will already be super useful
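The minimal proposal above can be modeled as a toy in plain Python. `StreamSignature` and the `always_valid`/`always_ready` flags are hypothetical names for illustration, not actual Amaranth API:

```python
from dataclasses import dataclass

@dataclass
class StreamSignature:
    """Toy model: payload, ready, valid, where ready and/or valid
    may be fixed to a constant 1 instead of being real wires."""
    payload_width: int
    always_valid: bool = False  # producer that never stalls: valid is C(1)
    always_ready: bool = False  # consumer that never backpressures: ready is C(1)

    def members(self):
        m = {"payload": ("Out", self.payload_width)}
        m["valid"] = "C(1)" if self.always_valid else ("Out", 1)
        m["ready"] = "C(1)" if self.always_ready else ("In", 1)
        return m

# e.g. a sink that can always accept data advertises a constant ready:
sig = StreamSignature(8, always_ready=True)
assert sig.members()["ready"] == "C(1)"
assert sig.members()["payload"] == ("Out", 8)
```

Fixing ready or valid to a constant lets generic infrastructure connect to sources and sinks that don't implement handshaking, without special-casing them.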
<zyp[m]>
that's what I started with
<whitequark[cis]>
and then we can decide exactly how to build the addons on top of that
<zyp[m]>
so my basic streams will already be compatible with that
<whitequark[cis]>
I'm actually unsure whether we need separate middleware FIFOs or if existing ones should grow a source/sink or whatever methods
<whitequark[cis]>
which repackage the existing ports as a stream interface
<whitequark[cis]>
"read endpoint" / "write endpoint" with the idea that we have a stream.Endpoint rather than stream.Interface?
<whitequark[cis]>
r_ep
<zyp[m]>
I've been calling my streams input and output, I find those names faster to reason about than sink/source despite having used litex streams for years
<whitequark[cis]>
yeah I think source/sink are probably bad
<whitequark[cis]>
in_ep / out_ep seem perfectly fine
<whitequark[cis]>
but that's not what the question was about
<zyp[m]>
I'm also not sure it's useful to distinguish between streams and other interfaces
<whitequark[cis]>
I'll address that but I want to discuss the question I asked first
<whitequark[cis]>
which was about wanting or needing separate middleware FIFO classes
<zyp[m]>
you're not invoking stream.Endpoint in any case, you're doing In(StreamSignature) or something
<zyp[m]>
depends how big the changes would be, I guess
<zyp[m]>
if the existing classes are easy to migrate, do that
<zyp[m]>
if not, make a wrapper
<whitequark[cis]>
the main question would be "do we want to be able to specify the shape of the FIFO contents?"
<whitequark[cis]>
actually, this is useful even without streams
<zyp[m]>
yes
<zyp[m]>
and I've wanted to propose that for Memory too
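The shape-instead-of-width idea could work roughly like this toy sketch, where `shape_width` is a hypothetical stand-in for deriving storage width from a shape (the way `Shape.cast(...).width` would in real Amaranth):

```python
# Toy version: a FIFO or Memory accepting a shape would derive its raw
# storage width from it, so callers can pass a struct-like layout directly.
def shape_width(shape):
    if isinstance(shape, int):      # plain bit count
        return shape
    if isinstance(shape, dict):     # struct-like: {"field": shape}
        return sum(shape_width(s) for s in shape.values())
    raise TypeError(f"not a shape: {shape!r}")

# e.g. a FIFO entry carrying 8-bit data plus first/last framing:
entry = {"data": 8, "first": 1, "last": 1}
assert shape_width(entry) == 10
```

The FIFO would store `shape_width(entry)` raw bits internally but present reads and writes as views of the original shape, which is what makes it usable with arbitrary stream payloads.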
<whitequark[cis]>
reasonable
<whitequark[cis]>
want to write an RFC for FIFOs and Memory? probably separate ones
<whitequark[cis]>
I'd be quite eager to have this in both cases, but especially for FIFOs
<zyp[m]>
the FIFO one should probably be done after the streams RFC, the Memory one doesn't depend on anything
<whitequark[cis]>
I think neither of them depend on streams
<zyp[m]>
is there any other stuff that takes a width now that could take a shape instead?
<whitequark[cis]>
if we aren't making new classes, what happens with FIFOs post-streams is that they gain two properties, r_ep and w_ep (preliminary name) that return a stream.Interface or whatever
<zyp[m]>
did FIFOs get signatures yet?
<whitequark[cis]>
nope
<zyp[m]>
that should be part of it then
<zyp[m]>
I'd be more inclined to make the streams the primary interface and the existing properties compatibility shims that are deprecated
<whitequark[cis]>
one more radical option would be to deprecate the current interface entirely, and instead have fifo.r.ready/fifo.r.valid/fifo.r.data and fifo.r is just a stream
<whitequark[cis]>
and we have fifo.r_rdy aliasing fifo.r.ready
<zyp[m]>
exactly
<whitequark[cis]>
ok, I like this
<whitequark[cis]>
actually, fifo.r_rdy is an alias for fifo.r.valid, confusingly
<whitequark[cis]>
which is honestly all the more reason to deprecate the old interface
<zyp[m]>
I'd like to standardize the names of input/output streams while we're at it, so it's easier to programmatically connect together e.g. a list of blocks
<whitequark[cis]>
like litex's pipeline?
<zyp[m]>
yes, e.g. for source, sink in itertools.pairwise(pipeline): connect(m, source.output, sink.input)
<whitequark[cis]>
well, you could always write a function that examines the signature and finds the one source/sink in it
<whitequark[cis]>
I think I want FIFO's output to be called fifo.r and not fifo.output
<zyp[m]>
which doesn't look very useful on its own, but it's used with a temporary, like cross_connect(m, cmd_dispatcher.get_port(handler.cmd), handler)
<whitequark[cis]>
interesting
<zyp[m]>
which is much nicer than passing in a reference to the cmd_dispatcher so the handler can request a port itself
<zyp[m]>
less stuff to mock when writing a testbench, for one
<whitequark[cis]>
I think the standardized input/output naming breaks when you have, well, more than one input/output
<whitequark[cis]>
but also when it's ambiguous because of the context
<zyp[m]>
yes, but you can have subinterfaces that have an input/output each
<zyp[m]>
like each handler port from the dispatcher
<zyp[m]>
by the way, did anybody do any ethernet L2 implementations in amaranth yet?
<zyp[m]>
liteeth is the main migen dependency I've got left in the project I'm working on
<whitequark[cis]>
sounds solvable :)
<whitequark[cis]>
is the project open source?
<zyp[m]>
yeah, but it's an unpublished pile of mess at the moment
<zyp[m]>
I think I've mentioned it before, it's a replacement controller for a three decades old fanuc six axis robot arm
<zyp[m]>
it was implementing the motor vector transforms in migen that inspired me to look at implementing fixedpoint for amaranth, and then everything snowballed from there, to the point where I've now ported almost all the migen code I wrote for it to amaranth
<whitequark[cis]>
nice!
<whitequark[cis]>
oh speaking of, did I already suggest you write up fixed point as RFC?
<zyp[m]>
it's still building with a litex toplevel, using the glue code I wrote for orbtrace and the platform proxy addition I recently made
<zyp[m]>
yeah, it's on my TODO list
<whitequark[cis]>
I think I did and that's how we got RFC 28
<zyp[m]>
yep
<zyp[m]>
it's just a matter of finding time, there's so many different things to work on :)
<whitequark[cis]>
yep!
<whitequark[cis]>
I do want to reiterate that this RFC would be very appreciated
<tpw_rules>
i've been doing a little bit of fixed point, but it was like 10 lines of it so i didn't really even wrap it up in anything
<key2>
where would be the place for libs such as cordic / fft?
<galibert[m]>
key2/amaranth-math ? :-)
<key2>
ah
<galibert[m]>
Personally I think an amaranth-math would make sense (including fp32 mac and friends) but I don't think anybody has volunteered the bandwidth at this point
<ravenslofty[m]>
that and implementing floating point math correctly is Difficult™️
<vipqualitypost[m>
i never really considered how complicated it is!
<zyp[m]>
<zyp[m]> "another potential solution would..." <- I just had the (cursed?) realization that I can already achieve this by simply adding a picosecond delay before setting the values that the other testbench shouldn't observe until the next tick
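The picosecond-delay trick works because events at identical timestamps have no defined order, while a 1 ps nudge forces the write after every same-tick read. A toy event queue (not the Amaranth simulator, just a heapq sketch) shows the mechanism:

```python
import heapq
import itertools

# Minimal discrete-event queue: (time_ps, seq, action) tuples, where seq
# breaks ties at the same timestamp by insertion order only.
counter = itertools.count()
events = []

def schedule(time_ps, action):
    heapq.heappush(events, (time_ps, next(counter), action))

log = []
state = {"sig": 0}

schedule(1000, lambda: log.append(("tb1_read", state["sig"])))
schedule(1001, lambda: state.update(sig=1))  # write nudged by 1 ps
schedule(1000, lambda: log.append(("tb2_read", state["sig"])))

while events:
    _, _, action = heapq.heappop(events)
    action()

# Both testbenches reading at t=1000 observed the old value; the write
# only landed at t=1001, so the race is gone regardless of insertion order.
assert log == [("tb1_read", 0), ("tb2_read", 0)]
assert state["sig"] == 1
```

Without the nudge, the write would sit at t=1000 too, and whether a reader sees 0 or 1 would depend on scheduling order alone.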
<whitequark[cis]>
yep
<zyp[m]>
before that, I put a stream buffer in front of the DUT, which naturally also worked since it breaks the combinational connections between the testbenches
<whitequark[cis]>
that wrap reminds me of how people used to have def ports(self): before lib.wiring
<zyp[m]>
well, in this case it's meant to be an opaque value
<whitequark[cis]>
it's not really something we should be having; it's indicative of a design issue in either lib.wiring or StreamInterface
<zyp[m]>
possible
<zyp[m]>
I was thinking a bit of your stream transformers, and I guess it'll effectively be something in between a signature and a value castable/view.Layout
<whitequark[cis]>
yep
<whitequark[cis]>
I was thinking about whether it should be a Signature with all members pointing same direction, or a value castable
<whitequark[cis]>
and the answer is the latter, since the members of a signature must match on both sides
<whitequark[cis]>
... unless we decide to allow having a FIFO with such a signature for example
<whitequark[cis]>
I think I would probably try going with a ValueCastable (even just a data.Struct, at least to begin with), and add a warning to wiring.connect for shape mismatch
<zyp[m]>
I figure we want the type checking that connect() lacks for signatures with different value castables that cast to identical types
<whitequark[cis]>
yes, hence the warning
<whitequark[cis]>
I always thought that such type checking would eventually be added, in some form at least
<zyp[m]>
and I'm also kinda worried about order mismatches if the layout doesn't come from «one true source»
<whitequark[cis]>
that's why I want lib.stream to be universally used, yes
<zyp[m]>
universal use means enough flexibility to support any use case
<whitequark[cis]>
I think the transformer concept is quite flexible