<whitequark[cis]>
I was thinking that I probably do not want the base Stream type to always support packetization (first/last)
<whitequark[cis]>
I am uncertain whether packet boundaries are a part of the payload conceptually or of the framing
<whitequark[cis]>
I think it is probably payload
<cr1901>
Is "now" an okay time to discuss something I want in streams, along with presenting my case for "why"?
<whitequark[cis]>
ye
<cr1901>
Alright, so I don't think it needs to be as complex as AXI streams, but I would like Amaranth streams to be source and dest-aware, even if the source/dest is as simple as a single bit field.
<whitequark[cis]>
elaborate?
<cr1901>
This is because my AXI streams always look a specific way: I have a write and read stream. The write stream has a "data" and "control information to IP" destination. The read stream has a "data" and "control information from IP" source.
<cr1901>
The idea being that I don't want there to be the possibility to send control and data to/from an IP simultaneously
<whitequark[cis]>
okay, no, this is definitely out of scope
<cr1901>
So I use streams to "eliminate that possibility" (the state of "send control/data simultaneously" is not representable)
<cr1901>
ahhh
<whitequark[cis]>
a stream is a strictly unidirectional channel
<whitequark[cis]>
(with feedback)
<cr1901>
yes, these are unidirectional streams
<cr1901>
just two of them
<whitequark[cis]>
okay, go ahead in case I misunderstood you
<cr1901>
There is a write stream to the IP, and a read stream _from_ the IP. They operate completely independently from each other. So a unidirectional Amaranth stream can be used twice to implement both
<whitequark[cis]>
yes
<cr1901>
I could have _4_ unidirectional streams. 1 write data, 1 write control data, 1 read data, 1 read control data. But I know a priori that I will never have both write streams or read streams active at once
<whitequark[cis]>
okay
<cr1901>
So "why juggle 4 streams when I can use two and multiplex regular data and control data" at different points in time for read and write channels?
<whitequark[cis]>
what benefit does this provide?
<whitequark[cis]>
it is strictly worse in terms of area, it is potentially worse in terms of delay (stream feedback paths are often on the critical path)
<cr1901>
The states of "control and regular data" streams being used simultaneously are unrepresentable.
<whitequark[cis]>
yes, that is true
<cr1901>
(Also is it strictly worse in terms of area? You have to have logic to make sure that you're not using ctrl and write streams simultaneously in the 4 stream case)
<whitequark[cis]>
you need that logic too in the combined case if your source can attempt to write both control and data
<whitequark[cis]>
and if it does not, you do not need any additional logic since it is ensured by design
<whitequark[cis]>
I'm saying it is strictly worse because it creates congestion and it needs an additional mux
* cr1901
nods
<cr1901>
I buy the idea that "making those states unrepresentable" isn't that big of an advantage/something that one is likely to screw up. It's just how I've written all my streams so far.
<whitequark[cis]>
I thought that at first but then realized it's not an either/or
<whitequark[cis]>
what stops you from adding, in the payload, one bit for control/data?
<whitequark[cis]>
with lib.data, you can make that a discriminated union very easily
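A minimal sketch of such a discriminated union using `amaranth.lib.data`; the field names below are illustrative, not anything settled in the discussion:
```python
from amaranth.lib import data

# One tag bit selects which interpretation of the shared 8-bit body applies,
# so "control and data at the same time" is unrepresentable by construction.
payload_layout = data.StructLayout({
    "is_ctrl": 1,                   # 0 = regular data, 1 = control word
    "body": data.UnionLayout({
        "data": 8,                  # regular data byte
        "ctrl": 8,                  # control information for the IP
    }),
})
```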
<cr1901>
(what stops you from adding, in the payload, one bit for control/data?) Nothing, I'm still in Verilog mode. That probably does what I want
<cr1901>
Yea, withdrawn I think; a tagged union probably does what I want
<whitequark[cis]>
having any functionality in core streams is expensive because basically every single thing needs to potentially know about it
<whitequark[cis]>
this is why I think even the "first" signal should be a part of payload if possible
<cr1901>
I don't particularly want a proliferation of different streaming protocols defined in the data payload, but I can defend creating specialized fifos/routers for a single data/control bit
<cr1901>
for my own use case*
<cr1901>
Maybe there's a way to parameterize a router based on fields defined in the data payload, such that the routing function is a lambda
* cr1901
is thinking out loud
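A hypothetical sketch of that idea: a demux whose steering decision is a caller-supplied function over payload fields. `PayloadRouter`, `stream_sig`, and the member names are all inventions for illustration; no such API exists:
```python
from amaranth import Module
from amaranth.lib import wiring
from amaranth.lib.wiring import In, Out

def stream_sig(payload_shape):
    # illustrative stand-in for the stream signature under discussion
    return wiring.Signature({
        "payload": Out(payload_shape),
        "valid":   Out(1),
        "ready":   In(1),
    })

class PayloadRouter(wiring.Component):
    """Demux one input stream to n outputs, steered by route_fn(payload)."""
    def __init__(self, payload_shape, n_outputs, route_fn):
        self._route_fn = route_fn
        super().__init__(wiring.Signature({
            "i": In(stream_sig(payload_shape)),
            "o": Out(stream_sig(payload_shape)).array(n_outputs),
        }))

    def elaborate(self, platform):
        m = Module()
        sel = self._route_fn(self.i.payload)   # e.g. lambda p: p.is_ctrl
        for n, port in enumerate(self.o):
            m.d.comb += port.payload.eq(self.i.payload)
            with m.If(sel == n):
                m.d.comb += [
                    port.valid.eq(self.i.valid),
                    self.i.ready.eq(port.ready),
                ]
        return m

# usage, with the tagged payload sketched earlier:
# router = PayloadRouter(payload_layout, 2, lambda p: p.is_ctrl)
```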
<cr1901>
(Also, I had a control channel that recently needed an upconverter. I could use the standard lib upconverter. I'd just strip the tagged union bit before sending the data to the upconverter)
<whitequark[cis]>
it is very likely there will be a proliferation of streaming protocols defined in the data payload
<whitequark[cis]>
I do not really see a way to avoid it
<cr1901>
I think an ad-hoc protocol is defensible in my particular case, b/c 1-bit of routing info is simple enough that 1. I don't mind doing it myself, 2. It's one extra bit of data. How much complexity can you fit into one bit?
<cr1901>
(don't answer that)
<whitequark[cis]>
I mean globally
<whitequark[cis]>
not in your specific case
<cr1901>
Ahhh, well hopefully others will document their own protocols well. Or the minimal base is "good enough" that most ppl don't feel the need to extend it. I don't think most ppl use most of, for example, AXI anyway (position/null bytes?). It's a little unfortunate that I ended up really liking the src/dst parts.
<whitequark[cis]>
there is no intent to copy the entirety of AXI4-Stream
<whitequark[cis]>
only to get the best ideas
<cr1901>
Sure. I'm making an analogy
<jevinskie[m]>
Maya, thanks. I saw that and went back to the team to work on clearly speccing out requirements, goals, non-goals and such, so we're all satisfied with it and will be pleasantly happy with, and unsurprised by, the deliverable.
<Maya[m]>
Thank you. Just making sure it was not lost in some way
<charlottia>
Query: expected behavior from subclassing a Component? We currently do not note the superclass's annotations, but I think we probably should.
<charlottia>
(we check `getattr(type(self), '__annotations__')` but we may want to accumulate, riding the MRO up to `Component` exclusive)
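A sketch of the accumulation being described; `accumulated_annotations` is a hypothetical helper, not the actual implementation:
```python
from amaranth.lib.wiring import Component

def accumulated_annotations(obj):
    # Walk the MRO from base to most-derived, up to Component exclusive,
    # so subclass annotations can override the superclass's.
    annotations = {}
    for klass in reversed(type(obj).__mro__):
        if klass is Component or not issubclass(klass, Component):
            continue
        annotations.update(klass.__dict__.get("__annotations__", {}))
    return annotations
```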
<zyp[m]>
<whitequark[cis]> "I was thinking that I probably..." <- I agree, one of the points of frustration I've had with litex streams is not knowing whether a module implements the «optional» signals, so one of the rules I want for a stream convention is that all signals that exist in a stream signature need to have expected behavior
<zyp[m]>
the same goes for ready; a stream source that doesn't support backpressure shouldn't take a ready signal, to make it obvious that it doesn't
<zyp[m]>
also, I've started to think that since a stream interface is effectively any interface with a given set of signature members, it might be a good idea to limit the scope of the first RFC to just a «stream convention» that defines how a stream interface should look and behave
<zyp[m]>
because I went down the rabbit hole of «if first, last and ready are optional members, what other optional members could be useful to have?»
<zyp[m]>
<whitequark[cis]> "this is why I think even the "..." <- this is a somewhat compelling argument, since an upconverter could then take a `StructLayout({'data': 8, 'last': 1})` and produce an `ArrayLayout(StructLayout({'data': 8, 'last': 1}), 4)` without any extra code support, but it's also dangerous since `connect()` will happily connect it to a stream interface with a `StructLayout({'data': 32, 'last': 4})` payload
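The danger zyp describes is easy to demonstrate: both layouts below are 36 bits wide, so nothing width-based can tell them apart (a small illustrative check, not code from the RFC):
```python
from amaranth.lib import data

elem = data.StructLayout({"data": 8, "last": 1})

upconverted = data.ArrayLayout(elem, 4)                    # 4 lanes of {data, last}
flat        = data.StructLayout({"data": 32, "last": 4})   # one flat {data, last}

assert upconverted.size == flat.size == 36   # same width, different meaning
```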
<adamgreig[m]>
<whitequark[cis]> "this is why I think even the "..." <- This is how I've done it historically when my "streams" move through eg asyncfifo or skid buffers or anything else that doesn't really need to deal with it, seems to work fine
<zyp[m]>
the problem isn't really the blocks that don't need to care, it's the blocks that do need to care
<zyp[m]>
e.g. in the case of upconverting a single byte wide packetized stream to a multibyte stream with lane enables, you'll want the converter to be able to emit a non-full transfer if the end of a packet is unaligned, so it doesn't sit and wait for the beginning of the next packet to fill the transfer containing the last byte of the previous packet
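A worked example of that unaligned-end case, assuming a 1-byte-to-4-byte upconverter with per-lane enables (the `strobe` naming is an assumption):
```python
# input stream (1 byte per transfer):   B0  B1  B2  B3  B4(last)
#
# output stream (4 lanes per transfer):
#   transfer 0: lanes = [B0, B1, B2, B3], strobe = 0b1111, last = 0
#   transfer 1: lanes = [B4, --, --, --], strobe = 0b0001, last = 1
#
# Without `last`, the converter could not know that B4 ends the packet; it
# would stall, waiting for three bytes of the *next* packet to fill transfer 1.
```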
<whitequark[cis]>
<zyp[m]> "the same goes for ready; a..." <- zyp: I'm not sure if `ready` can be optional
<whitequark[cis]>
ah, actually, no, we can handle this in a good way already
<whitequark[cis]>
we can make ready an input that actually has a Const(1), and connect() would only let you connect a Const(1) there (this isn't actually implemented right now but will be easy to do)
<galibert[m]>
I know an MPEG decoder chip that says “when I raise drq you can transfer one full block then wait until the next edge”. Blocks are a configurable size, around 32 bytes
<galibert[m]>
So it’s slightly different than a ready signal
<whitequark[cis]>
galibert: I think that might not fit the stream abstraction well
<galibert[m]>
Possibly not
<galibert[m]>
Not everything has to fit it thankfully
<whitequark[cis]>
yes
<zyp[m]>
<whitequark[cis]> "we can make ready an input..." <- this sounds interesting if it could be propagated through multiple modules in a stream pipeline, but I'm not sure that's possible
<whitequark[cis]>
hmm
<whitequark[cis]>
we do not currently propagate data shapes either
<whitequark[cis]>
so it's a parameter you would give when creating the modules
<zyp[m]>
and I think it'd be more obvious if we just omit ready from the stream signature if it's not used
<zyp[m]>
the only advantage of having a ready that's fixed to 1 is that it'd allow you to connect a source with flow control to a sink without flow control, without a gasket
<zyp[m]>
which is not very different from taking a shape castable for the payload, just more flexible
<galibert[m]>
So it’s decided there’s a -soc meeting Friday?
<zyp[m]>
with regards to compatibility, I figure a defined superset of stream conventions and a set of gaskets to adapt between different implemented feature sets is better than a limited feature set with various third party extensions
<galibert[m]>
We probably need a way to easily insert gaskets for non-stream busses too I think.
<galibert[m]>
I was wondering, is it thinkable to build modules that could connect through wishbone, axi3 or other without really caring about which it is? That kind of genericity?
<galibert[m]>
I’m not saying inserting converters, converting burst characteristics doesn’t work well in some directions, and it’s extra gates
<zyp[m]>
<whitequark[cis]> "I'm wondering if we should..." <- I think it's better to have it, so we can make e.g. a stream block that takes a packetized stream without flow control, monitors upstream readiness, and on overflow sends an abort signal and discards the rest of the packet
<galibert[m]>
Discarding is really annoying though
<zyp[m]>
if you're operating with packets, you want to discard a whole packet, not arbitrary bytes in the middle of it
<galibert[m]>
When you’re on the same friggin’ die it’s better not to
<zyp[m]>
I'm not sure I understand
<galibert[m]>
I mean it’s less expensive to route a ready or suspend or whatever signal than creating memory that is used in the rare cases when a packet has to be discarded
<zyp[m]>
oh, yeah, absolutely, whenever you can have flow control, you should
<zyp[m]>
but when you're connecting to a USB or Ethernet PHY or something like that, you've got a firehose of data that you can't turn off
<galibert[m]>
It’s sane to have amaranth flows say “you must”
<galibert[m]>
Yes and no
<galibert[m]>
You have a “bugger off” protocol on each, which can mean discard, but that’s at a higher level than the stream
<galibert[m]>
Ethernet phy has no idea about the framing iirc
<zyp[m]>
the TransactionalizedFIFO that I linked to above is part of how LUNA implements flow control for USB
<zyp[m]>
upstream of it, you don't have flow control, so if it overflows, it discards the whole packet
<zyp[m]>
downstream of it, you've got a nice flow controlled stream that you can read at whatever pace you like
<zyp[m]>
and IMO it's useful to have that as a reusable block that you can put in the middle of a stream pipeline
<whitequark[cis]>
<galibert[m]> "I was wondering, is it thinkable..." <- we've discussed this before for Amaranth SoC. the CSR interface is made to make CSRs independent of the underlying bus, but if you have memory windows, you almost always want to use advanced features like bursts
<whitequark[cis]>
and it is not practical to make universal adapters to those
<galibert[m]>
Yeah, I don’t believe in adapters. I’ll have to dig down to what amount of genericity would be needed and how much code could actually be shared
<galibert[m]>
Will need experimentation
<whitequark[cis]>
<zyp[m]> "but when you're connecting to..." <- you have to drop *somewhere*
<zyp[m]>
exactly, and I'd like to be able to have a reusable block that takes care of that
<whitequark[cis]>
but should the rest of the stream infrastructure have to pay for it?
<galibert[m]>
Especially since you may consider that some packets are more important than others and suddenly your fifo gets complicated
<zyp[m]>
I think it is reasonable for library blocks to only support a specific subset of the whole stream convention, as long as it's being explicit about it
<jfng[m]>
genericity (as in, shared abstraction) between WB/AXI, would be reduced to their common denominator, which would probably just be single cycle transactions...
<galibert[m]>
jfng: that would be exactly what I’d like to avoid
<jfng[m]>
yeah, i don't see a use-case for that
<whitequark[cis]>
jfng: you can always use a CSR for that
<whitequark[cis]>
a GenericField in a register, it would be the same functionality
<jfng[m]>
whitequark[cis]: if backpressure isn't needed, yes
<galibert[m]>
I want to try (and possibly fail hard) for a kind of genericity that allows for “optimal” designs for both without having to copy/paste 90 % of the code
<whitequark[cis]>
yeah
<galibert[m]>
Your car stuff for instance is 90% not bus-type-dependent
<whitequark[cis]>
car stuff?...
<galibert[m]>
Csr damnit autocorrect
<whitequark[cis]>
I mean, CSRs are 100% independent from the bus, this is what our design does
<whitequark[cis]>
but CSRs are an easy case
<galibert[m]>
Most cases are easy honestly
<galibert[m]>
Bus-wise
<galibert[m]>
Complexity is after
<galibert[m]>
Arbiter, decoder, csr, things like clock generators, serial, spi, dma… much of the complexity is not in the bus interaction
<galibert[m]>
And having like four different busses is annoying
<galibert[m]>
Anyway, I’ll see what I can do, and if I get results I’ll share them :-)
<whitequark[cis]>
okay, I'm out of a meeting and free to talk about streams
<whitequark[cis]>
there are several questions already. for example, should all streams support multiple lanes potentially, or should this be limited to e.g. MultilaneStreamSignature?
<whitequark[cis]>
the reason to limit it would be that otherwise you will have to write stream.data[0] everywhere to get the only data element
<whitequark[cis]>
but maybe it's not so bad
<whitequark[cis]>
or, we could have `stream.d` be an alias to `stream.data[0]` so that in cases where you know you always have exactly one lane, you can use `stream.d` easily, but when you can have multiple lanes, such as in stream infra or in PCIe or in USB3, you use `for lane in range(lanes): do_something(stream.data[lane])`
<whitequark[cis]>
of course, Out().array(1) cannot be connected to Out() with no array
<whitequark[cis]>
allowing that is another option, but I don't like it too much
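One way that `stream.d` shorthand could look, sketched as a property on the interface object; the signature layout and the `LaneStream` class are illustrative only:
```python
from amaranth.lib import wiring
from amaranth.lib.wiring import In, Out, PureInterface

def multilane_sig(shape, lanes):
    # illustrative: per-lane data and valid, shared ready
    return wiring.Signature({
        "data":  Out(shape).array(lanes),
        "valid": Out(lanes),
        "ready": In(1),
    })

class LaneStream(PureInterface):
    @property
    def d(self):
        # single-lane shorthand: alias of data[0], as suggested above
        assert len(self.data) == 1, "d is only meaningful with exactly one lane"
        return self.data[0]

stream = LaneStream(multilane_sig(8, 1))
byte = stream.d          # the same signal as stream.data[0]
```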
<zyp[m]>
I'd be inclined to have the array be part of the shape, not the signature
<whitequark[cis]>
I don't think that's the right approach for two reasons
<whitequark[cis]>
the main one is that I want lane compatibility to be done when connecting interfaces
<whitequark[cis]>
e.g. I do not want it to be easy to connect the data of a 4-lane 8-bit stream to a 32-bit stream with no converter
<whitequark[cis]>
this will fail right now because of valid width check
<whitequark[cis]>
but you can still .eq it easily and that's most likely incorrect
<whitequark[cis]>
the second one is that having per-lane data signals split makes debugging easier and indicates intent
<whitequark[cis]>
when working with something like 8b10b, you want each lane to be displayed as K27.5 or whatever in the simulator
<whitequark[cis]>
and this is not very easy if you squish them all into the data array
<zyp[m]>
in that case I absolutely think it'd make sense to separate StreamSignature and MultilaneStreamSignature
<whitequark[cis]>
does the stream infrastructure ever need to care about packet boundaries (first/last)?
<vegard_e[m]>
Yes. I'll elaborate on why later, got to run
<whitequark[cis]>
in that case that creates a problem because you have PacketStreamSignature and PacketMultilaneStreamSignature
<whitequark[cis]>
either we have composition (no idea how) or we allow lanes in base streams
<saberhawk[m]>
Details like that should always be part of the data payload. Why should a SyncFIFO be any different from a stream?
<dlharmon>
The AXI4 bus, as opposed to AXI4-Stream, has WSTRB as part of the write channel data to indicate which bytes are to be written, along with a single valid.
<whitequark[cis]>
that would make composing streams easier, yes
<dlharmon>
I prefer simple, with a single valid, ready, and user-defined data to include last, byte enables, etc.
<whitequark[cis]>
can you elaborate on the user defined data part? do you mean the stream payload, or sidebands?
<whitequark[cis]>
(I have an hour-long meeting in 20 min but we can continue for now)
<dlharmon>
I mean as payload.
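dlharmon's preference, sketched as a payload layout in which everything beyond valid/ready is user-defined payload (the field names are illustrative):
```python
from amaranth.lib import data

payload = data.StructLayout({
    "data": 32,
    "strb": 4,    # per-byte enables, WSTRB-style
    "last": 1,    # end-of-packet marker
})
```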
<whitequark[cis]>
okay, that's something I would also like
<whitequark[cis]>
do you think the stream infrastructure in the stdlib (upconverter, etc) will ever need to consider e.g. first/last?
<whitequark[cis]>
I think this is for me the key bit that determines if it should be in the payload or not
<whitequark[cis]>
I think the stream infrastructure should only ever rely on the fields defined in the StreamSignature (or signatures) and without looking at the payload
<dlharmon>
I can't think of any scenario where it would need to be considered but could be missing something.
<adamgreig[m]>
"sometimes" imo. like the crc module needed a "last" indicator but a regular fifo can probably pass it through transparently
<adamgreig[m]>
The "transactional" fifo discussed above needs to know about it too, if it's to abort whole frames
<whitequark[cis]>
the CRC module is not stream infrastructure, right?
<adamgreig[m]>
Sure, no, it would just have a stream interface
<whitequark[cis]>
it's an endpoint, endpoints naturally care about the contents of the data payload
<whitequark[cis]>
and the data payload in this case will be just data: 32 or w/e and last
<whitequark[cis]>
or first depending on how we make it
<whitequark[cis]>
I am not sure about the transactional FIFO, that sounds protocol specific
<whitequark[cis]>
should we even try to make it generic enough to include in the stdlib or soc?
<adamgreig[m]>
Personally I don't expect so
<adamgreig[m]>
Like I already have my quite niche fifo that accepts whole packets, commits them when it gets to the end or otherwise rewinds the write pointer if they get aborted, and is specialised for my packet structure, I just want a common stream interface on them
<adamgreig[m]>
It means it's going to have some extra control signals like end of frame, commit, abort
<adamgreig[m]>
(actually commit and eof were the same iirc, not got it in front of me)
<whitequark[cis]>
if you have multiple lanes, you'd do MultilaneStreamSignature which is just `StreamSignature(Signature({"lane": Out(data_payload).array(lanes), "strobe": Out(1).array(lanes)}))`
<dlharmon>
The updated StreamSignature draft is functionally equivalent to what I've been using for the last few years so will certainly work for my uses.
<whitequark[cis]>
UpConverter and DownConverter would have MultilaneStreamSignature at their boundary so they could only be connected to multilane endpoints
<whitequark[cis]>
but you could still use a stream.AsyncFIFO (or whatever we call it) to buffer and transfer the multilane stream because it's compatible
<whitequark[cis]>
it would just not look at the lanes inside
<whitequark[cis]>
dlharmon: alright, perfect! any other thoughts or wishes for functionality to include?
<whitequark[cis]>
which stream infrastructure components do you foresee using? I'm thinking lane converters, sync and async FIFOs, for now
<jfng[m]>
skid buffers, maybe ?
<adamgreig[m]>
yea, a skid buffer would be great
<jfng[m]>
though they could always be added later
<whitequark[cis]>
what would a skid buffer look like?
<dlharmon>
Simple is good. Probably lots of FIFOs, few to no lane converters, skid buffers would get used if they were available. An interface to a stream in simulation would be really nice.
<adamgreig[m]>
mine is ready/valid/data on both sides, but it can present ready to its input for a cycle when the output side isn't ready yet
<adamgreig[m]>
I also stuck a reset on mine to clear whatever it's holding on to
<whitequark[cis]>
dlharmon: we will be rethinking the simulation interface so that something like `await stream.write(0x1234)` is feasible
<adamgreig[m]>
adapters to stream into/out of Memory would be useful too
<whitequark[cis]>
(right now it is not, Migen ancestry is hell)
<whitequark[cis]>
adamgreig[m]: only one cycle?
<whitequark[cis]>
adamgreig[m]: how do you envision these?
<jfng[m]>
whitequark[cis]: something functionally similar to a SyncFIFO with a depth of 1, and first-word fallthrough
<adamgreig[m]>
whitequark[cis]: yea, I think a single pipeline stage is typical
<whitequark[cis]>
jfng: I am thinking that maybe skid buffer functionality should be built into `stream.FIFO`
<whitequark[cis]>
since it is just something that is subtracted from the level when determining if the FIFO is ready or not
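That subtraction, sketched against the existing `SyncFIFOBuffered`; the `margin` parameter and the advertised `ready` signal are illustrative:
```python
from amaranth import Module, Signal
from amaranth.lib.fifo import SyncFIFOBuffered

m = Module()
m.submodules.fifo = fifo = SyncFIFOBuffered(width=8, depth=16)

margin = 2          # slack for words already in flight in upstream registers
ready = Signal()    # advertise this upstream instead of plain "not full"
m.d.comb += ready.eq(fifo.level <= fifo.depth - margin)
```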
<whitequark[cis]>
adamgreig[m]: I need two in Glasgow
<adamgreig[m]>
I think I'm not really explaining the utility of a skid buffer well
<whitequark[cis]>
(two stages)
<whitequark[cis]>
though I guess you could connect two of them in series?
<adamgreig[m]>
it lets you have a pipeline stage between the ready/valid handshake
<adamgreig[m]>
so it's mostly a timing thing rather than a fully fledged fifo buffer
<whitequark[cis]>
hm
<adamgreig[m]>
i.e., it's not "a little overflow in case the data rates in/out don't quite match", for that I want a fifo
<adamgreig[m]>
it's "i want to put a register between the o_valid and i_ready of two stages"
<whitequark[cis]>
in case of Glasgow this is because there are registers (unavoidable) in I/O and in the FX2
<adamgreig[m]>
(because one side or the other has a deep combinatorial path or whatever)
<whitequark[cis]>
two of them
<whitequark[cis]>
and they need to be matched to Glasgow's output stream
<adamgreig[m]>
yea, I guess if you have a two-deep unavoidable pipeline between the ready/valid ends you need to store up to two data words
<jfng[m]>
yes, they are handy to cut long combinatorial chains of ready signals in a deep pipe
<whitequark[cis]>
I see!
<whitequark[cis]>
this seems very handy indeed
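A minimal single-stage skid buffer along the lines just described; the flat `i_*`/`o_*` member names are placeholders, since the stream signature was not settled at this point:
```python
from amaranth import Module, Signal
from amaranth.lib import wiring
from amaranth.lib.wiring import In, Out

class SkidBuffer(wiring.Component):
    """One pipeline stage between the ready/valid handshake: i_ready is
    registered (it depends only on the skid register's state), cutting the
    combinational o_ready -> i_ready path."""
    def __init__(self, width):
        self._width = width
        super().__init__(wiring.Signature({
            "i_data": In(width),  "i_valid": In(1),  "i_ready": Out(1),
            "o_data": Out(width), "o_valid": Out(1), "o_ready": In(1),
        }))

    def elaborate(self, platform):
        m = Module()
        skid_data  = Signal(self._width)
        skid_valid = Signal()   # a word was accepted that the sink didn't take

        m.d.comb += self.i_ready.eq(~skid_valid)

        with m.If(skid_valid):
            # drain the skid register first; input is stalled meanwhile
            m.d.comb += [self.o_data.eq(skid_data), self.o_valid.eq(1)]
            with m.If(self.o_ready):
                m.d.sync += skid_valid.eq(0)
        with m.Else():
            # pass through; capture into the skid register on a stall
            m.d.comb += [self.o_data.eq(self.i_data), self.o_valid.eq(self.i_valid)]
            with m.If(self.i_valid & ~self.o_ready):
                m.d.sync += [skid_data.eq(self.i_data), skid_valid.eq(1)]
        return m
```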
<adamgreig[m]>
looking through my little library of stream stuff, I also have some CDC things like a word-at-a-time CDC handshake to move streams through clock domains
<whitequark[cis]>
is this essentially BusSynchronizer one way + PulseSynchronizer another way wrapped in a Stream?
<adamgreig[m]>
<whitequark[cis]> "how do you envision these?" <- usually for me this is like "some Thing has written into a memory, and I want to trigger dumping the memory out a uart"
<adamgreig[m]>
or vice versa
<adamgreig[m]>
so it's "on trigger, continually assert valid until the whole memory (or whatever length was configured when triggered I guess) is read out into the stream sink"
<whitequark[cis]>
I'm not sure how to make a reusable component for that
<adamgreig[m]>
or equivalently, "on trigger, start to sink a stream from a uart into a Memory"
<whitequark[cis]>
ah I see, so would that take a memory port as an argument?
<adamgreig[m]>
yea, it might be too annoying to make generic, it's not hard to write myself when I need it, but I've used it surprisingly often while prototyping
<whitequark[cis]>
we can consider it
<adamgreig[m]>
yea. or an Interface that matches a memory port's
<adamgreig[m]>
so in my case, signals on self for addr/data/valid/ready
<whitequark[cis]>
adamgreig[m]: oh yeah, good idea!
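A sketch of the triggered memory-to-stream dump; the `MemoryStreamer` name, the member names, and the assumption of an asynchronous read port (`.addr` in, `.data` out combinationally) are all illustrative:
```python
from amaranth import Module, Signal
from amaranth.lib import wiring
from amaranth.lib.wiring import In, Out

class MemoryStreamer(wiring.Component):
    """On `trigger`, stream `count` words from a memory read port."""
    def __init__(self, rdport, depth):
        self._rdport = rdport   # asynchronous read port, passed in as argued above
        super().__init__(wiring.Signature({
            "trigger": In(1),
            "count":   In(range(depth + 1)),
            "o_data":  Out(rdport.data.shape()),
            "o_valid": Out(1),
            "o_ready": In(1),
        }))

    def elaborate(self, platform):
        m = Module()
        addr      = Signal.like(self._rdport.addr)
        remaining = Signal.like(self.count)

        m.d.comb += [
            self._rdport.addr.eq(addr),
            self.o_data.eq(self._rdport.data),
        ]
        with m.FSM():
            with m.State("IDLE"):
                with m.If(self.trigger):
                    m.d.sync += [addr.eq(0), remaining.eq(self.count)]
                    m.next = "STREAM"
            with m.State("STREAM"):
                m.d.comb += self.o_valid.eq(remaining != 0)
                with m.If(remaining == 0):
                    m.next = "IDLE"
                with m.Elif(self.o_ready):
                    m.d.sync += [addr.eq(addr + 1), remaining.eq(remaining - 1)]
        return m
```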
<whitequark[cis]>
tbh, we should make Memory a library construct
<galibert[m]>
adamgreig: that has a big dma feeling
<zyp[m]>
it sounds very similar to litex' DMAReader/DMAWriter
<adamgreig[m]>
I guess, I don't usually think much about DMA in my gateware which tends to not have a soc/cpu sort of thing
<adamgreig[m]>
but yes, autonomous triggered transfers in/out of memory does sound a lot like dma
<crzwdjk>
Seems like a pretty common thing, to want to write a stream of stuff into memory (or read a bunch of stuff from memory)
<adamgreig[m]>
it's been great for gluing a debug uart on that can drain a memory that i've filled with samples or a mirror of a stream or whatever
<zyp[m]>
wishbone on one side, stream on the other
<zyp[m]>
can be controlled by gateware, or have CSRs to be controlled by software
<zyp[m]>
<whitequark[cis]> "do you think the stream infrastr..." <- I mentioned this earlier, the upconverter will need to consider `last` if it should be able to emit an unaligned end rather than waiting for the first bytes of the next packet to fill the whole transfer
<zyp[m]>
(in which case it'll also need some sort of lane validity signal)
<zyp[m]>
in liteeth, this is bolted on as a last_be member in the stream payload, and working out how that was supposed to be used was confusing
<_whitenotifier>
[amaranth-lang/rfcs] whitequark 8407b96 - add a note on connecting constant port members
<whitequark[cis]>
I realized it's neither specified nor implemented, but here you go
<crzwdjk>
Is it possible to have an RFC 2 Component with a variable signature (depending on stuff passed to the constructor for example)? Or is Component only for stuff with a fully fixed signature?
<whitequark[cis]>
Component's only purpose is to extract a fixed signature from Python type annotations
<whitequark[cis]>
if you do not need that, there is no reason whatsoever for you to use Component
<crzwdjk>
Ah okay, thanks
<crzwdjk>
Well, Component's constructor also creates instance attributes from the signature members, which is a handy shortcut.
<whitequark[cis]>
mm, yes
<whitequark[cis]>
I have some minor changes related to that; I'll get back to you in a bit
<crzwdjk>
Cool