<whitequark[cis]>
<Wanda[cis]> "special-case, as in?" <- as in, put something in the platform metadata and then look at it in the buffer override
<whitequark[cis]>
this is how SB_IO_OD should be handled, probably
<Wanda[cis]>
doable
<whitequark[cis]>
<Wanda[cis]> "oh right, here is a use case..." <- it does I guess, but this doesn't seem like a particularly serious issue?
<whitequark[cis]>
ice40 is very routing-rich
<Wanda[cis]>
agreed
<whitequark[cis]>
also nextpnr could be taught to fix that
<whitequark[cis]>
doesn't seem like something we should be surfacing in the language
<Wanda[cis]>
that too
<Wanda[cis]>
whitequark[cis]: as for this, we can instantiate the buffer, but we have no way to assert that it's actually used as open-drain only (ie. with `o` tied to 0)
<Wanda[cis]>
though I consider this a minor problem
<whitequark[cis]>
right
<Wanda[cis]>
overall... idk, I do have a slight preference for including a TristateOutput vs Output distinction, if only so that some weirdass platform cannot surprise us later
<whitequark[cis]>
I have a preference against it, since it definitely complicates the common case non-trivially
<whitequark[cis]>
Wanda[cis]: actually, we do!
<whitequark[cis]>
remember that lib.wiring can check that certain things are wired to a constant
<whitequark[cis]>
so a non-tristatable output would have a signature where oe is `Const(1)`
<Wanda[cis]>
huh
<whitequark[cis]>
and an open-drain-only output would have a signature where o is Const(0)
<Wanda[cis]>
that... actually works
<Wanda[cis]>
and you'd pass whatever information you need for this via metadata, right
<whitequark[cis]>
yep, since it's provided to the constructor, and the constructor creates the signature, that works
<Wanda[cis]>
interface, not signature, right?
<Wanda[cis]>
but yeah, I do think this is workable
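As an illustration of the idea above (a sketch only; the lib.io buffer classes discussed here were still a draft at this point, so the class and member names below are assumptions, not the real API):

```python
# Sketch only: a stand-in buffer interface, not an Amaranth class. The point
# is that a non-tristatable variant pins `oe` to Const(1) (and an open-drain-
# only variant would pin `o` to Const(0)), so wiring/introspection code can
# see that the value is fixed rather than drivable.
from amaranth.hdl import Const, Signal

class SketchOutputBuffer:
    def __init__(self, width, *, tristatable):
        self.o  = Signal(width)
        self.i  = Signal(width)
        # Pinned to a constant when the platform cannot tristate this pin.
        self.oe = Signal(1) if tristatable else Const(1)
```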
<tpw_rules>
zyp[m]: re your rfc, there's no actual async event loop running is there?
<tpw_rules>
like will asyncio.get_running_loop() return something useful?
<whitequark[cis]>
it's completely unrelated to asyncio
<tpw_rules>
ok, that's what i thought
<tpw_rules>
i was wondering about stuff like asyncio.gather, or whether each sim process (function passed to add_testbench/add_process) can only have one thread
<tpw_rules>
is mingling with asyncio a future possibility?
<whitequark[cis]>
async testbench triggers, etc, will always be completely separate from asyncio
<whitequark[cis]>
the only way in which asyncio will be involved is that a simulator could pass an asyncio coroutine through
<tpw_rules>
ok, i think that makes sense
<zyp[m]>
I've been thinking that we might want to add something akin to asyncio.gather() later
<zyp[m]>
it'd be nice to be able to spawn off e.g. a packet_write() and a packet_read() on each end of a stream pipeline and wait for them both to finish
<tpw_rules>
exactly
<zyp[m]>
but that's still something we'd handle internally, because the simulator engine is effectively its own kind of ioloop
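A purely hypothetical sketch of the use case zyp describes; no gather-like simulator API exists at this point, and every name below (sim_gather, packet_write, packet_read, the stream objects) is made up for illustration:

```python
# Hypothetical: spawn a writer and a reader on the two ends of a stream
# pipeline and wait for both to finish, in the style of asyncio.gather().
async def testbench(sim):
    await sim_gather(                        # hypothetical helper
        packet_write(sim, tx_stream, data),  # drive one end of the pipeline
        packet_read(sim, rx_stream),         # consume the other end
    )
```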
<tpw_rules>
could a sim.ticks() substitute for sim.time() for now?
<tpw_rules>
(with a domain argument)
<zyp[m]>
I'm probably gonna write the sim.time() RFC relatively soon
<tpw_rules>
should it have sim.ticks() too? i would say yes
<tpw_rules>
also, python is good at bigints; is "integer number of femtoseconds" the wrong choice for the type?
<zyp[m]>
we should be able to pass it as an input to the functions that currently take seconds as a float as well
<tpw_rules>
which are those?
<zyp[m]>
add_clock and delay
<whitequark[cis]>
what is sim.ticks() even supposed to mean? it's not well-defined
<tpw_rules>
number of ticks for the passed domain
<tpw_rules>
i.e. positive edge events
<zyp[m]>
presumably sim.ticks(domain="sync")
<tpw_rules>
or whichever edge tick is sensitive to. yes, i noted (with domain argument) after, sorry
<zyp[m]>
which effectively is something like sim.time() / clock_interval[domain]
<tpw_rules>
yes
<whitequark[cis]>
still not really well-defined, considering we actually introduce a 180° phase offset on purpose
<zyp[m]>
I think it could make sense, and the argument for having it as another method on the context is that the context can look up the intervals of the added clocks
<whitequark[cis]>
I don't think it should be a part of this RFC, considering that you can easily emulate the functionality (make a loop that runs Tick() and increments a nonlocal variable)
<tpw_rules>
that is fair, i would define it as exactly that.
<zyp[m]>
it's absolutely out of scope for #36, but adding sim.ticks() along with sim.time() when we get around to that sounds pretty reasonable to me
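For reference, a minimal sketch of the workaround Catherine describes, written against the generator-based simulator API of the time (under the RFC 36 async API the loop would await the tick trigger instead); the helper name is made up:

```python
# Count positive edges of a clock domain manually, since there is no
# built-in sim.ticks() yet.
from amaranth.sim import Tick

def make_tick_counter(domain="sync"):
    ticks = 0
    def process():
        nonlocal ticks
        while True:
            yield Tick(domain)   # wait for the next tick of `domain`
            ticks += 1
    return process, (lambda: ticks)

# usage sketch: add the first return value with sim.add_process(); call the
# second one from a testbench to read how many ticks have elapsed so far.
```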
<tpw_rules>
Wanda[cis]: re 55, i'm confused about the verbiage "if direction is a string, it is converted to Direction", is this not the case for all the classes? it is in e.g. amaranth-soc and should be here too
<Wanda[cis]>
<zyp[m]> "I think it could make sense, and..." <- I'd like to point out that domains can also be driven by logic, in which case you'd need to implement it with actual counter per domain
<Wanda[cis]>
tpw_rules: I... don't know what you're suggesting here?
<zyp[m]>
yes, the way I envision it would only work for simulator-driven clocks
<tpw_rules>
Wanda[cis]: that all direction arguments also take e.g. "io" in addition to lib.io.Direction.Bidir
<tpw_rules>
it's specifically pointed out that this is possible for one class (Buffer.Signature), so does that mean it's not possible for the others?
<zyp[m]>
maybe just underspecified everywhere else
<Wanda[cis]>
that's kinda implied (I didn't want the rfc to be overly verbose with details like this, so I only wrote it on the first class)
<tpw_rules>
ok. cause it's just like the middle one in the list
<Wanda[cis]>
... oh
<zyp[m]>
probably was the first one and got rearranged afterwards
<Wanda[cis]>
that'd be because direction on `*Port` is a later addition
<tpw_rules>
i also saw that feature wasn't used in the example
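For illustration, the shorthand being discussed would look something like this (using the draft RFC 55 names, which may differ in the final lib.io):

```python
from amaranth.lib import io

# String form and enum form of the same direction argument.
sig_str  = io.Buffer.Signature("io", 8)
sig_enum = io.Buffer.Signature(io.Direction.Bidir, 8)
```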
<tpw_rules>
Wanda[cis]: is the motivation for dropping XDR > 2 on rfc 55 that it's not going to be implementable without vendor primitives anyway?
<Wanda[cis]>
the motivation is not that it's just not implementable without vendor primitives (so is DDR), but that the interface to it is not obvious
<Wanda[cis]>
because you need a slow clock, and a fast clock, and platforms tend to have requirements about how exactly they are driven, etc.
<tpw_rules>
maybe related, but i also never remember which time i[0] and i[1] refer to. is i[0] after the falling edge and i[1] after the rising? or inactive/active? can that be documented?
<Wanda[cis]>
the RFC doesn't document that, because it doesn't introduce it, it just reuses the current definition
<Wanda[cis]>
but: 0 is the one that gets sent/received first (at the "normal" edge of the domain), 1 is second (at the "opposite" edge of the domain)
<tpw_rules>
ah, it's documented as the opposite in Pin
<tpw_rules>
if that's the only place it's documented then it should definitely be added to the RFC as it deprecates Pin
<Wanda[cis]>
it what
<tpw_rules>
the emphasis that 0 is on the normal edge and 1 is on the opposite
<tpw_rules>
on DDRBuffer.Signature
<Wanda[cis]>
no, what do you mean about the existing Pin documentation?
<tpw_rules>
the opposite from my conception, sorry
<Wanda[cis]>
this RFC is still missing a replacement for lib.Memory simulation functionality and I'd like to wait for a definition of that before hitting merge
<tpw_rules>
it seems strictly better than the current situation, i say merge. my bikeshed vote is posedge on account of i'm a Real Engineer and allergic to pressing more keys on the keyboard
<Wanda[cis]>
(I think it's also a good opportunity to actually see sim helpers in action)
<Wanda[cis]>
oh yeah, about that, I'm leaning to posedge on account of Verilog legacy
<tpw_rules>
i'm a little confused by the meaning and necessity of testbench_helper, i surmise it's just something you'd probably throw on library code to make sure it gets used right?
<whitequark[cis]>
yes
<cr1901>
I will abstain (it looks fine)
<Wanda[cis]>
has the possibility of process_helper been considered? I guess it's not nearly as useful as testbench_helper, so I'm fine if the answer is just "meh, no"
<whitequark[cis]>
Wanda: I'm not sure re: `lib.Memory` as I think we may want to replace `MemoryIdentity` with `MemoryData`
<whitequark[cis]>
I have thought about process_helper and couldn't come up with a clear use case
<zyp[m]>
whitequark[cis]: same
<zyp[m]>
we can always add it later if the need comes up
<whitequark[cis]>
I guess more generally I'm wary of blocking this RFC on even more things that aren't strictly required for its functionality
<Wanda[cis]>
hm
<zyp[m]>
I think there's enough details to work out for lib.Memory helpers that I'd like to keep it out of the scope for this
<Wanda[cis]>
okay, if you want to merge it soon and follow up with lib.Memory out of band, I'm not very opposed to it
<Chips4MakersakaS>
(am here now also)
<tpw_rules>
what was the situation for memory before this? it seems to work okay in my current simulations, does not having the functionality break things or just slow it down?
<whitequark[cis]>
I think I want to work out lib.memory details separately since I think the whole current MemoryIdentity API was a mistake and it's not really in scope for RFC 36
<Wanda[cis]>
tpw_rules: currently you `yield mem[123]` or `yield mem[123].eq(456)`
<zyp[m]>
sim.memory_read/write should already cover existing memory functionality
<Wanda[cis]>
which does not translate to the current interface at all
<Wanda[cis]>
zyp: it does not, since it requires you to access a private field
<tpw_rules>
oh, i currently just poke the ports and don't access memories "out of band"
<galibert[m]>
waves
<Wanda[cis]>
so, I have one minor comment left: __aiter__ sounds like it could also be useful for tick, does this look like something we want to throw in?
<whitequark[cis]>
which doesn't really yield to the async for _ in sim.tick() form
<whitequark[cis]>
that tick+until combo must absolutely be lowered to something more efficient than a busy wait loop in Python, but the overall loop is fine as-is
<jfng[m]>
merge, if `lib.Memory` helpers can be done separately
<Wanda[cis]>
there's also a few guarantees I'd like to have wrt how trigger objects behave
<Wanda[cis]>
eg. something like "if you async for ... in sim.posedge(clk1).posedge(clk2) and await no other trigger objects, you'll get exactly one True for every tick even if the exact grouping of those into iterations is unspecified"
<Wanda[cis]>
but I think that's QoI stuff that we can just hash out in implementation
<galibert[m]>
Curiosity, if you have time: what does async for ... in sim.posedge(clk1).posedge(clk2) mean, programmatically?
<Chips4MakersakaS>
merge
<tpw_rules>
whitequark[cis]: opened with merge, bikeshed vote is to keep "posedge"
<whitequark[cis]>
anyone else has opinions on "posedge" vs "pos_edge"?
<zyp[m]>
galibert[m]: wait for an edge on either of the two clocks
<whitequark[cis]>
(personally leaning towards "posedge")
<zyp[m]>
I think I prefer posedge
<tpw_rules>
whitequark[cis]: Wanda[cis] had said posedge too
<Wanda[cis]>
galibert: wake up your process whenever a posedge on one or the other happens, and give you information on which one (or both) was actually hit
<jfng[m]>
posedge
<Chips4MakersakaS>
Also prefer posedge
<jfng[m]>
rising ?
<whitequark[cis]>
ok, waiting on cr1901
<whitequark[cis]>
oh wait, abstain
<galibert[m]>
Wanda: you have a block after that controlled by the for?
<Wanda[cis]>
yeah
<galibert[m]>
fun
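For galibert's question, a hypothetical illustration using the draft RFC 36 names (sim.posedge and chained triggers; the shipped API may differ), with clk1/clk2 assumed to be signals in scope:

```python
# Each iteration wakes the testbench when either clock has a positive edge
# and reports which of the two triggers actually fired.
async def watcher(sim):
    async for clk1_hit, clk2_hit in sim.posedge(clk1).posedge(clk2):
        if clk1_hit:
            ...  # react to a clk1 posedge
        if clk2_hit:
            ...  # react to a clk2 posedge
```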
<whitequark[cis]>
all right, disposition on RFC 36: merge, resolving the bikeshed to "posedge"
<Wanda[cis]>
haven't you... read the RFC?
<galibert[m]>
Wanda: yes, but I don't know much about async syntax in python
<whitequark[cis]>
please respond with your comments or disposition: merge or close
<galibert[m]>
honestly, I don't understand async :-)
<galibert[m]>
must have missed it, sorry
<whitequark[cis]>
note that RFC 55 is a hard requirement for expanding amaranth-stdio
<cr1901>
merge, it's a natural extension to 52
<whitequark[cis]>
(or something like it that changes how buffers work)
<Wanda[cis]>
53 but yes
<galibert[m]>
merge, happy with the current state
<zyp[m]>
merge
<tpw_rules>
merge, slightly confused about including so much non-implementable stuff but it seems all worthwhile
<Wanda[cis]>
merge
<mcc111[m]>
abstain
<Wanda[cis]>
(with a side note that I'd like to visit a general solution for introspecting clock domains in elaboratables for the purpose of vendor primitive instantiation soon)
<whitequark[cis]>
tpw_rules: we're kind of trading off some holes in the language/stdlib for some other holes here, I think
<galibert[m]>
EnableSignal raises its head again
<Chips4MakersakaS>
merge, liked introduction of direction
<Wanda[cis]>
(because the problem is much wider than just RFC 55, and affects eg. lib.Memory custom lowerings too)
<Wanda[cis]>
(and affected previous implementation of memories until it was fixed with a big hammer)
<jfng[m]>
merge
<Wanda[cis]>
tpw_rules: "so much" is ... a little bit of overstatement, the only thing missing for this RFC in particular is clock domain edge introspection (and we can implement it right now with the restriction that it only works for posedge domains)
<whitequark[cis]>
disposition on RFC 55: merge
<tpw_rules>
Wanda[cis]: the RFC does say 2/3 of the buffers are "not actually currently implementable in many cases"
<Wanda[cis]>
(I'm also inclined to just throw in a private API that does "assert this domain is posedge" so that it at least fails loudly in the meantime)
<tpw_rules>
i'd be okay with an assertion like that
<zyp[m]>
sounds reasonable
<whitequark[cis]>
there's an interesting aspect where clearly documenting that something is ambiguous/unimplementable/etc gets raised eyebrows while silently raising an AssertionError on an undocumented invariant doesn't
<galibert[m]>
(just throwing ideas to see if one resonates)
<cr1901>
I have a slight preference for aggregate, but multiple is fine too
<whitequark[cis]>
I don't like rows either
<whitequark[cis]>
please respond with your comments on RFC 54, or disposition: merge or defer (RFC 54 closes a soundness hole in the language, and this hole must be closed in one way or another; I would generally consider it a bug that it exists, which doesn't strictly speaking require an RFC at all)
<tpw_rules>
i like the idea of rfc 54, but am not ready to vote merge because i need my shitty vendor tools to infer memory on my designs. until there's an escape hatch in place to get exactly the previous behavior, or an implementation to test, i vote defer
<zyp[m]>
I thought I had a concern about #54 when I read it yesterday, but it turned out I just had to read better, so it's a merge from me
<galibert[m]>
I wonder if we really want non-zero init, since no hardware seems to do it?
<galibert[m]>
(merge anyway)
<tpw_rules>
(or an implementation of a vendor specific memory lowering, which i could supply)
<cr1901>
You intend to alter reset-less signals to return an unspecified value. How do you distinguish a power-on-reset from a non power-on-reset?
<whitequark[cis]>
galibert: it could be useful to have a CPU that starts executing from a ROM from the very first instruction, perhaps
<whitequark[cis]>
cr1901: you don't
<tpw_rules>
cr1901: me? i don't at this time, i just don't want my designs broken
<Wanda[cis]>
galibert[m]: Xilinx can do it just fine
<Wanda[cis]>
it's literally in the RFC, please read it.
<galibert[m]>
Ah, I misremembered what I had read Friday
<whitequark[cis]>
cr1901: more specifically, as your design can be reset at any point, you generally can't assume anything about the value of reset-less signals at your domain reset, and have to treat them as potentially having any value
<Wanda[cis]>
tpw_rules: the RFC is implementable, the vendor primitives support the common case, the non-zero init can be emulated, and we have the hook to implement it even if inference doesn't work; what more do you want?
<Chips4MakersakaS>
merge if support for uninitialized ports is introduced later in the same way as for Signals.
<whitequark[cis]>
Chips4Makers (aka Staf Verhaegen): that is the plan, yes (read port is a weird Signal)
<whitequark[cis]>
tpw_rules: note that the implementation of this RFC will not be merged until after Amaranth 0.5 is released
<galibert[m]>
Chips4Makers (aka Staf Verhaegen): undefined is going to be an... interesting addition to the language
<whitequark[cis]>
reset_less domain is not the same as "domain where every signal is reset_less"
<whitequark[cis]>
the delay signal is reset_less=False (by default) and is defined to reset to the init value at power-on reset
<cr1901>
Okay, I have no problems with the RFC then. Merge.
<galibert[m]>
cr1901: async reset can happen anytime, including in the middle of toggling a ff. Async reset is in fact really problematic, because it's a severely uncontrolled CDC
<Wanda[cis]>
or rather
<whitequark[cis]>
mcc111? jfng?
<mcc111[m]>
abstain
<jfng[m]>
abstain
<Wanda[cis]>
tpw_rules: do you want to block the RFC, or the implementation of the RFC?
<tpw_rules>
Wanda[cis]: i suppose implementation, conditional on the hook or the test
<whitequark[cis]>
we can add that as a condition
<whitequark[cis]>
(that was the original plan after all; it doesn't change anything)
<tpw_rules>
okay
<Wanda[cis]>
we do kinda have the hook already
<tpw_rules>
then i can change to merge if it makes the bookkeeping simpler
<Wanda[cis]>
(though it runs into the same problem as RFC 55)
<whitequark[cis]>
it means we commit to a specific interface
<whitequark[cis]>
so we can start writing implementation, docs, etc without worrying that it will change later
<whitequark[cis]>
so, yes, that is significant
<tpw_rules>
well i thought votes didn't have to be unanimous
<whitequark[cis]>
they don't
<galibert[m]>
We can always do a new rfc if the implementation shows that it doesn't entirely work out
<whitequark[cis]>
but I do want to know whether you're OK with the interface or not
<whitequark[cis]>
because I prefer strong consensus over rough consensus
<tpw_rules>
yes, i'm okay with the interface. it can be implemented with a future hook and the future possibility can remain that way
<whitequark[cis]>
OK, disposition on RFC 54: merge (noting that the implementation must allow inference on major platforms)
<whitequark[cis]>
🎉 we actually finished ahead of time today!
<whitequark[cis]>
great job everyone
<galibert[m]>
yay
<whitequark[cis]>
tpw_rules: note that we recently broke inference on Lattice and fixed it on Xilinx without ever going through the RFC process
<Wanda[cis]>
<del>there's still bikeshedding on RFC 56 to do!</del>
<whitequark[cis]>
... because we don't have any tests for inference
<galibert[m]>
we need an rfc #666, rename every intel back to altera now :-)
<whitequark[cis]>
that's already done
<tpw_rules>
whitequark[cis]: my DSP inference is still broken too
<tpw_rules>
would like 0.5 to not go out without a fix
<whitequark[cis]>
tpw_rules: yeah, I think that one is currently listed as a release blocker?
<Wanda[cis]>
tpw_rules: your issue is already filed and it is a release blocker
<mcc111[m]>
I have a process question— the agenda for these meetings is posted ahead of time right? Is it in a specific location or do we just need to find a Catherine comment in here from the preceding Friday?
<whitequark[cis]>
mcc111: latter
<tpw_rules>
Wanda[cis]: okay, did not know that being added to the milestone meant it was a blocker, thanks
<whitequark[cis]>
(announcing the agenda at all is kind of a new thing)
<tpw_rules>
i think announcing the agenda is very helpful
<mcc111[m]>
whitequark[cis]: Could I request, when you post the agenda on the preceding Friday, you also add a link to it to the Matrix room topic? That's what we do on the Tusky project meetings
<Wanda[cis]>
for transparency: the PR is still pending; I'm blocking it on having proper tests for it, which are blocked on RTLIL backend refactor, which in turn is blocked on RFC 53 impl which does a bunch of RTLIL changes
<galibert[m]>
well yeah, I could have had my last questions on Friday so I didn't annoy everyone today :-)
<whitequark[cis]>
mcc111: this will break on IRC because IRC topics are limited to 240 chars or something
<whitequark[cis]>
so I can't do that, I think
<tpw_rules>
Wanda[cis]: please also ping me when you get a testing-quality implementation of RFC 54.
<Wanda[cis]>
ack, will do
<whitequark[cis]>
galibert: I think I announced the agenda twice, on Monday and Friday iirc
<tpw_rules>
Wanda[cis]: yes, i understood that was the case, thanks for the work again
<mcc111[m]>
ah, hm
<mcc111[m]>
well then I request you pin it … … … … when Matrix gets around to implementing pinning D:
<galibert[m]>
Possible, I saw the friday one in any case
<whitequark[cis]>
I think we have significant unresolved questions in RFC 52 (the if= bit)
<tpw_rules>
whitequark[cis]: re the hook, is the platform obliged to implement all cases or can it punt back to the generic code? can/should it throw an exception if the case is not supported?
<galibert[m]>
Sure, I did a bunch of tests with them, would be happy to help
<whitequark[cis]>
tpw_rules: the platform can always create a `MemoryInstance`
<galibert[m]>
I only have access to cyclone V (two different ones) though
<tpw_rules>
galibert[m]: so altsyncram is used for all the families?
<galibert[m]>
yeah
<galibert[m]>
there's a parameter to tell the type of ram you want, in the cv case it can be m10k or mlab
adamgreig[m] has joined #amaranth-lang
<adamgreig[m]>
Sorry to miss the meeting. I've been working on an lfsr RFC and made some progress but not enough to get it in for this meeting, hopefully in the next few days!
<Wanda[cis]>
the platform can also just do return mem.elaborate(None) to use the default impl
<tpw_rules>
i guess i never quite grokked what exactly a "megafunction" is
<galibert[m]>
it's a code generator
<mcc111[m]>
The Microsoft Foundation Classes live!!!
<galibert[m]>
the generated code may or may not include instances of funky things
<Wanda[cis]>
also: what did we settle on with RFC 56 bikeshed? I can just go with aggregate by majority, but I don't think I've seen an explicit decision
<galibert[m]>
hey, another idea, lanes=
<tpw_rules>
galibert[m]: should we try to go lower or just set up an altsyncram?
<galibert[m]>
just for fun
<jfng[m]>
Wanda[cis]: i prefer `multiple`, but i don't really like either one
<galibert[m]>
there's nothing lower than altsyncram
<tpw_rules>
galibert[m]: there's whatever it generates right? or does quartus not let you describe that in your design?
<galibert[m]>
I mean altsyncram is the module name in the verilog. No code generation involved
<mcc111[m]>
I think multiple is clearer than aggregate. I think my issue with aggregate is it feels initially nonobvious from that wording what it is you're aggregating (I'm aggregating 4. 4 of what?). If multiple is less "honest" than aggregate I think aggregate is fine. Once I think about it for a moment it starts to feel clear. I don't think it matters so much.
<whitequark[cis]>
I dislike how "multiple" is ambiguous with multiplication (the arithmetic operation)
<whitequark[cis]>
aggregating 4 of ports
<whitequark[cis]>
(with sequential addresses)
<jfng[m]>
combine?
<galibert[m]>
count?
<Wanda[cis]>
(I have updated RFC 54 and 55 for merging; RFC 56 still waiting for bikeshed resolution)
<whitequark[cis]>
count is really generic to the point where it says almost nothing
<whitequark[cis]>
aggregate has the desirable property of both saying that you're gluing something together, and also relating to lib.data
<galibert[m]>
Catherine: which would be the point. No wrong idea, just look at the doc
<tpw_rules>
what about just "wide"? could be interpreted as a boolean
<whitequark[cis]>
yes, "wide" sounds like a yes/no thing
<Wanda[cis]>
wide is used in yosys, and I... dislike it
<Wanda[cis]>
(actually wide_log2, which is somehow even worse)
<jfng[m]>
row_size ?
<galibert[m]>
unit issues
<galibert[m]>
people are going to put 8 or 32 in there
<tpw_rules>
"combine"?
<tpw_rules>
which is just aggregate but less technical
<jfng[m]>
yeah, suggested that too
<whitequark[cis]>
I'm not opposed to combine
<tpw_rules>
i don't know if that resolves mcc111[m]'s issue though
<whitequark[cis]>
it doesn't evoke lib.data but it still seems basically fine to me
<galibert[m]>
Here's what I think about immediately, at least it's not giving a wrong idea :-)
<mcc111[m]>
uhh i think the problem i've got is that "aggregate" can be a verb, adjective, or noun, and that's not a problem with "combine".
<tpw_rules>
whitequark[cis]: is there a problem with shoving whatever vendor defined garbage in Memory's attrs? i'm not sure how the user should communicate settings like memory block preference to the platform. i asked before but couldn't find our discussion
<jfng[m]>
mcc111[m]: "combine" is a noun... in french :p
<mcc111[m]>
I think "combine" is clearer than aggregate but I think aggregate is ok and I think "aggregate" is more precise than any option other than "combine". If I had an objection I withdraw it.
<mcc111[m]>
I don't remember offhand, where does lib.data use `aggregate`? is it in API?
<mcc111[m]>
jfng[m]: ay de mi
<Wanda[cis]>
I think it may be in the documentation chapter title?
<Wanda[cis]>
I don't think it's anywhere in the actual API
<tpw_rules>
i like aggregate
<Wanda[cis]>
ah no, it's not even in the title
<whitequark[cis]>
mcc111: "Aggregate data structure library"
<whitequark[cis]>
that was the RFC title
<tpw_rules>
whitequark[cis]: maybe a Memory subclass would be better? it's kind of an unlikely use case but it would be good to define it
<Wanda[cis]>
what you're asking for is, essentially, adding a way to control the targeted block type
<tpw_rules>
for example, yes
<whitequark[cis]>
mcc111: I think I ended up excising it in favor of using plainer language in the docs
<whitequark[cis]>
and shorter titles
<Wanda[cis]>
which is not vendor-specific functionality; most FPGAs have a choice between FF, LUTRAM, BRAM, and maybe huge RAM
<whitequark[cis]>
"Aggregate data structure" -> "Data structure"
<Wanda[cis]>
so I object, on principle, to adding something AlteraPlatform-specific for that use case
<tpw_rules>
Wanda[cis]: ok, i see. we can come back to it then, it's currently not possible anyway
<Wanda[cis]>
if you want a switch like this, propose an RFC.
<whitequark[cis]>
Wanda: tpw_rules: wait hold on, switch like what?
<galibert[m]>
Note that if you go Memory, the synth infers mlab/m10k by itself. If you go altsyncram, you get what you ask for
<Wanda[cis]>
and if you want a proper altera-specific memory class that exposes actual vendor-specific functions, deriving from lib.Memory will only be an impediment
<tpw_rules>
whitequark[cis]: some way to control memory block lowering, like block type
<tpw_rules>
galibert[m]: hm, that's annoying
<Wanda[cis]>
whitequark[cis]: `ram_style` for Amaranth essentially
<galibert[m]>
(quartus, at least my version, is catastrophic at inferring it though)
<tpw_rules>
galibert[m]: the secret in mine seems to be to round to a power of 2 depth
<galibert[m]>
was iirc 64k*32
<tpw_rules>
there is an "auto" setting on altsyncram
<whitequark[cis]>
Wanda: but we have `attrs` on `amaranth.lib.memory.Memory` that you could shove `ram_style` into, right?
<whitequark[cis]>
literally that attribute
<Wanda[cis]>
yes
<tpw_rules>
whitequark[cis]: yes, that was my proposal
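A sketch of what that would look like, as discussed above; whether a given toolchain actually honours the attribute on an inferred memory is vendor-dependent, and the accepted values are vendor-specific:

```python
from amaranth.hdl import unsigned
from amaranth.lib.memory import Memory

# Pass a vendor attribute through to the inferred memory via attrs.
mem = Memory(shape=unsigned(8), depth=1024, init=[],
             attrs={"ram_style": "block"})
```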
<tpw_rules>
well we'll see how it turns out. i have to sit down and inventory what capabilities altsyncram has and what Memory can do. i think we'll have to do stuff like duplicating the rams for lots of read ports or making several for large depths
<whitequark[cis]>
I'm confused about what's being disputed then
<galibert[m]>
tpw_rules: you have ug_ram_rom.pdf right?
<mcc111[m]>
Wanda[cis]: we've gone above my head, but I currently want to make designs that can be built for either Altera/CycloneV or another platform, so it seems like it would be good to have the option to derive from lib.Memory and selectively turn on the Altera special features when targeting Altera, while still using the generic interface when I'm not using those features
<tpw_rules>
galibert[m]: yes
<Wanda[cis]>
mcc111[m]: that's not, in general, possible.
<whitequark[cis]>
I'm reasonably certain that we should make class Memory final
<mcc111[m]>
Wanda[cis]: ok. i'll take your word for it
<Wanda[cis]>
because they tend to change interface in complex ways
<tpw_rules>
there is also random junk like controlling the maximum depth per block for power tradeoffs
<Wanda[cis]>
if you want something like that, feel free to isinstance(platform, AlteraPlatform) and branch in your code
<tpw_rules>
but if we want the initial attempt to have no settings and not look at attrs, i'm okay with that
<tpw_rules>
what about unsupported modes? should that be best effort, punt, or error?
<galibert[m]>
tpw_rules: old but perhaps useful, dual-ported dual-clock ram for a framebuffer: https://og.kervella.org/ram.txt
<_whitenotifier-6>
[amaranth-lang/rfcs] whitequark 1510920 - RFC #55: New `lib.io` components.
<galibert[m]>
Loved how easy amaranth made it too, wouldn't want to try in raw verilog
<_whitenotifier-5>
[amaranth-lang/amaranth-lang.github.io] whitequark 6f46f42 - Deploying to main from @ amaranth-lang/rfcs@151092022c1ad3e01f419bcfb1cbdc54a6b74718 🚀
<_whitenotifier-5>
[amaranth-lang/amaranth-lang.github.io] whitequark 45e02fc - Deploying to main from @ amaranth-lang/rfcs@bdea3375e25fb61cf40e3609436f9ec8e8fff667 🚀
<tpw_rules>
i guess the platform does know what chip you have
<galibert[m]>
Well, the platform should know in some way. Now do we want smarts in the platform to choose the type by itself?
<tpw_rules>
i'll see what happens if we just set everything to "auto"
<galibert[m]>
have fun :-)
<tpw_rules>
what is M9K? 256x36?
<jfng[m]>
<galibert[m]> "tpw_rules: old but perhaps..." <- a peripheral that asserts`ack` when the initiator only asserted `cyc` (and not `stb` too) is invalid Wishbone, iirc
<_whitenotifier-5>
[amaranth-lang/amaranth-lang.github.io] whitequark 2f58c31 - Deploying to main from @ amaranth-lang/rfcs@b4e32355d410f5a46f7a4ab02770155fce5ef9fb 🚀
<_whitenotifier-6>
[amaranth-lang/rfcs] ... and 6 more commits.
<whitequark[cis]>
🎉
<_whitenotifier-5>
[amaranth-lang/amaranth-lang.github.io] whitequark 4892c87 - Deploying to main from @ amaranth-lang/rfcs@0604ada781da8c65ba97f69ad4c33397237c5977 🚀
<tpw_rules>
memory transparency refers to store-to-load forwarding, right?
<tpw_rules>
also, Memory doesn't support true dual port, does it? you'd have to infer it based on two ports that share the same connections?
<Wanda[cis]>
that is what yosys does, yes
<Wanda[cis]>
hmmm.
<Wanda[cis]>
okay, yeah, I see the problem here
<galibert[m]>
tpw_rules: oh yeah, that was the other reason I needed Instance, I wanted true bidir dual-port
<tpw_rules>
does not-transparent mean "don't care" or "old value"?
<Wanda[cis]>
"old value" when within same domain, "undefined" when different domains
<tpw_rules>
then what does transparent mean?
<Wanda[cis]>
"new value"
<tpw_rules>
and the old memory before the .libification defaulted to true?
<Wanda[cis]>
yeah
<tpw_rules>
ok, hm, i wonder if altera has a weird definition of new value. "New Data—New data is available on the rising edge of the same clock cycle on which it was written". it also says that "old value" is not supported
<tpw_rules>
but new data there sounds like they mean it will re-read the memory and be available on the next cycle, not sure why they talk about edges
<whitequark[cis]>
Wanda: so, for RFC 52, the big one is the case syntax
<Wanda[cis]>
I have outlined a primary possibility and a few alternatives?
<whitequark[cis]>
I have absolutely no idea what the syntax should be so that if= is viable
<tpw_rules>
(i say this is weird because i wasn't able to get inference to work unless i had transparent=False)
<Wanda[cis]>
oh, I actually liked your Guard idea
<whitequark[cis]>
I dislike that Guard would have to be in the prelude too
<Wanda[cis]>
hrm
<Wanda[cis]>
true
<whitequark[cis]>
oh, the 2nd alternative listed won't work
<whitequark[cis]>
this is just reinventing migen, and autoformatters absolutely hate that kinda code, and mangle it horribly
<whitequark[cis]>
I also don't love DiscardOnLhs() a whole lot
<whitequark[cis]>
I feel like people are just going to use it and be up to no good
<whitequark[cis]>
(despite it being private)
<Wanda[cis]>
<del>we can put it in a `__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED` module</del>
<Wanda[cis]>
<del>definitely noone will try to use it then</del>
<whitequark[cis]>
actually, can we not make that part of the base Choice semantics? like... if any of the things inside it are signed, then all of them are sign-extended and writes to the sign bit past the highest one are ignored
<Wanda[cis]>
it's not part of the base Choice semantics already
<whitequark[cis]>
I'm suggesting that it be
<Wanda[cis]>
it's just musing over what it'd take to implement Part in terms of it
<Wanda[cis]>
we don't actually have to do it
<whitequark[cis]>
oh, I see, it's not really a blocker in your eyes
<whitequark[cis]>
actually my question stands
<whitequark[cis]>
> The shape of the expression is the minimum shape that can represent the shapes of all cases values and default (ie. determined the same as for Array proxy or Mux tree).
<whitequark[cis]>
this suggests that the shape of a choice between unsigned(4) and signed(3) is signed(5) and the elements are sign-extended
<Wanda[cis]>
that is true, for RHS
<whitequark[cis]>
unless we make oddly shaped Choices unassignable, we need a semantic for using it on LHS
<Wanda[cis]>
for LHS, it just switches which target you assign to, and then normal rules take over
<whitequark[cis]>
hm. can we not exploit that when implementing Part?
<Wanda[cis]>
no, specifically because of how the sign bit works
<Wanda[cis]>
when you go out of bounds
<whitequark[cis]>
I don't understand why
<Wanda[cis]>
oh
<Wanda[cis]>
actually no
<Wanda[cis]>
it's worse
<Wanda[cis]>
because the Part result is unsigned even when done on a signed number
<whitequark[cis]>
but we can cast the result to unsigned in .bit_select (we don't export Part)
<Wanda[cis]>
but we also have to extend it first
<whitequark[cis]>
and we already have the cast be transparent on LHS
<Wanda[cis]>
and we don't have a primitive for extension, other than something like | Const(0, width)
<Wanda[cis]>
(or Cat with Const)
<Wanda[cis]>
and both of those create something non-assignable
<whitequark[cis]>
(u (choice sel (s (slice ...)) (s (slice ...)) ...))
<Wanda[cis]>
hmmm
<Wanda[cis]>
... okay, that almost works
<Wanda[cis]>
yeah, okay, I didn't think of that
<Wanda[cis]>
there still is the case of fully OOB index though
<whitequark[cis]>
I have a solution but it's horrific
<Wanda[cis]>
go on
<whitequark[cis]>
pad the LSB of every arm, using the sign bit as the LSB for the OOB indices, then cut it off the choice
<whitequark[cis]>
like literally just concat the sign bit as bit 0 of every arm and then slice off bit 0
<Wanda[cis]>
oh gods
<whitequark[cis]>
therefore my final proposal is:
<whitequark[cis]>
- we put this in the RFC to assert that `Choice()` is indeed fully general in a way that lets us replace `Part` with it
<whitequark[cis]>
- we keep `Part`
<Wanda[cis]>
yeaaahhhh
<Wanda[cis]>
okay
<Wanda[cis]>
horrifying
<whitequark[cis]>
thanks
<whitequark[cis]>
some more things about the RFC
<whitequark[cis]>
- can `sel` be only a `Value`, never `ValueCastable`?
<zyp[m]>
what about enum?
<whitequark[cis]>
that seems limiting, and I wonder if maybe we should have something like sel.choose() instead
<whitequark[cis]>
which you can override on your ValueCastable
<Wanda[cis]>
anyway, the answer for sel feels easy: it should accept the same thing as m.Switch, which takes ValueLike, and if we want to change it, it's a separate RFC
<Wanda[cis]>
hmmmm.
<Wanda[cis]>
so... it returns something like a chainable ValueCastable?
<whitequark[cis]>
this doesn't have the downside of being violently incompatible with formatters
<Wanda[cis]>
or even a Value I suppose
<whitequark[cis]>
Wanda[cis]: moooore or less
<whitequark[cis]>
well Choice(sel).case().case()... also works
<whitequark[cis]>
and is maybe actually less horrible than the previous option (although less customizable i guess)
<whitequark[cis]>
I suppose we can define a match override for value-castables or shape-castables
<Wanda[cis]>
I feel we should
<Wanda[cis]>
but that's not for this RFC
<whitequark[cis]>
I think we should think about it a little before adding Choice, to make sure it wouldn't be too horrible to add
<whitequark[cis]>
but not introduce in this RFC
<Wanda[cis]>
and for this RFC we should mirror m.Switch functionality, just with a different syntax
<Wanda[cis]>
right
<Wanda[cis]>
so I feel m.Switch actually has the right syntax for that
<Wanda[cis]>
it puts no requirements (like hashability) on the selection value or the case values, so we can add an extension that lets the value-castable drive the process
<Wanda[cis]>
your proposal of Choice(sel).case().case() has the same property
<Wanda[cis]>
also it leaves room for extra keyword arguments on case, which is likewise good
<Wanda[cis]>
I kinda don't like that it doesn't have an explicit "closing point" where the Choice is explicitly converted into a Value, so we need to make it a ValueCastable
<whitequark[cis]>
not necessarily
<whitequark[cis]>
we can just make it return a new Choice each time
<Wanda[cis]>
that sounds quadratic
<whitequark[cis]>
quadratic in the amount of lexical tokens isn't so bad I think
<whitequark[cis]>
but also it doesn't have to be
<whitequark[cis]>
it's basically a linked list
<whitequark[cis]>
like you could implement it as literally a chain of Choice where the previous one is the default for the next one
<whitequark[cis]>
which is horrific but not quadratic
<Wanda[cis]>
oof
<Wanda[cis]>
whitequark[cis]: this has the wrong eval order
<whitequark[cis]>
well, we could fold it for repr purposes, in which case it's basically ok
<whitequark[cis]>
(and codegen)
<Wanda[cis]>
but yeah, okay, I can implement it like that
<whitequark[cis]>
oh
<Wanda[cis]>
that can be done
<Wanda[cis]>
plus maybe some lazy-creation of the final list
<Wanda[cis]>
okay. fine. doable.
<whitequark[cis]>
the reason I didn't want Match for it and went with Choice is that I want Match to be free for eventual more-complex pattern matching syntax
<whitequark[cis]>
if we decide that we don't ever want that, or that it's compatible, we can deprecate both m.Switch and Choice in favor of [m.]Match
<Wanda[cis]>
mmm.
<Wanda[cis]>
okay, reasonable
<whitequark[cis]>
we do have .matches already but I feel that that won't pose much issue
<Wanda[cis]>
I thiiiink it should be compatible, but don't quote me on that
<whitequark[cis]>
since it's definitionally on a single Value
<Wanda[cis]>
okay then; change to Choice().case().case().default(), change Value to ValueLike, anything else?
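For concreteness, the chained form just agreed on would read roughly like this (Choice is the draft RFC 52 name; the argument order of case() is an assumption here, not settled API):

```python
# Select between a, b and c based on sel, in place of a Mux tree / m.Switch.
result = (
    Choice(sel)
        .case(0b00, a)
        .case(0b01, b)
        .default(c)
)
```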
<whitequark[cis]>
> The design is also not quite capable of replicating the current ArrayProxy, likewise due to the out-of-bounds case.
<whitequark[cis]>
is this actually the case?
<whitequark[cis]>
iirc the current ArrayProxy behavior is that an out-of-bounds write writes to the last element
<Wanda[cis]>
not anymore, since we decided to just change it
<whitequark[cis]>
this seems... representable?
<Wanda[cis]>
no; it reads the last element, but writes nothing
<whitequark[cis]>
oh
<whitequark[cis]>
oh okay
<whitequark[cis]>
... how did I ever think that's a good idea
<Wanda[cis]>
(or maybe the first element? I don't remember; either way the decision was to change it to return 0)
<Wanda[cis]>
(which is implementable by using Cat() so that it's also assignable as a discard target)
<whitequark[cis]>
please do not use cat as a discard target :<
<Wanda[cis]>
meow
<whitequark[cis]>
I think the last remaining question is the ShapeCastable behavior
<galibert[m]>
Depends what you discard
<whitequark[cis]>
I think we should definitely disallow mismatched ShapeCastable
<whitequark[cis]>
we can always allow it later, so there's no reason not to do it
<Wanda[cis]>
fine
<Wanda[cis]>
what should we do for matched ShapeCastable then?
<_whitenotifier-5>
[amaranth-soc] github-merge-queue[bot] created branch gh-readonly-queue/main/pr-79-bf8dbd342befec0d9e9ddb898b644b623afb2a81 - https://github.com/amaranth-lang/amaranth-soc
<whitequark[cis]>
there's no real precedent for ShapeCastable introspection currently I think
<whitequark[cis]>
but we may also just, make one
<Wanda[cis]>
well
<Wanda[cis]>
there aren't many operations that can actually operate on ShapeCastables generically
<Wanda[cis]>
muxing is actually unique, I think?
<Wanda[cis]>
I meant there's also assignment and equality, but those are "driven" by a valuecastable of the relevant shape, while muxing is "driven" externally
<Wanda[cis]>
there's the (unfortunate) precedent of Mux not doing that
<_whitenotifier-5>
[amaranth-lang/amaranth-lang.github.io] github-merge-queue[bot] 9837f10 - Deploying to main from @ amaranth-lang/amaranth-soc@9b90d7233889ed41ca8128f4cfcd14633b1ade91 🚀
<Wanda[cis]>
tpw_rules: yes (though it's more complicated for RFCs that also deprecate things, where you close the issue once the old stuff is fully removed)
<zyp[m]>
and some with separate documentation PRs
<tpw_rules>
ok, i suppose "tracking" = "somebody still needs to do something based on it"
<_whitenotifier-6>
[amaranth] github-merge-queue[bot] created branch gh-readonly-queue/main/pr-1203-18b54ded0ab9d5dc1ca7955137e8306c90b75b24 - https://github.com/amaranth-lang/amaranth
<_whitenotifier-5>
[amaranth-lang/amaranth-lang.github.io] github-merge-queue[bot] 1885702 - Deploying to main from @ amaranth-lang/amaranth@744576011f5e183d5de412d3bfb3c12733ae7cdf 🚀
<cr1901>
I just realized that Matrix implements replies on the IRC side as <user> "<quote>" <- <response> and I've subconsciously started doing the arrow part myself
<Wanda[cis]>
RFC 53 has landed! it passed my smoke tests, and also comes with plenty of unit tests of its own, but due to the magnitude and kind of changes I cannot guarantee I haven't broken anyone's designs with it; I'd appreciate some testing, particularly of the "synthesis for actual FPGAs with funny platform usage" and "weird non-platform Verilog output" kinds
<zyp[m]>
<Wanda[cis]> "RFC 53 has landed! it passed..." <- `TypeError: object of type 'SingleEndedPort' has no len()`
<Wanda[cis]>
... that was quick
<whitequark[cis]>
what's the code?
<zyp[m]>
somewhere in my platform simulation stuff, I just rebased #990 to current main and was going to run through my existing testbenches before I start modifying stuff
<zyp[m]>
might very well be I'm doing something weird, but RFC #55 says SingleEndedPort should have __len__
<Wanda[cis]>
RFC 55 is not actually implemented yet; the `*Port` classes are right now just private implementation details of the platform and haven't been fully fleshed out
<Wanda[cis]>
but yeah, ok, I can just add the __len__
<Wanda[cis]>
I'm curious where the exception comes from though, is this platform code?
<zyp[m]>
I pushed an update to #990, rebased and updated, but it's still rough, incomplete and mostly untested, so I don't need any feedback on it yet :)