whitequark[cis] changed the topic of #amaranth-lang to: Amaranth hardware definition language · weekly meetings: Amaranth each Mon 1700 UTC, Amaranth SoC each Fri 1700 UTC · code https://github.com/amaranth-lang · logs https://libera.irclog.whitequark.org/amaranth-lang · Matrix #amaranth-lang:matrix.org
Degi_ has joined #amaranth-lang
Degi has quit [Ping timeout: 255 seconds]
Degi_ is now known as Degi
jjsuperpower has quit [Ping timeout: 260 seconds]
Guest31 has joined #amaranth-lang
Guest31 has quit [Client Quit]
urja has quit [Read error: Connection reset by peer]
cr1901 has quit [Remote host closed the connection]
cr1901 has joined #amaranth-lang
urja has joined #amaranth-lang
<_whitenotifier-f> [rfcs] whitequark commented on pull request #27: Testbench functions for the simulator - https://github.com/amaranth-lang/rfcs/pull/27#issuecomment-1775620856
<whitequark[cis]> good evening everyone, it is time for our regular weekly meeting
<whitequark[cis]> the sole item on the agenda is RFC 27. let's take a roll call. who is present?
zyp[m] has joined #amaranth-lang
<zyp[m]> o/
Chips4MakersakaS has joined #amaranth-lang
<Chips4MakersakaS> I am
<galibert[m]> ^-
<whitequark[cis]> adamgreig? cr1901? jfng?
<adamgreig[m]> around
<zyp[m]> if there's time afterwards, I could do with some feedback on RFC 28 too
jfng[m] has joined #amaranth-lang
<jfng[m]> o/
<whitequark[cis]> as before I'm particularly interested in objections to RFC 27
<cr1901> whitequark[cis]: Dealing w/ personal issues this week, I'm physically around but need to skip the meeting. I'm sure whatever you all decide will be fine.
<whitequark[cis]> (@adamgreig's comment is not yet in the RFC but is noted)
<adamgreig[m]> besides that, none from me; it seems like a good addition, and I expect it's what most people assumed add_sync_process did anyway
<whitequark[cis]> Chips4Makers (aka Staf Verhaegen) and galibert, if I recall, you had objections before; are these satisfactorily resolved or do you continue to have concerns about the RFC?
<galibert[m]> Imagine Settle() is gone, can you still test a pure-comb module?
<adamgreig[m]> I guess the confusion from the last couple meetings suggests it'd be worth some time on the docs for how the simulator works / what the differences between them are, though
<whitequark[cis]> galibert: yes, using the new API
<galibert[m]> (e.g. you have no domain added)
<adamgreig[m]> if you use add_testbench then comb logic tests would just work fine without any special yielding/settling, right
<galibert[m]> which new api?
<whitequark[cis]> since it implicitly settles after every command, and does not require a domain, you can set the inputs and then immediately read the outputs without any additional work
<whitequark[cis]> the one proposed in the RFC 27 (add_testbench)
<whitequark[cis]> adamgreig: yes
<whitequark[cis]> adamgreig[m]: I agree on this, too; I will have to take some time to add these explanations
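As a concrete illustration, here is a minimal sketch of a pure-comb testbench under the `add_testbench` API proposed in RFC 27 (the adder module and signal names are illustrative; the settling behaviour is as described above):

```python
from amaranth.hdl import Module, Signal
from amaranth.sim import Simulator

m = Module()
a = Signal(8)
b = Signal(8)
o = Signal(9)
m.d.comb += o.eq(a + b)          # pure combinational logic under test

def testbench():
    yield a.eq(3)
    yield b.eq(4)
    # no explicit Settle() and no clock domain needed: the simulator settles
    # before returning control to the testbench after every command
    assert (yield o) == 7

sim = Simulator(m)
sim.add_testbench(testbench)
sim.run()
```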
<galibert[m]> Ok, I now understood add_testbench to auto-Settle after a domain ticked (e.g. yield or yield Tick(domain)). What did I miss this time?
<zyp[m]> it auto-Settles after any yield, not just yield Tick
<whitequark[cis]> galibert: add_testbench auto-settles every time before it returns control to the testbench
<zyp[m]> (AIUI)
<whitequark[cis]> that includes after yield or yield Tick(domain) by extension of it being any time
<galibert[m]> Well, you demonstrated that was kind of racy?
<galibert[m]> (And now I understand the demonstration too)
<Chips4MakersakaS> I still don't like how you need to do `Settle` in `add_sync_process()`. I think the reason is that you want to keep the value as it was during the event. I am thinking something like:... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/jKuBeHbhnuGWBONxfXBXnDZk>)
<whitequark[cis]> galibert: if you are using add_sync_process with yield Settle() as most people currently do in testbenches making use of combinational feedback, you have the same races, just with a worse API
<whitequark[cis]> if you are not replacing sync logic with a behavioral implementation you probably do not care about this particular race, which is rather specific to that scope
<whitequark[cis]> does that explain anything?
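For contrast, a sketch of the existing pattern this replaces: a plain process that needs an explicit `Settle()` before combinational outputs can be read (existing API; the adder module is the same illustrative one as in the sketch above):

```python
from amaranth.hdl import Module, Signal
from amaranth.sim import Simulator, Settle

m = Module()
a = Signal(8)
b = Signal(8)
o = Signal(9)
m.d.comb += o.eq(a + b)

def old_style_testbench():
    yield a.eq(3)
    yield b.eq(4)
    yield Settle()               # comb outputs are only valid after an explicit settle
    assert (yield o) == 7

sim = Simulator(m)
sim.add_process(old_style_testbench)
sim.run()
```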
<whitequark[cis]> Chips4Makers (aka Staf Verhaegen): could you elaborate? it sounds like we are in agreement re: "I still don't like how you need to do Settle in add_sync_process()." and considering the proposed `add_testbench` API removes that need, shouldn't it fulfill that want?
<Chips4MakersakaS> What I don't like is that you still need Settle() for multiple clock domain tests with add_process().
<whitequark[cis]> Chips4Makers (aka Staf Verhaegen): you would use `add_testbench` for multiple clock domain tests too
<whitequark[cis]> (without explicitly specifying a domain and then using yield Tick("domain"))
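A similar sketch for the multi-domain case (assuming the RFC 27 semantics discussed above; the second domain "fast" and the counters are illustrative):

```python
from amaranth.hdl import ClockDomain, Module, Signal
from amaranth.sim import Simulator, Tick

m = Module()
m.domains += ClockDomain("fast")
ctr_sync = Signal(4)
ctr_fast = Signal(4)
m.d.sync += ctr_sync.eq(ctr_sync + 1)
m.d.fast += ctr_fast.eq(ctr_fast + 1)

def testbench():
    yield Tick("sync")                    # wait for the next "sync" clock edge
    assert (yield ctr_sync) == 1          # value after the edge, per the proposed semantics
    yield Tick("fast")                    # then wait for the next "fast" clock edge
    assert (yield ctr_fast) >= 1

sim = Simulator(m)
sim.add_clock(1e-6)                       # "sync" clock
sim.add_clock(1e-7, domain="fast")        # "fast" clock
sim.add_testbench(testbench)              # no domain= argument needed
sim.run()
```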
<zyp[m]> FWIW I've spent some time with the simulator since last meeting, and I now feel I understand the mechanics well enough to appreciate the proposed change
<galibert[m]> hmmm, imagine you have a process that does:... (full message at <https://catircservices.org/_matrix/media/v3/download/catircservices.org/xtNphIdfvtakFKcpdJjyESxW>)
<galibert[m]> and a second that does the same thing with Tick("b") and mumble2
<galibert[m]> If you add_testbench and yield Tick("a"), is mumble2 (on b) going to change?
<whitequark[cis]> adamgreig: I wonder if maybe it is best to completely omit `domain="..."` from `add_testbench` and make everyone write out `yield Tick("sync")` or `yield Tick()` or something, which eliminates this idea that `add_testbench` is specific to a single domain where in reality it doesn't care
<whitequark[cis]> galibert: can you write a more complete example? I'm struggling to understand exactly what the code would do
<adamgreig[m]> Catherine: so long as `Tick(domain="sync")` then sure
<whitequark[cis]> zyp[m]: nice! how do you feel about the change now?
<jfng[m]> whitequark[cis]: i actually prefer `yield Tick()`; it is more explicit than just `yield` imo
<Chips4MakersakaS> OK, I think I am fine now with adding add_testbench() and maybe extend it later to also allow it to describe flipflop in a behaviourly with it.
<zyp[m]> whitequark[cis]: no objections from me
<Chips4MakersakaS> Agree with removing domain param from add_testbench()
<adamgreig[m]> it's more explicit but I'm doing it so often that I like yield to imply yield Tick, I think it's worth the little bit of magic (and with Tick domain defaulting to sync)
<whitequark[cis]> one of my other thoughts on this is to pass an argument to add_testbench(simulator) representing the simulator itself, so that you can e.g.: get simulation time, do await simulator.tick("sync") without introducing new global names, and so on
<whitequark[cis]> adamgreig: ^
<adamgreig[m]> being able to get simulator time would be nice. right now I have a separate process that just counts ticks into a shared variable and it's pretty stupid
<whitequark[cis]> Chips4MakersakaS: the idea is that `add_sync_process` stays, exclusively for behavioral description of flops, and `add_testbench` is introduced, exclusively for writing testbenches; does that clarify the plan?
<adamgreig[m]> but I think "yield" and "yield Tick" are a bit simpler than needing the testbench to be an async fn that can await sim.tick
<adamgreig[m]> hmm
<adamgreig[m]> well, idk, strike that
<whitequark[cis]> adamgreig: the testbench will eventually be an async fn either way (this is another planned change that I am confident we need)
<whitequark[cis]> RFC 27 is just a way to introduce things piecemeal
<adamgreig[m]> yea, fair
<whitequark[cis]> and I do think the proliferation of global names for the simulator alone is a problem, considering name clashes
<whitequark[cis]> I think adding a context argument is a good way to solve both of these problems
<zyp[m]> await will read better than yield from when a testbench is calling subfunctions
<zyp[m]> (and let subfunctions return values)
<whitequark[cis]> whitequark[cis]: in addition there will be `add_comb_process` for behavioral description of comb logic, but not yet, since the simulator internals are not quite ready for it
<whitequark[cis]> eventually that will probably all be packaged in something like SimulatorModule that you can add to the design normally or something
<whitequark[cis]> in any case, in add_comb_process only the things that normal comb logic can do will be legal, in add_sync_process only the kind of things that normal flops can do will be legal (i.e. no settling), and in add_testbench anything goes, but there is mandatory settling
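A rough sketch of that division of labour, with the existing `add_sync_process` used for a behavioral flop and the proposed `add_testbench` used for the testbench (signals and values are illustrative, and the exact rules are those being described above, not a finished design):

```python
from amaranth.hdl import ClockDomain, Module, Signal
from amaranth.sim import Simulator, Tick, Passive

m = Module()
m.domains += ClockDomain("sync")
d = Signal(8)
q = Signal(8)

def dff_model():
    # behaves like `m.d.sync += q.eq(d)`: only does what a flop could do
    # (samples its input at the clock edge, never settles, never waits on
    # other domains)
    yield Passive()                 # don't keep the simulation alive on our account
    while True:
        yield                       # wait for the next active edge of "sync"
        value = yield d             # d as sampled at that edge
        yield q.eq(value)

def testbench():
    yield d.eq(0x5A)
    yield Tick("sync")              # advance one "sync" edge; settles implicitly
    assert (yield q) == 0x5A

sim = Simulator(m)
sim.add_clock(1e-6)
sim.add_sync_process(dff_model, domain="sync")
sim.add_testbench(testbench)
sim.run()
```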
<jfng[m]> whitequark[cis]: is this going to be a behavioral model that you can use to e.g. stub modules ?
<whitequark[cis]> jfng: that is the idea, yes (it is concordant with our ongoing work on -stdio and -soc)
<jfng[m]> nice
<Chips4MakersakaS> <whitequark[cis]> "the idea is that `add_sync_proce..." <- I am OK with adding `add_testbench()` but still dislike how `add_sync_process()` works and I think that can be done better. Unfortunately I don't see myself having time to actually make a RFC on it. So I am OK with RFC 27 now.
<whitequark[cis]> Chips4Makers (aka Staf Verhaegen): right, we do have to keep the existing `add_sync_process()` effectively indefinitely because of backwards compat, but I of course have no objection to improvements in that area of the simulator that change the interface more radically
<whitequark[cis]> galibert: any remaining objection to RFC 27?
<whitequark[cis]> the question applies to anyone else
<galibert[m]> Objection no
<galibert[m]> Some aspects are still a little weird, is all
<whitequark[cis]> that is kind of how it is, yeah
<whitequark[cis]> all right! thanks everyone, I am glad that we can move on with that RFC. I will rewrite it fairly substantially to incorporate feedback and then present again
<zyp[m]> <whitequark[cis]> "adamgreig: the testbench will..." <- what do you plan for this wrt. backwards compatibility?
<zyp[m]> will add_testbench() detect whether it's passed a sync generator or async function, or will that be yet another add_*?
<whitequark[cis]> the idea was to use the former
<whitequark[cis]> but thinking about it, I am now wondering if maybe I should introduce add_testbench as something accepting an async fn right away
<whitequark[cis]> with the context argument, it is clear how to handle Tick, Passive, etc, which are the only reason not to introduce it as using an async fn
<whitequark[cis]> and as a motivation to migrate to add_testbench, functions like async def FIFOInterface.read will be introduced
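Purely as an illustration of that direction — the context argument, `sim.tick()`, and an async `FIFOInterface.read()` helper are all hypothetical at this point; none of these names exist yet:

```python
# hypothetical sketch only, not an existing API
async def testbench(sim):               # `sim` is the hypothetical simulator context
    await sim.tick("sync")              # wait for a "sync" clock edge via the context
    data = await fifo.read(sim)         # hypothetical async helper; `fifo` is a FIFOInterface
    assert data == 0x2A                 # illustrative expected value

simulator.add_testbench(testbench)      # `simulator` is a Simulator; would accept an async fn directly
```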
<zyp[m]> that sounds reasonable to me
<adamgreig[m]> yea, if the plan is to make it async eventually, and this is an entirely new API anyway, could definitely make it async to begin with
<_whitenotifier-f> [rfcs] whitequark commented on pull request #27: Testbench functions for the simulator - https://github.com/amaranth-lang/rfcs/pull/27#issuecomment-1775684023
<whitequark[cis]> yeah
<whitequark[cis]> we have half an hour left; happy to discuss @zyp's RFC 28 (https://github.com/zyp/amaranth-rfcs/blob/override-value-operators/text/0028-override-value-operators.md)
<zyp[m]> I very much want a way to await arbitrary edges, like I mentioned before, and if that's blocked until we've got an async API, I'd be inclined to start on the async stuff myself if you're not
<whitequark[cis]> zyp: that will be blocked on the async API, yes
nelgau has quit [Ping timeout: 255 seconds]
<whitequark[cis]> basically, if we have a simulator context, I am very happy to introduce something like await sim.edge(signal)
<whitequark[cis]> but I do not see myself introducing e.g. yield Edge(signal)
lofty has quit [Quit: I need a break from the internet]
<whitequark[cis]> zyp: are you open to pairing on the RFC and/or implementation? that will definitely make this happen faster
<Wanda[cis]> Catherine: in minor things, I also had some questions about non-fwft fifo deprecation in the scrollback
<whitequark[cis]> Wanda: ah yes, sure
<zyp[m]> whitequark[cis]: sure
<whitequark[cis]> zyp: perfect, I'll ping you then
<whitequark[cis]> regarding RFC 28: there is no way ValueCastable will inherit from Value; its raison d'etre is that it does not
<whitequark[cis]> this is a namespacing issue, essentially; it does not in general make sense for value-castables to carry all the random stuff Value does
<zyp[m]> hence «or a common base»
<whitequark[cis]> even if, by virtue of their value-castability, the operators can technically apply, that doesn't mean they would do the right thing
<whitequark[cis]> I also value the fact that right now ValueCastable is an interface (abstract class inheriting from nothing and only defining signatures)
<whitequark[cis]> so I don't see us moving to using a common base, and I'm happy to go ahead with your approach where there is an explicit check
<whitequark[cis]> the inheritance based operator overloading checks you are talking about make sense for implementing a numeric tower, which is something you are doing
<whitequark[cis]> however for e.g. structs that does not make sense
<whitequark[cis]> I would much rather have a special case for the relatively small subset of use cases (implementing numerics) than change how ValueCastable fundamentally works for the same
<whitequark[cis]> makes sense?
<zyp[m]> yep
<whitequark[cis]> I am completely happy to proceed with RFC 28 once it's finished, it is something I had on my own roadmap
<whitequark[cis]> and furthermore happy to proceed with an RFC introducing fixed point numbers into stdlib
<whitequark[cis]> anyone else has any comments or objections to RFC 28?
<Chips4MakersakaS> lgtm
<zyp[m]> okay, I'll try to have a final look at it and have it ready for a vote at the next meeting
<whitequark[cis]> perfect ^_^
<whitequark[cis]> any other matters to discuss? Wanda ?
<galibert[m]> Any timeline for 0.4?
<Wanda[cis]> well the fwft thing
<Wanda[cis]> <Wanda[cis]> "Catherine: wrt RFC 20 (non-..." <- here
<whitequark[cis]> galibert: "when it's done", as always; at work I only spend ~20% of my time on Amaranth (remaining 80% being on CXXRTL), so there'll be some slowdown
<whitequark[cis]> Wanda[cis]: oh yes, agree
<galibert[m]> Ok :-)
<whitequark[cis]> is that something you've been implementing?
<Wanda[cis]> yes
<Wanda[cis]> well, I started
<whitequark[cis]> cool! this doesn't require an RFC amendment though if you want to make a PR against it I don't see anyone objecting
<Wanda[cis]> <Wanda[cis]> "also: any preferences on the new..." <- also this
<whitequark[cis]> that is what I would do, yes
<Wanda[cis]> and a papercut
<whitequark[cis]> I want to drop non-FWFT functionality ASAP and that enables this path
<Wanda[cis]> SyncFIFOBuffered with depth=1 is broken currently
<Wanda[cis]> fix by specialcase or error?
<whitequark[cis]> fix by specialcase, that can be very useful for skid buffers and such
<Wanda[cis]> mhm
<Wanda[cis]> what about depth 0
<whitequark[cis]> is that well-defined?
<whitequark[cis]> well
<whitequark[cis]> just directly connect the wires?
<Wanda[cis]> could be done by just cross-wiring, but that changes some of the contract
<whitequark[cis]> I feel like for when we have streams, the stream interconnect should handle that as a special case, connecting source to sink
<Wanda[cis]> comb paths
<whitequark[cis]> and for SyncFIFOBuffered we preserve the contract
<Wanda[cis]> sorry, am on walk, terse
<whitequark[cis]> for AsyncFIFO this is just not possible to do, so we have the two cases matching up, which is nice
<whitequark[cis]> all makes sense to me
<Wanda[cis]> error then?
<whitequark[cis]> error on depth==0 (aren't we doing that already?), special case depth==1
<Wanda[cis]> ack, thanks
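For reference, the two cases just decided, in terms of the existing `amaranth.lib.fifo` API (a sketch of the intended outcome, not of current behaviour):

```python
from amaranth.lib.fifo import SyncFIFOBuffered

SyncFIFOBuffered(width=8, depth=1)   # to be special-cased (useful e.g. as a skid buffer)
SyncFIFOBuffered(width=8, depth=0)   # to raise an error instead of producing broken logic
```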
lofty has joined #amaranth-lang
<whitequark[cis]> thanks everyone (in particular galibert and Chips4Makers (aka Staf Verhaegen)) ! this has been an exceptionally productive meeting
FFY00_ has joined #amaranth-lang
FFY00 has quit [Ping timeout: 264 seconds]
nemanjan00[m] has joined #amaranth-lang
<nemanjan00[m]> I am learning amaranth as my first HDL (I did read some verilog before) and I have to say I am loving the docs
<whitequark[cis]> I'm glad! they're unfortunately still incomplete, but we're getting there
<nemanjan00[m]> Comparing the Verilog code and the Python that generated it, I would go as far as saying it makes more sense to me, once you get past understanding that you are generating HDL, not writing it
<whitequark[cis]> nice!
<whitequark[cis]> the original goal for Amaranth was to produce human-written-equivalent Verilog
<whitequark[cis]> we've regressed on that on the way to the current version mostly due to lack of resources and the fact that Verilog syntax rules are... cursed
<whitequark[cis]> some day we'll get back there
<zyp[m]> Catherine: I played a bit with async testbenches, looks like something like this is all it takes to be able to transparently switch to `async`/`await` instead of `yield`
<nemanjan00[m]> ```Const.cast(Cat(C(10, 4), C(1, 2)))```
<nemanjan00[m]> Maybe there should be an explanation of what C and Cat are? I assume C is an alias for Const, but I have no idea what Cat is
<zyp[m]> Cat is concatenation
<nemanjan00[m]> Cat is concatenate?
<nemanjan00[m]> Yeah, but I guess entry into it would be easier if the tutorial did not require referencing the docs
<nemanjan00[m]> I guess that is something I can contribute when I find time, since as a beginner I will notice the lack of context
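For context on the snippet being discussed (this part is standard Amaranth API): `C` is a shorthand alias for `Const`, and `Cat` concatenates its arguments starting from the least significant bits, so the expression evaluates to a 6-bit constant:

```python
from amaranth.hdl import C, Cat, Const

v = Cat(C(10, 4), C(1, 2))     # low 4 bits = 0b1010, next 2 bits = 0b01
print(Const.cast(v))           # a 6-bit constant with value 26 (0b011010)
```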
<whitequark[cis]> zyp: yes, however I explicitly do not want to drag the old API into the new async fn based interface
<whitequark[cis]> nemanjan00: there is currently no first-party tutorial
<whitequark[cis]> whatever you're reading is written by a community member
<zyp[m]> yeah, I figure the natural next step is to take out Command.__await__ and pass in a simulator context instead
<zyp[m]> to force async functions to interact through the context
<nemanjan00[m]> I was referring to this page:
<nemanjan00[m]> It references Cat in an example before it is explained
<whitequark[cis]> zyp: I'm also not sure about making values awaitable, but that's at least much easier to justify
<whitequark[cis]> nemanjan00: ohh, I see
<whitequark[cis]> that page is not a tutorial; it is an exhaustive guide to how the language works for people who already have a general familiarity and want to understand how exactly a part of the language functions
<whitequark[cis]> it is not possible to lay out a guide like that in a strictly linear way, which is why there will be (some day) a first-party tutorial too
<nemanjan00[m]> Oh, ok. Thanks for clarification
<whitequark[cis]> zyp: the alternatives aren't great; `await sim.get(x)` vs `await x`...
<whitequark[cis]> I think await x / await x.eq are easy to justify (this shouldn't be defined on Statement by the way, we don't implement await Switch() for example)
<whitequark[cis]> just because of their sheer commonality
<zyp[m]> could add an awaitable property or something, e.g. await x.value
<whitequark[cis]> so, a decent option is await x.get() / await x.set(...) and this is what cocotb does, IIRC, bringing us closer to this widespread library
<whitequark[cis]> I think distinguishing x.set(...) vs x.eq(...) is useful
<zyp[m]> hmm, yeah, sounds reasonable
<whitequark[cis]> the only thing that's actually needed is some way to pass the simulator itself to the statement. await/yield do this implicitly (the simulator lives on the stack), the context argument would be an explicit way to do that
<whitequark[cis]> the problem with the context argument is that if you let people capture it you now have to deal with all sorts of nastiness
<whitequark[cis]> which is a motivation to not have normal .get/.set go through the context argument
<whitequark[cis]> (like... what happens if the context from a testbench is captured elsewhere and used inside add_sync_process replacing a flop behaviorally? nightmare.)
<zyp[m]> having .get()/.set() also allows value castables to implement them to do something reasonable for the type
<whitequark[cis]> !
<whitequark[cis]> that's actually a really good point.
<whitequark[cis]> (strictly speaking nothing stops value-castables from doing all sorts of weird trickery in __aiter__ and their eq method, but this looks strictly better, in addition to alignment with cocotb)
<whitequark[cis]> so I guess simulator commands would stay in this redesign, since you do need to have some pseudo-awaitable returned from the coroutine
<zyp[m]> I think it makes sense to build the new API on top of the old first, and then rip out the old and simplify later
<zyp[m]> i.e. turn simulator commands to an implementation detail
<whitequark[cis]> they would be, mm... GetValue, SetValue, WaitValue (wait by mask, including waiting for an edge, optimizable for CXXRTL purposes once the CXXRTL reactor lands), WaitDomain (what we currently have as Tick; it's not actually accurate wrt what it does in presence of async resets, iirc)
<whitequark[cis]> zyp[m]: we do need to implement `Value.get` and `Value.set` somehow for the time being, and returning `self` and `self.eq` from `__aiter__` means we can't easily show a warning on non-internal use of those patterns
<zyp[m]> true
<whitequark[cis]> API-wise, I guess we could also have Value.wait to complement the first two
<whitequark[cis]> is a simple Value.wait(expected, mask) enough?
<whitequark[cis]> or should we do something more complex for the wait for edge case?
<zyp[m]> hmm, for awaiting arbitrary edges, I think it's convenient to be able to specify rising, falling or either
<zyp[m]> either could be expected=None, which IIRC is how the underlying code already works
<whitequark[cis]> that has weird semantics for multiple-bit values
<zyp[m]> not necessarily, «await any change» sounds useful
<whitequark[cis]> what is "wait for rising edge" for a 2-bit value?
<zyp[m]> what does add_trigger() currently do on multibit signals?
<whitequark[cis]> dunno (does that matter?)
<whitequark[cis]> we're not discussing technical feasibility but how we are teaching this
<whitequark[cis]> and how usable it is
<zyp[m]> right
<whitequark[cis]> maybe `await value.changed(expected=None, mask=None)` plus `await value.posedge()` & `await value.negedge()` are a good idea
<zyp[m]> anyway, your question is backwards; rising and falling edges are effectively .wait(1) and .wait(0) respectively on a 1-bit signal
<whitequark[cis]> no I don't think so
<whitequark[cis]> await value.wait(0) returns instantly if value already is set to 0
<whitequark[cis]> but await value.negedge() must not
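To pin down that distinction (`wait()` and `negedge()` here are names proposed in this discussion, not an existing API):

```python
# hypothetical semantics under discussion
async def example(sig):
    await sig.wait(0)       # level-sensitive: returns immediately if sig is already 0
    await sig.negedge()     # edge-sensitive: always blocks until an actual 1 -> 0 transition
```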
<whitequark[cis]> (be back in 30m)
<zyp[m]> should it also return instantly if you call it await value.changed(0)?
<whitequark[cis]> yes, which made me realize (shortly after the proposal) that changed is a bad name for this command
<whitequark[cis]> tbh, it would be nice if instead of requiring people to use `mask` explicitly we could recognize and optimize the AST with a `&` on the toplevel
<whitequark[cis]> and the same for edge detectors
<whitequark[cis]> but there's probably a limit to how far this can be taken and I don't know where it is
_whitelogger has joined #amaranth-lang
<zyp[m]> proof of concept that implements both wait() and changed() semantics: https://paste.jvnv.net/view/OLASx
<zyp[m]> which works and does the expected thing:
<zyp[m]> changed() immediately feels more useful than wait(), given that it's trivial to implement the latter on top of the former, but not the other way around
<whitequark[cis]> so one thing to keep in mind here is that the underlying command set is something that will be eventually passed to the CXXRTL reactor, so that it can be implemented in C++ instead of Python
<whitequark[cis]> this is absolutely critical for cxxsim being useful
<whitequark[cis]> this is why I'm putting so much thought into something that would otherwise be a mere implementation detail
<zyp[m]> yes, I understand that
<whitequark[cis]> right, okay
<zyp[m]> what's the current state of cxxsim? IIRC there's a branch for it?
<whitequark[cis]> yes
<whitequark[cis]> and it gives no performance advantage whatsoever
<zyp[m]> what's it currently bottlenecked by?
<whitequark[cis]> clock generation being done in Python
<zyp[m]> is it gonna help to move just the clock generation if you've got testbenches in Python that are waiting for the next clock tick?
<zyp[m]> sounds like we need a way to do a synchronous wait
<whitequark[cis]> and that's exactly why we need wait on edge
<zyp[m]> yeah, but for a lot of stuff we don't want wait on edge, we want wait on level at clock tick
<whitequark[cis]> hm.
<whitequark[cis]> the idea I had is that it would be possible to wait on a complex expression
<whitequark[cis]> `await (sim.tick() & (v == 1))` or whatever
<whitequark[cis]> I don't have concrete ideas for the syntax, only that the implementation would be essentially a simple logic expression minimizer that would give CXXRTL something like an AIG and then that would be accelerated in C++
<whitequark[cis]> or not even minimize anything, just translate the Amaranth AST directly to some structures and give to CXXRTL
<whitequark[cis]> the fastest way to do this without invoking too much ctypes overhead on every call is to define a string serialization, lolsob
<Wanda[cis]> ah yes, char *, the finest FFI interface invented
<whitequark[cis]> but you can see why that would be preferable over a complex AST where you have to allocate, right
<Wanda[cis]> of course
<whitequark[cis]> :')
notgull has quit [Ping timeout: 240 seconds]
notgull has joined #amaranth-lang
<zyp[m]> Catherine: by the way, did you implement RFC 27 yet?
<whitequark[cis]> there's a branch, yeah
<zyp[m]> I can't find it
<whitequark[cis]> it's in the main repo
<zyp[m]> ah, right
<Wanda[cis]> wow, that's been ... a while ago
<whitequark[cis]> yeah.
lf has quit [Ping timeout: 258 seconds]
lf has joined #amaranth-lang
toshywoshy has quit [Ping timeout: 240 seconds]
Raito_Bezarius has quit [Ping timeout: 258 seconds]