<whitequark[cis]>
Wanda: cursed: nothing currently prevents you from importing ShapeCastable and then doing `ShapeCastable()`.
<Wanda[cis]>
heh
<Wanda[cis]>
good old python.
<whitequark[cis]>
I think we should disallow that
<whitequark[cis]>
not sure how, overriding __init__ is technically a breaking change I guess?
<Wanda[cis]>
hmm
<Wanda[cis]>
why?
<whitequark[cis]>
because you could have done `class Foo(ShapeCastable, Bar):` and then called `super().__init__()`
<Wanda[cis]>
uhhh.
<whitequark[cis]>
I guess we could uh
<Wanda[cis]>
and I suppose we may even be doing something like that in enum
<whitequark[cis]>
do nothing in overridden __init__ only if type(self) is ShapeCastable
<whitequark[cis]>
also, something that is annoying: Python 3.11 renamed EnumMeta to EnumType
<Wanda[cis]>
... yeah
<Wanda[cis]>
I know
<whitequark[cis]>
I just learned that
<Wanda[cis]>
bumped into it a few times, didn't know what to do about it
<whitequark[cis]>
I guess we follow suit?
<whitequark[cis]>
rename and leave EnumMeta as an alias
<Wanda[cis]>
I think I considered it and it caused some problems if done before we drop support for old pythons?
<whitequark[cis]>
right now our enum wrapper is broken bc importing EnumType from it doesn't wrap it
<whitequark[cis]>
so I think we actually kinda have to fix it
<Wanda[cis]>
ugh.
<Wanda[cis]>
I'll look into it, then
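(For reference, a minimal sketch of the rename-with-alias approach; the names and structure are illustrative, not the actual amaranth.lib.enum wrapper.)
```python
import enum as py_enum

class EnumType(py_enum.EnumMeta):
    # py_enum.EnumMeta exists on every supported Python version; 3.11
    # merely renamed it to EnumType. The wrapped metaclass behaviour
    # would live here.
    pass

EnumMeta = EnumType  # keep the pre-3.11 name as a compatibility alias
```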
<jfng[m]>
<whitequark[cis]> "not sure how, overriding __init..." <- raise a TypeError in `__new__` if `cls` is a ShapeCastable ?
<whitequark[cis]>
we could do it in either new or init
<Wanda[cis]>
overriding __new__ is technically breaking in the exact same way, I think
<whitequark[cis]>
however I'm now thinking that we should switch from the `__init_subclass__` check to overriding `__new__`
<Wanda[cis]>
idk
<whitequark[cis]>
Wanda[cis]: it's not breaking if we override `__init__` and do only a super call when `type(self) is not ShapeCastable`, and the same for `__new__`
<Wanda[cis]>
hm.
<Wanda[cis]>
actually why is always calling super bad?
<whitequark[cis]>
well the other branch always raises
<whitequark[cis]>
so it kinda doesn't matter if it calls super
<Wanda[cis]>
yeah
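(A minimal sketch of the guard being discussed, assuming the bare class should refuse instantiation while subclasses, including multiple-inheritance users calling `super().__init__()`, keep working; not the actual Amaranth code.)
```python
class ShapeCastable:
    def __init__(self, *args, **kwargs):
        # Only the bare abstract class is rejected; the same guard could be
        # placed in __new__ instead.
        if type(self) is ShapeCastable:
            raise TypeError("ShapeCastable is an abstract class and cannot "
                            "be instantiated directly")
        super().__init__(*args, **kwargs)
```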
<whitequark[cis]>
and also, the current implementation with a check in __init_subclass__ is ... suboptimal
<whitequark[cis]>
for two reasons
<whitequark[cis]>
the main one is that i wanted to keep the documentation for the abstract methods within ShapeCastable itself
<Wanda[cis]>
why don't we use abc... oh right metaclass conflicts
<whitequark[cis]>
I added a check is_documentation() that checks if sphinx is running, to define them only then
<whitequark[cis]>
this worked... but sphinx is also detected as running in doctests... so all the doctests break violently
<Wanda[cis]>
hilarious
<whitequark[cis]>
I can't fathom why sphinx doctests run in the same python process as sphinx itself and I don't want to know
<whitequark[cis]>
hilarious is definitely not a way i would describe it at 0030
<whitequark[cis]>
but it is an adjective you could use
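(The check being described could be as simple as the sketch below, which assumes "sphinx has been imported" means "docs are being built"; that assumption is exactly what the doctest builder, running in the same process, violates.)
```python
import sys

def is_documentation():
    # Sketch of the heuristic: docs are being built iff sphinx is loaded.
    # Sphinx's doctest builder runs in the same process, so this is also
    # True there, which is what breaks the doctests.
    return "sphinx" in sys.modules
```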
<whitequark[cis]>
I want to finish this up, give the newly hatched reference docs to you for review, merge it, and then try to improve things maybe
<whitequark[cis]>
the full text of the ShapeCastable's contract is ... expansive
<whitequark[cis]>
and involves a lot of very precise language
<whitequark[cis]>
and I've never actually considered it before in its entirety rather than as a set of RFCs
<Wanda[cis]>
mhm
<Wanda[cis]>
it is quite expansive
<whitequark[cis]>
i'm going to fix the doctests now by applying incredible violence
<whitequark[cis]>
re: abc, i think this would create a situation where in lib.data, there would be custom metaclasses stacked two levels deep
<whitequark[cis]>
and I think if your Python metaclass includes a metaclass you need to touch grass
<Wanda[cis]>
metaclasses are like violence
<Wanda[cis]>
clearly we need to be using more of them
<whitequark[cis]>
I think I'm all for proportional use of force in this case
<whitequark[cis]>
so it looks like adding implementations (even empty) to ShapeCastable, especially including __call__, is in and of itself problematic
<whitequark[cis]>
I'm considering just giving up and putting those docstrings into reference.rst
<Wanda[cis]>
what does that break?
<whitequark[cis]>
there is some code in lib.data which calls super() and ends up hitting `ShapeCastable.__call__`, which has the wrong signature because it's called on the class instead of the instance
<johninbaltimore>
"also the with: body can only run once (it's not like a closure) so that creates more restrictions on things" I'm not sure what this means. In the most basic sense, I'm thinking something that notes something was written to a signal inside the pipeline, then if that signal is read at a later pipeline stage it just generates a number of registers equal to the number of stages in between and copies each forward into the next
<johninbaltimore>
typically for a pipeline where I need a value 5 stages later, I'll have an array of values e.g. intermediate_result = [unsigned(8) for x in range(0,5)] and I'll just put intermediate_result[4].eq(intermediate_result[3]), intermediate_result[3].eq(intermediate_result[2]), etc.
<johninbaltimore>
array of bundles of signals (shapes?) rather
<johninbaltimore>
the whole idea is to stop manually counting up how many delays are needed along the way and adjusting everything if you add a stage in the middle to do some other kind of work and writing code to tell it to copy each value forward each clock tick
<johninbaltimore>
push all of those details out of view
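(A sketch of that pattern with a hypothetical `delay` helper; `m` is assumed to be a `Module`, and the list comprehension above presumably builds `Signal(unsigned(8))` values rather than bare shapes.)
```python
from amaranth import Signal

def delay(m, sig, stages):
    # Hypothetical helper, not part of Amaranth: return `sig` delayed by
    # `stages` clock cycles by chaining registers in the sync domain.
    for _ in range(stages):
        reg = Signal.like(sig)
        m.d.sync += reg.eq(sig)
        sig = reg
    return sig

# usage: align a value with a result that appears 4 stages later
# aligned = delay(m, intermediate_result, 4)
```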
<whitequark[cis]>
yeah, I get why this is desirable
<johninbaltimore>
in any case I am not a language designer, it's just something I do a lot because i pipeline everything due to an obsession with doing stuff that chokes a 4GHz CPU to death but on an FPGA running at 4MHz
<whitequark[cis]>
what kind of stuff is it?
<johninbaltimore>
DSP stuff
<whitequark[cis]>
figured
<johninbaltimore>
generating square waves or phase modulation with LFOs
<whitequark[cis]>
we're going to add streams (and fixed point numbers) soon, do you think streams would fit into the mental model you're using?
<johninbaltimore>
be like "oh yeah, so I have a thousand operators going at 96kHz sample rate and the chip runs at 40MHz and consumes 0.0015w of power so it's suitable for portable things"
<johninbaltimore>
don't know what streams are, fixed point is going to be WAY WAY WAY WAY WAY helpful
<johninbaltimore>
I hate manually managing fixed point calculations
<johninbaltimore>
I'm not understanding what a stream is from that, unless a stream is just a way of transferring data quickly over a bus
<whitequark[cis]>
a stream is a bus, with handshaking
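(In signal terms, "a bus with handshaking" boils down to roughly this; a generic sketch, since Amaranth's stream library did not exist yet at this point.)
```python
from amaranth import Signal

payload = Signal(8)  # the data being transferred
valid   = Signal()   # source: "payload is meaningful this cycle"
ready   = Signal()   # sink:   "the payload is accepted this cycle"
# a transfer happens on every cycle where both valid and ready are high
```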
<johninbaltimore>
what I'm doing is more like if you write software in a loop to iterate through a table of things and perform several mathematical operations (= algorithm)
<whitequark[cis]>
the idea is that you can use streams and reusable components to build a flowgraph
<johninbaltimore>
except looping has to execute n CPU instructions for each thing it's calculating
<johninbaltimore>
whereas in the FPGA, I just perform the first step, send that forward, immediately start the first step on the next input (= pipeline)
<johninbaltimore>
I have no trouble moving data, it's just that a multiply may take 3 cycles and an add may take 1 and a few other things are happening and they all go out of sync and then when I want to use them I need to converge them back into the same pipeline stages using delays and then do further operations. I don't see how streams automatically calculate the delays for each piece of data that's being moved?
<johninbaltimore>
it's notable that my designs are absolutely infeasible if a pipeline ever stalls
<whitequark[cis]>
they don't. however, you could build an automation that accepts a flowgraph and adds delays where necessary
<whitequark[cis]>
then you wouldn't need language changes
<johninbaltimore>
what's a flow graph
<johninbaltimore>
are there docs on this?
<whitequark[cis]>
a flow graph is a general concept in computing
<whitequark[cis]>
it's a list of nodes and of connections between them (output to input). in this case you would also have annotations on each node, how long it takes to process its input
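(A sketch of that idea with hypothetical `Node` objects carrying a per-node latency; a balancing pass can then compute how many delay registers each input needs.)
```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    latency: int                 # cycles from valid inputs to valid output
    inputs: list = field(default_factory=list)

def arrival(node):
    # Cycle at which this node's output becomes valid, relative to the
    # inputs of the whole graph.
    if not node.inputs:
        return node.latency
    return node.latency + max(arrival(i) for i in node.inputs)

def input_delays(node):
    # Delay stages to insert on each input so that they all line up.
    arrivals = [arrival(i) for i in node.inputs]
    return [max(arrivals) - a for a in arrivals]

mul = Node("mul", latency=3)
add = Node("add", latency=1, inputs=[mul, Node("src", latency=0)])
print(input_delays(add))  # [0, 3]: the second input needs 3 delay stages
```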
<johninbaltimore>
oh
<whitequark[cis]>
it's a different way to express what you're building. it's kind of more verbose, but doesn't need language changes
<whitequark[cis]>
so you could get there faster if you went that route
<johninbaltimore>
I guess. I'd need to see docs for that feature
<johninbaltimore>
and the documentation for amaranth is kind of lacking in a few places…
<whitequark[cis]>
neither streams nor flowgraphs actually exist yet, it's something that's still in the future
<whitequark[cis]>
streams will probably exist soonish, flowgraphs aren't on my personal roadmap
<whitequark[cis]>
as for the docs, yeah. I've been handling that bit by bit over the last weeks, so the situation has been improving quite rapidly
<johninbaltimore>
also I'm not in much of a hurry, I'm always thinking long term
<whitequark[cis]>
the language guide is basically complete, save for a section on memories
<johninbaltimore>
I didn't go back and check the simulation stuff to see if you added information about simulating combinational designs
<whitequark[cis]>
I haven't touched that
<johninbaltimore>
I had to do that a couple months ago (I was trolling my TA by submitting generated verilog because she HATES verilog) and had to dig through the source code to figure out how that worked
<whitequark[cis]>
the simulator's interface is going to be redesigned soon, so it would have become obsolete if i did
<johninbaltimore>
ah ok
<johninbaltimore>
I'm guessing breaking changes are allowed
<whitequark[cis]>
the old one will be kept for compatibility
<johninbaltimore>
well. 0.* versions tend to imply breaking changes are allowed
<whitequark[cis]>
however one of the reasons we're redesigning it is that it's very difficult to explain
<whitequark[cis]>
Amaranth does provide a backwards compatibility guarantee even in the 0.* range, and has done so for many years
<johninbaltimore>
what about state machines? I saw some comments in the docs suggesting the syntax should be different
<whitequark[cis]>
no one's spearheading the changes to FSMs at the moment
<zyp[m]>
I've been considering writing a sort of math processor that can take a sequence of operations that can reference signals directly, and turn it into a little cpu with instruction memory generated and io registers hooked up
<zyp[m]>
for stuff where the sequencing is complex enough that writing out an FSM for everything is annoying, but not big enough that I want to write a whole separate firmware in C/C++/Rust/whatever to do it on a general purpose cpu
<zyp[m]>
<whitequark[cis]> "take a look at what Layout...." <- huh, I didn't realize until now that `.as_shape()` is allowed to return another shape-castable
<zyp[m]>
I assume the dance you refer to is what happens in Layout.cast(), which AIUI exists to turn the _AggregateMeta based stuff into a plain Layout
<zyp[m]>
there's a similar dance in Layout.__eq__() which I think could be simplified to call Layout.cast(), but otherwise I don't see any iterative calling of .as_shape() in lib.data
<zyp[m]>
and Layout.cast() is lib.data specific, so that's not going anywhere else
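(The "dance" referred to is essentially a loop that keeps calling `.as_shape()` until a plain `Shape` comes out; a rough sketch, not the actual `Shape.cast`, which also guards against cycles.)
```python
def cast_to_shape(obj):
    # Keep unwrapping shape-castables; whatever remains must be directly
    # castable (a Shape, an int, a range, an Enum, ...).
    while isinstance(obj, ShapeCastable):
        obj = obj.as_shape()
    return Shape.cast(obj)
```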
<zyp[m]>
all shape-castables are already required to implement .const(), so I figure Const() should be able to just forward to that without much magic required
<zyp[m]>
that is, the required magic will be something analogous to _SignalMeta, but for Const instead
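(A rough sketch of that forwarding, with a hypothetical `_ConstMeta` mirroring how `Signal()` defers to the shape-castable; not the actual implementation.)
```python
class _ConstMeta(type):
    def __call__(cls, value, shape=None, **kwargs):
        # A shape-castable is required to implement .const(), so let it
        # construct the constant itself.
        if isinstance(shape, ShapeCastable):
            return shape.const(value)
        return super().__call__(value, shape, **kwargs)
```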
<whitequark[cis]>
one sec
<zyp[m]>
I implemented and tested it, everything seems to work like I expect it to
<whitequark[cis]>
zyp: ok yeah, I think the functionality I was thinking of got removed somewhere in the meantime
<whitequark[cis]>
oh, I somehow completely missed that this would make Const() have the same weird behavior as Signal()
<whitequark[cis]>
well, it's not making things any worse really
<zyp[m]>
I'll take consistently weird over the alternative
<whitequark[cis]>
yeah
<whitequark[cis]>
thanks for writing this!
<zyp[m]>
this should simplify the RFC 40 implementation slightly as well
<crzwdjk>
What are the problems with the current FSM syntax? Well other than assignment to m.next being annoying and different from everything else in the language.
<whitequark[cis]>
no composition; no way to use an enum for the state signal; typoing state names silently introduces dead ends into the FSM; no way to reset without doing weird stuff with modules and ResetInserter or adding stuff to every state; annoying to do things like "when entering this state do X"; ...
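(For reference, the current syntax under discussion, assuming `start` and `done` signals; note how a misspelled name in `m.next` silently creates an empty, dead-end state.)
```python
with m.FSM() as fsm:
    with m.State("IDLE"):
        with m.If(start):
            m.next = "RUN"
    with m.State("RUN"):
        with m.If(done):
            m.next = "IDLE"   # a typo like "IDEL" here would not be caught
```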
<crzwdjk>
What could composition look like? Composition certainly seems like a thing I would want.
<crzwdjk>
And yeah, having the option to reset the FSM certainly seems like a desirable thing. Right now I use ResetInserter, but not having to make a submodule just for that would be nice in other cases.
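(The workaround mentioned looks roughly like this, with `worker` standing in for whatever elaboratable contains the FSM: wrapping it in `ResetInserter` lets an extra signal return the FSM to its initial state.)
```python
fsm_reset = Signal()
m.submodules.worker = ResetInserter(fsm_reset)(worker)
# asserting fsm_reset for a cycle resets everything inside `worker`,
# including the FSM's state register
```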
<whitequark[cis]>
I don't really know. I think FSM library design is an art of its own and it's not one I'm very familiar with
<crzwdjk>
Hm. I suppose I should try to find and study some literature or examples from other HDLs