ChanServ changed the topic of #prjunnamed to: FPGA toolchain project · rule #0 of prjunnamed: no one should ever burn out building software · https://github.com/prjunnamed/prjunnamed · logs: https://libera.irclog.whitequark.org/prjunnamed
<Wanda[cis]> perfect. you'll be in charge of squeezing out LUTs.
<whitequark[cis]> hm, should that have a test?
<whitequark[cis]> i'm genuinely unsure
<Wanda[cis]> well, we should have tests for stuff that's not unsextable, yeah
<Wanda[cis]> specifically for 2-long runs
<mei[m]> <Wanda[cis]> "you just love your microoptimiza..." <- well it's not even about optimizing the thing, it's just cleaner
<Wanda[cis]> I guesssss
<mei[m]> like, if you ask the question of "why shift by 1 specifically", it starts staring you in the face that actually shifting by same_count would make more sense
<Wanda[cis]> I was thinking in terms of "this offset sucks, let's just move on to the next one"
<Wanda[cis]> but yeah, you're kinda right
<mei[m]> true
<mei[m]> we don't really have any negative tests for "optimization not applicable" cases, do we?
<_whitenotifier-4> [prjunnamed] wanda-phi closed pull request #7: adc_unsext: shift by same_count when it is not enough (NFC) - https://github.com/prjunnamed/prjunnamed/pull/7
<_whitenotifier-4> [prjunnamed/prjunnamed] wanda-phi pushed 1 commit to main [+0/-0/±1] https://github.com/prjunnamed/prjunnamed/compare/3dc7ac8a17fd...22f5c1c5da8b
<_whitenotifier-4> [prjunnamed/prjunnamed] meithecatte 22f5c1c - adc_unsext: shift by same_count when it is not enough (NFC)
<Wanda[cis]> mei[m]: yeah and we should, for stuff where it's easily possible to screw up such as `adc_*`
<whitequark[cis]> i still wonder if we should have a way to create a Design with a verifier
<whitequark[cis]> so that it can be done per test rather than globally
<whitequark[cis]> that also potentially allows not having easy-smt as a dependency of netlist, which as it is just won't work
<Wanda[cis]> that sounds vaguely useful, yes
<_whitenotifier-4> [prjunnamed] meithecatte reviewed pull request #4 commit - https://github.com/prjunnamed/prjunnamed/pull/4#discussion_r1950076467
<_whitenotifier-4> [prjunnamed] meithecatte opened pull request #8: adc_unsext: add tests right at the boundary of applicability - https://github.com/prjunnamed/prjunnamed/pull/8
<mei[m]> are there any actual examples of the PBlah@y (arg) syntax anywhere?
<mei[m]> oh, right, there's PInput
<_whitenotifier-4> [prjunnamed/prjunnamed] wanda-phi pushed 1 commit to main [+0/-0/±1] https://github.com/prjunnamed/prjunnamed/compare/22f5c1c5da8b...f7e4246b7439
<_whitenotifier-4> [prjunnamed/prjunnamed] wanda-phi f7e4246 - Implement memory read port clear/reset/init/transparency unmapping.
<Wanda[cis]> well. untested. but still, with that out of the way, I can get to the iCE40 memory mapping proper.
<Wanda[cis]> though I'm not convinced I want to fuck that particular hedgehog right now
<Wanda[cis]> I still don't quite know how the read/write modes should be navigated in all cases
<Wanda[cis]> (the ice40 memory primitives are... weird about their supported write widths.)
<_whitenotifier-4> [prjunnamed] whitequark reviewed pull request #4 commit - https://github.com/prjunnamed/prjunnamed/pull/4#discussion_r1950109853
<_whitenotifier-4> [prjunnamed] whitequark closed pull request #8: adc_unsext: add tests right at the boundary of applicability - https://github.com/prjunnamed/prjunnamed/pull/8
<_whitenotifier-4> [prjunnamed/prjunnamed] whitequark pushed 1 commit to main [+0/-0/±1] https://github.com/prjunnamed/prjunnamed/compare/f7e4246b7439...f4ff85b97470
<_whitenotifier-4> [prjunnamed/prjunnamed] meithecatte f4ff85b - adc_unsext: add tests right at the boundary of applicability
<_whitenotifier-4> [prjunnamed] meithecatte reviewed pull request #4 commit - https://github.com/prjunnamed/prjunnamed/pull/4#discussion_r1950112954
<_whitenotifier-4> [prjunnamed] wanda-phi reviewed pull request #4 commit - https://github.com/prjunnamed/prjunnamed/pull/4#discussion_r1950120414
<_whitenotifier-4> [prjunnamed] whitequark reviewed pull request #4 commit - https://github.com/prjunnamed/prjunnamed/pull/4#discussion_r1950120760
<_whitenotifier-4> [prjunnamed] meithecatte reviewed pull request #4 commit - https://github.com/prjunnamed/prjunnamed/pull/4#discussion_r1950125063
<_whitenotifier-4> [prjunnamed] wanda-phi reviewed pull request #3 commit - https://github.com/prjunnamed/prjunnamed/pull/3#discussion_r1950127268
<_whitenotifier-4> [prjunnamed] meithecatte synchronize pull request #3: Add doc comments explaining some aspects of the IR - https://github.com/prjunnamed/prjunnamed/pull/3
<_whitenotifier-4> [prjunnamed] wanda-phi reviewed pull request #3 commit - https://github.com/prjunnamed/prjunnamed/pull/3#discussion_r1950137507
<_whitenotifier-4> [prjunnamed] wanda-phi reviewed pull request #3 commit - https://github.com/prjunnamed/prjunnamed/pull/3#discussion_r1950137644
<_whitenotifier-4> [prjunnamed] wanda-phi reviewed pull request #3 commit - https://github.com/prjunnamed/prjunnamed/pull/3#discussion_r1950138468
<_whitenotifier-4> [prjunnamed] wanda-phi reviewed pull request #3 commit - https://github.com/prjunnamed/prjunnamed/pull/3#discussion_r1950139035
<Wanda[cis]> mei: thanks for the docs PR
<Wanda[cis]> ... I think I'll go ahead and document memories and FFs once this is merged, since they have a fair bit of unobvious stuff
<_whitenotifier-4> [prjunnamed] whitequark reviewed pull request #4 commit - https://github.com/prjunnamed/prjunnamed/pull/4#discussion_r1950140533
<mei[m]> writing some docs for prjunnamed_pattern right now, should that go in a separate PR?
<whitequark[cis]> yes, and tag me on it since i wrote the crate originally
<mei[m]> as in, I should @ you on the PR?
<whitequark[cis]> yep
<whitequark[cis]> or set me as a reviewer maybe
<_whitenotifier-4> [prjunnamed] meithecatte opened pull request #9: Document the pattern language - https://github.com/prjunnamed/prjunnamed/pull/9
<mei[m]> i don't seem to have the necessary permissions to assign a reviewer
sdomi has quit [Ping timeout: 265 seconds]
sdomi has joined #prjunnamed
<Wanda[cis]> hrm haven't I added you as a collaborator a while ago
<Wanda[cis]> like... months ago
<mei[m]> i think you gave me read-only access back when the repo was private?
<Wanda[cis]> oh right, read-only
<Wanda[cis]> very well then
<Wanda[cis]> I believe you already have at least one accepted PR, so let's apply some other LLVM rules then.
<whitequark[cis]> :+1:
jleightcap has quit [Remote host closed the connection]
mupuf-soju has joined #prjunnamed
jleightcap has joined #prjunnamed
<_whitenotifier-4> [prjunnamed/prjunnamed] wanda-phi pushed 1 commit to main [+0/-0/±1] https://github.com/prjunnamed/prjunnamed/compare/f4ff85b97470...d4867da2e894
<_whitenotifier-4> [prjunnamed/prjunnamed] wanda-phi d4867da - simplify: fix concatenation order in a doc comment. NFCI
catlos has joined #prjunnamed
<catlos> i was briefly looking at the stuff you have been writing and saw how you are modelling x-prop semantics for smt. defining x-propagation operators for every cell is a pretty verilog-brained way of doing things and something I would avoid if at all possible
<Wanda[cis]> it is. yet, without a formal model, you have no way to reason about optimization correctness and will run into trouble sooner or later.
<catlos> i think it's broadly nicer to work with satisfiability/observability don't cares. i.e. a signal is X iff there is some pair of assignments to X bits in its COI that cause it to have two different valuations
<catlos> also if you do this you just transparently copy the standard model for the cells out twice to check satisfiability with x prop, rather than having the nasty task of trying to work out a reasonable model for x prop over division or whatever
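(An illustrative sketch of the model catlos describes: enumerate assignments to the X input bits of a cone, which amounts to instantiating the ordinary two-valued model twice, and call an output X only if two assignments can make it disagree. The closure-based "circuit" is a stand-in, not prjunnamed's IR.)

```rust
// Most-restrictive ("don't care") view of X: an output bit is X iff two
// assignments to the X input bits in its cone of influence make it differ.
// The circuit is a stand-in closure over concrete bools, not prjunnamed's IR.

fn output_is_x(circuit: impl Fn(&[bool]) -> bool, x_positions: &[usize], base: &[bool]) -> bool {
    let mut seen = None;
    // Enumerate every assignment to the X bits (fine for a small cone of influence).
    for mask in 0..(1u32 << x_positions.len()) {
        let mut inputs = base.to_vec();
        for (i, &pos) in x_positions.iter().enumerate() {
            inputs[pos] = mask & (1 << i) != 0;
        }
        let out = circuit(&inputs[..]);
        match seen {
            None => seen = Some(out),
            Some(prev) if prev != out => return true, // two valuations disagree: X
            _ => {}
        }
    }
    false // same value under every assignment: not X under this model
}

fn main() {
    let and = |i: &[bool]| i[0] && i[1];
    // y = a & b with a = X, b = 0: y is 0 either way, so not X under this model.
    assert!(!output_is_x(and, &[0], &[false, false]));
    // y = a & b with a = X, b = 1: y follows a, so it is X.
    assert!(output_is_x(and, &[0], &[false, true]));
    println!("ok");
}
```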
<mei[m]> what you're describing is how one would derive the most restrictive x-prop model
<Wanda[cis]> so, have you read the discussion backlog here about this?
<catlos> which allows the most optimization
<catlos> I have not because there is a lot of backlog here lol
<catlos> (or at least a lot for someone just skimming)
<Wanda[cis]> because the model where each X must actually be 0 or 1 tends to inhibit optimizations in ways that make it practically useless
<Wanda[cis]> do you have something more advanced than what was proposed?
<catlos> i'm not sure I follow? the model I am proposing is more precise than the one you are using, which allows more refinements
<catlos> (unless i misunderstand)
<Wanda[cis]> the previous discussion starts here: https://libera.irclog.whitequark.org/prjunnamed/2025-02-08#37718866;
<mei[m]> catlos: no, it allows fewer refinements
<Wanda[cis]> "allows more/less refinements" is rarely ever true.
<Wanda[cis]> the thing about undef-models broadly is that they define varying sets of refinements
<Wanda[cis]> you gain some, you lose some
<catlos> ahh yeah my reasoning is the same as jix's so I have not that much to add beyond that
<mei[m]> if you have X on input to a division, and your X-prop model is super precise, you need to make sure that any transformations don't make use of the excluded middle for any bits
<Wanda[cis]> what I'm kinda considering now is some sort of hybrid model with Verilog-like X, but also a freeze cell that, when strategically placed, allows you to just not worry about lack of excluded middle
<Wanda[cis]> but I have absolutely no idea if it works out
<catlos> the alternative of defining x prop per operator means that either you accept that there are many possible x prop implementations per operator (which is bad because, at the limit, it allows Xs to be entirely tainting, so any X in a COI makes a node X, making Xs effectively meaningless), or you try to define a specific x prop model individually for each cell, but this gets you again into verilog-style messes where the
<catlos> optimizations your tool is allowed to do are strongly tied to a slightly arbitrary choice of semantics
<whitequark[cis]> i'm not sure i see it as "verilog style"? llvm has much the same undef model
<whitequark[cis]> in fact i was thinking entirely of llvm when i was writing the rules, i don't super care for verilog
<catlos> i'm afraid i'm not so familiar with that. do users not complain about UB propagation allowing non-obvious optimizations with llvm too though?
<Wanda[cis]> UB is a different concept than undef
<catlos> (and from the formal verification tooling perspective it makes life a pain because people expect that once they formally verify the design, the behaviours of the synthesised design are a refinement, under the semantics used for formal verif, of the behaviours that were formally verified. formal tools have to use the style jix mentions because there aren't really any performant alternatives)
<Wanda[cis]> UB is "do this and the fabric of causality falls apart, forwards and backwards in time", undef merely propagates itself until muxed off
<catlos> Wanda[cis]: i mean i think it is fair for you to say your semantics are "undef can propagate freely except at muxes", but i do still think defining it for every operator gets really messy
<Wanda[cis]> defining x-prop rules for operators is not entirely trivial, but it's not particularly messy either. this is not the problem that makes dealing with X values messy.
<Wanda[cis]> the thing that makes dealing with undef values messy is a simple question
<Wanda[cis]> the question is: if you have a := undef, is a == a?
<Wanda[cis]> if your answer to that is yes, you run into the problems we have outlined in the logs: any optimization that actually makes use of the undef has to consume it to maintain the equality semantics, causing a whole lot of problems
<Wanda[cis]> if your answer to that is no (or rather, undef), you are now operating in a world where the law of excluded middle doesn't hold, and a bunch of useful logic transformations cannot be done
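(The two answers, made concrete over an illustrative three-valued bit type; neither function is prjunnamed's actual comparison semantics, they just show the fork in the road Wanda is describing.)

```rust
// Three-valued bit, purely illustrative (not prjunnamed's actual value representation).
#[derive(Clone, Copy, PartialEq, Debug)]
enum Trit { Zero, One, X }

// The "yes" answer: an X is some fixed-but-unknown bit, so comparing a value with
// itself is always true. Any optimization that resolves the X must then keep every
// use of it consistent, which is the "consume the undef" problem above.
fn self_eq_frozen(_a: Trit) -> Trit {
    Trit::One
}

// The "no (or rather, undef)" answer: any comparison touching an X is itself X,
// so `a == a` is not known to be true and the law of excluded middle is gone.
fn self_eq_propagating(a: Trit) -> Trit {
    match a {
        Trit::X => Trit::X,
        _ => Trit::One,
    }
}

fn main() {
    assert_eq!(self_eq_frozen(Trit::X), Trit::One);       // frozen-undef world
    assert_eq!(self_eq_propagating(Trit::X), Trit::X);    // propagating-undef world
}
```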
<catlos> I mean im not writing this so do whatever makes sense for the project, but I would argue for keeping LEM, I think it makes life much more predictable for the user even if you lose some optimization opportunity
<catlos> I would also point out that e.g. the example of duplicating a non-reset flop is perfectly allowed under ODC semantics as long as the reset value doesn't propagate out, although I will be the first to admit that languages provide no good capabilities for describing when you don't care about an output and also checking ODCs becomes a more global problem
galibert[m] has joined #prjunnamed
<galibert[m]> I’m in Cat’s camp there, where DK/DC and NaB are very different things (don’t know, don’t care, not a Boolean)
<Wanda[cis]> and I'm going to remind you that just naming three different things doesn't mean anything until you actually define proper formal semantics for them that can be reasoned about
<catlos> galibert: yeah I think this is also true but somewhat orthogonal
<galibert[m]> True
<whitequark[cis]> catlos: yeah you basically can't use this for optimization
<Wanda[cis]> by the way, as for duplicating flops
<catlos> I think I don't have much more to add to the conversation, but I would just point out again that formal verification is done assuming LEM and thus synthesis without it can introduce violating behaviours that it won't catch, but maybe this is me showing my biases towards the verification side of things
<Wanda[cis]> there's actually a worse case
<Wanda[cis]> consider a target with non-initializable memories.
<whitequark[cis]> there is a tension between "you can't write formal tools performant enough" and "you can't write synthesis tools performant enough" and like obviously the formal tools people pick what favors formal tools and the synthesis catgirls pick what favors synthesis tools
<Wanda[cis]> further, consider a memory with 1 write port and 10 read ports
<Wanda[cis]> there are no memories with 10 read ports. the only way to realize such a memory is by duplicating it per read port (or maybe a pair of read ports or something).
<whitequark[cis]> this is not a new tension
<whitequark[cis]> anyway, may i suggest doing FV on the post synthesis netlist?
<Wanda[cis]> but when you duplicate it, you have no way to ensure that the initial value is actually the same across the duplicates.
<Wanda[cis]> hence, the initial value of such a memory can only be formally described as a LEM-failing poisonous X. either that, or you fail synthesis.
<catlos> whitequark[cis]: i mean there is another question there about people that want to do (S)LEC and the semantics those tools will use, but maybe a conversation for another day
<whitequark[cis]> catlos: we do have a LEC in the synthesizer already!
<catlos> that uses the same SMT lowering as the optimizer. if i were a paranoid engineer doing a tapeout i might choose to use a different LEC tool haha
<whitequark[cis]> the optimizer doesn't use SMT at all at the moment
<whitequark[cis]> (and it's not clear if it will ever reuse the lowering, actually)
<catlos> oh fair enough
<whitequark[cis]> it might! in which case let's consider CEC and SEC separately
<catlos> still, same frontends etc. i am under the impression its quite common to try to use relatively separate/diverse EC tools for this reason
<catlos> ~~although maybe every other frontend is just verific anyway~~
<whitequark[cis]> CEC is "easy": you cut the flops into PIs+POs then you check if the logic is indeed equivalent under whatever model you like the most
<whitequark[cis]> the answer is either yes or no and if it is no you hunt down and disable the offending optimization
<whitequark[cis]> I mean, that seems to me like the only reasonable approach, is that not so?
<whitequark[cis]> (you do need to match up duplicated flops but we can output a file that lists all of them somehow. or add a witness cell into the netlist that says "we assume these two nets are equivalent")
<whitequark[cis]> so you can use any tools you want, with some additional legwork (extra assertions)
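(A toy version of that CEC recipe, assuming the flops have already been cut into extra primary inputs/outputs: what remains is a plain combinational equivalence check, done here by exhaustive enumeration instead of SAT. The closures stand in for the pre- and post-optimization logic cones.)

```rust
// Minimal CEC sketch: with flops cut into PIs/POs, equivalence of the remaining
// combinational logic is checked over every input pattern. Exhaustive enumeration
// stands in for SAT; fine for toy sizes only.

fn comb_equivalent(n_inputs: usize, a: impl Fn(u32) -> u32, b: impl Fn(u32) -> u32) -> bool {
    // Each pattern covers the original PIs *and* the cut flop outputs.
    (0..(1u32 << n_inputs)).all(|pattern| a(pattern) == b(pattern))
}

fn main() {
    // "Before": o = (a & b) | c  -- bits 0, 1, 2 of the pattern.
    let before = |p: u32| ((p & 1) & ((p >> 1) & 1)) | ((p >> 2) & 1);
    // "After": the same function, restructured as an optimization pass might.
    let after = |p: u32| if (p >> 2) & 1 == 1 { 1 } else { (p & 1) & ((p >> 1) & 1) };
    assert!(comb_equivalent(3, before, after));
    println!("equivalent");
}
```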
<catlos> there's extra stuff (lots of rewriting etc.) done in commercial CEC tools to handle the fact that SAT methods are terrible at comparing some arithmetic functions, but yeah that's the general idea. it also benefits, I believe, from incrementally pairing up internal equivalences FRAIGing style
<whitequark[cis]> SEC, as far as i can tell, is not tractable in the general case of "here is a netlist before and here is a netlist after, good luck, they may or may not work the same"
<catlos> it is in many ways not too different from CEC, you want to find as many internal equivalences as possible except you are using induction rather than pure SAT. but yes it is generally harder
<whitequark[cis]> in order to do SEC without waiting for hours best case, you have to let your optimizer guide the process by emitting additional clauses indicating which state transformations it used
<catlos> or in particular state equivalences are great because then you can do CEC on nodes in their fanout
<Wanda[cis]> whitequark[cis]: ~~as long as you don't happen to have multiplied two numbers somewhere~~
<whitequark[cis]> at least, that is my understanding from reading the literature so far
<catlos> i mean in general in verification any hints you can give the verifier can cut time down a lot
catlos has quit [Quit: Leaving]
<mei[m]> <Wanda[cis]> "hence, the initial value of such..." <- i really like this example, btw
<mei[m]> <catlos> "that uses the same SMT lowering..." <- If we lower to SAT, verifying the LRAT unsat proofs produced by the SAT solver wouldn't be too hard
<jix> fwiw, the argument about initial state convinced me that you need undef for synthesis so I'd say FV needs to adapt
<galibert[m]> Oh, I thought it was needed more for simulation and verification
<galibert[m]> Synthesis just doing as it’s told
<jix> say you do functional verification as before, i.e. interpreting x as frozen undef, and you know synthesis does refinement under some specified x propagation rules (i.e. what you started verifying using SMT solvers now)
<jix> that does leave a potential gap, but I'd argue you need to verify that this gap doesn't matter for your design, because otherwise your design is written in a way that allows synth to break it
<jix> and I think any x-prop model that doesn't have stuff like SV's `===` or `if ('x) ...` will behave quite nicely in that context
<jix> in particular I think what catlos said about the standard FV model (i.e. x is frozen undef) always being a refinement of such an x-prop model is true
<mei[m]> or you can do your FV post-techmapping
<jix> unless you have a complete spec that fully constrains the intended circuit behavior, that doesn't help you
<mei[m]> i mean, if you do your FV post-synthesis, your FV doesn't need to support X-prop and will give you as much assurance as a pre-synthesis FV toolchain that does support X-prop semantics exactly matching the synthesis toolchain
<jix> yeah it gives you the same assurances, but I'd argue you'd often need/want more assurances if you're using a synthesis that does x-prop vs one that doesn't
<jix> it makes no difference if you have a complete spec, but if you're only using FV to check certain properties I'd argue that optimizing with x-prop during synthesis can introduce behavior that's not captured by the spec, wasn't intended and isn't obvious from looking at the design
<jix> and that's ignoring performance concerns
<jix> but the only thing you need to verify to be able to use the standard FV model for functional verification is that under the x-prop model your design doesn't produce any x outputs post reset (or some restrictions of that + maybe something to deal with memories that don't get reset)
<whitequark[cis]> mm, yeah
<jix> so the point is that you can usually come up with some conditions that ensure that the difference between the two models doesn't make a difference for your design and these conditions are really simple compared to many functional properties and certainly to a full design spec
<jix> and the same conditions then also make sure you don't have any reset bugs due to unintended x propagations
catlos has joined #prjunnamed
<catlos> yeah having thought about it a bit more, I wonder if a sensible model that makes concessions for practicality whilst still being relatively neatly constrained would be something like "multiple uses of a state element's initial value do not need to see the same value" and something like "comb operators are only required to produce non-X values when all inputs are non-X with the exceptions of boolean operators and muxes", then some
<catlos> kind of rule around concat/extract handling is probably useful too
<catlos> i just think otherwise you end up tied to whatever model of x prop you happened to implement for division or whatever on the day you implemented it when it's not even clear if that permits useful optimizations
<catlos> <mei[m]> "If we lower to SAT, verifying the LRAT unsat proofs..." there are plenty of useful rewrites that can't easily be verified in anything like reasonable time like this, the most notable being anything to do with multipliers
<catlos> <mei[m]> "or you can do your FV post-techmapping..." this is an absolute pain to make happen, incredibly brittle if you are a library/ip provider and way slower because you optimise all the high level structure that is useful for formal verif away
<whitequark[cis]> concat/extract?
<catlos> uhh like {a, b} and a[7:0]
<catlos> the idea being that sometimes people end up squishing two things that are not the same into one word, e.g. a 2-bit register where only one bit has a reset value. you want to be able to consider the X-ness of the bits individually
<catlos> alternatively you can do this just on the level of bits, but i think most of the time thinking just about the ranges that appear in concats/extracts tends to be sufficient
<mei[m]> catlos: yeah that's pretty much what happens already
<mei[m]> though i think the current semantics for div, at least, is "even one X on the input means the output is all X"
<whitequark[cis]> we generally consider the netlist on a bit level in unnamed
<mei[m]> yeah i think the netlist is more compact in memory when fine rather than coarse
<whitequark[cis]> i... don't think that is true?
<whitequark[cis]> coarse cells take the same amount of space as equivalent fine cells plus a bit more
<Wanda[cis]> <catlos> "i just think otherwise you end..." <- division has rather simple semantics of "any X in input gives you all-X output". this has the huge benefit of not tying you into any particular lowering.
<mei[m]> whitequark[cis]: that's what i said?
<Wanda[cis]> in particular, note that the model you're proposing, where you carefully track what sets of results are possible given the Xes on inputs, would be ridiculously overcomplicated, and you'd have to very carefully design your lowering to not screw this up. which is probably not possible anyway.
<whitequark[cis]> mei[m]: oh, sorry, i misread
<Wanda[cis]> basically division is simple because any other x-prop model immediately makes you go insane
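(The division rule as stated above, sketched over an illustrative trit vector; the division-by-zero handling here is an assumption of this toy, not a statement about prjunnamed's cell semantics.)

```rust
// Sketch of the x-prop rule for division described above: any X anywhere in the
// inputs makes every output bit X; otherwise compute normally. Illustrative only.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Trit { Zero, One, X }

fn to_u64(bits: &[Trit]) -> Option<u64> {
    // LSB-first; None if any bit is X.
    bits.iter().enumerate().try_fold(0u64, |acc, (i, b)| match b {
        Trit::Zero => Some(acc),
        Trit::One => Some(acc | (1 << i)),
        Trit::X => None,
    })
}

fn div_xprop(a: &[Trit], b: &[Trit]) -> Vec<Trit> {
    let width = a.len();
    match (to_u64(a), to_u64(b)) {
        (Some(x), Some(y)) if y != 0 => {
            let q = x / y;
            (0..width)
                .map(|i| if (q >> i) & 1 == 1 { Trit::One } else { Trit::Zero })
                .collect()
        }
        // Any X input (or a division by zero, in this toy) poisons the whole result.
        _ => vec![Trit::X; width],
    }
}

fn main() {
    let a = [Trit::One, Trit::X];    // dividend with an X bit
    let b = [Trit::One, Trit::Zero]; // divisor = 1
    assert_eq!(div_xprop(&a, &b), vec![Trit::X, Trit::X]);
}
```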
<Wanda[cis]> multiply and add are more tricky.
<Wanda[cis]> for adc, the model I picked only poisons bits upwards of the X bit in input, and doesn't affect lower bits
<mei[m]> how do DSP cells handle this?
<mei[m]> (for mult, ofc)
<Wanda[cis]> this is opposed to Verilog model, which poisons the whole output
<Wanda[cis]> and I believe our model is more useful because it allows us to e.g. merge a narrow addition into a wider one that happens to share a prefix
<Wanda[cis]> for mul... we currently have the all-poisoning semantics, but we may yet change it to be same as adc for similar reasons
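(One way to read the adc rule: a ripple-carry adder over three-valued bits, where an X operand bit turns that sum bit and the carry into X, so everything above it is poisoned while lower bits stay defined. Illustrative only, not prjunnamed's actual adc implementation.)

```rust
// Sketch of the adc x-prop rule described above: an X in an input bit poisons that
// output bit and everything above it (via the carry), but leaves lower bits alone.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Trit { Zero, One, X }

fn adc_xprop(a: &[Trit], b: &[Trit], mut carry: Trit) -> Vec<Trit> {
    let mut out = Vec::with_capacity(a.len());
    for (&ai, &bi) in a.iter().zip(b.iter()) {
        match (ai, bi, carry) {
            (Trit::X, _, _) | (_, Trit::X, _) | (_, _, Trit::X) => {
                // Once any operand bit or the carry is X, this bit and the carry
                // become X, so every higher bit ends up X as well.
                out.push(Trit::X);
                carry = Trit::X;
            }
            _ => {
                let ones = [ai, bi, carry].iter().filter(|&&t| t == Trit::One).count();
                out.push(if ones % 2 == 1 { Trit::One } else { Trit::Zero });
                carry = if ones >= 2 { Trit::One } else { Trit::Zero };
            }
        }
    }
    out
}

fn main() {
    // a = X01 (bit 2 is X), b = 001: bits 0 and 1 stay defined, bit 2 is X.
    let a = [Trit::One, Trit::Zero, Trit::X];
    let b = [Trit::One, Trit::Zero, Trit::Zero];
    assert_eq!(adc_xprop(&a, &b, Trit::Zero), vec![Trit::Zero, Trit::One, Trit::X]);
}
```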
<Wanda[cis]> mei[m]: are you asking what happens when you feed a `0.5` into the physical DSP cell?
<mei[m]> are they glitch-free?
<jix> does the upward poisoning only property hold for booth multipliers?
<Wanda[cis]> well, the answer is pretty obvious: the vendor does not document that.
<Wanda[cis]> what does glitch-free mean?
<mei[m]> if bit 1 goes from 0 to 1, does bit 0 stay stable
<Wanda[cis]> mmm.
<Wanda[cis]> booth multipliers are an excellent point.
<catlos> Wanda[cis]: I am not quite proposing that model anymore (although it is relatively straightforward to check if you use SAT throughout), I do see the benefit in allowing more tainting as with x prop but I think if you do so you should commit to things like arithmetic operators being fully tainting
<Wanda[cis]> so that'd be another thing that requires freeze because I guessssss you can construct contrived enough circumstances
<mei[m]> Wanda[cis]: at which point do you need the freeze, though?
<Wanda[cis]> I suppose you could technically try to divine glitch behavior for DSP cells from timing arcs
<whitequark[cis]> that's cursed
<Wanda[cis]> i.e. if A[1] has any chance of affecting O[0], there must be a timing arc because otherwise the timing information would be a lie
<Wanda[cis]> (and as we know vendors never lie)
<mei[m]> ...
<galibert[m]> At least reverse-engineered fpga specs rarely lie :-)
<whitequark[cis]> unless it's icestorm, or mistral, or...
<Wanda[cis]> look. the only thing I'm willing to trust about FPGA behavior is a scope. and even that is qualified.
<whitequark[cis]> having watched Wanda redo the icestorm work I'm asking myself how come anything I've synthesized works at all (ok, this is a little exaggerated)
<galibert[m]> Hey, mistral doesn’t lie, it just has an alternative truth or two
catlos has quit [Quit: Leaving]
<Wanda[cis]> I mean, icestorm doesn't really lie for the most part, it's just... a little incomprehensible
<Wanda[cis]> (sure I did find some problems, but those were in obscure corners)
gatecat[m] has joined #prjunnamed
<gatecat[m]> literally corners? ;)
<gatecat[m]> I definitely wouldn't be surprised if something wasn't right there
<Wanda[cis]> we'll see
<Wanda[cis]> I'm just hooking up this part
<_whitenotifier-4> [prjunnamed] whitequark reviewed pull request #9 commit - https://github.com/prjunnamed/prjunnamed/pull/9#discussion_r1951436782
<_whitenotifier-4> [prjunnamed] whitequark reviewed pull request #9 commit - https://github.com/prjunnamed/prjunnamed/pull/9#discussion_r1951443012
<_whitenotifier-4> [prjunnamed] whitequark reviewed pull request #9 commit - https://github.com/prjunnamed/prjunnamed/pull/9#discussion_r1951447984
<_whitenotifier-4> [prjunnamed] whitequark reviewed pull request #9 commit - https://github.com/prjunnamed/prjunnamed/pull/9#discussion_r1951446217
<_whitenotifier-4> [prjunnamed] whitequark reviewed pull request #9 commit - https://github.com/prjunnamed/prjunnamed/pull/9#discussion_r1951440689
<_whitenotifier-4> [prjunnamed] whitequark reviewed pull request #9 commit - https://github.com/prjunnamed/prjunnamed/pull/9#discussion_r1951440036
<_whitenotifier-4> [prjunnamed] whitequark reviewed pull request #9 commit - https://github.com/prjunnamed/prjunnamed/pull/9#discussion_r1951450591
<Wanda[cis]> gatecat[m]: ... actually middle of edges
<Wanda[cis]> the latch global input thing
<gatecat[m]> right
<gatecat[m]> I'd be a bit suspicious of the up5k corner routing, I really didn't know what I was doing when I attempted to figure that out
<Wanda[cis]> hm
<Wanda[cis]> corner routing?
<gatecat[m]> it was the first tricky problem in fpga re that I ever encountered
<Wanda[cis]> you mean the weird span4 thing?
<gatecat[m]> yup
<Wanda[cis]> oh I um.
<Wanda[cis]> I kinda decided to whack it with a big hammer.
<gatecat[m]> sensible
<Wanda[cis]> by which I mean I have verified my own thing against the .dev file
<Wanda[cis]> I haven't actually checked if it matches against icestorm
<_whitenotifier-4> [prjunnamed] meithecatte reviewed pull request #9 commit - https://github.com/prjunnamed/prjunnamed/pull/9#discussion_r1951460284
<_whitenotifier-4> [prjunnamed] whitequark reviewed pull request #9 commit - https://github.com/prjunnamed/prjunnamed/pull/9#discussion_r1951462217
<_whitenotifier-4> [prjunnamed] meithecatte commented on pull request #9: Document the pattern language - https://github.com/prjunnamed/prjunnamed/pull/9#issuecomment-2651871478
<_whitenotifier-4> [prjunnamed] whitequark commented on pull request #9: Document the pattern language - https://github.com/prjunnamed/prjunnamed/pull/9#issuecomment-2651886743
<_whitenotifier-4> [prjunnamed] whitequark commented on pull request #9: Document the pattern language - https://github.com/prjunnamed/prjunnamed/pull/9#issuecomment-2651905480
<_whitenotifier-4> [prjunnamed/vscode-syntax] whitequark pushed 2 commits to main [+1/-0/±5] https://github.com/prjunnamed/vscode-syntax/compare/03dcdbf008ab...7cd1d7147279
<_whitenotifier-4> [prjunnamed/vscode-syntax] whitequark f3f21d1 - Add `debug` keyword.
<_whitenotifier-4> [prjunnamed/vscode-syntax] whitequark 7cd1d71 - Add CI workflow.
<_whitenotifier-4> [prjunnamed/vscode-syntax] whitequark pushed 1 commit to main [+0/-0/±1] https://github.com/prjunnamed/vscode-syntax/compare/7cd1d7147279...d56f1f11d8cf
<_whitenotifier-4> [prjunnamed/vscode-syntax] whitequark d56f1f1 - Prepare for publishing.
<_whitenotifier-4> [prjunnamed/vscode-syntax] whitequark tagged d56f1f1 as v0.2.3 https://github.com/prjunnamed/vscode-syntax/commit/d56f1f11d8cf80dfebeccdb68692fac0134baedc
<_whitenotifier-4> [vscode-syntax] whitequark created tag v0.2.3 - https://github.com/prjunnamed/vscode-syntax
<whitequark[cis]> the syntax is now available via vscode marketplace: https://marketplace.visualstudio.com/items?itemName=prjunnamed.prjunnamed-syntax
<Wanda[cis]> this involved incredible amounts of suffering
<Wanda[cis]> jesus fuck, microsoft has the single worst sign-in/sign-up flow I have ever seen