<d1b2>
<ld;cd> Is the idiomatic nmigen way to have a block that different parts of a design can request access ports from the way https://github.com/nmigen/nmigen/blob/b38b2cdad74c85c026a9313f8882f52460eb82e6/nmigen/hdl/mem.py does it? It seems slightly odd in that instead of being modules the ports seem to directly instantiate nmigen IR; I'm assuming there's a reason for it, but I can't figure out what
<mwk>
the thing with memories is: they have special behavior when they're emitted into RTLIL
<d1b2>
<ld;cd> The ports are still subclasses of Elaboratable, as one would hope, but interestingly Memory itself isn't, which I guess makes sense
<mwk>
the nMigen IR doesn't really represent memories natively
<mwk>
so Memory cheats
<d1b2>
<ld;cd> ah ok, so I should always use the built in Memory class when making things like caches, etc?
<mwk>
it creates instances of $memrd and $memwr RTLIL cells
<mwk>
and also fills them with nMigen IR to be used for simulation only
<d1b2>
<ld;cd> ok that makes sense
<mwk>
when you emit RTLIL, the IR is ignored, and the $memrd/$memwr cells are emitted; when you're simulating, the IR is used
<mwk>
so yes, the Memory class and friends are the only reasonable way of dealing with memories in nMigen
<mwk>
particularly ones where you want hardware memory primitives to actually be used, and not just get lowered to lots of FFs
<d1b2>
<ld;cd> so if I'm generating crossbar-switch-like objects I should just use modules but structure them in a similar way?
<d1b2>
<ld;cd> Yeah that would be unfortunate
<mwk>
hmm
<mwk>
I don't really see what exactly you are doing
<mwk>
but yes, definitely you should use modules
<mwk>
Instance has one purpose only: to ensure a particular cell gets emitted at the RTLIL level
<mwk>
whether a yosys internal cell or a vendor cell
<mwk>
(or an external module written in Verilog)
<d1b2>
<ld;cd> well my exact use case is I want a standardized interface from a cpu core to split or conjoined instruction and data buses
<d1b2>
<ld;cd> so I would pass an instance of a conjoined bus class to my cpu, which would provide two read ports and two write ports with read/valid and write/done signals respectively, and the conjoined bus class would have some logic to mux requests onto a single Wishbone/AXI/etc. bus
<d1b2>
<ld;cd> whereas with a split bus class there would be two read ports and two write ports but with no arbitration required on the backend
<d1b2>
<ld;cd> *required in the bus adapter
<d1b2>
<ld;cd> sorry if this is confusing, this isn't all fully formed in my head and is kind of growing organically
<d1b2>
<ld;cd> ideally I'd like this scheme to be able to support more read and write ports in the case that the buses are split and the underlying memory is dual ported, so I can have more than one execution unit perform memory operations in a given cycle
<d1b2>
<ld;cd> Also on an unrelated note if I want to emit a latch on purpose should I just do that through the same escape hatch?
<mwk>
I'd not advise using a latch without a really good reason
<mwk>
but, in principle, yes you can emit an instance of $dlatch (or one of the others) that way
<mwk>
and provide sim behavior
<mwk>
I don't know how well pysim can deal with those, though
<d1b2>
<ld;cd> Oh, the latch is for an ASIC, so as long as the placer is smart enough to understand the timing constraints (which wasn't the case using openlane about a year ago) it shouldn't be a footgun TM
<d1b2>
<ld;cd> yeah I don't expect the simulator to handle it, just looking for a quick way to get rid of some disgusting verilog generate statements in an existing codebase
<FL4SHK>
ld;cd: your project sounds interesting
<d1b2>
<ld;cd> The latches are for a respin of https://github.com/ucb-cs250/caravel_fpga250 from a couple people who were in the class as it looks like the first shuttle run isn't going to work out
<whitequark>
pysim should be able to handle latches in principle but it hasn't ever been done before that i know of
<d1b2>
<ld;cd> I worked on the config and integration part of the project; we initially used latches to store configuration data, but after it appeared that there were hold time violations (in our part, and apparently everywhere else, but we only checked our stuff because we assumed that caravel had been tested), we switched to flip flops at a pretty large area cost.