whitequark[cis] changed the topic of #amaranth-lang to: Amaranth hardware definition language · weekly meetings: Amaranth each Mon 1700 UTC, Amaranth SoC each Fri 1700 UTC · code https://github.com/amaranth-lang · logs https://libera.irclog.whitequark.org/amaranth-lang · Matrix #amaranth-lang:matrix.org
lf has quit [Ping timeout: 246 seconds]
lf has joined #amaranth-lang
<mithro> https://arxiv.org/pdf/2311.03489.pdf talks about using LLMs to generate Amaranth
<tpw_rules> strange that it describes amaranth as HLS
lofty has quit [Ping timeout: 240 seconds]
lofty has joined #amaranth-lang
Lord_Nightmare has quit [Server closed connection]
Lord_Nightmare has joined #amaranth-lang
benreynwar has quit [Server closed connection]
benreynwar has joined #amaranth-lang
feldim2425_ has quit [Server closed connection]
feldim2425 has joined #amaranth-lang
Degi_ has joined #amaranth-lang
Degi has quit [Ping timeout: 252 seconds]
Degi_ is now known as Degi
balrog has quit [Quit: Bye]
balrog has joined #amaranth-lang
esden has quit [Server closed connection]
esden has joined #amaranth-lang
<tpw_rules> so i have 64 modules and every 16384 cycles at 50MHz they all produce a 16 bit word (on the same cycle, this can't be changed) and i want to shove them all into a FIFO in order. what's the best way to do this? a giant mux? some sort of OR tree? a giant shift register?
<tpw_rules> (most likely to pass timing and secondarily to reduce resource usage)
<tpw_rules> my money's on the shift register i think
Wanda[cis] has joined #amaranth-lang
<Wanda[cis]> yeah, just stuff them into a giant shift register; you'll effectively be doing a parallel write, serial read
<Wanda[cis]> doesn't really get cheaper than that
notgull has quit [Ping timeout: 240 seconds]
<Wanda[cis]> (nor easier to meet timing for that matter)
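A minimal sketch of the parallel-write, serial-read shift register Wanda describes, in Amaranth. The module name, the `load` strobe, and the FIFO-style `w_data`/`w_en`/`w_rdy` handshake are illustrative assumptions, not taken from the discussion; with 16384 cycles between loads there is ample time to drain 64 words.

    from amaranth import Elaboratable, Module, Signal, Cat

    class ParallelToSerial(Elaboratable):
        """On `load`, capture all 64 x 16-bit words at once, then shift one
        word per cycle towards a downstream FIFO."""
        def __init__(self, count=64, width=16):
            self.count  = count
            self.width  = width
            self.load   = Signal()                      # all words arrive on this cycle
            self.words  = [Signal(width) for _ in range(count)]
            self.w_data = Signal(width)                 # to FIFO .w_data
            self.w_en   = Signal()                      # to FIFO .w_en
            self.w_rdy  = Signal()                      # from FIFO .w_rdy

        def elaborate(self, platform):
            m = Module()
            shreg   = Signal(self.count * self.width)
            pending = Signal(range(self.count + 1))

            m.d.comb += [
                self.w_data.eq(shreg[:self.width]),     # lowest word goes out first
                self.w_en.eq(pending != 0),
            ]
            with m.If(self.load):
                m.d.sync += [
                    shreg.eq(Cat(*self.words)),         # parallel write
                    pending.eq(self.count),
                ]
            with m.Elif(self.w_en & self.w_rdy):
                m.d.sync += [
                    shreg.eq(shreg >> self.width),      # serial read
                    pending.eq(pending - 1),
                ]
            return m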
notgull has joined #amaranth-lang
jjsuperpower has quit [Ping timeout: 255 seconds]
nyanotech has quit [Server closed connection]
nyanotech has joined #amaranth-lang
notgull has quit [Ping timeout: 260 seconds]
notgull has joined #amaranth-lang
<galibert[m]> Wonder what it is, a little slow for a microphone array
qookie has quit [Server closed connection]
qookie has joined #amaranth-lang
catlosgay[m] has quit [Quit: Idle timeout reached: 172800s]
notgull has quit [Ping timeout: 246 seconds]
notgull has joined #amaranth-lang
<tpw_rules> the processing thereof :)
<galibert[m]> At 3 kHz?
nelgau has quit [Read error: Connection reset by peer]
nelgau_ has joined #amaranth-lang
nelgau_ has quit [Read error: Connection reset by peer]
nelgau has joined #amaranth-lang
<tpw_rules> i just rounded the numbers heavily
<galibert[m]> Well, in any case, microphone arrays are cool
<tpw_rules> yeah some uni friends are handling the electronics and math. i signed up to tutor them on the fpga stuff
<galibert[m]> That's how I discovered fpgas 20 years ago, using one to read 32 stereo ADCs and turn the data into udp frames
<galibert[m]> At NIST
<tpw_rules> we're basically doing that + an ungodly quantity of processing
<tpw_rules> but i don't wanna talk too much, it's not my project
<galibert[m]> Yeah, at the time there was no way to do the processing on the fpga. The network was barely good enough to send the data in the first place
<galibert[m]> And the fpga was a spartan 2
<tpw_rules> how did you hook it to ethernet?
<galibert[m]> With a PHY. It was fast ethernet then; 25 MHz was enough
<tpw_rules> we're passing it through a linux soc in large part because that's how the board is wired... and for learning
<galibert[m]> The bitstream did the ethernet frames (crc mostly), arp and udp
<galibert[m]> I don't remember if we had managed bootp
<tpw_rules> i also didn't want to try to do that stuff, i know linux networking is good
<galibert[m]> Well, I was working on MKII, not 3
<tpw_rules> mmm, java, perl, endian swapping...
<tpw_rules> eagle
<tpw_rules> did you run this on a Sun too?
<galibert[m]> (and 1, I made 1 work too, it was using a tms dsp for the a/d)
<galibert[m]> Nah, linux already
<tpw_rules> :P
jfng[m] has quit [Quit: Idle timeout reached: 172800s]
<galibert[m]> But yeah, nowadays going through a linux soc makes a lot of sense
<galibert[m]> the hardware and software landscape has changed a lot
gruetzkopf has quit [Server closed connection]
gruetzkopf has joined #amaranth-lang
urja has quit [Server closed connection]
urja has joined #amaranth-lang
Lord_Nightmare has quit [Quit: ZNC - http://znc.in]
Lord_Nightmare has joined #amaranth-lang
Lord_Nightmare has quit [Remote host closed the connection]
Lord_Nightmare has joined #amaranth-lang
FireFly has quit [Server closed connection]
FireFly has joined #amaranth-lang
omnitechnomancer has joined #amaranth-lang
<omnitechnomancer> Why do arp when multicast is right there
<galibert[m]> 2001, multicast was not really right there
<omnitechnomancer> I guess it depends on network topology and software
<ktemkin> just a heads up: I wrote up the /vendor code for Efinix FPGA support over the weekend, and plan on PR'ing soon
<whitequark[cis]> oh, nice! thank you ktemkin
<ktemkin> np ^^; hopefully it's palatable enough -- their toolchain is a PITA in that their required environment setup needs PYTHONHOME to be set to their python site
<whitequark[cis]> the build process is completely separate from the build tree construction process, so I think this is completely OK; there does not need to be any link between the two Python environments (in fact there should not be)
<whitequark[cis]> existing toolchains do similar things with environment variables, so I think something like AMARANTH_ENV_EFINIX=/path/to/toolchain/root would be the way to do it? with a PYTHONPATH=${AMARANTH_ENV_EFINIX}/lib/site-python or something like that in the build script
<whitequark[cis]> this is just a rough idea; I'm sure your implementation will be fine
<ktemkin> https://github.com/ktemkin/amaranth/blob/with_efinix/amaranth/vendor/_efinix.py#L149 <-- I'm trampolining, since I don't necessarily get the EFINITY_HOME variable until AMARANTH_ENV_EFINITY is evaluated
<whitequark[cis]> actually, I don't know if you've seen, but Glasgow now does something approximating hermetic builds based on Amaranth and yowasp; in these, the build script is executed in a controlled environment with almost nothing but AMARANTH_* vars (Windows stores the processor type in the environment for some godforsaken reason), and every input is checksummed
<ktemkin> neat
<whitequark[cis]> ktemkin: ah, are you essentially writing the small shell script I would normally make for this purpose (twice, once for *nix and once for Windows) in Python because Python will by definition be available if Efinity is installed?
<ktemkin> yep
<whitequark[cis]> sgtm
<ktemkin> (well, right now I'm running it using sys.executable, but by PR time, it should use Efinity's python so it doesn't break remote builds)
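A rough sketch of the trampoline idea being discussed: locate the toolchain from an AMARANTH_ENV_EFINITY variable and re-exec the build script under Efinity's bundled Python with PYTHONHOME pointing at it. The glob pattern and directory layout are placeholders, not the vendor's documented structure.

    import os, sys, glob, subprocess

    # Illustrative only: the layout under the Efinity install is a placeholder.
    efinity_root  = os.environ["AMARANTH_ENV_EFINITY"]
    vendor_python = glob.glob(os.path.join(efinity_root, "python*", "bin", "python3*"))[0]

    if not os.path.samefile(sys.executable, vendor_python):
        # Trampoline: re-run this script under the toolchain's own interpreter,
        # with PYTHONHOME set to its bundled Python as the vendor setup expects.
        env = dict(os.environ,
                   PYTHONHOME=os.path.dirname(os.path.dirname(vendor_python)))
        sys.exit(subprocess.call([vendor_python, *sys.argv], env=env))

    # From here on we are running inside the vendor's Python environment and
    # can drive the Efinity tools.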
<whitequark[cis]> any chance the glob will return more than one value, or is it just "one version of Python but we don't know which"?
<ktemkin> not unless they start shipping more than one version of python, which they haven't yet
<whitequark[cis]> gotcha. seems fine to me
<whitequark[cis]> looking at line 203, I'm starting to understand how this works
<whitequark[cis]> this is kind of cursed
<whitequark[cis]> what the vendor is doing, I mean
<ktemkin> it's /almost/ not cursed; their whole toolchain is automatable from python, so you can do things like assign I/Os directly from a python API
<ktemkin> but, of course, it has to be _their_ python -_-
<whitequark[cis]> is run_efinity_platform_tool just something you are using for testing?
<whitequark[cis]> oh, and just a minor thing, you don't need a vendor/efinix.py; we stopped using vendor.platform per RFC 18 (https://amaranth-lang.org/rfcs/0018-reorganize-vendor-platforms.html) so now it's just from amaranth.vendor import EfinixPlatform
<whitequark[cis]> anyway, I can make a proper review when a PR comes in; don't have to do it right now >_>
<ktemkin> run_efinity_platform_tool> it's there until I figure out a better way to program their devices; honestly I might just write a programmer myself rather than use their deep script
<whitequark[cis]> ahhh I see
<whitequark[cis]> that's ... it's ... definitely cursed...
<whitequark[cis]> yeah that might be one of our weirdest platforms once it gets merged. I don't really see any significant issues with the approach, it's just weird
<tpw_rules> silly question: is this wrong? https://github.com/tpwrules/de10_nano_nixos_demo/blob/e92f9158b6c91803d0b49dc8ba365d0843b9e7f5/design/amaranth_top/amaranth_top/top.py#L40 should that be an instance of the platform rather than the class?
<whitequark[cis]> it should be an instance
<tpw_rules> ok
<whitequark[cis]> the way platforms work is that a platform file is used for dependency injection, and it contains all the state involved in doing so
<whitequark[cis]> er, platform class
<ktemkin> whitequark[cis]: it's a super weird toolchain -- they don't do verilog macro-instances for their hard IP; instead, they just configure the hard IP in an XML file and then treat the IP boundary like they're I/Os
<whitequark[cis]> tpw_rules: so the design is meant to be static during elaboration, and all of the mutable bits and all of the platform-specific bits go into the platform instance
<whitequark[cis]> in theory you should be able to create a design once and then build it several times with different platforms
<whitequark[cis]> though i don't know of anybody actually doing this
<tpw_rules> ok. cause it seemed to work even when it wasn't an instance, until it needed to call a method on the platform to generate a synchronizer
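A minimal sketch of the "platform is an instance" point, assuming the Amaranth 0.4-era resource API and the DE10NanoPlatform board class from amaranth-boards; the Top design is just a placeholder.

    from amaranth import Elaboratable, Module
    from amaranth_boards.de10_nano import DE10NanoPlatform  # example board class

    class Top(Elaboratable):                 # placeholder design
        def elaborate(self, platform):
            m = Module()
            led = platform.request("led", 0) # resources come from the injected platform
            m.d.comb += led.o.eq(1)
            return m

    # The platform is an *instance*: it carries the per-build, platform-specific
    # state injected into elaborate(), e.g. request() and get_ff_sync() for
    # synchronizer primitives.
    platform = DE10NanoPlatform()
    platform.build(Top(), do_program=False)

In principle the same static design could then be handed to a second platform instance and built again, as mentioned above.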
<whitequark[cis]> ktemkin: yeah I see. it makes some kind of sense, but it also means you have to bring everything to the top, which is annoying
<ktemkin> their FPGAs are simple enough that the hard IP is only on I/O boundaries for now (e.g. MIPI CSI Rx), so it's not biting them in the ass Yet (TM)
<whitequark[cis]> ahhh
<whitequark[cis]> makes sense
<omnitechnomancer> I assume they have no DSP blocks then?
_whitelogger has quit [Server closed connection]
_whitelogger has joined #amaranth-lang