<tpw_rules>
strange that it describes amaranth as HLS
<tpw_rules>
so i have 64 modules, and every 16384 cycles at 50MHz they all produce a 16-bit word (on the same cycle, this can't be changed), and i want to shove them all into a FIFO in order. what's the best way to do this? a giant mux? some sort of OR tree? a giant shift register?
<tpw_rules>
(most likely to pass timing and secondarily to reduce resource usage)
<tpw_rules>
my money's on the shift register i think
<Wanda[cis]>
yeah, just stuff them into a giant shift register; you'll effectively be doing a parallel write, serial read
<Wanda[cis]>
doesn't really get cheaper than that
<Wanda[cis]>
(nor easier to meet timing for that matter)
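For reference, a minimal Amaranth sketch of the parallel-write, serial-read shifter being described. Module and signal names are made up for illustration, and the FIFO hookup in the trailing comment assumes the read side drains fast enough (the shifter applies no backpressure):

```python
from amaranth import Elaboratable, Module, Signal, Cat

class ParallelSerializer(Elaboratable):
    """Captures 64 16-bit words in one cycle, then shifts one out per cycle."""
    def __init__(self, count=64, width=16):
        self.count  = count
        self.width  = width
        self.inputs = [Signal(width, name=f"in_{i}") for i in range(count)]
        self.load   = Signal()          # pulses high on the cycle all words arrive
        self.w_data = Signal(width)     # word currently at the head of the shifter
        self.w_en   = Signal()          # asserted while words remain to drain

    def elaborate(self, platform):
        m = Module()
        shreg     = Signal(self.count * self.width)
        remaining = Signal(range(self.count + 1))

        with m.If(self.load):
            # Parallel write: capture all words at once, lowest index first.
            m.d.sync += [shreg.eq(Cat(*self.inputs)), remaining.eq(self.count)]
        with m.Elif(remaining != 0):
            # Serial read: shift the next word toward the FIFO each cycle.
            m.d.sync += [shreg.eq(shreg >> self.width),
                         remaining.eq(remaining - 1)]

        m.d.comb += [
            self.w_data.eq(shreg[:self.width]),
            self.w_en.eq(remaining != 0),
        ]
        return m

# Hooking it to an amaranth.lib.fifo.SyncFIFOBuffered write port; the shifter
# ignores w_rdy, so the FIFO must be at least a burst (64 entries) deep or be
# drained during the burst -- easy with 64 words every 16384 cycles:
#   m.d.comb += [fifo.w_data.eq(ser.w_data), fifo.w_en.eq(ser.w_en)]
```

Timing-wise this is just a 64-to-1 chain of 2:1 word muxes, one per stage, which is why it beats a giant mux or an OR tree.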
<galibert[m]>
Wonder what it is, a little slow for a microphone array
<tpw_rules>
the processing thereof :)
<galibert[m]>
At 3kHz?
<tpw_rules>
i just rounded the numbers heavily
<galibert[m]>
Well, in any case, microphone arrays are cool
<tpw_rules>
yeah some uni friends are handling the electronics and math. i signed up to tutor them on the fpga stuff
<galibert[m]>
That's how I discovered fpgas 20 years ago, using one to read 32 stereo ADCs and turn the data into udp frames
<galibert[m]>
At NIST
<tpw_rules>
we're basically doing that + an ungodly quantity of processing
<tpw_rules>
but i don't wanna talk too much, it's not my project
<galibert[m]>
Yeah, at the time there was no way to do the processing on the fpga. The network was barely good enough to send the data in the first place
<galibert[m]>
And the fpga was a spartan 2
<tpw_rules>
how did you hook it to ethernet?
<galibert[m]>
With a PHY. It was fast ethernet then, 25MHz was enough
<tpw_rules>
we're passing it through a linux soc in large part because that's how the board is wired... and for learning
<galibert[m]>
The bitstream did the ethernet frames (crc mostly), arp and udp
<galibert[m]>
I don't remember if we had managed bootp
<tpw_rules>
i also didn't want to try to do that stuff, i know linux networking is good
<omnitechnomancer>
Why do arp when multicast is right there
<galibert[m]>
2001, multicast was not really right there
<omnitechnomancer>
I guess it depends on network topology and software
<ktemkin>
just a heads up: I wrote up the /vendor code for Efinix FPGA support over the weekend, and plan on PR'ing soon
<whitequark[cis]>
oh, nice! thank you ktemkin
<ktemkin>
np ^^; hopefully it's palatable enough -- their toolchain is a PITA in that their required environment setup needs PYTHONHOME to be set to their python site
<whitequark[cis]>
the build process is completely separate from the build tree construction process, so I think this is completely OK; there does not need to be any link between the two Python environments (in fact there should not be)
<whitequark[cis]>
existing toolchains do similar things with environment variables, so I think something like AMARANTH_ENV_EFINIX=/path/to/toolchain/root would be the way to do it? with a PYTHONPATH=${AMARANTH_ENV_EFINIX}/lib/site-python or something like that in the build script
<whitequark[cis]>
this is just a rough idea; I'm sure your implementation will be fine
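A rough Python sketch of the environment plumbing whitequark describes. Every path and directory name below is a guess at the Efinity install layout, not the real one, and the flow entry-point script name is illustrative:

```python
import os
import subprocess

# Hypothetical scheme: the user points AMARANTH_ENV_EFINIX at the Efinity
# install root, and the build script derives everything else from it.
efinity_root = os.environ["AMARANTH_ENV_EFINIX"]

env = dict(os.environ)
# ktemkin notes the toolchain requires PYTHONHOME to point at the bundled
# Python site; both subdirectory names here are guesses at the layout.
env["PYTHONHOME"] = os.path.join(efinity_root, "python38")
env["PYTHONPATH"] = os.path.join(efinity_root, "lib", "site-python")

# Run the vendor flow under its own interpreter (script name illustrative).
subprocess.run(
    [os.path.join(env["PYTHONHOME"], "bin", "python3"), "efx_run.py"],
    env=env, check=True,
)
```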
<whitequark[cis]>
actually, I don't know if you've seen, but Glasgow now does something approximating hermetic builds based on Amaranth and yowasp; in these, the build script is executed in a controlled environment with almost (Windows stores processor type in the environment for some godforsaken reason) nothing but AMARANTH_* vars, and every input is checksummed
<ktemkin>
neat
<whitequark[cis]>
ktemkin: ah, are you essentially writing the small shell script I would normally make for this purpose (twice, once for *nix and once for Windows) in Python because Python will by definition be available if Efinity is installed?
<ktemkin>
yep
<whitequark[cis]>
sgtm
<ktemkin>
(well, right now I'm running it using sys.executable, but by PR time, it should use Efinity's python so it doesn't break remote builds)
<whitequark[cis]>
any chance the glob will return more than one value, or is it just "one version of Python but we don't know which"?
<ktemkin>
not unless they start shipping more than one version of python, which they haven't yet
<whitequark[cis]>
gotcha. seems fine to me
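For illustration, a defensive way to handle that glob (install layout hypothetical, as above); sorting and taking the newest match degrades gracefully if a second bundled Python ever appears:

```python
import glob
import os

efinity_root = os.environ["AMARANTH_ENV_EFINIX"]

# Hypothetical layout: one "python*" directory under the Efinity root.
candidates = sorted(glob.glob(os.path.join(efinity_root, "python*")))
if not candidates:
    raise FileNotFoundError("no bundled Python found under the Efinity root")
# Today this matches exactly one entry; if the vendor ever ships several
# versions, the lexically newest one wins instead of the build breaking.
bundled_python = candidates[-1]
```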
<whitequark[cis]>
looking at line 203, I'm starting to understand how this works
<whitequark[cis]>
this is kind of cursed
<whitequark[cis]>
what the vendor is doing, I mean
<ktemkin>
it's /almost/ not cursed; their whole toolchain is automatable from python, so you can do things like assign I/Os directly from a python API
<ktemkin>
but, of course, it has to be _their_ python -_-
<whitequark[cis]>
is run_efinity_platform_tool just something you are using for testing?
<whitequark[cis]>
anyway, I can make a proper review when a PR comes in; don't have to do it right now >_>
<ktemkin>
run_efinity_platform_tool> it's there until I figure out a better way to program their devices; honestly I might just write a programmer myself rather than use their deep script
<whitequark[cis]>
yeah that might be one of our weirdest platforms once it gets merged. I don't really see any significant issues with the approach, it's just weird
<whitequark[cis]>
the way platforms work is that a platform file is used for dependency injection, and it contains all the state involved in doing so
<whitequark[cis]>
er, platform class
<ktemkin>
whitequark[cis]: it's a super weird toolchain -- they don't do verilog macro-instances for their hard IP; instead, they just configure the hard IP in an XML file and then treat the IP boundary like they're I/Os
<whitequark[cis]>
tpw_rules: so the design is meant to be static during elaboration, and all of the mutable bits, and all of the platform specific bits go into the platform instance
<whitequark[cis]>
in theory you should be able to create a design once and then build it several times with different platforms
<whitequark[cis]>
though i don't know of anybody actually doing this
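A sketch of that in-theory flow, using two arbitrary board classes from amaranth-boards; since nobody is known to actually build one design instance twice, treat this as illustrative rather than a supported pattern:

```python
from amaranth import Elaboratable, Module, Signal
from amaranth_boards.icestick import ICEStickPlatform
from amaranth_boards.ulx3s import ULX3S_85F_Platform

class Blinky(Elaboratable):
    def elaborate(self, platform):
        # The platform instance is the dependency-injection point: resources
        # like "led" come from it, not from the design itself.
        m = Module()
        led = platform.request("led", 0)
        ctr = Signal(24)
        m.d.sync += ctr.eq(ctr + 1)
        m.d.comb += led.o.eq(ctr[-1])
        return m

# The design is constructed once; each platform instance carries all the
# platform-specific, mutable state and runs its own build.
design = Blinky()
ICEStickPlatform().build(design)
ULX3S_85F_Platform().build(design)
```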
<tpw_rules>
ok, cause it seemed to work even when it wasn't an instance, until it needed to call a function on the platform to generate a synchronizer
<whitequark[cis]>
ktemkin: yeah I see. it makes some kind of sense, but it also means you have to bring everything to the top, which is annoying
<ktemkin>
their FPGAs are simple enough that the hard IP is only on I/O boundaries for now (e.g. MIPI CSI Rx), so it's not biting them in the ass Yet (TM)
<whitequark[cis]>
ahhh
<whitequark[cis]>
makes sense
<omnitechnomancer>
I assume they have no DSP blocks then?