azonenberg changed the topic of ##openfpga to: Open source tools for FPGAs, CPLDs, etc. Silicon RE, bitfile RE, synthesis, place-and-route, and JTAG are all on topic. Channel logs: https://libera.irclog.whitequark.org/~h~openfpga
Degi_ has joined ##openfpga
Degi has quit [Ping timeout: 256 seconds]
Degi_ is now known as Degi
<pie_>
there really are a lot of different tools these days
<pie_>
disregarding lack of skill, how do you go about choosing between cpu, fpga, gpu, ... for a compute workload?
<pie_>
well hm i guess the cpu vs. fpga/gpu choice is relatively simple
<pie_>
I guess im also perhaps assuming too much that fpga accelerators make sense? idk, im not *quite* buying (perhaps unfoundedly) that fpgas as custom accelerators will be a thing
<pie_>
is it effectively a given that they will be and that it does make sense?
<sorear>
cpus optimize for latency when going through vast trees of branches that are difficult but not impossible to predict; gpus optimize for single-precision fp and array-indexing throughput; fpgas optimize for latency of simple operations and for rearranging large numbers of single bits
<sorear>
think about what kinds of hardware your compute workload needs, and how to leverage existing resources to that end
<sorear>
fpga accelerator cards for servers exist; ec2 has an instance type with them; intel sold at least one xeon+fpga product after the altera merger (there are other soc fpgas but I wouldn't describe them as being used for "compute workloads")
mewt has quit [Ping timeout: 255 seconds]
emeb_mac has quit [Quit: Leaving.]
sgstair has quit [Ping timeout: 246 seconds]
schaeg has joined ##openfpga
mewt has joined ##openfpga
sgstair has joined ##openfpga
<jn>
for low-latency I/O tasks, RP2040 PIO is also a nice option
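(a minimal sketch of the kind of thing PIO is good at, assuming an original Raspberry Pi Pico running MicroPython with the onboard LED on GPIO 25; the PIO state machine toggles the pin with cycle-deterministic timing, independently of the CPU)

    import rp2
    from machine import Pin

    # PIO program: drive the pin high for 64 cycles, then low for 64 cycles.
    # Each instruction stalls an extra 31 cycles via the [31] delay field.
    @rp2.asm_pio(set_init=rp2.PIO.OUT_LOW)
    def square_wave():
        set(pins, 1) [31]
        nop()        [31]
        set(pins, 0) [31]
        nop()        [31]

    # Run it on state machine 0 at 2 kHz; 128 cycles per loop gives ~15.6 Hz on GPIO 25.
    sm = rp2.StateMachine(0, square_wave, freq=2000, set_base=Pin(25))
    sm.active(1)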
schaeg has quit [Ping timeout: 240 seconds]
muuo has joined ##openfpga
Hammdist has joined ##openfpga
Hammdist has quit [Ping timeout: 250 seconds]
Hammdist has joined ##openfpga
Hammdist has quit [Ping timeout: 250 seconds]
emeb_mac has joined ##openfpga
<pie_>
sorear: hm thanks
<pie_>
yall have any opinion on HLS?
specing has quit [Remote host closed the connection]
<pie_>
this might also be another good review article, this time about fpga cloud stuff, and it's published february 2022 so... maybe it's not that outdated? https://dl.acm.org/doi/pdf/10.1145/3506713 "The Future of FPGA Acceleration in Datacenters and the Cloud"
<pie_>
jn: thanks
<pie_>
(i keep thinking that i should really just stick to normal software if i want a salary that scales :P x) )
<pie_>
(says the guy that still doesnt have a proper job either way)
GenTooMan has joined ##openfpga
GenTooMan has quit [Ping timeout: 268 seconds]
cybernaut has joined ##openfpga
<pie_>
> Alibaba has reported 75% savings in TCO by using FPGAs to oversee product images on its e-commerce site [161]. In 2018 it reported over $30 billion retail on its website in a single day (compared to $5 billion on all US online and in-store retail on Black Friday 2017); this was possible with its data center FPGAs being used to accelerate transactions and provide recommendations to users
<pie_>
huh
cr1901 has quit [Read error: Connection reset by peer]
cr1901 has joined ##openfpga
<pie_>
would swappable large fpgas ever make sense, like how you can swap cpus, so you'd have modular fpga motherboards?
<pie_>
are there technical obstacles to standard pinouts, or just greed?
<pie_>
I guess I might as well ask the same about gpus but for some reason that feels more like a "no"...?
Hammdist has joined ##openfpga
<tpw_rules>
utter generational incompatibility?
<sorear>
most server fpgas and gpus are pcie devices and you can hotswap pcie devices
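(a minimal sketch of the software side of that on a Linux host; the address "0000:3b:00.0" is a hypothetical card, check lspci for the real one. Writing to these sysfs files detaches the device and re-enumerates the bus, which is the usual way to pick a card back up after reprogramming or a swap; whether a live physical swap is safe still depends on the platform's hotplug support)

    from pathlib import Path

    BDF = "0000:3b:00.0"  # hypothetical bus/device/function of the FPGA/GPU card

    def remove_device(bdf: str) -> None:
        # Detach the device from the kernel (needs root).
        Path(f"/sys/bus/pci/devices/{bdf}/remove").write_text("1")

    def rescan_bus() -> None:
        # Re-enumerate PCIe; the card reappears if it is still present.
        Path("/sys/bus/pci/rescan").write_text("1")

    if __name__ == "__main__":
        remove_device(BDF)
        rescan_bus()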