<davidlt[m]>
I wonder how things will work with RVA20 (major), RVA22 (minor), RVA23 (major).
<davidlt[m]>
I didn't notice in the emails, but there is [PATCH v1 00/17] Basic StarFive JH7110 RISC-V SoC support for U-Boot too!
<davidlt[m]>
So development is happening in public on OpenSBI, the kernel and U-Boot now.
<davidlt[m]>
Conan Kudo: is there any work/ideas on providing optimized binaries for x86_64-v{2,3,4} via Fedora infra? The glibc bits are in place, but it seems bits are still missing on the RPM (and friends) side, not to mention the actual build infra side.
<davidlt[m]>
I wouldn't expect this to be distro-wide, but a specific selected package list probably (based on benchmarks).
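For context on the x86_64-v{2,3,4} levels mentioned above: a minimal sketch of how a CPU maps to a microarchitecture level. The flag sets below approximate the x86-64 psABI level requirements (as they appear in /proc/cpuinfo) and are an assumption for illustration, not an exhaustive list.

```python
# Sketch: pick the highest x86-64 microarchitecture level a CPU meets.
# Flag names approximate the x86-64 psABI requirements (assumption,
# not exhaustive); levels are cumulative, so v3 implies all of v2.
LEVEL_FLAGS = [
    ("x86-64-v2", {"cx16", "lahf_lm", "popcnt", "sse4_1", "sse4_2", "ssse3"}),
    ("x86-64-v3", {"avx", "avx2", "bmi1", "bmi2", "f16c", "fma", "movbe"}),
    ("x86-64-v4", {"avx512f", "avx512bw", "avx512cd", "avx512dq", "avx512vl"}),
]

def highest_level(cpu_flags):
    """Return the highest level whose cumulative flag set is satisfied."""
    level, required = "x86-64-v1", set()
    for name, flags in LEVEL_FLAGS:
        required |= flags          # accumulate: each level includes the last
        if required <= cpu_flags:  # all required flags present?
            level = name
        else:
            break
    return level

v2 = {"cx16", "lahf_lm", "popcnt", "sse4_1", "sse4_2", "ssse3"}
print(highest_level(v2))  # x86-64-v2
```

On the glibc side, the "bits in place" refers to glibc-hwcaps (glibc 2.33+): the dynamic loader automatically prefers libraries from subdirectories like /usr/lib64/glibc-hwcaps/x86-64-v3 when the CPU qualifies, so per-level optimized libraries can coexist with the baseline ones.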
<Eighth_Doctor>
we can't really do anything until the arch levels PR is merged
<Eighth_Doctor>
but currently no plans, mostly because nobody has thought about what to do with the feature yet
<Eighth_Doctor>
most likely ELN will switch over to these new subarches immediately
<davidlt[m]>
I somehow expect we might want to do similar things once profiles land. Basically there is no performance in RVA20, so we would be losing a lot here without optimized binaries (just an assumption).
<davidlt[m]>
From my point of view companies mostly want to target RVA23. That's where the fun begins.
<davidlt[m]>
I would bet the currently announced RVA22-compliant core IP is probably almost compliant with the not-yet-defined RVA23.
<davidlt[m]>
Ah, SiFive do not have P650 and P670 documentation online :/
<davidlt[m]>
In general RVA23 requires Vector, Vector Crypto (which deprecated scalar crypto IIRC) and 64K page support (most likely this list will change).
<davidlt[m]>
And the only reason RVA23 exists is because Vector Crypto wasn't ready in time. I think it's now frozen and under the 45-day public review, but that means it only gets ratified in 2023.
<davidlt[m]>
I have seen benchmarks where bitmanip alone gives a huge performance bump.
<davidlt[m]>
Technically JH7110 has it, so does HiFive Pro P550. T-HEAD has it too IIRC.
masami has joined #fedora-riscv
zsun has joined #fedora-riscv
masami has quit [Quit: Leaving]
zsun has quit [Quit: Leaving.]
<davidlt[m]>
nirik: what is the config for aarch64 VMs for builders?
<davidlt[m]>
IIRC these are running on new machines. How many cores, RAM and storage do you give per builder on the new machines?
<davidlt[m]>
Looking at some, I see 8 cores, and 35.15G of RAM?
<davidlt[m]>
and 8G of swap
tg_ has joined #fedora-riscv
tg has quit [Ping timeout: 260 seconds]
khrbt_ has joined #fedora-riscv
khrbt has quit [Ping timeout: 252 seconds]
<nirik>
yes, 8 cores, 35GB ram, 8g swap. They are running on mt snow boxes. We also have some buildhw's that are lenovo emags (end of life) which we are replacing over time.
<davidlt[m]>
nirik: why the odd 35G of RAM? Do you have any overcommitting on the CPU side?
<nirik>
hosts have 384GB... 10 buildvm's per host... 35 each and some spare to make the host happy
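The sizing above works out as follows (a trivial check using only the figures already given in the channel):

```python
# Per-host RAM budget from the figures above: 384 GB hosts,
# 10 buildvms per host, 35 GB of RAM each.
host_ram_gb = 384
vms_per_host = 10
ram_per_vm_gb = 35

# RAM left over "to make the host happy"
spare_ram_gb = host_ram_gb - vms_per_host * ram_per_vm_gb
print(spare_ram_gb)  # 34
```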
<davidlt[m]>
I see, and what about CPU overcommitting? Do you do that?
<davidlt[m]>
10 buildvms, that's 80 cores. How many are there in total?
<davidlt[m]>
I used to do 50% overcommitting, which was OK and bad at the same time depending on what was building.
<davidlt[m]>
On Ampere website I see "Using the Gigabyte 1P Mt. Snow platform, they measured the 80-Core Q80-30 [..]"
<davidlt[m]>
I assume your boxes are 80-core systems. In that case I guess there is no overcommitting on the CPU side.
<nirik>
no overcommit... they have 80 cores
<nirik>
yep
<nirik>
overcommitting is ok normally, but in mass rebuilds or other busy times it's a killer
<davidlt[m]>
Yeah, I learned that by experience ;)
<davidlt[m]>
nirik: how many total builders are there for aarch64 now?
<nirik>
40 buildvm's and 8 buildhw's
<davidlt[m]>
Interesting, don't you have 10 buildvms per server?
<davidlt[m]>
Or some buildvms are differently configured?
<davidlt[m]>
Like all those Python packages don't need 8 cores :)
<nirik>
yes, there are 4 bvmhosts that have 10 buildvm's per...
<nirik>
koji only has 'weight' to figure that out, it's not much good for separating builds
<davidlt[m]>
Yeah, weight is a terrible system (especially for riscv64).
<davidlt[m]>
Basically a lot of things are at 6.0 (max). Doesn't matter whether that takes 8 hours or 7 days to compile, or uses 1 thread or 8 threads fully.
<nirik>
they are redoing the scheduler... will be interesting to see what they come up with
<davidlt[m]>
Uh, for which release version? Is that already in the roadmap/schedule?
<nirik>
its a complex problem... it may be that doing small things faster helps more than small builders for them
<davidlt[m]>
So what are the other 4 machines used for? Some other Koji services?
<nirik>
I don't think they have it on the roadmap, just from discussions with them...
<davidlt[m]>
I was thinking about something smarter: using channels for HW with different performance/parameters and then assigning tasks based on policy.
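A hedged sketch of what such channel-based routing could look like in koji's hub policy (the `channel` policy in hub.conf; the `fast` and `slow` channel names here are hypothetical, not actual Fedora infra config):

```ini
[policy]
channel =
    # child tasks stay with their parent's channel
    is_child_task :: parent
    # repo regeneration goes to dedicated builders
    method newRepo :: use createrepo
    # hypothetical: route heavy builds to high-performance HW,
    # everything else to a pool of smaller builders
    method build :: use fast
    all :: use slow
```

The point being that, unlike the single `weight` number, hub policy rules can look at task attributes and steer work to hardware classes, which is closer to the performance-aware assignment described above.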