<omac777_2022>
For all you rust/cargo users out there, the sccache stuff isn't working: Caused by:
<omac777_2022>
could not execute process `sccache rustc - --crate-name ___ --print=file-names --crate-type bin --crate-type rlib --crate-type dylib --crate-type cdylib --crate-type staticlib --crate-type proc-macro --print=sysroot --print=split-debuginfo --print=crate-name --print=cfg` (never executed)
Kevinsadminaccou has joined #fedora-riscv
<omac777_2022>
To circumvent this issue you can simply not use sccache at all
<omac777_2022>
usually this is implicit: export RUSTC_WRAPPER=sccache
<omac777_2022>
To make it work on the VF2 and circumvent this missing-sccache issue: export RUSTC_WRAPPER=
<omac777_2022>
Usually it is implicit that it contains sccache (export RUSTC_WRAPPER=sccache), but on the VF2, "cargo install sccache" fails because of legacy/deprecated macro stuff in the ring crate that sccache depends on. Oddly enough I discovered sccache has distcc-like capabilities for distributed compilation on other nodes.
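A minimal sketch of the workaround described above, assuming sccache was enabled only through the RUSTC_WRAPPER environment variable:

    # Tell cargo not to wrap rustc with sccache for this shell session
    export RUSTC_WRAPPER=
    # or just for a single invocation
    RUSTC_WRAPPER= cargo build --release
    # cargo also reads build.rustc-wrapper from ~/.cargo/config.toml; clearing it there has the same effect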
<davidlt[m]>
Made a mistake yesterday with GHC packages and forgot to update SRPM macros. Basically that means I managed to rebuild ~10% of GHC land with -prof subpackages disabled.
<davidlt[m]>
Had to add some more hacks to a crappy script to properly do the migration to our dist-git + bump to rebuild them.
<davidlt[m]>
Then I noticed that most of those actually migrated to rpmautospec stuff. Thus I had to add more hacks to the bumping script to support both the old (well, traditional) stuff and the new rpmautospec thingie.
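Roughly, the two spec styles the bumping script now has to recognize look like this (illustrative fragments, not taken from any particular package):

    # traditional spec: a rebuild bump edits Release and prepends a %changelog entry by hand
    Release: 2%{?dist}

    # rpmautospec spec: Release and the changelog are generated from git history,
    # so a rebuild bump is just a new commit
    Release: %autorelease
    %changelog
    %autochangelog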
<davidlt[m]>
Packages were rebuilt, and the rest of GHC land is cooking too.
<davidlt[m]>
Also, while bumping, I decided to move ".rvreX" from after %{?dist} to before it.
<davidlt[m]>
That works nicely with rpmautospec stuff.
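In Release-tag terms the change looks roughly like this (the numbers and the ".rvre1" suffix index are made up for illustration):

    # before: local rebuild suffix appended after the dist tag
    Release: 1%{?dist}.rvre1
    # after: suffix moved before the dist tag, which plays nicer with rpmautospec
    Release: 1.rvre1%{?dist}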
<davidlt[m]>
I also lowered the maxjobs to 96 for the main Kojid builder (that basically builds nothing, just opens the tasks).
<davidlt[m]>
This should lower the build failure rate a bit, as builds get more time to land before other tasks get their repo assigned.
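The setting in question is presumably the builder's maxjobs knob in kojid.conf; a sketch of the change, with the path and section names assumed from a standard Koji builder setup:

    # /etc/kojid/kojid.conf on the main builder
    [kojid]
    # limit how many tasks this builder accepts at once
    maxjobs=96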
<davidlt[m]>
So it's 200-300 successful builds for this weekend, but it's a weekend 🖨️🖥️🥾🌲
<davidlt[m]>
Hopefully switching GHC from the unregisterised build to the LLVM backend will improve performance (if it works to begin with).
zsun has joined #fedora-riscv
zsun has quit [Quit: Leaving.]
<PrathamPatel[m]>
<davidlt[m]> "Hack to add some more hacks to a..." <- Hehe
<PrathamPatel[m]>
Since F39 is releasing soon, any news on when RISC-V will be an officially supported arch?
<davidlt[m]>
F38 final target date is end of April. If that gets delayed by at least one week that's May already.
<davidlt[m]>
Fedora/RISCV moving forward into a more proper thing is blocked by lack of proper hardware (in the datacenter).
<davidlt[m]>
Until we can get the desired hardware (features, configuration, support, performance, etc.) and deploy it in the DC, it's hard to move forward.
<PrathamPatel[m]>
Oof, looks like a long time... :(
<davidlt[m]>
Yes and no. Look at the SOPHON SG2042 server platform (to be released probably sometime this year, then probably another 6-18 months for upstream support). Look at what Ventana is doing, etc.
<davidlt[m]>
At least you can see the end and a new beginning approaching now.
<davidlt[m]>
It's no longer some mystical future point :)
<PrathamPatel[m]>
6 months is not a long time. 18 months is haha
<PrathamPatel[m]>
Well, at least for me
<davidlt[m]>
Well Intel/AMD/IBM start upstreaming 1-3 years before the product hits the market.
<davidlt[m]>
These things take time, especially if you are a new kid on the block.
<davidlt[m]>
Like StarFive is learning their upstreaming 101 with JH7110.
<davidlt[m]>
So far the majority of T-HEAD stuff was done by community members, if I am not mistaken.
<davidlt[m]>
So JH7110-based boards might be the go-to boards for the majority once all of this is fully supported upstream.
<davidlt[m]>
I bet not everything will be upstreamed before JH8100-series boards arrive.
<PrathamPatel[m]>
And me being new to the RISC-V space doesn't help either, I guess
<davidlt[m]>
Well it depends. There are folks that wrote GPU drivers for ARM Mali and even Apple M silicon without prior knowledge about GPUs and drivers.
<davidlt[m]>
So it really depends on your personal drive, time investment, etc.
<davidlt[m]>
Like there are parts of the kernel that no one is working on right now, but that could be beneficial :)
<davidlt[m]>
There are still some packages that need porting.
<PrathamPatel[m]>
I looked at a few build logs on the Arch Linux side (since that's what I'm daily driving on the VF2) and the most common issue is still missing upstream support.
<PrathamPatel[m]>
Most notably LuaJIT, because that's what Neovim needs. Debian outright prevents me from even installing Neovim 🥲
<davidlt[m]>
Most things work without much effort these days. Most of that comes from the aarch64, ppc64le, and s390 porting work done a long time ago.
<davidlt[m]>
You don't need LuaJIT for Neovim.
<davidlt[m]>
But LuaJIT is mostly ported now IIRC.
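For reference, Neovim's build system has historically offered a fallback to plain Lua 5.1 for targets without LuaJIT; whether that option still exists in current releases is an assumption here:

    # build Neovim against PUC Lua instead of LuaJIT (PREFER_LUA is assumed to still be supported)
    make CMAKE_EXTRA_FLAGS="-DPREFER_LUA=ON"
    sudo make install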
<davidlt[m]>
PLCT Lab in China is working on LuaJIT.
<PrathamPatel[m]>
Not for the editor itself but for plugins :)
<PrathamPatel[m]>
I checked their repo. The last commit is quite old. Or maybe that's the default branch and a different branch has the progress.