<thefossguy>
Looking at github/orgs/openbouffalo; damn, the ecosystem players really want to merge every possible thing upstream.
<thefossguy>
I'm happy now :)
<dtometzki>
And Jisheng Zhang is the maintainer for everything :-)
<davidlt[m]>
Well, it's not Bouffalo Lab that is doing the upstreaming.
<davidlt[m]>
So no action from the vendor.
<dtometzki>
I don't know how something like this can work? Putting a board on the market and not being involved in upstream support as a manufacturer?
<dtometzki>
Yes, maybe you have sub-companies that do that for you.
jcajka has joined #fedora-riscv
<javierm>
dtometzki: that's the case for like 99% of the board manufacturers :)
<javierm>
they seem to be happy shipping some vendor tree and forked distro
<thefossguy>
But still 🥺
<rwmjones>
davidlt[m]: not much progress on ghc; what's the LLVM backend full rebuild?
<davidlt[m]>
rwmjones: well, one potential fix (and something we need to do anyways) is to rebuild GHC with LLVM backend.
<davidlt[m]>
This is because we cannot mix GHC backends as that seems to affect ABI.
<davidlt[m]>
It's just a bit more annoying if we find that LLVM backend is worse than C backend in some strange way.
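A minimal sketch of what switching to GHC's LLVM backend looks like at the compiler level; the llc-13/opt-13 names below are illustrative, and how Fedora's ghc-rpm-macros actually wire this into package builds is not shown here:

    # compile with the LLVM code generator instead of the native/C backend
    ghc -fllvm -O2 Main.hs

    # point GHC at a specific LLVM toolchain when several versions are installed
    ghc -fllvm -pgmlc llc-13 -pgmlo opt-13 -O2 Main.hs

    # for cabal-built packages the same flag is passed through ghc-options
    cabal build --ghc-options=-fllvm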
esv has quit [Ping timeout: 240 seconds]
<rwmjones>
davidlt[m]: ok so IIRC the issue is that ghc doesn't support newer LLVM?
<davidlt[m]>
Well, GHC 9.2 officially supports LLVM 12 at most, but unofficially works with LLVM 13. Which is still old.
<davidlt[m]>
LLVM 12 caused some issues early on, but other distros seem to be happy with LLVM 13. Tested that, and LLVM 13 is already way better than LLVM 12.
<rwmjones>
but problem is we're on llvm 16 :-)
<davidlt[m]>
Rebuilding failed packages with GHC + the LLVM backend, I noticed that mixing packages built with different backends doesn't work.
<davidlt[m]>
Nope.
<davidlt[m]>
Fedora includes multiple LLVM versions.
<davidlt[m]>
And GHC is compiled with llvm13, so that's not a problem.
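To see which llc/opt a GHC build is configured to call, its settings can be dumped; the exact key names vary between GHC versions, so this is only a rough check:

    # look for the LLVM-related entries (keys like "LLVM llc command" / "LLVM opt command")
    ghc --info | grep -i llvm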
esv_ is now known as esv
<davidlt[m]>
I think I read somewhere more official that the backend affects the GHC ABI, and that's the reason why mixing didn't work.
<davidlt[m]>
So the only thing I didn't do is a large-scale rebuild, which would cost a few days.
<davidlt[m]>
And to attempt that I would need (well, prefer) to drain the build queue so I could take a snapshot.
<davidlt[m]>
In case things go terribly, just revert a few hundred builds back to that snapshot.
<davidlt[m]>
It's just annoying.
<rwmjones>
hmm, I wasn't aware that Fedora had multiple LLVMs because the only package I deal with that uses LLVM is AFL++; but it's definitely right, I see llvm{10,11,12,13,14,15}
<davidlt[m]>
rwmjones: stupid question
<davidlt[m]>
I am playing with nbd-client -b value.
<davidlt[m]>
512 is what I typically use, and wanted to see what happens with 4096.
<rwmjones>
davidlt[m]: yeah it'll be weird if you do that
<rwmjones>
-b 512 is actually the default in any recent version
<davidlt[m]>
I see that the reported sizes of the partitions change!
<rwmjones>
it'll basically act like a 4K sector disk and it'll confuse lots of partitioning tools
<davidlt[m]>
the man page states the default is 1024
<rwmjones>
ok you may have an old version
<davidlt[m]>
Yeah, I noticed because I didn't have enough space to dd /boot partition.
<rwmjones>
ISTR this was fixed in the kernel
<rwmjones>
this was a few years ago though
<rwmjones>
adding -b 512 is always safe, but might not be needed
<davidlt[m]>
Yeah, I am still on Fedora 37 as I don't want to break my setup just in case.
<rwmjones>
but don't use any other size because it'll make things act weird
<rwmjones>
and it makes no difference to performance anyway
<davidlt[m]>
-b 512 seems to report the proper partition sizes.
<rwmjones>
the critical thing for performance is to use -C 4 (multi-conn)
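A sketch of the client invocation being discussed; the export name, host, and device node are placeholders, and option spellings can differ slightly between nbd-client versions:

    # attach an export with a 512-byte logical block size and 4 connections (multi-conn)
    nbd-client -N fedora-disk nbd-server.example.org /dev/nbd0 -b 512 -C 4

    # check which block size the kernel actually picked up
    cat /sys/block/nbd0/queue/logical_block_size

    # detach when done
    nbd-client -d /dev/nbd0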
<davidlt[m]>
I will test that later :) I want to attempt booting a new disk image on the board 1st.
<rwmjones>
what server are you going to be using?
<davidlt[m]>
I use nbdkit to avoid having to unxz Fedora disk images from Koji
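A sketch of serving a compressed Koji image without unpacking it, using nbdkit's xz filter on top of the file plugin; the image filename is a placeholder:

    # serve the xz-compressed raw image read-only over NBD
    nbdkit --readonly --filter=xz file Fedora-disk-image.raw.xz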
<rwmjones>
good choice :-) (but so is qemu-nbd)
<davidlt[m]>
I use that too sometimes
<davidlt[m]>
But nbdkit is a bit of a swiss army knife.
<davidlt[m]>
Oh, this should be fun if you want to work on FS stuff and you want to test a specific software setup
<rwmjones>
yeah don't use it on your regular machines :-)
<davidlt[m]>
Actually -C 4 is worse.
<davidlt[m]>
but I didn't configure my nbd server with multiple threads either
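If the server side is nbdkit, its worker-thread pool can be sized with -t/--threads; whether that (or multi-conn) actually helps depends on the plugin and the storage behind it, so the value here is just for illustration:

    # allow up to 8 worker threads
    nbdkit --threads 8 --readonly --filter=xz file Fedora-disk-image.raw.xz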
<davidlt[m]>
Oh, this new disk image booted a lot better than the previous one.
zsun has joined #fedora-riscv
<davidlt[m]>
The only thing I forgot was to pull in the newer linux-firmware stuff :)
<davidlt[m]>
Not that it would be a big problem.
<davidlt[m]>
and stress-ng is running, let's see if it survives
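A stress-ng run of the kind mentioned here might look like the following; the stressor mix and duration are arbitrary choices:

    # exercise CPU, VM and I/O for an hour, then print a brief metrics summary
    stress-ng --cpu 4 --vm 2 --io 2 --timeout 1h --metrics-brief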
<davidlt[m]>
Oh, mock v4.0 is landing.
kalev has joined #fedora-riscv
jcajka has quit [Quit: Leaving]
zsun has quit [Quit: Leaving.]
cyberpear has joined #fedora-riscv
zsun has joined #fedora-riscv
<davidlt[m]>
Oh no, I lost the VTK build.
<davidlt[m]>
Stupid atomic crap.
<davidlt[m]>
We are days to weeks away from a GCC build that should solve this atomic crap.
<davidlt[m]>
We have reached that nice 200+ successful builds a day mark again.
<davidlt[m]>
We're probably gonna lose that as I shift my attention towards making disk images again.
<davidlt[m]>
Reminder: my idea is to ship an image with 6.2.16 and follow that up soon with another disk image running 6.3.X.
<davidlt[m]>
Just in case we would need to test something with different kernel versions.
<davidlt[m]>
Then do F39, which will be very similar to F38, but with a 6.4-RC kernel and, hopefully, VF2 support.
<thefossguy>
6.4-rc1 didn't support the MMC for some reason even though the config option was set to =y. I also tested with =m but no positive results.
<davidlt[m]>
Was it listed in DT?
<davidlt[m]>
IIRC some drivers landed, but didn't make it into the VF2 DT.
<thefossguy>
davidlt[m]: in arch/riscv/boot/?
<davidlt[m]>
The patches were sent recently to add missing DT nodes for merged drivers, but that's outside of 6.4.
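One way to check both halves of that (driver enabled vs. node present in the DT); the config symbol and DT paths below assume the DesignWare MMC StarFive driver and the in-tree VisionFive 2 sources, so verify them against the actual tree:

    # is the driver enabled in the kernel config?
    grep MMC_DW_STARFIVE .config

    # does the board's device tree source carry an mmc node?
    grep -n 'mmc' arch/riscv/boot/dts/starfive/jh7110-starfive-visionfive-2.dtsi

    # or inspect the DTB the board actually boots
    dtc -I dtb -O dts jh7110-starfive-visionfive-2.dtb | grep -A3 'mmc@'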