<somlo>
davidlt[m]: with only SOC_STARFIVE, ERRATA_THEAD, and KEXEC_FILE disabled I'm getting a new error, possibly even earlier in the boot process than before: https://pastebin.com/GhEjrk5q
<somlo>
I've now added EFI_COCO_SECRET, FB_EFI, and IMA_KEXEC to the previous three, and am rebuilding with all six disabled
<somlo>
I need to isolate the minimal set that "must be disabled" before I can start studying what each of the "guilty" ones do, and whether rocket *should* be OK with them
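[For reference: this kind of CONFIG_* bisection can be scripted with the kernel's own scripts/config helper rather than toggled by hand in menuconfig. A minimal sketch, assuming a configured kernel source tree; the option names are just the ones under test above:]
    # disable the suspect options in the existing .config
    ./scripts/config --disable SOC_STARFIVE --disable ERRATA_THEAD --disable KEXEC_FILE
    # let Kconfig resolve any dependent options non-interactively
    make olddefconfig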
<davidlt[m]>
somlo: stupid question, does the kernel get DTB?
<davidlt[m]>
I am just wondering about [ 0.000000] No DTB found in kernel mappings
<davidlt[m]>
and what would happen if the kernel is booted but the pointer to DTB points to nothing/garbage?
<davidlt[m]>
Ah, I see there are patches for StarFive JH7100 temperature sensor too on the mailing list. That's another part.
rwmjones|HOLS is now known as rwmjones
jcajka has joined #fedora-riscv
zsun has joined #fedora-riscv
davidlt has joined #fedora-riscv
<somlo>
davidlt[m]: I'm loading an opensbi blob with built-in dtb (same one from when the kernel actually booted when built with different CONFIG_* options)
<somlo>
I also load the kernel Image and initramfs.img, then jump to the opensbi blob which starts the boot sequence
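[For reference: one common way to get an OpenSBI blob with the DTB built in is the FW_FDT_PATH build option, which embeds the given device tree into the firmware image. A rough sketch only; the platform name, cross-compiler prefix, and DTB path are assumptions, not necessarily what is used here:]
    # build OpenSBI firmware images with the DTB embedded
    make PLATFORM=generic CROSS_COMPILE=riscv64-linux-gnu- FW_FDT_PATH=/path/to/board.dtb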
<somlo>
the only thing different across tests is the set of CONFIG_ options I'm "bisecting"
<davidlt[m]>
Ah, that's something I haven't tried for a long time.
<davidlt[m]>
Keep digging and document each interesting issue.
<somlo>
turning off all 16 options listed earlier in this channel gets me a working setup; turning off just the 3 you pointed out got me the DTB failure
<somlo>
now I've added three more to the list (leaving 10 turned on), and I'm currently in the `make modules_install` phase :)
<davidlt[m]>
Remind me, you are compiling this on a large QEMU machine? :)
<davidlt[m]>
I will be waiting for updates while doing server stuff (which seems to take so long...)
zsun has quit [Quit: Leaving.]
<somlo>
yeah, 8 cores, 32GB RAM, 128GB disk image (but the underlying hardware is rather old, a 2014-vintage 24-core Xeon server)
<davidlt[m]>
I wonder if AMD 3D cache helps with QEMU stuff.
<somlo>
gcc appears to do some caching -- the overnight build I started last night (which finished this morning, before I started `modules_install`) obviously took less than the 2 days I needed for the iteration *before* that
<davidlt[m]>
Yeah, I wonder if QEMU runs faster on a CPU with a large cache.
<davidlt[m]>
I wonder if cheaper 5800X3D could be an epic choice for QEMU VM.
<davidlt[m]>
Or higher freq is something more desirable for QEMU to run faster.
<davidlt[m]>
Something for the future as AMD hasn't announced new CPUs with Zen 4 + 3D cache.
<somlo>
I usually work with hand-me-down hardware on the server side, so I'm not in a position to run any tests on new gear :(
<davidlt[m]>
Yeah, that's the usual case. I want to refresh my setup.
<davidlt[m]>
I wanted to do that 2 years ago, but things happened.
<davidlt[m]>
Now I wonder what would run QEMU faster, what would compile code faster, etc.
<somlo>
then there's the x86 hardware-assisted virtualization vs. actually emulating a (risc-v) core on x86 -- you want the latter, I assume :)
<somlo>
*optimize for* the latter, I mean
<davidlt[m]>
Well, I bet we will have higher performing native RISC-V hardware before that happens.
<davidlt[m]>
Ventana mentioned 16-core high-perf development kit (details unknown on pricing and availability)
tg_ has joined #fedora-riscv
tg_ has quit [Quit: tg_]
tg has joined #fedora-riscv
jcajka_ has joined #fedora-riscv
jcajka has quit [Ping timeout: 265 seconds]
jcajka_ is now known as jcajka
<davidlt[m]>
Ech, NVIDIA CES 2023 talk
<davidlt[m]>
GPU prices are out of control
<davidlt[m]>
wow, waydroid landed in Fedora
jcajka has quit [Quit: Leaving]
<somlo>
still getting the DTB not found, virtual address 000...040...04 error with the newest test
<somlo>
either I'm going nuts, I screwed up something else, or I don't know -- I'm building with all 16 CONFIG_* options disabled again, because I clearly remember seeing that work
<somlo>
and I really need to start building a working demo and a slide deck, and record some video, so I'm running out of time doing experiments
<somlo>
If I get lucky and turning all 16 new config* options off still works after the latest rebuild, I'll put this on hold until later in February :)
<davidlt[m]>
Time for some JTAG debugging? :)
<somlo>
even more of a project I can't afford to detour into :)
<somlo>
at least I saved the sdcard containing the working, booting f33 setup -- only problem with that one is I don't have yosys/trellis/nextpnr RPMs I can either install or build for it anymore
<somlo>
so f37 it is, I hope :)
<somlo>
with any luck, caching will help make this latest build run fast as well...
<davidlt[m]>
oh, do you have a list of packages? I might need to make sure those build fine for you
<Guest86>
running via qemu. Could you please mention the steps and the actual CLI that you actually succeeded with? Thanks in advance.
<somlo>
davidlt: I mainly need python3-migen, yosys, trellis, and nextpnr
<somlo>
Guest86: I needed to grow the image (to 128G) and then create a virt-manager (emulated) VM, steps for that (preliminary) are here: https://pastebin.com/xrFQepCr
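[For reference: growing a raw disk image usually means resizing the image file with qemu-img and then growing the root partition and filesystem from inside the guest. A sketch with assumed file, partition, and filesystem names:]
    # grow the image file itself to 128G (host side)
    qemu-img resize Fedora-disk.raw 128G
    # then, inside the guest, assuming an ext4 root on partition 4 of vda:
    growpart /dev/vda 4
    resize2fs /dev/vda4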
<Guest86>
I looked at the cp commands; the source is clear, foo/
<Guest86>
but the destination is "." -- what is the assumed current working directory? I didn't see any change of directory in there, so I'm wondering if you're in bar/ or bar's parent directory.
<Guest86>
somlo: lines 23-24 are where I'm foggy.
Guest86 has quit [Quit: Client closed]
omac777_2022 has joined #fedora-riscv
omac777_2022 has quit [Client Quit]
omac777_2022 has joined #fedora-riscv
omac777_2022 has quit [Client Quit]
omac777_2022 has joined #fedora-riscv
<somlo>
Guest86: that's you extracting the kernel and initramfs image from the /boot partition; you're going to "side-load" those into qemu as part of the .xml vm configuration (they end up being passed in as `--kernel=...` and `--initrd=...` arguments on the qemu command line when virt-manager fires up the guest vm)
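[For reference: the direct-kernel-boot setup described above corresponds roughly to the following qemu invocation; the machine type, memory/CPU sizes, file names, and root= argument are assumptions:]
    qemu-system-riscv64 -machine virt -smp 8 -m 8G \
        -kernel Image -initrd initramfs.img \
        -append "root=/dev/vda4 ro" \
        -drive file=Fedora-disk.raw,format=raw,id=hd0,if=none \
        -device virtio-blk-device,drive=hd0 \
        -nographic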
<somlo>
I never managed to stand up a qemu riscv64 guest the "canonical" way (using u-boot)
<somlo>
oh well, they quit, hopefully the channel logs will help later :)
<somlo>
also, for the record, the xml file I posted is set up with "pass-through" (macvtap) networking, so a "public" dhcp server can give my VM a real lease
<somlo>
might want to switch that over to internal virt-manager based NAT (but one can do that from the virt-manager gui after defining the guest VM)
<omac777_2022>
somlo: I just renamed my Guest86 nick.
<omac777_2022>
Thank you.
<omac777_2022>
fyi I'm a VisionFive 2 sbc backer. I'll be getting it sometime February. I'm looking forward to using it.
<davidlt[m]>
Yeah, they are doing a bunch of upstreaming on all fronts. Well, also learning a lot in the process (that's their 1st upstreaming effort).
<davidlt[m]>
But so far a number of blocks have patches posted. The major exception is PCIe, but that's WIP.
<omac777_2022>
Attempting to build VF2 image from Fedora Silverblue 37 podman ubuntu 20.04: https://pastebin.com/hDSK5SC2
<omac777_2022>
I've only got a lot of questions: running within Fedora Silverblue 37, I attempted to build the VF2 (VisionFive 2) image from within a podman ubuntu 20.04 (focal) container rather than from straight ubuntu focal.
<omac777_2022>
Unfortunately, I have not been able to successfully repeat the build to get all the described VF2 build binaries and images and such with the steps described below.
<omac777_2022>
Afterwards, how does one slip in/overlay the Fedora Silverblue 37 packages as much as possible into the VF2 image using the existing working VF2 kernel?
<omac777_2022>
Why is it that buildroot also annoyingly downloads all the dependent packages from different repositories within the build process?
<omac777_2022>
Why couldn't all these buildroot packages be fetched beforehand AND THEN built?
<omac777_2022>
Why is it that we don't separate the package documentation from the package binaries and optionally build documentation afterwards? This could save enormous amounts of build time when iterating through all this.
<omac777_2022>
I also noticed something about the GPU stuff... Intel has their new GPUs with oneAPI. AMD has their GPUs with the ROCm/OpenCL API. NVIDIA has their GPUs with their CUDA API. Not insulting anybody, but it's a real mess, because in order to build apps that support all of this correctly and optimally, developers have to write custom code for each.
<omac777_2022>
Now with RISC-V, there is an opportunity via open source to get it right and provide one API for all RISC-V based GPUs that everybody can conform to and code for. Is that the direction all the members of https://riscv.org/ are aiming for? Or are we going to end up with another ecosystem full of different APIs that overlap in capabilities and complicate matters, like Intel/AMD/CUDA did?
<omac777_2022>
I haven't even gotten into the new mess introduced in high-end GPUs with respect to DirectStorage (MS) / GPUDirect (NVIDIA) / DirectGMA (AMD) / CXL, all aiming to make IO faster for GPUs in everybody's own different way.
<davidlt[m]>
GPU APIs are not related to CPU stuff.
<davidlt[m]>
There is a GPU inside, but I don't expect that to be supported any time soon.
<omac777_2022>
True, but RISC-V and its CPU extensions enable the possibility of integrating all GPU instruction sets within the RISC-V CPU/SoC.
<davidlt[m]>
Well in that case it would be called a GPU or GPGPU :)
<omac777_2022>
I've read a spec mentioning this intention, and a company exists to do exactly this.
<davidlt[m]>
There are designs with 1-4K tiny cores that contain vector units; you can think of it as a GPGPU.
<davidlt[m]>
There was a company that wanted to do that, but they switched to PowerPC IIRC.
<davidlt[m]>
It's not a company actually, it's a project or something with multiple funding parts, but not an actual high capital company.
<davidlt[m]>
The way you get to use GPU on RISC-V is the same as on x86_64 and aarch64.
<davidlt[m]>
For example I am running AMD GPU, 4K monitor, getting H.264 video decode accelerated, VAAPI, VDPAU too.
<davidlt[m]>
As long as you have open source drivers, firmware bits available that don't depend on strange things, the kernel implements all the features needed for the driver, etc.
<omac777_2022>
davidlt, you're one of the lucky ones with an Unmatched board I think.
<davidlt[m]>
There were additional data types for vector that are related to 3D, but those are not part of the initial spec. All these kind of things will be defined later (once folks can agree on them) and will be different extensions.
<davidlt[m]>
Yeah, I am running Unmatched, but desktop also worked with Unleashed too.
<davidlt[m]>
Well, it required an FPGA to provide PCIe.
<davidlt[m]>
Icicle Kit can do that too. A PCIe slot exists on that one.
<davidlt[m]>
So old Nvidia GPUs up to and incl. 700-series should work. All Radeon HD 6000-series, and newer AMD RX400, RX500 (which are all the same).
<davidlt[m]>
What you cannot run is RDNA1/2/3.
<davidlt[m]>
Until someone implements kernel floating-point mode. There are patches for kernel vector mode (part of the V ext. support), so someone will probably do that for floating point too.
<davidlt[m]>
You are mainly limited by PCIe bandwidth and then by CPU horsepower (there isn't much of that here).
<davidlt[m]>
Vulkan works fine too.
<omac777_2022>
How about this? Could RISC-V make one GPU that lumps together everything from Intel/NVIDIA/AMD? That way you code what you want and you can guarantee it will work on that RISC-V GPU.
<omac777_2022>
Intel/NVIDIA/AMD essentially have data-center, workstation, and home-use profiles, so that would imply there could be 3 RISC-V offerings, with CPU instructions and all the modes needed to offer the Intel/AMD/NVIDIA GPU APIs in one SoC.
<omac777_2022>
Vulkan is cool, and I understand it was supposed to replace OpenGL. It also alleviates the need to code directly to DirectX/CUDA/OpenCL and such. Well, that's the thing: there's a lot of legacy code out there that nobody wants to rewrite in the short term. The only way to make everything interoperable is to make sure the SoCs offer that interoperability from the start.
<davidlt[m]>
I am not sure we will see a commercial product, i.e. a GPU with the RV ISA.