handsome_feng has quit [Quit: Connection closed for inactivity]
conordooley has joined #riscv
<conordooley>
drmpeg please CC "conor.dooley@microchip.com" on the regression report since I have a board that can repro it too :)
cwebber has joined #riscv
<conordooley>
or mail@conchuod.ie, whichever you fancy!
<drmpeg>
Roger that.
<conordooley>
What's your toolchain & binutils?
<conordooley>
I can't remember if I tried gcc 11.1 or clang15
<drmpeg>
gcc 11.2.0
<conordooley>
smaeul Yeah, but the invalid stuff just doesn't get used so shouldn't matter? Anything I had, I had from d1-wip on github previously.
<conordooley>
I'll give it all another go tonight smaeul but it could just be the zihintpause stuff getting in the way - since I do get issues with that on another config.
<conordooley>
ye that's the source of my GCC too. Just I have 11.1
<conordooley>
Hopefully it's the LDO stuff missing and not something bigger than that, might just do what Heiko does and hack in the memory node. Reusing the dtb is just annoying...
stefanct has quit [Ping timeout: 255 seconds]
stefanct has joined #riscv
<conordooley>
I'll tack on my logs when I get home drmpeg :)
<drmpeg>
:)
BootLayer has quit [Quit: Leaving]
aerkiaga has quit [Remote host closed the connection]
<drmpeg>
Now that I look at the right place, it's really gcc 11.1.0
<conordooley>
Prob doesn't really matter that much, I was more curious if you had a toolchain that knew of Zihintpause or not
conordooley has quit [Quit: Client closed]
<geertu>
conchuod: mpfs_rtc 20124000.rtc: timed out uploading time to rtc (ad infinitum)
<geertu>
conchuod: Interestingly, it seems to work after a reboot
<geertu>
oops, spoke too soon
<jrtc27>
so uh cpu_relax is in arch/riscv/include/asm/vdso/processor.h
<jrtc27>
and AFAICT really does get called from the vdso, not just kernel-side vdso management bits
<jrtc27>
I don't think a static branch there is a good idea...
<jrtc27>
if so
BootLayer has joined #riscv
PaulePanter has quit [Quit: Lost terminal]
fedorafan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
eroux has joined #riscv
fedorafan has joined #riscv
conordooley has joined #riscv
<conordooley>
geertu about to head home so I'll look into that tomorrow - out of curiosity do you have the reset controller series applied?
<conordooley>
and does a hwclock -w work if you get booted into userspace?
dramforever_ has joined #riscv
dramforever__ has quit [Read error: Connection reset by peer]
dor has quit [Ping timeout: 268 seconds]
conordooley has quit [Quit: Client closed]
<geertu>
conchuod: I do have (a version of) the reset controller series
dramforever__ has joined #riscv
dramforever_ has quit [Read error: Connection reset by peer]
dor has joined #riscv
EchelonX has joined #riscv
<conchuod>
geertu: I wonder if your HSS is too old to take the rtc out of reset, since I didn't hook up the reset in the dt for the rtc
<conchuod>
Prior to the reset controller stuff, the clk driver took the peripherals out of reset in .enable()
<conchuod>
If you revert that series, I suspect it will work :?
<conchuod>
(I didn't hook everything up, just the macb as POC, as I didn't/don't know if Stephen will hate it & I'd have to start over.)
jacklsw has joined #riscv
dramforever_ has joined #riscv
dramforever__ has quit [Read error: Connection reset by peer]
indy has quit [Ping timeout: 248 seconds]
matt__ is now known as freakazoid333
indy has joined #riscv
dramforever__ has joined #riscv
<conchuod>
jrtc27: do you want to transcribe your VDSO comment here: 20220816163058.3004536-1-ajones@ventanamicro.com
<dh`>
my understanding of the way things typically work is that the goal is to load the kernel at or near the base of physical memory so there's only one range of physical memory to manage (rather than one below and one above the kernel image)
<dramforever__>
Ah I should have specified, I meant the very early boot process
indy has quit [Ping timeout: 256 seconds]
<dh`>
but if you're working on an opensbi fork, don't you get to decide how that works?
<dh`>
that is, it's a platform matter and not a property of the machine architecture
<dramforever__>
well that's why i wanted to know if e.g. freebsd also behaves this way
<dramforever__>
I'm curious because at least currently RustSBI does not add a /reserved-memory node covering itself, and it runs Linux fine
<dh`>
so basically it sits at the bottom of memory, loads linux above itself, and linux assumes that it can't touch that space so it gets away with it?
<dh`>
seems like a mistake
indy has joined #riscv
<dramforever__>
that's why i'm asking :P
<dh`>
also if you're making a hypervisor you don't need to expose the hypervisor image :-)
<dh`>
(and shouldn't, really)
<palmer>
dh`: there's just a bunch of unspecified stuff in the boot process, it's not really thought out
<dh`>
IIRC I set up sys161 so physical memory starts at 0xc000_0000 and it loads the kernel there, but I was going for the path of least resistance and not compliance or compatibility with anything else
<dramforever__>
i guess some day we'll all be on uefi and it's going to be much more well-specified
Andre_H has quit [Quit: Leaving.]
<dh`>
and it loads the kernel from inside the emulator so there's no firmware image to worry about (there's a space reserved for a firmware rom somewhere, but nothing actually in it)
<dramforever__>
for the record when doing opensbi-h i need an area for the shadow page table so i can 'fake' address translation, and it's a bit of a weird situation because i don't want S-mode to touch it, but at the same time page table memory must be S-mode readable
<dh`>
if you're trying to do a real hypervisor you need, effectively, your own memory map
<dramforever__>
well that's HS-mode's job
<dh`>
but I have yet to look at any of the hypervisor extensions so I shouldn't be blabbing :-)
<dramforever__>
so what i did is i just reserved a region at the end of RAM so it can be aligned, added it to /reserved-memory, and PMP'd it to be S-mode read-only
<dramforever__>
If I didn't do the reserved-memory bit, XVisor seems okay with it but Linux crashes fairly early at startup
<dh`>
it's not uncommon for kernels to allocate starting from the far end of physical memory
<dramforever__>
And that's why I got curious how RustSBI got away with not dealing with FDT at all
aerkiaga has joined #riscv
<dramforever__>
i guess the conclusion is that it's really just a lucky coincidence
<dh`>
what you describe sounds like their bug
<dh`>
anyway, for a hypervisor you shouldn't need to reserve memory spaces because each guest should be running in a separate virtualized space
<dramforever__>
i'm not the hypervisor :P
ffcc has quit [Quit: Leaving]
<dh`>
ok, then I'm confused about what you're doing :-)
<dramforever__>
it is indeed a pretty confusing thing
ffcc has joined #riscv
ffcc has quit [Remote host closed the connection]
ffcc has joined #riscv
<dramforever__>
The privileged spec hypervisor chapter that you said you didn't read has a comment: The hypervisor extension has been designed to be efficiently emulable on platforms that do not implement the extension, by ... [goes on to outline what to do]
<dramforever__>
And I'm trying to do that because for some reason I don't think anyone else did
<dh`>
oh heh
<dramforever__>
it's pretty cursed. i have enough of it that i can run kvm on my visionfive, but the performance is abysmal (slower than qemu with tcg for neofetch! i guess emulating virtual memory is either inherently *bad*, or i'm just too lazy to have made it fast yet)
<dramforever__>
*i have *implemented* enough of it
<dh`>
emulating virtual memory is inherently bad, but it's possible you're doing something silly you haven't found
<dramforever__>
Well if you know MIPS, it's like that software-managed TLB
<dramforever__>
So it's not like I'm doing translation in software for every access, but I was doing a translation for basically every page
<dramforever__>
Still, could very well be a bug with interrupts or something. I know I didn't get those quite right.
GenTooMan has quit [Ping timeout: 255 seconds]
<dh`>
it should still happen only once per page mapping
<dh`>
that is, you get a trap, you look in the OS's pagetable and copy the entry, then it's good until something changes
<dramforever__>
then maybe because i'm clearing everything on every process switch or something (i don't see an easy way to avoid it)
<dh`>
that would do it
<dh`>
or could
<dramforever__>
again i have *no* data on what exactly is slow so i'm putting this off until i can figure out how to do such low level profiling
<dh`>
so what you're doing is providing the appearance of a hypervisor on a machine that doesn't have the hypervisor extensions?
<dramforever__>
providing the appearance of the hypervisor extension, yes
<dh`>
or wait, providing the appearance of the hypervisor extensions on a machine that doesn't have them
<dramforever__>
yup
<dh`>
I should look at the hypervisor extension, but I think that's going to be inherently very slow
<dramforever__>
another reason i'm not particularly interested in making it blazing fast
<dramforever__>
good thing though, for kvm it doesn't slow down the host system
GenTooMan has joined #riscv
<dramforever__>
i'm still amazed that kvm is running, like, at all
cousteau has joined #riscv
<jrtc27>
dramforever__: historically, sbi implementations did not add themselves to the list of reserved memory regions. these implementations were loaded to start of dram, with the kernel loaded 1 superpage later. thus operating systems had to mark the first superpage as reserved too. freebsd does this, I assume linux still does otherwise it wouldn't work with bbl and, from the sounds of it, rustsbi.
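(The modern alternative to the convention jrtc27 describes is for the SBI implementation to advertise its own footprint in the device tree, so the OS doesn't have to hardcode "skip the first superpage". A hedged sketch of such a node; the addresses, size, and node name are invented for illustration:)

```dts
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* Hypothetical: the first 2 MiB superpage of DRAM,
		 * where the SBI firmware resides.  no-map keeps the
		 * OS from ever mapping or allocating it. */
		sbi@80000000 {
			reg = <0x0 0x80000000 0x0 0x200000>;
			no-map;
		};
	};
};
```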
vagrantc has quit [Quit: leaving]
<dramforever__>
jrtc27: that explained a lot, thanks
<conchuod>
jrtc27: at least you're paying attention..
<conchuod>
I don't even want to explain what my thought process was when I read that line in the driver
<jrtc27>
I know about this one specifically because I wrote the FreeBSD driver and swore at whoever decided to rename the clock
<conchuod>
hahaha
<jrtc27>
see also perstn-gpios vs reset-gpios...
<conchuod>
I am sorry, but I hate the former
<jrtc27>
the former is more descriptive, the latter is less ugly
<jrtc27>
I don't particularly care what they're called, I just hate that they're different
<conchuod>
perstn might be what it is in some doc, but my brain parses the latter a lot more easily
<conchuod>
tbh it is a common problem - people name the gpio after the line names on the schematic
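(The naming clash being complained about looks roughly like this in practice; two bindings describing the same kind of PERST# line under different property names. Node names, labels, and GPIO numbers here are invented:)

```dts
/* One binding follows the schematic's line name... */
pcie_a: pcie@a000000 {
	perstn-gpios = <&gpio 26 GPIO_ACTIVE_LOW>;
};

/* ...another uses the generic name for the same kind of line. */
pcie_b: pcie@b000000 {
	reset-gpios = <&gpio 27 GPIO_ACTIVE_LOW>;
};
```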