sorear changed the topic of #riscv to: RISC-V instruction set architecture | https://riscv.org | Logs: https://libera.irclog.whitequark.org/riscv | Matrix: #riscv:catircservices.org
<unlord> what is the right way to get a high precision timer in RISC-V?
<palmer> unlord: it's kind of a mess right now
<courmisch> for wall clock, I guess RDTIME
<geertu> So now we have 'Falling back to deprecated "riscv,isa"', but "git grep riscv,isa- linux-next/master -- arch/riscv/boot/dts/" returns nothing?
<conchuod> geertu: I was waiting for it to be merged before doing conversions & now it is too late :)
<geertu> conchuod: Well, let's consider it a regression in v6.6-rc1, and fix it for rc2 ;-)
<conchuod> The print or the properties?
<conchuod> geertu: I'll happily remove the print for a cycle...
<geertu> conchuod: there's also CONFIG_RISCV_ISA_FALLBACK. If anyone disables that, things will fail?
<conchuod> geertu: I originally had it behind EXPERT, moving it out of that was suggested.
<unlord> palmer: I'm able to read RDTIME, RDCYCLE and RDINSTRET but these also include OS and kernel execution AIUI
<conchuod> I'll send a patch tomorrow removing the print Geert
<geertu> conchuod: I do not object to the printk
<geertu> conchuod: Without the print, debugging CONFIG_RISCV_ISA_FALLBACK=n will be even harder
<sorear> if you want to measure only U-mode execution, the only way to do that is the performance counter facility
<jrtc27> (but if you want to measure whole system time then rdtime is indeed the way to go; it's synchronised across harts)
<jrtc27> (like mrs cntvct_el0 on aarch64 and rdtsc on *modern* x86)
<courmisch> is there a proxy kernel to run a static Linux ELF as a UEFI app?
<courmisch> sorear: I tried Linux perf in read mode (or whatever the mode is called), but it seems to add around 4k cycles, and it's not very stable
<courmisch> I guess that's pretty good for long enough tests in the millions of cycles
<courmisch> but for small benchmarks, ouch
<sorear> [UEFI proxy kernel] never heard of such a thing
<courmisch> I mean essentially something like PK but for running under UEFI firmware rather than Spike
<sorear> [read mode, 4k] not quite sure what you're referring to
<courmisch> where you read the counter with read()
<courmisch> (not mmap)
<sorear> the point of the "proxy kernel" is that it serializes syscalls and runs them in the spike process
<courmisch> well, it's a given that a system call is going to involve a lot of cycles. Just saving and restoring the context takes quite a bit
<sorear> a single-process linux-compatible kernel with a UEFI loader might be useful but it wouldn't really be a proxy kernel
<courmisch> well it would convert the Linux user ABI into the UEFI service ABI
<sorear> you'd have to actually implement most of the syscalls rather than passing them off to another kernel
<courmisch> so it's a proxy
<courmisch> but it might be a lot more involved than pk, yeah
<sorear> Is "linux syscalls take 4k cycles" intended to be a response to anything I said?
<courmisch> it's a comment on the practicality of using Linux perf to count cycles
<jrtc27> pin yourself to a hart and use rdcycle then?
<sorear> so, not a reply to anything I said today
<courmisch> is there an API to do that?
<jrtc27> linux devs' viewpoints on rdcycle being verboten be damned
<jrtc27> standard cpuset affinity API?
<courmisch> sorear: to your comment about U mode perf counters
<sorear> "I'm able to read RDTIME, RDCYCLE and RDINSTRET but these also include OS and kernel execution AIUI"
<courmisch> jrtc27: don't we also need a mechanism to mask interrupts, and reenable RDCYCLE?
<jrtc27> not if your benchmark is short enough
<jrtc27> run it multiple times and filter the outliers
<jrtc27> if your benchmark is long enough that that matters then the syscall overhead goes away
<Stat_headcrabed> A question: How to get RDTIME's frequency by software?
<jrtc27> uh "that you can't do that", not "that that matters"
<Stat_headcrabed> We can read that from device tree
<jrtc27> that's the neat part, you don't
<Stat_headcrabed> but how can software without superuser privileges do that?
<jrtc27> linux doesn't expose it for some unknown reason
<jrtc27> ("use perf" or something unhelpful...)
<Stat_headcrabed> conchuod: Any advice?
<jrtc27> you can measure it though by doing rdtime around a sleep
<Stat_headcrabed> emmmmm
<jrtc27> and some statistical analysis to measure the error
<jrtc27> there's code in abseil to do that
<courmisch> Stat_headcrabed: well, propose a new riscv_hwprobe() key to get the value (and jrtc27 will hate you)
<Stat_headcrabed> lol
<heat> hi, is riscv's behavior WRT self-modified code documented anywhere?
<heat> the nonpriviliged ISA seems to only mention it once in passing
<courmisch> heat: FENCE.I is documented in the nonpriv spec
<courmisch> heat: there is no cross-hart mechanism defined, AFAIU
<jrtc27> FENCE.I is broken in the ISA
<jrtc27> unless you're on a single core or you're an OS that can IPI
<Stat_headcrabed> That has no elegance :(
<jrtc27> RISC-V did not take inspiration from Arm there
<jrtc27> (when can we have a Zifenceithatworks?)
<sorear> heat: what behavior are you asking about specifically?
<jrtc27> (seems like something that the J extension people should have focused on before this pointer masking crud?)
<heat> sorear, just wanted to know if and how it's possible to do it safely
<Stat_headcrabed> Something is being worked on J extension
<heat> fence.I + IPIs all around seems like the way
<sorear> jrtc27: it's not like the Zjid spec is hard to find
<jrtc27> it's not like it has anything in it
<sorear> it kinda sucks but it was very definitely worked on before pointer masking
<jrtc27> only commit "Add placeholder for zjid specification."
<courmisch> isn't there a vDSO or whatever to do this?
<jrtc27> linux has a magic syscall
<sorear> this isn't the first version of the spec but I'm too tired to look for it
<jrtc27> I'm being slightly facetious, I remember seeing stuff years ago
<jrtc27> but that link is the first I'm hearing of anything happening in the past couple of years
<jrtc27> odd to be doing it by circulating pdfs of word documents on mailing lists rather than putting stuff in the repo though
<sorear> i need to pick holes in that later, i don't think it's going in a useful direction
<courmisch> some people can't VCS
<jrtc27> "Clean data to Point of Unification"
<jrtc27> ah so now we're going from no-Arm to full-Arm
<courmisch> that sounds like copy-paste from the Arm ARM
<sorear> "let's add new data cache flushing instructions to optimize fence.i on hardware with incoherent data caches" is not a direction that benefits any of the people currently having JIT problems
<jrtc27> clean.id is unhelpful for SMP
<jrtc27> oh
<jrtc27> ew
<jrtc27> their definition of PoU is different
<jrtc27> "Point of Unification – the point in the memory hierarchy where cached data can be fetched by the instruction fetching mechanism of any HART if all instruction caching (if present) is invalid."
<jrtc27> that's not PoU, that's PoC...
* courmisch facedesks
<gurki> im not certain putting a desk in your face is a wise approach
* gurki runs
<jrtc27> hm, although, PoU is for all IS in Arm, isn't it
<jrtc27> which in practice means all PEs...
<Stat_headcrabed> What kind of instruction is needed for SMP?
<jrtc27> so I guess that is the same given RISC-V has no IS/OS distinction
<jrtc27> (which is a waste of time; no OS supports outer shareable)
<courmisch> well, no TLBI to OS, so yeah writing an OS would be painful
<jrtc27> OS is really "there are some other cores over there you can run a separate OS on if you want"
<courmisch> is there even actual hardware that differentiates OSH and NSH?
<jrtc27> not that I know of
<courmisch> hmm... shouldn't it be a pin on the CPU though? I should check the TRMs
<courmisch> Stat_headcrabed: I think the point is that there are no defined instructions for that case
<Stat_headcrabed> I mean, what should the needed function do?
<courmisch> flush the instruction cache across harts
<Stat_headcrabed> Currently we use IPI + zifencei in linux?
<courmisch> basically, we need Armv8's IC IVAU
<courmisch> Stat_headcrabed: I guess
<Stat_headcrabed> Maybe we could ask J-spec group about this?
<jrtc27> linux does it lazily
<jrtc27> it'll IPI running threads, but any that aren't scheduled will instead have a flag set so they get a fence.i when they're next scheduled
<jrtc27> uh, amend that to: it'll IPI harts running threads from that process, but any that are doing something else will have a flag set so they get a fence.i when that process is next scheduled on them
<sorear> c910 has hardware support for a broadcast fence.i but it's kinda weirdly exposed
<Stat_headcrabed> And it's not efficient enough?
<jrtc27> for single-threaded things it's probably not awful
<jrtc27> for multi-threaded JITs it's awful
<Stat_headcrabed> OK
<Stat_headcrabed> thanks
<jrtc27> but IANA JIT dev (thank god)
<Stat_headcrabed> Have you ever proposed an issue to the J-spec group or emailed them about this problem?
<heat> jrtc27, can the CPU prefetch from non-executable memory?
<heat> cuz, you know, I don't see the big issue with JITs there. you grab a new block of writable memory from mmap, JIT your things, mprotect PROT_EXEC it, done?
<jrtc27> it can
<jrtc27> AFAIK
<jrtc27> so long as the PMA says so (which it will, because it's memory)
<jrtc27> but also, even if not, that's all well and good until you munmap and later mmap again
<jrtc27> (which may be two totally different libraries)
<jrtc27> although, I guess that needs to be made secure anyway
<jrtc27> mmap gives zeroes so jumping to it should be well-defined
<heat> i dont know about riscv but x86 seems to be weirdly natively protected in that regard
<courmisch> wasn't there an attack exploiting that fence.i flushes the entirety of the I-cache?
<heat> you don't need anything special in x86 between mmap/munmap PROT_EXEC
<courmisch> I thought x86 was using IPI for that, but that could be dated or incorrect
<heat> it uses IPI for cross modifying code, and for TLB shootdowns
<courmisch> maybe not due to IC, but due to TLB
<heat> does not use IPI for serialization across mmap
<heat> it has been mentioned in the lkml before, it's some sort of thing that Just Works despite not being documented anywhere
<oddcoder> not sure if this is worth discussing here, but I was reading through the RISC-V ISA Volume II and some of the naming conventions are very confusing.
<oddcoder> in particular the fact that the supervisor OS runs in supervisor mode
<oddcoder> while the Supervisor Execution Environment runs in machine Mode
<oddcoder> My intuition keeps telling me the supervisor OS runs in the Supervisor Execution Environment, but apparently that is not how the names were picked.
<oddcoder> I am not sure if it makes any sense to request different names for things (perhaps calling SEE something else)
<oddcoder> of course, that is assuming my understanding is correct.
<jrtc27> machine mode provides the supervisor execution environment
<jrtc27> the supervisor OS runs in supervisor mode in the supervisor execution environment
<oddcoder> it didn't feel like it runs inside the SEE; it felt like the SEE offers an ABI (the SBI) and that ABI is accessed by the OS via a syscall-like interface
<oddcoder> which makes it sound like the OS makes calls into the SEE rather than running inside it.
<jrtc27> hm, indeed, that's a strange way to define it
<jrtc27> it starts off fine with "Each application communicates over an ABI with the OS, which provides the AEE."
<jrtc27> but "The SEE can be a simple boot loader and BIOS-style IO system in a low-end hardware platform, ..." is strange
<oddcoder> wait, so you are trying to tell me that it is not intended to be a simple bootloader/BIOS?
<oddcoder> because that is the only use case I had in mind. I am not sure if you meant to say that this is the strange case, or did I misunderstand something?
<jrtc27> the wording / definition
<oddcoder> do you think it would make sense to request a name change (and where would I make that request)? I mean, BIOS would be a much nicer name than SEE. At least everyone would understand what it is immediately.
<geist> but it might not be a BIOS, which is why it's named that way. it could be provided by a hypervisor that is hosting the supervisor OS
<geist> hence why it's generically named, its basically whatever is at the other end of the ecall into it
<oddcoder> what would be the difference if it is provided by a hypervisor? it will still be an interface for configuring basic input and output
<geist> well, the SEE doesn't really provide basic io
<oddcoder> Basic might mean different things when talking about different layers.
<geist> its more of a cpu/system abstraction
<geist> or maybe another way of putting it is it does more and less than what a traditional BIOS would, which is probably why calling it a BIOS would not be great, because it would imply a bunch of things it isn't
<oddcoder> I will have to read more; the only issue I wanted to raise is that SEE, as a name, is confusing with respect to what it does
<geist> so it uses a really generic term to describe it
<geist> i read it as: this level of docs is implying that there *is* an environment that the supervisor OS calls into, without going into too much detail about what it's for or what it does
<geist> since that's another document
<geist> then when you get over to the other docs that describe the interface (SBI) you'll see it's not really a bootloader or a bios (except maybe the console routines) but more of a cpu/interrupt/timer abstraction
<jrtc27> BIOS is a very specific thing
<oddcoder> geist: of course you do, because I suppose you are already familiar with the concept, but for a first-time reader it was confusing, especially since in every other ISA, XEE means X is running inside the environment.
<jrtc27> even your x86 PC doesn't have a BIOS any more
<oddcoder> jrtc27: you don't consider uefi as a BIOS?
<jrtc27> no
<oddcoder> I see your point, It is unlikely we are going to agree.
<jrtc27> BIOS is what grew out of CP/M and then became the IBM PC BIOS
<jrtc27> nothing else has ever been officially called a BIOS that I know of
<jrtc27> semi-tech-literate people will call UEFI firmware a BIOS, though
<jrtc27> and QEMU unhelpfully uses -bios for firmware in general
<oddcoder> qemu people will be delighted to hear about that
<oddcoder> well, it is not just qemu, for the record; it is qemu and every piece of software that runs at that level
<oddcoder> I amend my last statement: not every, just many
<oddcoder> But this is not the point I am trying to make here.
<muurkha> "SEE" is a pretty terrible name for two reasons
<muurkha> - it's an existing, common English word
<muurkha> - even if it were obviously an acronym, it's only three letters, which means there are many existing expansions: Seeing Essential English, Signing Exact English, Society for Environment and Education, Special Enrollment Examination, standard error of the equation, southeast Europe, Square Enix Europe, small emplacement excavator, and dozens of lesser-known expansions
<geist> SDOS - supervisor doer of stuff, or SHA - supervisor higher authority
<muurkha> even another single letter would be helpful. I hesitate to suggest SEXxE but perhaps SEEn or SuEE,
<muurkha> SDOS would be maybe better; it even suggests that it's a primitive operating system
<oddcoder> geist: SHA exists elsewhere ... but SDOS is nicer than SEE because it forces the reader to make no assumptions
<muurkha> this is the first I've heard of XiangShan
<muurkha> this press release has an annoying degree of self-promoting hype in it
<gurki> thats the concept of a press release :p
<clemens3> future semester project, to cook your own riscv processor
<clemens3> future future exercise, let the ai cook your processor
<muurkha> I don't think it should take an entire semester to get RV64I working unless you're just learning digital logic design
<muurkha> or wiring up SSI chips by hand
<cousteau> I should write my own RV32/64I just for fun
<muurkha> you can probably do it this week if you do it in C
<muurkha> since you don't have to learn things like "what does AUIPC mean?"
<muurkha> geohot live-coded most of RV32I in Verilog on Twitch IIRC