foudfou has quit [Remote host closed the connection]
foudfou has joined #osdev
Burgundy has quit [Ping timeout: 248 seconds]
<heat>
windows is the only one that has a paged heap
<zid>
heat what's a paged heap
<heat>
heap that can get swapped out
<heat>
so every access to that memory is a russian roulette of pain
<zid>
why would you do such a thing
<heat>
i dont know
<heat>
ask dave cutler
<heat>
i had to check xnu wasn't doing something dumbass like that, but thankfully no
gog has quit [Ping timeout: 258 seconds]
<heat>
it's still mostly the bsd semantics of the krenel malloc
<zid>
krenal
Turn_Left has joined #osdev
<geist>
iirc beos had a paged heap too
<heat>
krenal marloc
<zid>
do we know why beos did it?
<heat>
self sabotage
<geist>
but as a result you needed to put all the VM structures in a separate heap that wasn't pageable
<geist>
nah, it was because the world didn't have infinite memory back then
<zid>
It doesn't really explain anything to me though
<geist>
and/or the kernel was a more substantial footprint relative to all the web browsers and shit you run now
<geist>
therefore there's a bit more value to making the kernel itself pagable
<zid>
oh, so that they could page bits of the kernel out, I see
<childlikempress>
generally, the point of being able to page things is so that you can page things, yes
<geist>
like if you had a 8MB system and the kernel footprint was like 4, was nice if you could pull 1MB out or whatnot
<zid>
that seems like a nightmare of fault conditions to fix it all back up correctly though, *shudder*
<geist>
yep, hence why it's really not worth the effort now
<childlikempress>
it is really nice that we have infinite memory now!
<zid>
childlikempress: Some of the time it is, some of the time it's so you can do weird tricks with dirty bits and crap
<geist>
hah speaking of i just fixed a zircon bug today where large pages were being split up in the mmu code and wasn't setting the A/D bit from the page it split
<geist>
and thus would fault in the kernel
<geist>
boop
<heat>
classic ARM64
<geist>
riscv this time
Left_Turn has quit [Ping timeout: 246 seconds]
<heat>
but riscv doesn't work like that though?
<geist>
i found a way to disable qemu's full A/D bit emulation (svadu=off on the cpu command line)
<heat>
unless im super misremembering things
<geist>
riscv works very much like that
<zid>
riscv doesn't exist heat, so it does everything and nothing
<zid>
riscv is a *concept*
<childlikempress>
riscv is _two_ concepts--fragmentation and fusion
<bslsk05>
github.com: Onyx/kernel/arch/riscv64/mmu.cpp at master · heatd/Onyx · GitHub
<heat>
and it works?
<heat>
maybe you're enabling something i'm not?
gog has joined #osdev
<geist>
heat: have you run on real hardware yet?
<heat>
no
<geist>
there's yer problem
<heat>
riscv is not real
<geist>
so: riscv is the same as arm: the A/D bits may be implemented in hardware or not (see Svadu/Svade features)
<zid>
I want spring rolls :(
<geist>
qemu by default emulates full A/D support, so it by default acts like x86 and you dont have to worry about it
<heat>
ohhhhhh
<heat>
sweet
<geist>
real hardware, like anything sifive based, etc, generates a page fault on accesses and writes for the A/D bits
<heat>
x86!!!
<geist>
so you can avoid it by having the bits pre-set
<geist>
you can disable qemu's default behavior by setting `svadu=off` in the cpu command line
<geist>
`supervisor A/D updates`
heat has quit [Quit: Client closed]
heat has joined #osdev
<heat>
i finally got the courage to write more kernal
<heat>
what's the shortcut for "next-match" in nvim?
Turn_Left has quit [Ping timeout: 258 seconds]
gog has quit [Ping timeout: 246 seconds]
Beato has quit [Remote host closed the connection]
Beato has joined #osdev
orccoin has quit [Ping timeout: 250 seconds]
johanne has quit [Quit: CGI:IRC (Session timeout)]
freakazoid332 has quit [Read error: Connection reset by peer]
[itchyjunk] has quit [Ping timeout: 246 seconds]
<geist>
probably just 'n'
<heat>
geist, does zircon always do shotgun mappings?
<geist>
not always, but a lot of time it does
<geist>
it's configurable
<heat>
ah
<heat>
do you play funny tricks with traditional mapping allocation?
<heat>
or is it just a boring O(n)
frkzoid has joined #osdev
netbsduser has quit [Ping timeout: 244 seconds]
<immibis>
heat: why would userspace processes have paged heaps?
<heat>
what?
<immibis>
heat: why would userspace processes have paged heaps?
<heat>
all userspace processes have paged heaps
<immibis>
yes. why should they?
<heat>
because most of them are actively useless
<heat>
and memory wasteful
<immibis>
that is also your answer for kernel paged heaps
<heat>
but that's wrong for the kernel
<immibis>
if a process opens 1000000 files do you want to hold all the file descriptors in memory at all times?
<heat>
the penalty for swapping something in thats *core to the kernel* is huge
<heat>
sure
<immibis>
the penalty for swapping something in is huge
<immibis>
why do you think i, as a user, care whether you're swapping in something in the kernel or something in firefox
<immibis>
either way I have to wait for it
<heat>
because the kernel may halt everything
<heat>
:)
<immibis>
(actually I don't think the penalty for swapping is that bad. It's the penalty for swapping 100000 different chunks of 4k that's huge. Do it in one contiguous sweep and it takes a few seconds)
<heat>
also introduces possible huge latency spikes
<immibis>
heat: okay so don't swap out the scheduler
<immibis>
that's why there's also a nonpaged heap
<heat>
ok so i want everything to be fast and simple
<heat>
which goes to... the non paged heap
<heat>
great, now everything's in the non paged heap
<immibis>
this also applies to userspace processes then. don't page them
<heat>
the kernel gives you tools to do so, if you want
<immibis>
if you page userspace heaps, it's slow and complex again
<heat>
wrong
<immibis>
wrong
<heat>
the cost of fucking trapping on a page fault on a random memory access that needs to swap things in is huge in complexity and performance
<heat>
better check your locks
<immibis>
yes so it's slow and complex
<immibis>
paging is slow and complex, we agree
<heat>
im talking about the kernel
<immibis>
so don't page
<heat>
yes, don't page the kernel
<heat>
swap userspace, because the kernel and userspace are not in parity
<immibis>
the cost of fucking trapping on a page fault on a random memory access that needs to swap things in is huge in complexity and performance
<heat>
the kernel may and does sacrifice userspace to keep itself or other more important processes alive
<heat>
and it does give userspace the tools to not swap certain pages up to whatever limit it has perms for
<immibis>
oh so that's when it decides that thunderbird is more important than xmonad
<immibis>
or worse Xorg
<immibis>
or wireshark running as root and capturing to tmpfs. It's root so it must be important right?
<heat>
should've configured your cgroups
<immibis>
so if you're adding so much complexity as cgroups and so much performance sacrifice as swapping, why are you not willing to also do those inside the kernel?
<immibis>
you're clearly not opposed to complexity or bad performance
<heat>
what
<heat>
i dont give a shit if chrome is 100ms slower once or twice because its using too much memory
<immibis>
you're saying kernel swapping is bad because of high complexity and bad performance, so let's add high complexity and bad performance everywhere but the kernel
<immibis>
i don't give a shit if CreateFile is 100ms slower once or twice because its using too much memory
<heat>
well, that's you
<immibis>
well, that's you
<heat>
swapping core kernel structures because a process is abusing them solves nothing
<heat>
you can't stop the abuser
<heat>
who owns a file? no one knows
<heat>
unless you have proper limits, then there's no problem
<immibis>
swapping userspace memory because a process is abusing it solves nothing. you can't stop the abuser. who owns a page? no one knows. unless you have proper limits, then there's no problem
<immibis>
have you ever actually looked at memory statistics in your favourite OS? they just fudge them to account for shared pages
<heat>
yeah im... not doing this
<heat>
i dont have energy for bad faith arguments at this time of the day
<immibis>
windows reported memory isn't vsz or commit charge, it's "private working set" - working set being some estimation of the pages actively used by the process
<immibis>
and private meaning it excludes shared pages entirely
<immibis>
everything you disagree with is bad faith? you are now blocked
<immibis>
if you reply to this message I won't see it on my screen
<heat>
lmao
<zid>
no idea who that was, they were already ignored :P
<immibis>
it was heat
<zid>
ah, no wonder, that fucker
<heat>
hahaha
<immibis>
apparently swapping is fine but only if it happens in the part of the system where it happens to be easier to implement swapping
<zid>
sounds like a viable strategy ngl
<heat>
easier, safer, faster
<immibis>
there is no reason to think that the arbitrary dividing line of where it happens to be easier to implement swapping lines up with the other arbitrary dividing line of where it's actually useful and good.
<heat>
but no, lets do the non-obvious thing
<geist>
re: kernel swapping vs user swapping, i think the key is that kernels haven't generally increased in size at the rate at which memory has in the last 20-30 years
<heat>
i want pain for the sake of pain
<immibis>
he also said swapping is high complexity and low performance so don't do it here... but do do it there, where it's still high complexity and low performance, because apparently that doesn't matter?
<geist>
so nowadays kernels use a much smaller part of the total system memory, and thus doesn't apply as much memory pressure to the system
<geist>
and thus you dont really get as much back from adding the feature
<immibis>
that is probably a factor, but user processes surely use more kernel data
<heat>
geist, right, and user processes have limits
<geist>
yes but not at the same scale
<heat>
you can't open 10k files, you can only open 1024 (by default in linux)
<geist>
say 30 years ago you had like 4 or 8MB of ram, the kernel itself might use up 1/4 of it, or more
<geist>
now that'd be ridiculous if you had a 8GB system and the kernel used up a solid 2GB of ram
<immibis>
unless you're writing KeyKOS and eschewing all dynamic allocation in the kernel, you probably do want to treat kernel data allocated on behalf of a user process the same as you treat memory directly allocated by the process
<heat>
you can only have up to N vmas, etc
<zid>
The unfairness is ignorable, is how I'd put it
<kazinsal>
yeah if your kernel is taking up a significant fraction of your memory you're probably doing something uncommon that needs it
<zid>
i.e that a process with 100 open files technically uses more memory than one with 1 open
<kazinsal>
like, say, holding the full internet routing table
<geist>
right
<zid>
because on the scale of "system memory", the difference between 100 files open and 1 file open is approximately 0
<immibis>
there are probably bittorrent systems out there that keep millions of files open
<zid>
to the nearest few powers of ten
<geist>
also it was a different era in that in general you did and could seriously overcommit your system, frequently
<geist>
nowadays folks will scoff at how much swapping you may have done back then
<geist>
but at the time it was frequently just the price you paid to get stuff done
<immibis>
pretty sure i still have lots of swapping because of web browsers
<immibis>
DHTML was a mistake
<zid>
I think also there was a lot more.. timesharing going on
<zid>
we *do* do full kernel memory swapping between customers today
<zid>
we just do it with VMs
<geist>
right. at the time memory was relatively expensive
<heat>
ksm is nice
<geist>
of course depends on the use case, etc etc
<zid>
with more timesharing and 'customers' and 'clients' involved, but pre-kvm, you need internal mechanisms to dump out open file handles etc
<zid>
and suspend a user's memory usage to tape
<zid>
so that you can run jobs for a different user, that might need *all* of the memory
<geist>
indeed
<zid>
These days we just swap the entire OS image out so who cares if the kernel can individually dump out files from a single process to disk
<zid>
we're chucking the entire memoryset
<zid>
that code is just overhead to that process
<geist>
also we can spend more cpu cycles doing things like page deduplication, zero detection, page compression
<geist>
since that's relatively cheap now
<heat>
page compression is veeryyyyyyyyy good
<heat>
you can actually enable it on servers
<heat>
and perf/latency doesnt go to the shitter
<kazinsal>
yeah my machine is currently compressing 14 GB down to 3 GB
<kazinsal>
it's handy
<heat>
is that windows?
<kazinsal>
yeah
<heat>
yeah windows always has some craaaaazy stats there
<zid>
could probably dedicate 127 cores to doing that
<zid>
while I play dwarf fortress on the real core
<kazinsal>
which technically means I'm overcommitted but in the magical future of cheap fast RAM and cheaper fast secondary storage, wheeeeeeeee
<geist>
and also dont forget M1+ cpus just have compression instructions built in
<geist>
risc baby!
<heat>
very risc much wow
<zid>
My friend implemented 'page compression' for the long-running java program that his device was designed to run.
<zid>
And by 'page compression' I mean every half an hour it'd free any page that hadn't been used since last time it checked.
<zid>
java program stayed rock stable, and stopped OOMing :p
<heat>
LRU for dummies
<zid>
I mean that's just an LRU, yes, but it's funnier explained that way
<heat>
gosh that's not even LRU
<heat>
NRU
<heat>
it's more or less how SVR4 did page replacement too
<zid>
that's your buzzword for this week I noticed btw
<zid>
page replacement
<zid>
it's a step up from pessimal or spinlock I guess
<bslsk05>
lkml.org: LKML: Linus Torvalds: Re: [Regression w/ patch] Media commit causes user space to misbahave (was: Re: Linux 3.8-rc1)
<mjg>
well known klassikkk
<sham1>
If I have to give any props to Windows (aside from it having page views to files, obviously), it's that they don't need to deal with this as much due to how you call the syscalls
<heat>
don't forget the paged heaps
<heat>
they're great
<sham1>
Yes
<sham1>
And it also has a book from two decades ago about its design which you can't even verify
<heat>
that's untrue
<heat>
the latest windows internals covers windows 10 even
<sham1>
Err yeah
<sham1>
I only remembered the one from like original NT, 2K and so on
<heat>
windoze has better books about its internals than linukz
<heat>
oh wait, is it sanitizers they are lacking?
<netbsduser>
they disabled tmpfs because they bit rotted it
<mjg>
i don't know about sanitizers
<netbsduser>
there was also a problem with it having an improper implementation with UVM on openbsd
<mjg>
tmpfs is not something they /object/ to but as netbsduser said there was nobody to work on it
<netbsduser>
namely if you mmap'd a tmpfs file, you would end up with a duplicate set of pages
<mjg>
that came from the original netbsd port
<mjg>
i would not be shocked if that was the case on netbsd as well
<netbsduser>
no, this was one of the things tmpfs was invented specifically to avoid on netbsd
<mjg>
freebsd has the netbsd port of tmpfs
<mjg>
and it came with this bullshit
<mjg>
had to be fixed later
<netbsduser>
the principle was that tmpfs files should have their data represented as UVM objects and be mappable directly into the address spaces of processes
<netbsduser>
if freebsd had that issue, i assume it's simply the case that they didn't do the adaptation immediately
<mjg>
now, while this does not mean netbsd definitely has (or had) this problem
<mjg>
i would suspect it very much does
<mjg>
because of the stuff i had seen there, for example linear scans in directories
<mjg>
it's basically turbo inefficient
<netbsduser>
netbsd tmpfs was designed inspired by the sunos tmpfs paper and this unity with mmap() was the centrepiece of the whole thing
<bslsk05>
github.com: src/sys/fs/tmpfs/tmpfs_vnops.c at ec9336561227af07020742d7fb925c32c9e1d6bd · NetBSD/src · GitHub
<heat>
oh cool thanks
<netbsduser>
if you look in tmpfs.h you'll see tmpfs nodes have a tn_aobj anonymous uvm object, you can also see another sunos inheritance in the read vnode op
<mjg>
that uvm object may be there to facilitate mmap to begin with
<mjg>
or any other i/o
<mjg>
it was literally the case on freebsd
<mjg>
as in you need one to do anything
<mjg>
does not mean there are no duplicated pages
<netbsduser>
if you look at tmpfs_read, you can see that uiomove is used to do i/o by directly copying from a kernel-space mapping of the same uvm object; tmpfs_getpages also retrieves pages from the same object
<mjg>
as i said the uvm thing may be needed to do anything to begin with
<mjg>
you want to make a case, show me the code refing the existing page on mmap
eddof13 has joined #osdev
<mjg>
or the fault
<netbsduser>
i already pointed you to tmpfs_getpages
Vercas has quit [Remote host closed the connection]
<mjg>
so... the code you linked did not even support mmap
<mjg>
i don't see any magic in this commit to avoid duplicating pages
<mjg>
and i'm willing to bet they are getting duped
<netbsduser>
what do you think the function of the getpages vnode op is?
<netbsduser>
to grab pages so you can duplicate them after getting them?
<mjg>
on netbsd? i don't know what they do
<gog>
what's the opposite of a net
<mjg>
you can trivially make the claim that since getpages is used on openbsd
<mjg>
it clearly does not dup pages
<gog>
i'm gonna fork netbsd and call it <opposite of a net> bsd
<gog>
the antonym
<sham1>
anti-net-bsd
<sham1>
offlinebsd
<mjg>
workbsd
elastic_dog has quit [Ping timeout: 240 seconds]
<mjg>
only supports 1 arch
<mjg>
... but it actually works
<mjg>
:d
<sham1>
ITANIUM
<netbsduser>
you can see that getpages is passed as one of its arguments a pointer `struct vm_page **a_m` and the job of the getpages routine is to put page pointer(s) (potentially more than one for efficiency) into the pointer(s) pointed to by the same
<mjg>
netbsduser: you can literally state this to claim that openbsd does not dup pages for tmpfs
<netbsduser>
if there is an implementation of that in openbsd which does the same, picking out pages of the uvm object and plopping them in, then i will say, mea culpa, i told a lie about openbsd
<mjg>
mate the api returning a bunch of pages is expected
<mjg>
whether they are duped or not is orthogonal
<mjg>
the very same api *WAS* duping pages on freebsd
<mjg>
and then the underlying func was changed to not do it
<mjg>
iow not an argument
<netbsduser>
perhaps because freebsd didn't care to adapt the code initially
<mjg>
i also stress the initial tmpfs code on netbsd clearly did not support mmap to begin with
<mjg>
as evidenced by the above commit
<netbsduser>
that surprises me, but i will accept that it sounds as if it didn't initially support it, which does surprise me
<mjg>
this bit surprises me too
<netbsduser>
as that was one of the objectives of the whole thing
<mjg>
i would assume the objective was to avoid having to fuck with md + ufs on it
<netbsduser>
i am looking at openbsd's implementation and i don't see anything comparable that arranges for the pages of the uvm object to be supplied on demand
<mjg>
look mate this convo exhausted my "curiosity" in the subject
<netbsduser>
it does still do as netbsd does (reading the tmpfs file by mmap'ing it into the kernel's address space) but openbsd has no ubc so they do a fresh mapping on every read and then unmap it right away
elastic_dog has joined #osdev
<heat>
>(reading the tmpfs file by mmap'ing it into the kernel's address space)
<heat>
what?
<netbsduser>
heat: a sunos inheritance
<heat>
aren't the pages always mapped?
<heat>
is this some high-mem kind of thing?
<netbsduser>
originally they had a permanent mapping of the whole file, but in the commit mjg linked that was replaced with UBC use
<netbsduser>
UBC being a cache of mappings in kernel-space of (parts of) files
<netbsduser>
it can also use the direct map to do this without needing to map anything, if there is a direct map available
troseman has joined #osdev
troseman has quit [Client Quit]
troseman has joined #osdev
xenos1984 has quit [Read error: Connection reset by peer]
osdever has joined #osdev
<Ermine>
gog: may I pet you
<osdever>
hello, who knows about io request packets??
troseman has quit [Quit: troseman]
stolen has quit [Quit: Connection closed for inactivity]
eddof13 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<osdever>
Kernel
SGautam has quit [Quit: Connection closed for inactivity]
mahk has joined #osdev
<nikolar>
kernal
<GeDaMo>
Colonel
MiningMarsh has joined #osdev
<acidx>
mahk: uzix? hadn't heard that word for a *long* time
Hammdist has joined #osdev
troseman has quit [Quit: troseman]
zxrom has quit [Read error: Connection reset by peer]
zxrom has joined #osdev
gog has joined #osdev
goliath has joined #osdev
kof123 has quit [*.net *.split]
kof123 has joined #osdev
heat has joined #osdev
<heat>
posix AIO
<heat>
the bestests most AIO interface ever
xenos1984 has quit [Ping timeout: 245 seconds]
eddof13 has joined #osdev
eddof13 has quit [Client Quit]
xenos1984 has joined #osdev
heat has quit [Ping timeout: 246 seconds]
danilogondolfo has quit [Quit: Leaving]
<ChavGPT>
quality triage by heat: is it posix?
<gog>
hi
<gog>
i'm posix
<nikolar>
hello posix
* childlikempress
pets posix
nikolar is now known as posix
posix is now known as nikolar
<Ermine>
hello posix, I'm N.T.
<kof123>
hi, i'm cow tools guy
<kof123>
*cow
<kof123>
i have to check, but the thing is, his barn was standing. it is only the humans that are confused, they work fine for him
troseman has joined #osdev
terminalpusher has joined #osdev
sprock has quit [Ping timeout: 260 seconds]
sprock has joined #osdev
GeDaMo has quit [Quit: That's it, you people have stood in my way long enough! I'm going to clown college!]
xenos1984 has quit [Ping timeout: 246 seconds]
goliath has quit [Quit: SIGSEGV]
<netbsduser>
posix aio is one thing
<netbsduser>
but whether it's backed by a profoundly asynchronous layered driver model makes all the difference
zxrom has quit [Remote host closed the connection]
zxrom has joined #osdev
<gog>
i like that
* kof123
.oO( it is almost MVC )
<gog>
i don't like that as much
<gog>
but it's what i do every day
goliath has joined #osdev
<sham1>
MVC!
troseman has quit [Quit: troseman]
stolen has quit [Quit: Connection closed for inactivity]
<gog>
fibsh
<zid>
UMVC3 when?
<sham1>
MVC kernel when
eddof13 has joined #osdev
Matt|home has joined #osdev
<zid>
I need
<zid>
the black fizzy fluid
<Ermine>
gog: may I pet you
eddof13 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
dude12312414 has joined #osdev
x8dcc has joined #osdev
<x8dcc>
Hello everyone, after a pause with programming in general and osdev in particular, I decided to come back. I was adding multitasking and I need to save the SSE state using the fxsave instruction. My question is that in the example in wiki.osdev.org, they use: "fxsave [SavedFloats]", shouldn't it be "fxsave SavedFloats"?
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
<zid>
not typically, in x86 syntax
<zid>
it's accessing memory so that's a memory operand, and gets []
<x8dcc>
yeah but isn't using [SavedFloats] the same as passing the dword at SavedFloats? Instead of the address of SavedFloats
<zid>
only if you think mov dword [rax], 12
<zid>
is the same as passing the contents at *rax to mov, along with 12
<zid>
which I'd assume you don't
<zid>
`inc byte [rsi]`
<x8dcc>
hmm I think I know what you mean but that mov makes sense as *rax += 12
<zid>
what about the inc then
<zid>
'do operation on the memory at *rsi'
<x8dcc>
yeah
<x8dcc>
yeah I know what you mean now
<x8dcc>
it just feels weird to have a big 512byte variable
<zid>
yea, it's one of those "huh that looks odd but makes total sense, I must be the weird one" things, along with lgdt and friends
<zid>
(lgdt takes a pointer to.. 10 bytes?)
<x8dcc>
so it would be more of *ptr = fxsave() instead of fxsave(*ptr)
sprock has quit [Remote host closed the connection]
<zid>
I think you're just showing how poor of a mapping it is to treat them as parameters that are passed by value
<x8dcc>
and yeah, just checked and I did lgdt [descriptor] without issue
<zid>
because that fails for 'inc'
<x8dcc>
but *rsi++ makes sense in my head :p
<zid>
inc(*rsi) doesn't though
<zid>
it needs to be *rsi = inc(*rsi)
<x8dcc>
yeah exactly
<zid>
neither of your schema fit even `inc`, so imo your schema is bad and should feel bad
<x8dcc>
the weirdness was how big the "parameter" was
<zid>
it suddenly highlighted the subtle out-of-touchness of your mental model
<zid>
You're right though, the [] thing is actually weird.
<x8dcc>
well I understand now, but it didn't feel weird because inc [rsi] was just like "yeah so nasm understands I want to do *rsi++"
<zid>
It's totally unnecessary, and is only for people
<x8dcc>
well, you helped me understand once again dear zid, thank you
<x8dcc>
you won't be removed from my project's credits :p
<zid>
like, inc [0xDEADBEEF] just encodes to ?? EF BE AD DE, [] is just how we write memory address literals into instructions
<zid>
in assemblers
<zid>
there's nothing saying the [] *have* to be there, it's just convention to disambiguate inc [rsi] from inc rsi, you could do it other ways, like inc rsi vs inc.m rsi, then you'd just have fxsave.m ptr instead
<x8dcc>
I see
<clever>
and RISC based cpu's just ban `inc [rsi]` style, so there is no need to disambiguate
<zid>
but then you'd still have the un-orthogonality, because there'd be no `fxsave` without the .m
sprock has joined #osdev
<x8dcc>
So you couldn't escape from my question :D
vancz_ is now known as vancz
eddof13 has joined #osdev
x8dcc has left #osdev [ERC 5.4 (IRC client for GNU Emacs 28.2)]
<gog>
hi
alexander has quit [Quit: ZNC 1.8.2+deb3.1 - https://znc.in]
<zid>
Jello, Hog.
<zid>
Sorry, sow.
<nikolar>
Gollo, Heg
alexander has joined #osdev
osdever has quit [Quit: CGI:IRC (Session timeout)]
bgs has quit [Remote host closed the connection]
JTL has quit [Ping timeout: 250 seconds]
eddof13 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<gog>
meow
<gog>
oink
<ChavGPT>
burp
troseman has joined #osdev
Yukara is now known as meisaka
xenos1984 has joined #osdev
Turn_Left has quit [Read error: Connection reset by peer]
<immibis>
i am looking at the sata specification and it's based around transferring copies of the PATA register block, and there's a PIO emulation mode where the drive tells the host adapter how many bytes the CPU is going to write to the host adapter. Yuck. At least the DMA packets use "buffer identifier + offset" instead of physical address
<immibis>
(apart from that it seems pretty sane)
<immibis>
(they have integrated too many drive electronics)