<Ermine>
windows has a superfetch thingie which is related to swap
<Ermine>
idk if it swaps proactively
<geist>
yah i think that basically tracks the paging behavior of a boot, and plays it back proactively on next boot
<geist>
in an oversimplified nutshell
junon has joined #osdev
<junon>
So as I understand it, there is never DTB on x86_64, only ACPI, and on ARM there is never ACPI but always PSCI and, depending, either DTB or baked-in values based on SVD, correct? And DTB can say "I don't have information for this particular thing (e.g. booting a core), check PSCI"?
<junon>
Is that accurate?
<junon>
and are SVDs just an STM thing or do other manufacturers use SVD / something similar?
<geist>
on ARM there can be ACPI too
<geist>
server based ARMs tend to be ACPI based (with or without DTB)
<geist>
i have no idea what SVDs are
<junon>
SVDs are the register description files that STM distributes, must be an STM thing.
<junon>
So when there's both ACPI and PSCI, which one takes precedence? Is it a choice of the OS, or if there's ACPI is there a lack of PSCI?
<junon>
Seems linux tries to use DT first then ACPI if there is no DT. I can't find anything suggesting there's both PSCI and ACPI on the same chip.
<zid>
I have misplaced my spider friend, if I wake up screaming, I found him
<geist>
heh i just found a giant spider on the wall in the bathroom
<bslsk05>
twitter: <DrawsMiguel> further. to be a good systems programmer you should acquire other skills that are not programming. for example, mine are:   - cooking  - drawing furries  - being a homosexual
<kazinsal>
furries run like 90% of network infrastructure
<kazinsal>
shit is real
<kazinsal>
if you're at a party full of furries you've got a 50/50 chance that any given person you talk to is either a netops dork or a tradie
<bslsk05>
man7.org: swapon(8) - Linux manual page
<heat_>
"Swap files on Btrfs are supported since Linux 5.0 on files with nocow attribute"
<gorgonical>
the man pages also now say that NFS doesn't work
<heat_>
it's definitely implemented, whether it works or not, i dunno
<kof673>
i was using freebsd 10 or something, so it was still there, it was a thing for sure
<kof673>
i think you are better off, even on a diskless system, giving it a local swap disk (or disks), doing the rest over the network, and wired preferably, not wireless
<kof673>
and raid i think was supposedly "Inexpensive" originally, so put the good disks on the server(s), if the crappy "swap disk" dies, no big loss, the important data is elsewhere
<kof673>
i think you have to "export" nfs swap a certain way too, anyways...
<netbsduser>
swapoff is one of those things i do not plan to support
<netbsduser>
and if i ever did, i would treat it as a very painful and expensive procedure
<heat_>
linux swapoff scans all anon PTEs for swap entries that point to that exact swap area
<heat_>
it's terribad
<heat_>
all anon PTEs meaning literally all address spaces
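A minimal C sketch of the kind of scan heat_ describes: every address space's anonymous PTEs get walked, and any swap entry pointing into the area being disabled is read back in. The structures and helpers (address_space, pte_iter_*, swap_entry_area) are hypothetical stand-ins, not the actual Linux code:

    /* Hypothetical sketch of a swapoff-style scan; names are illustrative. */
    struct address_space;
    struct swap_area { int id; };
    struct pte_iter { void *cursor; };

    /* assumed helpers, not real kernel APIs */
    extern struct address_space *first_address_space(void);
    extern struct address_space *next_address_space(struct address_space *as);
    extern void pte_iter_begin(struct pte_iter *it, struct address_space *as);
    extern int  pte_iter_next(struct pte_iter *it, unsigned long *vaddr, unsigned long *pte);
    extern int  pte_is_swap_entry(unsigned long pte);
    extern int  swap_entry_area(unsigned long pte);
    extern void swap_in_page(struct address_space *as, unsigned long vaddr);

    /* Disable one swap area: every anon PTE in every address space has to be
     * visited to find swap entries that point into it. */
    static void swapoff_scan(struct swap_area *area)
    {
        struct address_space *as;

        for (as = first_address_space(); as; as = next_address_space(as)) {
            struct pte_iter it;
            unsigned long vaddr, pte;

            pte_iter_begin(&it, as);
            while (pte_iter_next(&it, &vaddr, &pte)) {
                if (pte_is_swap_entry(pte) && swap_entry_area(pte) == area->id)
                    swap_in_page(as, vaddr);   /* fault the page back into RAM */
            }
        }
    }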
<netbsduser>
perhaps it's just not worth doing anything else
<netbsduser>
that's how i would do it
<heat_>
yes but it requires you to form a linked list of address spaces or something
<heat_>
which DOESNT SCALE SUN ENGINEERING ETHOS
<mjg>
did someone say LOL
<heat_>
LOL
<mjg>
yo heat_, want some sun engineering ethos
<heat_>
yes
<mjg>
lemme show you
<heat_>
who wouldn't want some sun engineering ethos
<mjg>
suppose you have unrelated processes having a file open, each a different one
<mjg>
the file is fully cached in memory 'n shit
<mjg>
in your assessment, how much LOCK CONTENTION is there when these poor fucks read their own files at the same time
<mjg>
there is a weird semi-distributed linked list which they hash into
<mjg>
and which has bufs removed from and added back on every read
<mjg>
there are tons of conflicts when hashing (go figure) and every such read contends on a lock twice
<heat_>
linked list of what?
<heat_>
pagen?
<mjg>
some fucken' bufs
<mjg>
backing the shite
<mjg>
i did not check specifically
<heat_>
oh, no page cache?
<mjg>
point is you have 2 processen minding their own business
<mjg>
and they can still contend on zfs (lol)
<mjg>
i would understand if i/o was needed 'n shit, but no
<mjg>
's all fully cached
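A rough C sketch of the pattern mjg is complaining about, assuming a hashed list of bufs where each bucket has a mutex and every cached read unlinks and re-links the buf: unrelated files whose bufs land in the same bucket contend on that lock twice per read even though no I/O happens. All names here are made up for illustration:

    #include <pthread.h>
    #include <string.h>

    #define NBUCKETS 256

    struct buf {
        struct buf *next;
        unsigned long key;      /* (object, offset) squashed into one key */
        char data[4096];
    };

    /* One lock per bucket; mutexes assumed initialized at startup. */
    struct bucket {
        pthread_mutex_t lock;
        struct buf *head;
    } table[NBUCKETS];

    static struct bucket *hash(unsigned long key)
    {
        return &table[key % NBUCKETS];
    }

    /* A fully cached read still takes the bucket lock twice: once to pull
     * the buf off the list, once to put it back. */
    void read_cached(unsigned long key, void *out, size_t len)
    {
        struct bucket *b = hash(key);
        struct buf **pp, *bp = NULL;

        if (len > sizeof bp->data)
            len = sizeof bp->data;

        pthread_mutex_lock(&b->lock);                /* contention point #1 */
        for (pp = &b->head; *pp; pp = &(*pp)->next) {
            if ((*pp)->key == key) {
                bp = *pp;
                *pp = bp->next;                      /* unlink for the duration */
                break;
            }
        }
        pthread_mutex_unlock(&b->lock);

        if (!bp)
            return;                                  /* miss path elided */

        memcpy(out, bp->data, len);                  /* the actual "read" */

        pthread_mutex_lock(&b->lock);                /* contention point #2 */
        bp->next = b->head;                          /* put it back */
        b->head = bp;
        pthread_mutex_unlock(&b->lock);
    }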
<heat_>
i was told it was the last word on filesystems
<mjg>
it was the last word in sun engineering ethos
<zid>
last word on filesystems is zfs
<zid>
that's why it starts with z
<heat_>
sun left us engineering ethos for the ages
<mjg>
now here is some advice heat
<mjg>
i just wrote a patch
<mjg>
works great
<mjg>
but i have a feelin' i missed something
<mjg>
so i'm gonna sleep on it instead of posting
<mjg>
or committing
<heat_>
cool
<mjg>
perhaps onyx development would have been less regrettable
<mjg>
if you followed my engineering ethos
<heat_>
perhaps onyx development would have been less regrettable if you shut the fuck up and send some patches
<mjg>
well i did send a patch
<mjg>
singular
<mjg>
lemme figure out one more
<mjg>
lmao
<mjg>
/* eww eww ewew eww eww eww eww*/
<mjg>
maybe i'll add one "eww"
<heat_>
it's a really fuckin yucky hack
<mjg>
i note ewew instead of eww
<mjg>
i'll patch that
<heat_>
don't forget the signed-off-by
<mjg>
PR opened!
<immibis>
why would a linked list of all address spaces be a problem? 2 extra pointers per address space? unless you are creating and deleting them constantly? it doesn't matter that swapoff is slow, in fact it should be slow when that helps make other things fast
<heat_>
oh funny, this one is a separate one just populated when swapping
<heat_>
whether lock contention happens here is unknown to me, probably not. swapping isn't really scalable as-is anyway
<immibis>
haven't lockless linked lists been solved?
<heat_>
no
<mjg>
:dd
<mjg>
the solution is mostly to not use them
<immibis>
RCU?
<netbsduser>
if you unconditionally add all anonymous vm objects and processes to a linked list, it doesn't sound too frightful to me
<netbsduser>
hopefully you don't do that often enough to be frightful
<immibis>
is there an obvious better structure i'm missing? deques are linked lists with a constant factor. vectors just no. a randomly sorted linked heap?
<heat_>
RCU'ing over large swaths of code (that may even sleep) is not really possible / defeats the purpose
<heat_>
what? linked list is _the worst_
<mjg>
no amount of rcu helps if you need to *change* stuff
<immibis>
i thought the problem was just adding and deleting list entries
<heat_>
vectors are good (But It Depends), deques are okay
<mjg>
good news is that the question is obsolete
<heat_>
red-black trees and all that shit are meh, okay-ish
<mjg>
just RUST
<heat_>
oh you know what i found out yesterday?
<netbsduser>
to try to decontend a little bit that big linked list of things that can have swap entries in them
<mjg>
that the solaris diaspora likes rust?
<heat_>
rustc supports ASAN and KASAN on unsafe {} code
<netbsduser>
i fancy i would just do one of those - what do you call them - replicated rwlocks
<mjg>
that is nice
<mjg>
some c parity
<heat_>
also KCSAN and all that shit
<netbsduser>
where there's one per core and you acquire them all for write-locking
<mjg>
that's a known and terrible idea
<mjg>
it was great with like up to 8 cores
<mjg>
goes to shit the more you need it
<netbsduser>
i might not actually, if write locking was only needed for the swapoff case it would be understandable
<heat_>
hmm why does it go to shit?
<netbsduser>
but since you need it more often it's not on
<heat_>
if writing really is mega rare
<mjg>
a per-cpu rw lock which can be sensibly taken for writing is a solved problem
<netbsduser>
i would also be keen to hear how it's bad if you have a write-infrequently case
<mjg>
(btw netbsd has a very frequently taken case of the sort, hilarity ensued)
<mjg>
heat_: this is only tolerable if this happens like once per boot
<mjg>
and even then why would you do it
<mjg>
imagine a box with -- say - 512 threads
<mjg>
are you gonna take 512 fucking locks at the same time
<heat_>
i guess
<mjg>
again it's an idea from when you had a core count you could enumerate on your fingers
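A minimal C sketch of the "replicated rwlock" netbsduser describes, with the cost mjg points out baked in: one rwlock per CPU, readers take only their local one, writers loop over all of them. This is illustrative, not any particular kernel's API, and it assumes a reader stays on the same CPU between lock and unlock:

    #include <pthread.h>

    #define NCPUS 512   /* mjg's example box */

    /* One rwlock per CPU, crudely padded so readers on different CPUs
     * never bounce the same cache line. */
    struct percpu_rwlock {
        struct {
            pthread_rwlock_t lock;
            char pad[64];
        } cpu[NCPUS];
    };

    /* Readers: take only the lock for the CPU they're running on.
     * Independent readers never contend with each other. */
    void pr_read_lock(struct percpu_rwlock *l, int cpu)
    {
        pthread_rwlock_rdlock(&l->cpu[cpu].lock);
    }

    void pr_read_unlock(struct percpu_rwlock *l, int cpu)
    {
        pthread_rwlock_unlock(&l->cpu[cpu].lock);
    }

    /* Writers: take every CPU's lock, in order. This is the part that
     * stops scaling: on a 512-thread box one write-lock is 512
     * acquisitions, each potentially waiting out that CPU's readers. */
    void pr_write_lock(struct percpu_rwlock *l)
    {
        for (int i = 0; i < NCPUS; i++)
            pthread_rwlock_wrlock(&l->cpu[i].lock);
    }

    void pr_write_unlock(struct percpu_rwlock *l)
    {
        for (int i = NCPUS - 1; i >= 0; i--)
            pthread_rwlock_unlock(&l->cpu[i].lock);
    }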
<mjg>
you know, great shit for 2003
<heat_>
good news, i still do
<mjg>
yes i know the onyx ambition
<heat_>
mon i have good rcu and freebsd does not
<mjg>
:(
<mjg>
oh wait
<mjg>
it's no longer a burn
<mjg>
my linux has better rcu than your onyx mate
<heat_>
now you have mega rcu :(
<netbsduser>
if rcu is so good why is there 7 of them in linux
<heat_>
because it's THAT GOOD
<heat_>
it's like fentanyl
<mjg>
so good you can't get enough
<netbsduser>
rcu classic edition, rcu with trees, sleeping rcu, sleeping rcu with trees, bottom-halves' rcu, rcu for sched, i forgot the other one
<heat_>
there's no classic rcu, there's a tiny rcu
<mjg>
synchronize_rcu() is the shit
<heat_>
you forgot tasks rcu and preemptible rcu
<immibis>
mjg: yes for a write-infrequently case you are absolutely going to take 512 locks
<netbsduser>
if there's no rcu classic edition what did i copy from the patent?
<netbsduser>
heat_: i thought preemptible rcu was sleeping rcu
<heat_>
no
<heat_>
there was a classic rcu version like... in 2.6.X, they eventually whacked it for tiny rcu and tree rcu
<heat_>
preemptible rcu is a special mode for CONFIG_PREEMPT=y
<heat_>
where you don't actually disable preemption on a rcu_read_lock()
<netbsduser>
i haven't read much about rcu with trees but i like the concept, which i think i understand the gist of
<heat_>
so, like, your CONFIG_PREEMPT=y desktop kernel will not struggle with interactivity on large RCU read sections
<heat_>
whereas if you explicitly picked =n, that's not a problem, you don't care
<netbsduser>
unfortunately if i implemented it based on that gist then paul mckenney himself would turn up to serve papers to me for infringing one of his 600 patents
<heat_>
dropping the read lock and cond_resched() tends to be Good Enough for server workloads
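A small kernel-style sketch of the pattern heat_ means: a long RCU-protected walk that periodically drops the read lock and yields instead of relying on preemptible RCU. rcu_read_lock(), rcu_read_unlock(), cond_resched() and list_for_each_entry_rcu() are the real Linux primitives; the item list, process_item(), and the resume-by-key restart (which assumes the list is kept sorted by key) are made up for illustration:

    #include <linux/rculist.h>
    #include <linux/sched.h>

    struct item {
        struct list_head node;
        unsigned long key;
    };

    extern struct list_head item_list;            /* RCU-protected, sorted by key */
    extern void process_item(struct item *it);

    static void walk_items(void)
    {
        struct item *it;
        unsigned long resume_key = 0;
        int budget;

    restart:
        budget = 128;                             /* arbitrary batch size */
        rcu_read_lock();
        list_for_each_entry_rcu(it, &item_list, node) {
            if (it->key < resume_key)
                continue;                         /* handled before we yielded */

            process_item(it);
            resume_key = it->key + 1;

            if (--budget == 0) {
                /* Leave the read-side critical section so grace periods can
                 * complete and the scheduler can run something else, then
                 * restart the walk; 'it' may be freed once we unlock. */
                rcu_read_unlock();
                cond_resched();
                goto restart;
            }
        }
        rcu_read_unlock();
    }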
<mjg>
it's a whack-a-mole "where do i need to cond_resched now"
<netbsduser>
oh, i think i saw some patent or other turn up relating to this preemptible rcu
<netbsduser>
i think they were promoting it for RT/Linux
<heat_>
mjg, allegedly they want to get rid of that
<immibis>
how much of this complexity is caused by threading the kernel instead of only having user threads and one kernel "thread" per CPU?
<heat_>
none
<immibis>
if it's one thread per CPU you don't need to think about other kernel threads sleeping because they don't. but then you have to write them more complexly in async style in C, so...
<netbsduser>
immibis: you mean by "one kernel thread per cpu" the situation whereby the kernel is always coresident with user programs and control sometimes moves into the kernel?
<netbsduser>
i would ask how you would carry out actions like being the page daemon if limited to that
<heat_>
you can very trivially see all sorts of multiprocessing issues when threads = cpus (aka approximated to one thread per CPU, in the kernel)
<immibis>
not sure what you mean by the first part. you want to eliminate system calls by running everything in ring 0 and calling the kernel directly? that's not relevant to threading models
<immibis>
(1) "being" isn't an action and (2) you'd have to have a page state machine rather than a page daemon. which would make it more complex.
<netbsduser>
immibis: no, i just don't understand what you're getting at
<netbsduser>
you can't have one kernel thread per CPU because all user threads are also kernel threads (barring odd threading models) and user programs regularly enter the kernel's context
<immibis>
user code is only running on a CPU when kernel code is not running on that CPU
<netbsduser>
it matters not that they have to do a syscall, it's still a call
<immibis>
a CPU is only doing one thing at a time, you know - threads are an abstraction invented by the OS
<nikolar>
there are also hyperthreads
<nikolar>
so not quite
<immibis>
what if the kernel just didn't implement that abstraction in the kernel
<immibis>
hyperthreads are virtual CPUs
<heat_>
kthreads are like... usually the least of your worries
<nikolar>
immibis: what are threads if not virtualised cpus
<immibis>
software virtualized cpus
<heat_>
oh look at mr philosopher over here
<nikolar>
lol
<nikolar>
hello heat_
<heat_>
hi
<immibis>
why does the kernel need to virtualize cpus for itself? it knows about your actual cpus
<nikolar>
how's it going
<heat_>
i'm aight how about you
<nikolar>
not bad, not bad
<immibis>
should i somehow ever feel like writing a kernel again i want to try writing one with a main loop like while(is_computer_on() && !is_computer_on_fire()) {process *p = find_runnable_process(); syscall *s = run_until_syscall(p); if(s) handle_syscall(p, s);}
<nikolar>
so you just want to smush syscall handling and scheduling into one thing
<nikolar>
weird but sure
<immibis>
just to see how that style goes - syscall is not treated as entry into a kernel subroutine, but exit from a user mode subroutine
<immibis>
nikolar: it's a one line description of a vague idea. don't read too much into it.
<nikolar>
i mean there's only one way i can read that
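Expanding immibis's one-liner into a slightly fuller, still hypothetical sketch of the "one kernel loop per CPU" style: the kernel is a plain event loop, and a syscall is just the reason user mode stopped running rather than an entry into a kernel thread. Every function here is made up:

    /* Hypothetical per-CPU kernel main loop in the style immibis describes:
     * no kernel threads, no in-kernel blocking; a syscall is the value
     * run_until_syscall() returns, not a call into a kernel stack. */

    struct process;
    struct syscall;

    extern int  is_computer_on(void);
    extern int  is_computer_on_fire(void);
    extern struct process *find_runnable_process(void);   /* the scheduler */
    extern struct syscall *run_until_syscall(struct process *p);
    extern void handle_syscall(struct process *p, struct syscall *s);
    extern void idle(void);                                /* halt until an interrupt */

    void kernel_main_loop(void)
    {
        while (is_computer_on() && !is_computer_on_fire()) {
            struct process *p = find_runnable_process();
            if (!p) {
                idle();                 /* nothing runnable: wait for an interrupt */
                continue;
            }

            /* Returns when the process traps back in: a syscall, or NULL
             * for a timer tick / fault handled elsewhere. */
            struct syscall *s = run_until_syscall(p);
            if (s)
                handle_syscall(p, s);   /* must not block: queue work and return */
        }
    }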