klange changed the topic of #osdev to: Operating System Development || Don't ask to ask---just ask! || For 3+ LoC, use a pastebin (for example https://gist.github.com/) || Stats + Old logs: http://osdev-logs.qzx.com New Logs: https://libera.irclog.whitequark.org/osdev || Visit https://wiki.osdev.org and https://forum.osdev.org || Books: https://wiki.osdev.org/Books
Turn_Left has quit [Read error: Connection reset by peer]
Matt|home has quit [Quit: Client closed]
<Ermine> windows has a superfetch thingie which is related to swap
<Ermine> idk if it swaps proactively
<geist> yah i think that basically tracks the paging behavior of a boot, and plays it back proactively on next boot
<geist> in an oversimplified nutshell
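(To make the record-and-replay idea above concrete: a toy user-space sketch, assuming a previously recorded log of path/offset/length triples from the last boot. The log path and format are invented; readahead(2) is a real Linux call.)

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Replay a recorded boot's page-cache accesses so the data is warm
     * before anything faults on it. Log lines: "<path> <offset> <length>". */
    int main(void)
    {
        FILE *log = fopen("/var/lib/prefetch/boot.log", "r"); /* invented path */
        char path[4096];
        long off, len;

        if (!log)
            return 1;
        while (fscanf(log, "%4095s %ld %ld", path, &off, &len) == 3) {
            int fd = open(path, O_RDONLY);

            if (fd < 0)
                continue;
            readahead(fd, off, len); /* asynchronously pull pages into the cache */
            close(fd);
        }
        fclose(log);
        return 0;
    }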
junon has joined #osdev
<junon> So as I understand it, there is never DTB on x86_64, only ACPI, and on ARM there is never ACPI but always PSCI and, depending, either DTB or baked-in values based on SVD, correct? And DTB can say "I don't have information for this particular thing (e.g. booting a core), check PSCI"?
<junon> Is that accurate?
<junon> and are SVDs just an STM thing or do other manufacturers use SVD / something similar?
<geist> on ARM there can be ACPI too
<geist> server based ARMs tend to be ACPI based (with or without DTB)
<geist> i have no idea what SVDs are
<junon> SVDs are the register description files that STM distributes, must be an STM thing.
<junon> So when there's both ACPI and PSCI, which one takes precedence? Is it a choice of the OS, or does having ACPI imply a lack of PSCI?
<junon> Seems linux tries to use DT first then ACPI if there is no DT. I can't find anything suggesting there's both PSCI and ACPI on the same chip.
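(For the DTB side of this: a minimal libfdt sketch showing where PSCI appears in a device tree. The /psci node's "method" property names the conduit, and a CPU node's "enable-method" is how a DTB says "boot this core via PSCI". Only a couple of the common compatible strings are checked, and error handling is minimal.)

    #include <libfdt.h>
    #include <string.h>

    /* Returns the PSCI conduit ("smc" or "hvc") if the DTB describes a
     * PSCI node, NULL otherwise. */
    static const char *psci_conduit(const void *fdt)
    {
        int off = fdt_node_offset_by_compatible(fdt, -1, "arm,psci-0.2");

        if (off < 0)
            off = fdt_node_offset_by_compatible(fdt, -1, "arm,psci");
        if (off < 0)
            return NULL;
        return fdt_getprop(fdt, off, "method", NULL);
    }

    /* True if cpu@0 is brought up via PSCI (vs. e.g. a spin-table). */
    static int cpu0_uses_psci(const void *fdt)
    {
        int off = fdt_path_offset(fdt, "/cpus/cpu@0");
        const char *em;

        if (off < 0)
            return 0;
        em = fdt_getprop(fdt, off, "enable-method", NULL);
        return em && strcmp(em, "psci") == 0;
    }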
Renfield has quit [Quit: Leaving]
Yoofie64 has quit [Read error: Connection reset by peer]
sortie has quit [Ping timeout: 276 seconds]
gog has quit [Ping timeout: 260 seconds]
Yoofie646 has joined #osdev
edr has quit [Quit: Leaving]
sortie has joined #osdev
kristinam has quit [Ping timeout: 246 seconds]
kristinam has joined #osdev
qubasa has quit [Ping timeout: 248 seconds]
<zid> I have misplaced my spider friend, if I wake up screaming, I found him
eddof13 has quit [Quit: eddof13]
eddof13 has joined #osdev
eddof13 has quit [Client Quit]
karenw has quit [Ping timeout: 252 seconds]
junon has quit [Ping timeout: 252 seconds]
netbsduser has joined #osdev
JupiterBig has quit [Ping timeout: 258 seconds]
goliath has joined #osdev
aosync has joined #osdev
FireFly has quit [*.net *.split]
remexre has quit [*.net *.split]
aws has quit [*.net *.split]
kazinsal has quit [*.net *.split]
FireFly has joined #osdev
remexre has joined #osdev
kazinsal has joined #osdev
netbsduser has quit [Ping timeout: 252 seconds]
JupiterBig has joined #osdev
asarandi has quit [Ping timeout: 252 seconds]
asarandi has joined #osdev
asarandi has quit [Max SendQ exceeded]
asarandi has joined #osdev
JupiterBig has quit [Ping timeout: 244 seconds]
JupiterBig has joined #osdev
JupiterB1g has joined #osdev
pabs3 has quit [Ping timeout: 248 seconds]
pabs3 has joined #osdev
JupiterB1g has quit [Ping timeout: 248 seconds]
JupiterBig has quit [Ping timeout: 248 seconds]
Dead_Bush_Sanpai has quit [Read error: Connection reset by peer]
Dead_Bush_Sanpai has joined #osdev
Dead_Bush_Sanpai has quit [Read error: Connection reset by peer]
Dead_Bush_Sanpai has joined #osdev
<geist> heh i just found a giant spider on the wall in the bathroom
<geist> they love to hang out near water i think
<zid> I've lost mine, maybe I ate him
<zid> https://portlandpestguard.com/wp-content/uploads/2013/06/GiantHouseSpiders.jpg Reference image, maybe don't click if you're not into spiders
<heat> spidarrrrr
coolcoder613 has quit [Ping timeout: 252 seconds]
JupiterBig has joined #osdev
JupiterB1g has joined #osdev
tanto has quit [Quit: No Ping reply in 180 seconds.]
pie_ has quit [Remote host closed the connection]
vancz has quit [Remote host closed the connection]
tanto has joined #osdev
pie_ has joined #osdev
vancz has joined #osdev
night has quit [Remote host closed the connection]
night has joined #osdev
foudfou_ has joined #osdev
foudfou has quit [Remote host closed the connection]
pabs3 has quit [Ping timeout: 252 seconds]
JupiterB1g has quit [Ping timeout: 265 seconds]
JupiterBig has quit [Ping timeout: 265 seconds]
Ermine has quit [Ping timeout: 244 seconds]
tomaw_ has joined #osdev
tomaw has quit [Read error: Connection reset by peer]
tomaw_ is now known as tomaw
energizer has quit [Quit: ZNC 1.7.0+deb0+xenial1 - https://znc.in]
gildasio has quit [Ping timeout: 260 seconds]
gildasio has joined #osdev
pabs3 has joined #osdev
arminweigl has quit [Quit: ZNC - https://znc.in]
moire has quit [Ping timeout: 276 seconds]
arminweigl has joined #osdev
moire has joined #osdev
Dead_Bush_Sanpai has quit [Read error: Connection reset by peer]
Left_Turn has joined #osdev
energizer has joined #osdev
asarandi has quit [*.net *.split]
heat has quit [*.net *.split]
wgrant has quit [*.net *.split]
MrBonkers has quit [*.net *.split]
pax_73 has quit [*.net *.split]
andreas303 has quit [*.net *.split]
Celelibi has quit [*.net *.split]
DragonMaus has quit [*.net *.split]
sskras has quit [*.net *.split]
dinkelhacker has quit [*.net *.split]
pounce has quit [*.net *.split]
j00ru has quit [*.net *.split]
colona has quit [*.net *.split]
Turn_Left has joined #osdev
GeDaMo has joined #osdev
Left_Turn has quit [Ping timeout: 244 seconds]
asarandi has joined #osdev
heat has joined #osdev
MrBonkers has joined #osdev
wgrant has joined #osdev
pax_73 has joined #osdev
andreas303 has joined #osdev
Celelibi has joined #osdev
DragonMaus has joined #osdev
sskras has joined #osdev
pounce has joined #osdev
dinkelhacker has joined #osdev
j00ru has joined #osdev
colona has joined #osdev
asarandi has quit [Max SendQ exceeded]
andreas303 has quit [Max SendQ exceeded]
andreas808 has joined #osdev
asarandi has joined #osdev
Dead_Bush_Sanpai has joined #osdev
pax_73 has quit [Ping timeout: 248 seconds]
pax_73 has joined #osdev
foudfou_ has quit [Remote host closed the connection]
foudfou has joined #osdev
catten has joined #osdev
catten has quit [Client Quit]
leg7 has joined #osdev
leg7 has quit [Remote host closed the connection]
coolcoder613 has joined #osdev
fedaykin has quit [Quit: Lost terminal]
fedaykin has joined #osdev
theyneversleep has joined #osdev
<bslsk05> twitter: <DrawsMiguel> further. to be a good systems programmer you should acquire other skills that are not programming. for example, mine are: cooking / drawing furries / being a homosexual
heat_ has joined #osdev
heat has quit [Read error: Connection reset by peer]
<kazinsal> furries run like 90% of network infrastructure
<kazinsal> shit is real
<kazinsal> if you're at a party full of furries you've got a 50/50 chance that any given person you talk to is either a netops dork or a tradie
<kazinsal> legit
bauen1 has quit [Ping timeout: 252 seconds]
gcoakes has joined #osdev
bauen1 has joined #osdev
antranigv has quit [Quit: ZNC 1.9.0 - https://znc.in]
bauen1 has quit [Ping timeout: 252 seconds]
edr has joined #osdev
bauen1 has joined #osdev
lanodan has quit [Quit: WeeChat 4.2.1]
lanodan has joined #osdev
junon has joined #osdev
junon has quit [Remote host closed the connection]
memset has quit [Remote host closed the connection]
memset has joined #osdev
<immibis> i can confirm the center of non-serious networking at bornhack was the furry village
<immibis> the camp network team provides 1Gbps ports around the camp. for all other needs, consult the furry village.
<immibis> (i got a 10G fiber VLAN trunk carrying internet and pixelflut - https://labitat.dk/wiki/Pixelflut-XDR )
<bslsk05> ​labitat.dk: Pixelflut-XDR - Labitat
<kazinsal> next summer I plan to bring some weird bespoke shit to defcon to see what people do to it
<immibis> (i only ran into pixelflut by accident because its segregated network was present on the same switch, but i'm glad i did)
<kazinsal> some kind of self contained machine with maybe like 2x100G
<kazinsal> see what defcon dorks do to it
<nikolar> lol
<immibis> at some unspecified future point i want to run pixelflut with remote dma
<immibis> i have a 40G infiniband fabric laying around
memset has quit [Remote host closed the connection]
memset has joined #osdev
bauen1 has quit [Ping timeout: 252 seconds]
bauen1 has joined #osdev
CaptainIRS has joined #osdev
goliath has quit [Quit: SIGSEGV]
bauen1 has quit [Ping timeout: 252 seconds]
CaptainIRS has quit [Remote host closed the connection]
antranigv has joined #osdev
gcoakes has quit [Ping timeout: 252 seconds]
JupiterBig has joined #osdev
<immibis> kazinsal: you could run pixelflut, if that isn't already present at defcon
eddof13 has joined #osdev
valshaped7424880 has quit [Quit: Gone]
<sskras> found an article about i286 and the ICE (in-circuit emulation): https://rep-lodsb.mataroa.blog/blog/intel-286-secrets-ice-mode-and-f1-0f-04/
<bslsk05> ​rep-lodsb.mataroa.blog: Intel 286 secrets: ICE mode and F1 0F 04 — rep lodsb
<sskras> probably of no use these days, but seemed historically very interesting to me
<zid> I think ken's blog has what a 286 does if you rep things that aren't scas/lods etc
<mjg> sskras: this kind of stuff is most welcome here
<mjg> thanks for sharing
valshaped7424880 has joined #osdev
bauen1 has joined #osdev
goliath has joined #osdev
PapaFrog has quit [Ping timeout: 244 seconds]
Dead_Bush_Sanpa1 has joined #osdev
Dead_Bush_Sanpai has quit [Ping timeout: 252 seconds]
Dead_Bush_Sanpa1 is now known as Dead_Bush_Sanpai
PapaFrog has joined #osdev
Ermine has joined #osdev
JupiterBig has quit [Quit: leaving]
Ermine_ has joined #osdev
LostFrog has joined #osdev
PapaFrog has quit [Ping timeout: 248 seconds]
<Ermine_> ping
Ermine has quit [Quit: Client closed]
Ermine_ is now known as Ermine
<heat_> pong
<Ermine> nice
<Ermine> libera didn't want to let my bouncer in
<heat_> damn bouncer got bounced
LostFrog has quit [Client Quit]
PapaFrog has joined #osdev
<Ermine> you're goddamn right
eddof13 has quit [Quit: eddof13]
PapaFrog has quit [Ping timeout: 258 seconds]
eddof13 has joined #osdev
eddof13 has quit [Client Quit]
<zid> tell ur bouncer to stop punching people
<zid> and maybe we'd let him in
freakazoid332 has quit [Read error: Connection reset by peer]
PapaFrog has joined #osdev
frkazoid333 has joined #osdev
vai has quit [Remote host closed the connection]
eddof13 has joined #osdev
gog has joined #osdev
theyneversleep has quit [Remote host closed the connection]
hwpplayer1 has joined #osdev
bliminse has quit [Quit: leaving]
netbsduser has joined #osdev
bliminse has joined #osdev
chiselfuse has quit [Remote host closed the connection]
chiselfuse has joined #osdev
op has joined #osdev
eddof13 has quit [Quit: eddof13]
eddof13 has joined #osdev
eddof13 has quit [Client Quit]
gorgonical has joined #osdev
eddof13 has joined #osdev
gog has quit [Quit: byee]
eddof13 has quit [Client Quit]
xenos1984 has quit [Read error: Connection reset by peer]
xenos1984 has joined #osdev
GeDaMo has quit [Quit: 0wt 0f v0w3ls.]
Opus has quit [K-Lined]
<geist> hewwo fronds
<heat_> hi
<nikolar> oi
<gorgonical> how goes it lads and ladettes
memset has quit [Remote host closed the connection]
memset has joined #osdev
<gorgonical> having a mid afternoon bubble tea, really enjoying myself
<heat_> i've been fucking with swap this afternoon
<gorgonical> productively?
<nikolar> lel
<heat_> yes
<heat_> i'm basically only missing the swap-in path
<heat_> and then a shitton of testing
<gorgonical> without a swap-in path you've created an evolutionary pressure for memory compactness
<gorgonical> lol
<gorgonical> the goal: never be at the back of the lru
<heat_> :)
<heat_> basically the current result is that newly faulted pages are file-filled or zero-filled instead, and promptly crash
<heat_> fwiw there are a lot of awful edge cases and possible races i need to take into account
<gorgonical> I wonder if you could use that evolutionary pressure productively
<heat_> and i'm not even thinking about swapoff which god oh god why please no
<gorgonical> If you never swap in then you're just auto-pruning stale processes
<heat_> that would be true but the current workload i'm using consumes the whole of memory with file pages
<heat_> and i'm swapping file and anon equally... so we end up swapping all of anon it seems
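(The missing swap-in leg described above is roughly this shape. Every name in the sketch is hypothetical; it's the idea, not any particular kernel's API, and it deliberately ignores the edge cases and racing faults heat_ mentions.)

    /* Fault-handler leg for a PTE holding a swap entry; hypothetical
     * names throughout, locking and concurrent faults ignored. */
    static int swap_in(struct mm *mm, unsigned long addr, pte_t pte)
    {
        swp_entry_t entry = pte_to_swp_entry(pte);
        struct page *page = alloc_page();

        if (!page)
            return -ENOMEM;
        swap_read_page(entry, page);   /* synchronous read from the swap area */
        set_pte(mm, addr, mk_pte(page, PTE_USER | PTE_RW));
        swap_free(entry);              /* release the swap slot */
        return 0;
    }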
<Ermine> if you're swapping off, you need to pull all the pages out of the swap area
<heat_> i know, it's terrible
<heat_> consider that there's no easy way to find what swap pages correspond to what virtual pages without walking all address spaces
<zid> Why the fuck does the w7 installer like, whitelist what files you can see on the install media
<zid> I put some drivers onto it so I could use them
<zid> you can't dir them to copy them
<heat_> while also considering the terrible races we can have if we offline a swap area while someone's using it
<Ermine> I guess you need to wait for it to become idle (and don't start new operations on it)
eddof13 has joined #osdev
eddof13 has quit [Client Quit]
<Ermine> Well I should stop talking possible bs
<heat_> that's not bs, it's just non-trivial to pull off
<heat_> i'll need to think about it
<heat_> but who swapoff's anyway
<kof673> eh.....swap needs to be RAIDed i guess to allow for swapping (no pun) disks....
<heat_> fun fact linux supports swap over nfs
<kof673> yes
<Ermine> I did that a couple of times in my whole life
<kof673> i do not recommend it lol
<gorgonical> I mean you can swap on top of any filesystem in linux
<heat_> you cannot
<gorgonical> can't you? you just make a file and swapon the file
<heat_> nope
<heat_> traditional linux swapping basically maps logical blocks to extents
<Ermine> Meanwhile lxc can run android containers apparently
<heat_> this obviously breaks CoW filesystems
<gorgonical> Oh, well it's never stopped me so I guess it supports many filesystems
<heat_> nfs (and cifs) support some sort of weird fs swapping that does not use this, but cow filesystems don't seem to implement this for some reason
<heat_> i mean, there's a funny dance you need to do on many filesystems. i can't remember if mkswap does it for you
<CompanionCube> i think you can swap on btrfs?
<bslsk05> ​man7.org: swapon(8) - Linux manual page
<heat_> "Swap files on Btrfs are supported since Linux 5.0 on files with nocow attribute"
<gorgonical> the man pages also now say that NFS doesn't work
<heat_> it's definitely implemented, whether it works or not, i dunno
<kof673> i was using freebsd 10 or something, so it was still there, it was a thing for sure
<kof673> i think you are better off, even a diskless system...give it a local swap disk(s), do the rest over network, and wired preferably not wireless
<kof673> and raid i think was supposedly "Inexpensive" originally, so put the good disks on the server(s), if the crappy "swap disk" dies, no big loss, the important data is elsewhere
<kof673> i think you have to "export" nfs swap a certain way too, anyways...
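(The block-to-extent mapping heat_ describes is visible from user space via the ancient, root-only FIBMAP ioctl, which is essentially the information block-level swap-file support depends on: where each logical block of the file sits on disk. /swapfile and the 8-block probe are arbitrary. A CoW filesystem relocating those blocks underneath you is exactly what breaks this, hence btrfs's nocow requirement.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/fs.h> /* FIBMAP, FIGETBSZ */

    int main(void)
    {
        int fd = open("/swapfile", O_RDONLY);
        int bsz, i;

        if (fd < 0 || ioctl(fd, FIGETBSZ, &bsz) < 0)
            return 1;
        for (i = 0; i < 8; i++) {
            int blk = i; /* FIBMAP translates logical -> physical in place */

            if (ioctl(fd, FIBMAP, &blk) < 0)
                return 1;
            printf("logical block %d -> physical block %d (%d-byte blocks)\n",
                   i, blk, bsz);
        }
        close(fd);
        return 0;
    }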
eddof13 has joined #osdev
gildasio has quit [Remote host closed the connection]
gildasio has joined #osdev
xal has quit [Quit: bye]
xal has joined #osdev
<netbsduser> swapoff is one of those things i do not plan to support
<netbsduser> and if i ever did, i would treat it as a very painful and expensive procedure
<heat_> linux swapoff scans all anon PTEs for swap entries that point to that exact swap area
<heat_> it's terribad
<heat_> all anon PTEs meaning literally all address spaces
<netbsduser> perhaps it's just not worth doing anything else
<netbsduser> that's how i would do it
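(That approach in sketch form, i.e. roughly what linux's try_to_unuse amounts to: walk every address space's PTEs and fault back anything pointing at the dying swap area. All names are hypothetical; the point is the global walk, and the global mm list plus lock it forces you to maintain.)

    /* Brute-force swapoff: O(every PTE in the system). Hypothetical names. */
    static void swapoff_unuse(struct swap_area *sa)
    {
        struct mm *mm;

        /* all_mms is a global list of address spaces; maintaining it
         * (and its lock) is the cost being weighed above. */
        list_for_each_entry(mm, &all_mms, node) {
            unsigned long va;

            for (va = mm->start; va < mm->end; va += PAGE_SIZE) {
                pte_t pte = pte_lookup(mm, va);

                if (pte_is_swap(pte) && swp_entry_area(pte) == sa)
                    fault_in_page(mm, va); /* synchronous swap-in */
            }
        }
    }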
gcoakes has joined #osdev
hwpplayer1 has quit [Quit: ERC 5.5.0.29.1 (IRC client for GNU Emacs 29.4)]
<heat_> yes but it requires you to form a linked list of address spaces or something
<heat_> which DOESNT SCALE SUN ENGINEERING ETHOS
<mjg> did someone say LOL
<heat_> LOL
<mjg> yo heat_, want some sun engineering ethos
<heat_> yes
<mjg> lemme show you
<heat_> who wouldn't want some sun engineering ethos
<mjg> suppose you have unrelated processes having a file open, each a different one
<mjg> the file is fully cached in memory 'n shit
<mjg> in your assessment, how much LOCK CONTENTION is there when these poor fucks read their own files at the same time
<mjg> there is a weird semi-distributed linked list which they hash into
<mjg> and which has bufs removed from and added back on every read
<mjg> there is tons of conflicts when hashing (go figure) and every such read contends on a lock twice
<heat_> linked list of what?
<heat_> pagen?
<mjg> some fucken' bufs
<mjg> backing the shite
<mjg> i did not check specifically
<heat_> oh, no page cache?
<mjg> point is you have 2 processen minding their own business
<mjg> and they can still contend on zfs (lol)
<mjg> i would understand if i/o was needed 'n shit, but no
<mjg> 's all fully cached
<heat_> i was told it was the last word on filesystems
<mjg> it was the last word in sun engineering ethos
<zid> last word on filesystems is zfs
<zid> that's why it starts with z
<heat_> sun left us engineering ethos for the ages
<mjg> now here is some advice heat
<mjg> i just wrote a patch
<mjg> works great
<mjg> but i have a feelin' i missed something
<mjg> so i'm gonna sleep on it instead of posting
<mjg> or committing
<heat_> cool
<mjg> perhaps onyx development would have been less regrettable
<mjg> if you followed my engineering ethos
<heat_> perhaps onyx development would have been less regrettable if you shut the fuck up and sent some patches
<mjg> well i did send a patch
<mjg> singular
<mjg> lemme figure out one more
<mjg> lmao
<mjg> /* eww eww ewew eww eww eww eww*/
<mjg> maybe i'll add one "eww"
<heat_> it's a really fuckin yucky hack
<mjg> i note ewew instead of eww
<mjg> i'll patch that
<heat_> don't forget the signed-off-by
<mjg> PR opened!
<immibis> why would a linked list of all address spaces be a problem? 2 extra pointers per address space? unless you are creating and deleting them constantly? it doesn't matter that swapoff is slow, in fact it should be slow when that helps make other things fast
<heat_> lock
eddof13 has quit [Quit: eddof13]
<mjg> linux has a linked list for mms
<mjg> it sucks ofc
terrorjack4 has quit [Quit: The Lounge - https://thelounge.chat]
<mjg> see lru_gen_add_mm/lru_gen_del_mm
<mjg> i'm gonna sort that out tho
<heat_> it's also used in swapoff
<mjg> with OBJECT CACHING
<heat_> see try_to_unuse
<heat_> in mm/swapfile.c
<heat_> oh funny, this one is a separate one just populated when swapping
<heat_> whether lock contention happens here is unknown to me, probably not. swapping isn't really scalable as-is anyway
terrorjack4 has joined #osdev
<immibis> haven't lockless linked lists been solved?
<heat_> no
<mjg> :dd
<mjg> the solution is mostly to not use them
<immibis> RCU?
<netbsduser> if you unconditionally add all anonymous vm objects and processes to a linked list, it doesn't sound too frightful to me
<netbsduser> hopefully you don't do that often enough to be frightful
<immibis> is there an obvious better structure i'm missing? deques are linked lists with a constant factor. vectors just no. a randomly sorted linked heap?
<heat_> RCU'ing over large swaths of code (that may even sleep) is not really possible/defeats the purpose
<heat_> what? linked list is _the worst_
<mjg> no amount of rcu helps if you need to *change* stuff
<immibis> i thought the problem was just adding and deleting list entries
<heat_> vectors are good (But It Depends), deques are okay
<mjg> good news is that the question is obsolete
<heat_> red-black trees and all that shit are meh, okay-ish
<mjg> just RUST
<heat_> oh you know what i found out yesterday?
<netbsduser> to try to decontend a little bit that big linked list of things that can have swap entries in them
<mjg> that the solaris diaspora likes rust?
<heat_> rustc supports ASAN and KASAN on unsafe {} code
<netbsduser> i fancy i would just do one of those - what do you call them - replicated rwlocks
<mjg> that is nice
<mjg> some c parity
<heat_> also KCSAN and all that shit
<netbsduser> where there's one per core and you acquire them all for write-locking
<mjg> that's a known and terrible idea
<mjg> it was great with liek up to 8 cores
<mjg> goes to shit the more you need it
<netbsduser> i might not actually, if write locking was only needed for the swapoff case it would be understandable
<heat_> hmm why does it go to shit?
<netbsduser> but since you need it more often it's not on
<heat_> if writing really is mega rare
<mjg> a per-cpu rw lock which can be sensibly taken for writing is a solved problem
<netbsduser> i would also be keen to hear how it's bad if you have a write-infrequently case
<mjg> (btw netbsd has a very frequently taken case of the sort, hilarity ensued)
<mjg> heat_: this is only tolerable if this happens liek once per boot time
<mjg> and even then why would you do it
<mjg> imagine a box with -- say - 512 threads
<mjg> are you gonna take 512 fucking lock
<mjg> s
<mjg> at the same time
<heat_> i guess
<mjg> again it's an idea from when you had a core count you could enumerate on your fingers
op has quit [Remote host closed the connection]
<mjg> you know, great shit for 2003
<heat_> good news, i still do
<mjg> yes i know the onyx ambition
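(The "replicated rwlock" being argued about, sketched with pthreads: readers touch only their own CPU's slot, while a writer must sweep all of them, which is mjg's objection once NCPUS is large. sched_getcpu() stands in for a kernel's cpu_id(); a kernel version would pin the thread rather than remember the slot.)

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>    /* sched_getcpu() */
    #include <stdalign.h>

    #define NCPUS 512

    /* One rwlock per CPU, each on its own cache line. */
    struct brlock {
        struct { alignas(64) pthread_rwlock_t l; } cpu[NCPUS];
    };

    static void br_init(struct brlock *b)
    {
        for (int i = 0; i < NCPUS; i++)
            pthread_rwlock_init(&b->cpu[i].l, NULL);
    }

    /* Readers only touch their own slot; the slot index is returned so
     * unlock still works if the thread migrates in between. */
    static int br_read_lock(struct brlock *b)
    {
        int cpu = sched_getcpu();

        pthread_rwlock_rdlock(&b->cpu[cpu].l);
        return cpu;
    }

    static void br_read_unlock(struct brlock *b, int cpu)
    {
        pthread_rwlock_unlock(&b->cpu[cpu].l);
    }

    /* The objection: a writer pays one acquisition per CPU. */
    static void br_write_lock(struct brlock *b)
    {
        for (int i = 0; i < NCPUS; i++)
            pthread_rwlock_wrlock(&b->cpu[i].l);
    }

    static void br_write_unlock(struct brlock *b)
    {
        for (int i = NCPUS - 1; i >= 0; i--)
            pthread_rwlock_unlock(&b->cpu[i].l);
    }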
<heat_> mon i have good rcu and freebsd does not
<mjg> :(
<mjg> oh wait
<mjg> it's no longer a burn
<mjg> my linux has better rcu than your onyx mate
<heat_> now you have mega rcu :(
<netbsduser> if rcu is so good why is there 7 of them in linux
<heat_> because it's THAT GOOD
<heat_> it's like fentanyl
<mjg> so good you can't get enough
<netbsduser> rcu classic edition, rcu with trees, sleeping rcu, sleeping rcu with trees, bottom-halves' rcu, rcu for sched, i forgot the other one
<heat_> there's no classic rcu, there's a tiny rcu
<mjg> synchronize_rcu() is the shit
<heat_> you forgot tasks rcu and preemptible rcu
<immibis> mjg: yes for a write-infrequently case you are absolutely going to take 512 locks
<netbsduser> if there's no rcu classic edition what did i copy from the patent?
<netbsduser> heat_: i thought preemptible rcu was sleeping rcu
<heat_> no
<heat_> there was a classic rcu version like... in 2.6.X, they eventually whacked it for tiny rcu and tree rcu
<heat_> preemptible rcu is a special mode for CONFIG_PREEMPT=y
<heat_> where you don't actually disable preemption on a rcu_read_lock()
<netbsduser> i haven't read much about rcu with trees but i like the concept which i think i understand the gist of
<heat_> so, like, your CONFIG_PREEMPT=y desktop kernel will not struggle with interactivity on large RCU read sections
<heat_> whereas if you explicitly picked =n, that's not a problem, you don't care
<netbsduser> unfortunately if i implemented it based on that gist then paul mckenney himself would turn up to serve papers to me for infringing one of his 600 patents
<heat_> dropping the read lock and cond_resched() tends to be Good Enough for server workloads
<mjg> it's a whack-a-mole "where do i need to cond_resched now"
<netbsduser> oh, i think i saw some patent or other turn up relating to this preemptible rcu
<netbsduser> i think they were promoting it for RT/Linux
<heat_> mjg, allegedly they want to get rid of that
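(For anyone following along, the read-side/update pattern all of those flavors implement, using the real Linux kernel APIs named above; struct foo, foo_lock, and the update policy are illustrative only.)

    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct foo {
        int val;
    };

    static struct foo __rcu *global_foo;
    static DEFINE_SPINLOCK(foo_lock);

    static int foo_read(void)
    {
        int v;

        rcu_read_lock();   /* preempt-disable or a no-op, per CONFIG_PREEMPT */
        v = rcu_dereference(global_foo)->val;
        rcu_read_unlock();
        return v;
    }

    static void foo_update(struct foo *newp)
    {
        struct foo *old;

        spin_lock(&foo_lock);   /* serialize writers */
        old = rcu_dereference_protected(global_foo,
                                        lockdep_is_held(&foo_lock));
        rcu_assign_pointer(global_foo, newp);
        spin_unlock(&foo_lock);

        synchronize_rcu();      /* wait out all pre-existing readers */
        kfree(old);
    }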
<immibis> how much of this complexity is caused by threading the kernel instead of only having user threads and one kernel "thread" per CPU?
<heat_> none
<immibis> if it's one thread per CPU you don't need to think about other kernel threads sleeping because they don't. but then you have to write them more complexly in async style in C, so...
<netbsduser> immibis: you mean by "one kernel thread per cpu" the situation whereby the kernel is always coresident with user programs and control sometimes moves into the kernel?
<netbsduser> i would ask how you would carry out actions like being the page daemon if limited to that
<heat_> you can very trivially see all sorts of multiprocessing issues when threads = cpus (aka approximated to one thread per CPU, in the kernel)
<immibis> not sure what you mean by the first part. you want to eliminate system calls by running everything in ring 0 and calling the kernel directly? that's not relevant to threading models
<immibis> (1) "being" isn't an action and (2) you'd have to have a page state machine rather than a page daemon. which would make it more complex.
<netbsduser> immibis: no, i just don't understand what you're getting at
<netbsduser> you can't have one kernel thread per CPU because all user threads are also kernel threads (barring odd threading models) and user programs regularly enter the kernel's context
<immibis> user code is only running on a CPU when kernel code is not running on that CPU
<netbsduser> it matters not that they have to do a syscall, it's still a call
<immibis> a CPU is only doing one thing at a time, you know - threads are an abstraction invented by the OS
<nikolar> there are also hyperthreads
<nikolar> so not quite
<immibis> what if the kernel just didn't implement that abstraction in the kernel
<immibis> hyperthreads are virtual CPUs
<heat_> kthreads are like... usually the least of your worries
<nikolar> immibis: what are threads if not virtualised cpus
<immibis> software virtualized cpus
<heat_> oh look at mr philosopher over here
<nikolar> lol
<nikolar> hello heat_
<heat_> hi
<immibis> why does the kernel need to virtualize cpus for itself? it knows about your actual cpus
<nikolar> how's it going
<heat_> i'm aight how about you
<nikolar> not bad, not bad
<immibis> should i somehow ever feel like writing a kernel again i want to try writing one with a main loop like while(is_computer_on() && !is_computer_on_fire()) {process *p = find_runnable_process(); syscall *s = run_until_syscall(p); if(s) handle_syscall(p, s);}
<nikolar> so you just want to smush syscal handling and scheduling into one thing
<nikolar> weird but sure
<immibis> just to see how that style goes - syscall is not treated as entry into a kernel subroutine, but exit from a user mode subroutine
<immibis> nikolar: it's a one line description of a vague idea. don't read too much into it.
<nikolar> i mean there's only one way i can read that
<immibis> while(is_computer_on() && !is_computer_on_fire()) {process *p = find_runnable_process(); while(syscall *s = run_until_syscall_or_timeslice_expired(p)) {handle_syscall(p, s);}} happy?
<nikolar> so still the same thing then
<immibis> don't forget if(p) {...} else {turn_cpu_off();}
<immibis> nikolar: have you ever written a video game from scratch?
<nikolar> i have
<nikolar> a crappy game, but sure
<immibis> so you know all about game loops
goliath has quit [Quit: SIGSEGV]
<nikolar> oh also tetris
<nikolar> yes i know about event loops
<immibis> while(!player_wants_to_quit) {while(get_event()) {process_event();} render(); wait_for_vblank();}
<immibis> what do you find wrong with a kernel being one other than something i'll probably run into if i ever get around to actually trying it
netbsduser has quit [Ping timeout: 260 seconds]
<heat_> threads need to be preemptible
<heat_> the CPU already gives you an event handling mechanism called "interrupts"
<nikolar> ^
<nikolar> and you can look at syscalls like one of those events
<immibis> all interrupt handlers unwind the stack appropriately and jump to the return address of run_until_syscall_or_timeslice_expired_or_interrupt
<immibis> downside: no interrupts when running in kernel mode (but kernel code would have to be kept minimal or get very complex)
<immibis> i guess that leads to a microkernel design
<immibis> who knows. speculating is pointless. i may try writing that loop one day.
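(immibis's loop spelled out slightly further, still purely speculative and with invented names: the kernel is one event loop per CPU, and syscalls, faults, and timeslice expiry are all just reasons the user-mode "subroutine" returned.)

    /* One kernel "thread" per CPU; user mode is a subroutine we call
     * into and that returns with an event. Every name here is invented. */
    _Noreturn void kernel_main_loop(void)
    {
        while (is_computer_on() && !is_computer_on_fire()) {
            struct process *p = find_runnable_process();

            if (!p) {
                wait_for_interrupt(); /* idle until an IRQ arrives */
                continue;
            }
            /* Returns on syscall, fault, or timeslice expiry; interrupt
             * handlers unwind back to this point, as described above. */
            struct event ev = run_until_event(p);

            switch (ev.kind) {
            case EV_SYSCALL:
                handle_syscall(p, &ev.syscall);
                break;
            case EV_FAULT:
                handle_fault(p, &ev.fault);
                break;
            case EV_TIMESLICE:
                break; /* just pick the next runnable process */
            }
        }
        power_off(); /* is_computer_on() went false */
    }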
memset has quit [Remote host closed the connection]
memset has joined #osdev
linear_cannon has quit [Remote host closed the connection]
eddof13 has joined #osdev
lanodan has quit [Quit: WeeChat 4.3.2]
eddof13 has quit [Client Quit]
<heat_> oooh github's down
<heat_> mjg you crashed the fucking site
<Ermine> omg wow
lanodan has joined #osdev
gog has joined #osdev
Turn_Left has quit [Read error: Connection reset by peer]