<bslsk05>
libera.irclog.whitequark.org: #osdev on 2021-06-25 — irc logs at whitequark.org
<vai>
Mutabah: are there any humanly understandable standards on what threads and also processes own?
<vai>
documentation
<Mutabah>
for a specific design, sure - e.g. POSIX would specify the duties
<Mutabah>
but in a general sense, they're just broad categories and can get fuzzy
<danieldg>
vai: generally anything reached by a pointer is owned by the process
<vai>
Mutabah: but if you make a practically sensible libc implementation - you aren't all lost, yes?
<vai>
thats tons of POSIX
<vai>
kernel implementations vary on OSes
<geist>
well... it's not *easy*
<geist>
it involves a lot of learning, a lot of sleuthing around, a lot of looking at existing things, figuring out what is in a spec and what is defacto
<geist>
that's sort of the meat of a lot of osdev, at least the part if you're trying to be compatible with existing things
<moon-child>
vai: threads and processes are orthogonal
<moon-child>
threads are a concurrency domain, processes are a security domain
<klange>
Speaking of threads... found an issue with my thread scheduling recently; I don't really deal with "processes" in too many places and forked children of a thread aren't being tracked properly as children of the original process; need to fix that for 'waitpid' to work correctly in certain situations.
<sham1>
Are most of the Freetype patents even in force anymore? IIRC ClearType isn't
<gog>
i don't think so
<gog>
they expired at least a few years ago
<sham1>
Well yeah, so Freetype is free game
<graphitemaster>
PfrekType
<dzwdz>
how do kernels usually keep track of multiple processes? do they just have a global with the currently running one?
<j`ey>
yep
<sham1>
They indeed do store it in data-structures. Usually per-processor
<dzwdz>
that's surprisingly simple
<j`ey>
in arm64 + linux, it's stored in a register, for fast lookup
<dzwdz>
couldn't the process just alter that?
<dzwdz>
or are there some registers that are only accessible from ring0
<geist>
bingo
<geist>
the latter
<geist>
depends on the arch, etc etc. but some arches have at least one supervisor only register that the kernel keeps a pointer to the current task/etc in it even when running in user space
<dzwdz>
i assume that normal x86 doesn't have that?
<geist>
it basically does in the form of a bunch of segments and whatnot
<geist>
this is where x86-64 would differ, since you traditionally store that anchor in the form of the KERNEL_GS_BASE
<geist>
which when you enter the kernel the first thing you do is SWAPGS to 'recover' it
<geist>
note the reason you dont just generally put this all in a global is when you're dealing with SMP (multiple cpus) each is independent
<geist>
so you can't, or at least it's not as efficient, to just have a void *current_task; that you look at
<geist>
since there are as many current tasks as there are cpus in the system
<dzwdz>
yeah, ik
<geist>
but in supervisor mode you can always just look up what the current cpu number is (APIC ID) on x86, so worst case you can always do current_task[get_current_apic_id()]
<geist>
so it's mostly an optimization to have it ready to go in a segment like that
<dzwdz>
also, do people use paging inside of their kernels? or is it usually only used in the userspace
<geist>
and on arches like arm and riscv, like j`ey was saying there's just a supervisor only banked register that you can just leave pointing at the current thread, so it's nice and handy to just dereference it
<geist>
yes, 100% paging in the kernel
<geist>
or more to the point, most arches dont let you *not* use paging in the kernel
<geist>
you can more or less straight map ram in the kernel if you want, but that's still paging, because once you turn it on it's generally on for everything, user and kernel
<dzwdz>
aight
<geist>
however actually depends on precisely what you mean by 'paging' in this case
<geist>
lots of folks mean different things when they use that word, it's very overloaded
<geist>
i'm talking about it from the point of 'do you have to use the mmu in the kernel'. yes.
<geist>
if what you mean is stuff like demand paged files or whatnot in the kernel, generally not. it gets really complicated if you allow the kernel to page fault on itself
<geist>
it's doable, but generally best left to limited use cases. *usually* kernels more or less map things straight into their own address space and dont really demand fault or whatnot
<dzwdz>
i just meant having the kernel use any mapping other than a direct one
<dzwdz>
or however it's called
<geist>
yeah sure
<dzwdz>
the one where virtual addresses == physical addresses
<j`ey>
identity mapped
<geist>
yah folks call that 'identity map'
<geist>
a variant of it is where it's not an identity, but it's at least a linear map
<geist>
ie, physical 0 - 256MB mapped to 0x8000.0000+ or something
<geist>
i dunno precisely what you call it, but that's also a common pattern. map all of physical ram into the kernel in a single run and then run the kernel out of it
<geist>
handy for just getting going
<geist>
and pretty efficient, but security/safety dangerous
<geist>
some older arches actually have that by default, where some ranges of virtual space are hard fixed to directly map to physical, supervisor only
<dzwdz>
how do people test their kernels?
<geist>
generally by booting it a lot and doing things
<geist>
highly recommend using both emulators and physical hardware
<dzwdz>
what about unit tests?
<geist>
you could
<geist>
i dont hear folks talk about it too much here, but you can build in some if you want
<geist>
i have some for mine, though most of them are fairly manual
<geist>
ie, run this command, observe it doesn't fall over, etc
<geist>
vs some sort of automated test suite
<geist>
you know, create 1000 threads, let them wail on a mutex, observe that no two things grab the mutex at the same time, repeat
<geist>
i've been slowly working on automating that and maybe running on a server somewhere on qemu
<hgoel[m]>
I've been meaning to put together an automated unit testing thing for my kernel via the serial port, but keep putting it off for more 'fun' things like drivers
<j`ey>
geist: doug has some for his kernel
<geist>
yah doug16k is probably the best to answer here
<geist>
i'm fairly old school and dont lean on unit tests and it seems to be in vogue nowadays
<geist>
but i'm slowly coming around to the idea that it's an annoying up front investment but it does pay off once things get more complicated
<geist>
but i still also like integration tests and stress tests a lot too
<geist>
unit tests with nothing else i think leaves a lot on the table
<geist>
in other words if you only put a finite amount of time in it i'd start by doing some sort of end to end stress test first, then go back and fill in the unit tests
<geist>
but ideally you do both from the beginning
<j`ey>
unit tests for data structures is a good idea!
<geist>
yeah 100%. i'm not in any way saying dont do it
<geist>
just saying dont stop at writing a few unit tests and calling it done. i've seen that again and again
<geist>
and i think that leaves tons of blind spots, especially in something like a kernel where there's lots of subsystems working together
<geist>
if you have say 10 minutes to write a test, write something that stresses out the system all else held equal
<geist>
ideally both
<dzwdz>
how do people usually implement malloc() inside of their kernels?
<dzwdz>
my current idea is to just allocate pages and then map them at some very far away place in virtual memory
<GeDaMo>
malloc is a user space function normally
<dzwdz>
yeah, but i mean for internal kernel use
<GeDaMo>
Built on mmap
<dzwdz>
that's linux specific
<sham1>
What is, building malloc on top of mmap?
<hgoel[m]>
same basic idea, although instead of mapping in new pages there's also the option to just have all of physical memory mapped, so when the malloc pool is full you just allocate more pages and translate their address
<geist>
yah that's what i've done in the past
<geist>
there's lots of ways to do a malloc in the kernel but as hgoel[m] said it's about where the backing pages come from
<geist>
so one strategy is to virtually map in chunks of pages and extend the heap when it runs out, more or less how user space does it
<geist>
though you really dont want to demand fault it so much as add a chunk of new pages, say 1MB at a time
<geist>
the other strategy is what hgoel[m] said, and what I do and linux does: map all of physical ram (or at least a lot of it) into the kernel somewhere. i call it the physmap
<geist>
then if you ask for a physical page you can do a simple piece of arithmetic to see where it was already mapped and then use that. so you can grab chunks of physical pages and add them to the heap and the heap just lives scattered across the physmap
<geist>
it's very efficient, but downside is relatively dangerous
<geist>
so it's a compromise. as are lots of kernel design decisions
<hgoel[m]>
yep, I'd say if you're just getting to the allocator, just make sure the rest of the system doesn't particularly care how the allocator works, then you can just put in a minimum viable version and swap it out later
<NieDzejkob>
the problem with just handing out physical addresses (with an offset) from kmalloc is that when you request something >4k, you now have to worry about fragmentation of the physical memory
<bslsk05>
twitter: <actualGraphite> 1/ Windows 11 needing TPM 2.0 got me thinking, what can we as consumers do to prevent this from being a requirement of Windows? This way, PCs without TPM, or with it disabled (as many enthusiasts prefer) can still use Windows 11. I believe I've come up with the ultimate solution.
<gog>
:o
<hgoel[m]>
lmao
<gog>
you found a workaround or are you trolling?
<j`ey>
gog: read the thread :P
<gog>
NO
<gog>
(ok)
<gog>
lmao
<gog>
ok it is genius
<geist>
yah i saw something about that
<geist>
seems every time they push it. "oh by the way we're going to require this" and then they inevitably back off
<gog>
if it happens i'll probably quit windows forever and mean it this time
<geist>
yeah i think there's just too many random computers in random places that don't have all that crap wired up even if they wanted to
<geist>
it's a pipe dream, and i'm sure some sizable number of PCs out there are just cobbled together crap
<graphitemaster>
SecureBoot being a requirement also sucks. I have a friend who installed Windows with SecureBoot enabled and did overclocking, but the overclock is unstable and couldn't get back in the bios because to do that you have to boot the OS and inside the OS do "boot into uefi shell"
<graphitemaster>
Ended up bricking a mobo :|
<graphitemaster>
SecureBoot is shit too
<clever>
graphitemaster: that sounds like a poor implementation
<geist>
yah i was doing some maintenance on my sister's computer and it turns out since it's some store bought dell it is fully secure booted
<gog>
ugh
<geist>
i got real cautious at that point, since if i broke it it'd likely force a full reboot
<clever>
my understanding is that secureboot is meant to ensure the boot chain cant be corrupted from "within"
<clever>
as-in, malware somehow got root, and rewrites the boot chain
<geist>
but you couldn't touch half of the bios bits without disabling it, etc
<geist>
s/full reboot/full reinstall
<clever>
but if you are unable to factory reset a mobo with secureboot setup, its just wrong
<graphitemaster>
Yeah secureboot these days is more than just ensuring the boot chain is correct, it locks you out of the bios too, can't even boot from livecd or liveusb in this case.
<hgoel[m]>
yeah, I'm expecting they'll drop it as a requirement but if not it'll be a great excuse to switch to linux native and windows vm
<graphitemaster>
Windows VM? You got TPM in your VM?
<graphitemaster>
That's the other thing, VM not possible either
<hgoel[m]>
ouch
<gog>
this is what microsoft has wanted for like 20 years now
<gog>
they can actually force the end-user to abide by vendor lock-in
<j`ey>
graphitemaster: VM's can emulate tpms
<hgoel[m]>
I assumed TPM could just be emulated in VM, if not maybe I'll just have to get a third computer for anything I might need windows for
<graphitemaster>
They'll only accept fTPM of select CPUs btw
<graphitemaster>
They have a processor list already online.
<kazinsal>
almost assuredly some dipshit product manager released the dev test environment requirements as the prod requirements
<kazinsal>
we wouldn't have any info at all about win11 right now if not for that leak a few days ago
<kazinsal>
the whole announcement was one big post-leak panic
<hgoel[m]>
I'm leaning towards the leak itself having been intentional to get a sense of what public reception might be like
<geist>
that did all remind me i needed to close out that duplicate MSFT account
<geist>
someone had grabbed my main email address forever ago
<geist>
that was fairly painless to recover it and shut it down, so i give them that
<geist>
re: windows on VM. clearly they'll have some solution for that
<geist>
since that's a sizable chunk of their whole azure stuff
<gog>
i wonder if windows for vm will become its own sku
<geist>
and AWS, etc, so i think just requiring that windows can only be VMed on top of hyperv is also dumb
<geist>
but i can generally see them running some nerfed thing if they dont have hypervisor level access. there's already something kinda like that now, it's just generally not visible unless you poke around
<hgoel[m]>
yeah
<kazinsal>
PNW folks, stay safe this weekend
<kazinsal>
fuckin hot one
<geist>
yah totally
<gog>
new england too
<kazinsal>
vancouver is set to smash our heat records by 11 F on monday
<geist>
saw an article that seattle and SF are still the least ACed cities in the US, both below 50%
<geist>
but seattle is inching up
<kazinsal>
yeah, some friends of mine have been getting swamp coolers ready to go
<graphitemaster>
<kazinsal> vancouver is set to smash our heat records by 11 F on monday
<graphitemaster>
First time I've seen a Canadian use Freedom units for weather
<hgoel[m]>
on the other hand been pretty cool here in NY
<geist>
it seems more impressive
<gog>
wow they're forecasting 25° in the northeast next week
<kazinsal>
when the freedom units scale brings the temperature to three digits you need to switch to it for the sheer horror
<gog>
that's actually like scorching hot for iceland
<kazinsal>
42 C / 108 F on monday
<geist>
yah they're predicting blowing past records for *any* day in any year in history
<geist>
much less this early in the year
<kazinsal>
yeah, this is bizarrely early for heat
<gog>
20° forecast for akureyri on tuesday
<kazinsal>
usually it's late july through august
<gog>
that's way north
<geist>
they're saying 108 in vancouver? wow
<kazinsal>
yep
<graphitemaster>
24C all week here, rain every day until next Friday
<graphitemaster>
:(
<kazinsal>
I'm a bit inland but still near the fraser river, but apparently that's not enough water to temper things
<geist>
portland is definitely gonna get it, seattle city wise is probably only going to top 100 (though obviously there are terrible heat islands where it can be a lot worse)
<kazinsal>
they're saying the water temps in the salish sea are going to reach mid 70s F
<kazinsal>
which is friggin warm
<geist>
noice. here on the island i think they're predicting about 95
<gog>
beautiful sunny day expected tomorrow :) but windy :|
<graphitemaster>
In the year 2100 people will be like "20C? damn that's cold, how did anyone SURVIVE"
<kazinsal>
nice. being right next to the sound helps with the temps I bet
<geist>
well hmm, actually now the forecast is 106 for the island. so actually no. :(
<geist>
on monday
<kazinsal>
aww
<kazinsal>
the 2021 preview of the climate apocalypse certainly is interesting
<kazinsal>
everyone thought the wasteland was going to be all fallout style with two headed cows and radioactive rodents of unusual size, but no, we're going straight for mad max
<gog>
look at it this way: it's the coolest summer of the rest of our life
<geist>
that being said i bet it wont break 100 here. there are so many microclimates and they usually dont get the island exactly right
<geist>
and being in the woods helps immensely
<kazinsal>
yah. also quite glad my apartment faces west so I won't get the sun streaming in through the windows until the evening
<bslsk05>
cliffmass.blogspot.com: Cliff Mass Weather Blog
<geist>
high pressure plus a lot of downdraft compression warming against the mountains to the east
<geist>
you get warm air flowing from the east over the mountains and then as it downdrafts it compresses and heats up
<gog>
then you get an inversion where the air closer to the ground is cooler than the air above it
<gog>
and then smog
<kazinsal>
the good news is we're still going to be about 200 C under the autoignition temperatures for wood!
<geist>
yay
<gog>
for now
<kazinsal>
that being said I'm pretty sure at 42+ C, a sufficiently energetic sneeze will light a forest fire
<kazinsal>
July 1st masks are going to be no longer required by provincial health order in BC and will be up to businesses, but I'm expecting the smoke to be rolling in by then so I'll definitely be keeping mine handy
<gog>
all restrictions in iceland are being lifted tomorrow
<gog>
200,000+ are now fully vaccinated
<kazinsal>
damn that's gotta be like, most of the country
<gog>
more than 3/4
<kazinsal>
we're now at 9.2 million fully vaccinated out of a total of 38 million. getting there!
<gog>
i get pfizer #2 on july 13, will be fully immune on august 3rd just in time for reykjavík pride
<kazinsal>
my second is tentatively scheduled for around then as well
<gog>
nice
<vin>
And in midwest there is rain and/or hailstorm for the next 10 days.