klange changed the topic of #osdev to: Operating System Development || Don't ask to ask---just ask! || For 3+ LoC, use a pastebin (for example https://gist.github.com/) || Stats + Old logs: http://osdev-logs.qzx.com New Logs: https://libera.irclog.whitequark.org/osdev || Visit https://wiki.osdev.org and https://forum.osdev.org || Books: https://wiki.osdev.org/Books
gabi-250 has quit [Remote host closed the connection]
gabi-250 has joined #osdev
bgs has joined #osdev
nyah has quit [Quit: leaving]
valerius_ is now known as valerius
zxrom has quit [Quit: Leaving]
gog has quit [Ping timeout: 260 seconds]
<geist> it's a fairly nice day here today. cold, but right around the usual. clear sky which is odd
<geist> for this time of year in the PNW
<geist> but that's also why it's coldish, usually coldest days are clear
valerius has quit [Killed (NickServ (GHOST command used by theophilus!~corvus@user/theophilus))]
pretty_dumm_guy has quit [Quit: WeeChat 3.5]
valerius_ has joined #osdev
j`ey has quit [Ping timeout: 248 seconds]
ebb has quit [Ping timeout: 246 seconds]
valerius_ is now known as valerius
hl has quit [Ping timeout: 256 seconds]
hl has joined #osdev
ebb has joined #osdev
ebb has quit [Max SendQ exceeded]
ebb has joined #osdev
j`ey has joined #osdev
valerius has quit [Killed (NickServ (GHOST command used by theophilus!~corvus@user/theophilus))]
valerius_ has joined #osdev
Burgundy has quit [Ping timeout: 272 seconds]
AFamousHistorian has joined #osdev
Left_Turn has joined #osdev
Turn_Left has quit [Ping timeout: 260 seconds]
dude12312414 has quit [Remote host closed the connection]
dude12312414 has joined #osdev
epony has joined #osdev
vdamewood has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
zxrom has joined #osdev
[itchyjunk] has quit [Read error: Connection reset by peer]
elastic_dog has quit [Ping timeout: 260 seconds]
elastic_dog has joined #osdev
craigo has quit [Ping timeout: 246 seconds]
terrorjack has quit [Quit: The Lounge - https://thelounge.chat]
terrorjack has joined #osdev
invalidopcode has quit [Remote host closed the connection]
invalidopcode has joined #osdev
bgs has quit [Remote host closed the connection]
elastic_dog has quit [Ping timeout: 252 seconds]
elastic_dog has joined #osdev
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
fedorafan has quit [Ping timeout: 246 seconds]
fedorafan has joined #osdev
FreeFull has quit []
Gooberpatrol66 has joined #osdev
troglodito has joined #osdev
troglodito has left #osdev [#osdev]
gxt has quit [Remote host closed the connection]
gxt has joined #osdev
bradd has quit [Remote host closed the connection]
heat has quit [Ping timeout: 256 seconds]
bradd has joined #osdev
gxt has quit [Remote host closed the connection]
gxt has joined #osdev
fedorafan has quit [Ping timeout: 252 seconds]
fedorafan has joined #osdev
fedorafan has quit [Ping timeout: 256 seconds]
fedorafan has joined #osdev
Gooberpatrol66 has quit [Ping timeout: 255 seconds]
Gooberpatrol66 has joined #osdev
fedorafansuper has joined #osdev
fedorafan has quit [Ping timeout: 256 seconds]
srjek has quit [Ping timeout: 256 seconds]
AFamousHistorian has quit [Ping timeout: 256 seconds]
vdamewood has joined #osdev
ThinkT510 has quit [Quit: WeeChat 3.8]
ThinkT510 has joined #osdev
GeDaMo has joined #osdev
pretty_dumm_guy has joined #osdev
<bslsk05> ​ares-os.org: ASID control | Ares
<geist> looks alright
<zid> That's an odd syntax but the styling on the page is nice
fedorafansuper has quit [Ping timeout: 252 seconds]
<dinkelhacker> FireFly: Here in Stuttgart it's not snowing but it's cold^^
fedorafan has joined #osdev
heat has joined #osdev
<heat> morn
danilogondolfo has joined #osdev
<zid> oh, can we have the cat instead?
epony has quit [Ping timeout: 268 seconds]
fedorafan has quit [Ping timeout: 246 seconds]
fedorafan has joined #osdev
<heat> ...and other deeply hurtful things you can say
<heat> ;_;
* mjg snaps theo to give heat a hug
<heat> aww so wholesome
<heat> lea startup_secondary_64(%ebp), %eax
<heat> push %eax
<heat> no way to shorten this is there?
<heat> push startup_secondary_64(%ebp) just loads from the calculated address AFAIK
<Mutabah> not afaik
<Mutabah> why bother? It's not hot code, right?
<heat> yeah
<heat> just seems kind of off
<mjg> premature optimization vibes from heat
<heat> annoying how 32-bit has no sort of pc-relative addressing
<mjg> annoying how register starved it is
xenos1984 has quit [Read error: Connection reset by peer]
<heat> (and how x86_64 pc-rel is explicitly opt-in)
<heat> my phys relocatable code is now peppered with lea sym(%ebp) where ebp is a load bias
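A minimal sketch (not heat's actual code) of the pattern being discussed: i386 has no %eip-relative addressing, so position-independent early-boot code recovers its load bias once with the call/pop trick and then adds that bias to every link-time address, which is exactly what the lea sym(%ebp) above does. Helper and macro names are invented; assumes GNU C compiled for i386.

    #include <stdint.h>

    /* returns (runtime address) - (link-time address), i.e. the load bias */
    static inline uintptr_t load_bias(void)
    {
        uintptr_t actual, linked;
        __asm__("call 1f\n"
                "1:\tpop %0\n\t"   /* %0 = runtime address of label 1 */
                "movl $1b, %1"     /* %1 = link-time address of label 1
                                      (absolute immediate, filled in by the linker) */
                : "=r"(actual), "=r"(linked));
        return actual - linked;
    }

    /* the C equivalent of lea sym(%ebp): link-time address plus bias */
    #define RUNTIME_ADDR(sym) ((uintptr_t)&(sym) + load_bias())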
<sham1> x86 should have had way more registers than it did. Thankfully AMD64 did fix that
<moon-child> 'annoying how 32-bit x86'
<moon-child> I agree
<heat> early boot x86 seems to have the NUTS to run a lot of C code
<heat> (eg efi stub crap)
<kazinsal> hmm. got two months to crank out my usual planned april fools joke that I never actually get around to doing
<kazinsal> maybe I'll do it this year
xenos1984 has joined #osdev
gog has joined #osdev
<Jari--> should port my Windows CRPG to the OS project I have
unimplemented has joined #osdev
Turn_Left has joined #osdev
Left_Turn has quit [Ping timeout: 260 seconds]
unimplemented has quit [Read error: Connection reset by peer]
Left_Turn has joined #osdev
Turn_Left has quit [Ping timeout: 252 seconds]
Turn_Left has joined #osdev
Left_Turn has quit [Ping timeout: 260 seconds]
fedorafan has quit [Ping timeout: 256 seconds]
vdamewood has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
fedorafan has joined #osdev
craigo has joined #osdev
DutchIngraham has joined #osdev
dutch has quit [Ping timeout: 265 seconds]
Burgundy has joined #osdev
Burgundy has quit [Ping timeout: 246 seconds]
Turn_Left has quit [Read error: Connection reset by peer]
Turn_Left has joined #osdev
Turn_Left has quit [Read error: Connection reset by peer]
Turn_Left has joined #osdev
<mrvn> heat: what do you need PC relative addressing for when all you can do is run one application at a time?
<mrvn> geist: cloudy, dark, 0°C, one might say dreary.
craigo has quit [Quit: Leaving]
craigo has joined #osdev
nyah has joined #osdev
vancz has quit []
pie_ has quit []
vancz has joined #osdev
pie_ has joined #osdev
epony has joined #osdev
fedorafan has quit [Ping timeout: 256 seconds]
fedorafan has joined #osdev
rorx has quit [Ping timeout: 252 seconds]
terminalpusher has joined #osdev
[itchyjunk] has joined #osdev
Terlisimo has quit [Quit: Connection reset by beer]
Terlisimo has joined #osdev
_xor has quit [Read error: Connection reset by peer]
_xor has joined #osdev
rorx has joined #osdev
fedorafan has quit [Ping timeout: 256 seconds]
fedorafan has joined #osdev
__xor has joined #osdev
_xor has quit [Ping timeout: 256 seconds]
dude12312414 has joined #osdev
bgs has joined #osdev
srjek has joined #osdev
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
hl has quit [Quit: ZNC - https://znc.in]
hl has joined #osdev
AFamousHistorian has joined #osdev
fedorafansuper has joined #osdev
fedorafan has quit [Ping timeout: 256 seconds]
invalidopcode has quit [Remote host closed the connection]
invalidopcode has joined #osdev
MiningMarsh has quit [Ping timeout: 260 seconds]
micttyl has joined #osdev
terminalpusher has quit [Remote host closed the connection]
__xor has quit [Read error: Connection reset by peer]
micttyl has quit [Quit: leaving]
__xor has joined #osdev
<zid> gog: https://vxtwitter.com/fuckedupfoods/status/1616594990234214406 Is this how they do on the island?
srjek has quit [Ping timeout: 265 seconds]
MiningMarsh has joined #osdev
xenos1984 has quit [Ping timeout: 248 seconds]
xenos1984 has joined #osdev
Burgundy has joined #osdev
xenos1984 has quit [Ping timeout: 256 seconds]
GeDaMo has quit [Quit: That's it, you people have stood in my way long enough! I'm going to clown college!]
xenos1984 has joined #osdev
<mrvn> zid: You've got to love lava-cake, I mean steak.
fedorafansuper has quit [Ping timeout: 252 seconds]
<ddevault> highly tempted to put x86 support on my never-implement list
<ddevault> as in 32-bit
fedorafan has joined #osdev
<mrvn> you haven't already? What are you thinking?
<ddevault> I support x86_64
<ddevault> my language does not support 32-bit targets yet
gorgonical has joined #osdev
<gorgonical> heat: cntpct_el0 ticks at a rate of what?
<gorgonical> Like I have a difference in ticks of cntpct_el0, is that ticks of the general system clock, e.g. frequency?
<heat> gorgonical, i dont know, im the wrong person for spontaneous arm64 questions
<heat> isn't the arm64 timer at a fixed frequency?
<clever> gorgonical: on 32bit arm, the answer is in CNTFRQ; that register has no actual control over the rate, it's purely just somewhere for the bootloader to store the answer so kernels can later read it
<ddevault> I have some code to program it to 100 Hz cargo culted from toaruos which you can cargo cargo cult cult if you just want a timer of some kind
<gorgonical> ah
<clever> gorgonical: linux will even divide by zero if you dont initialize that register!
<heat> well, no shit
<gorgonical> Oh then that's good, it means it's working. I'm doing some op-tee profiling right now
<heat> it's an architectural reg
<clever> and i do see a CNTFRQ_EL0 in the armv8 docs
<clever> heat: i'm just a bit surprised there was no check that its valid, and it just goes directly to dividing
<clever> but then again, every working bootloader sets it, so its usually not missing
<clever> my fault for creating an invalid bootloader
<heat> i mean, it's an architectural thing
<heat> linux should not try to interpret it
<heat> in fact, "On a Warm reset, this field resets to an architecturally UNKNOWN value."
<clever> > This register is provided so that software can discover the frequency of the system counter.
<heat> so it may not even be 0
<heat> could just as well default to all-1s or dont-care or something
<ddevault> EFI spec says it has to be programmed with the timer frequency on ARM
<ddevault> so it's valid to rely on it at least in the context of EFI
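A generic C sketch of the answer to gorgonical's question: CNTPCT_EL0 ticks at whatever rate CNTFRQ_EL0 reports, so you read one register to scale the other. Assumes armv8 at EL1 and, per the discussion above, that firmware actually programmed CNTFRQ_EL0.

    #include <stdint.h>

    static inline uint64_t arch_timer_freq(void)       /* ticks per second */
    {
        uint64_t f;
        __asm__ volatile("mrs %0, cntfrq_el0" : "=r"(f));
        return f;   /* commonly tens of MHz; newer cores fix it at 1 GHz */
    }

    static inline uint64_t arch_timer_ticks(void)
    {
        uint64_t t;
        /* the isb keeps the read from being hoisted by speculation */
        __asm__ volatile("isb; mrs %0, cntpct_el0" : "=r"(t));
        return t;
    }

    static inline uint64_t ticks_to_ns(uint64_t delta)
    {
        /* beware: the multiply overflows u64 for very large deltas */
        return delta * 1000000000ull / arch_timer_freq();
    }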
<heat> EFI spec? more like typoefi spec
<heat> haha am ir ight
<mrvn> is anything using warm reset?
<heat> sure, just reboot
<mrvn> heat: that jumps back into the bootloader so that's fine.
<mrvn> What does kexec do? Does it set the reset vector to the loaded kernel and reset?
<heat> i don't think kexec ever resets anything
<mrvn> probably just always jumps straight to the new kernel. But what else would reset AND skip the firmware that sets the CNTFRQ again?
<mrvn> Secondly where else would linux get the frequency from?
<heat> why are you overanalyzing this
<mrvn> beware of timers that change with cpu / vpu frequency scaling.
<geist> by the time the OS is running, CNTFRQ is supposed to already be initialized if the previous layers behaved themselves
<geist> so if it's set i think you can safely rely on it, if it's not then i guess you have to go find the frequency to shove into it from the device tree, or Just Know
<geist> but other than that the arch timer stuff on armv8 is quite nice. has good guarantees
<heat> its not fun unless you calibrate it yourself using two other timers
<zid> tscpithpet
<heat> lapiccccccc
<heat> acpi too
<heat> you're not lacking any timer choices are you
<heat> /technically/ the rtc too although that's stretching it, not much of a timer although it can have a stable IRQ tick (128Hz IIRC?)
<sham1> "if the previous layers behaved themselves" That's assuming loads
nur has quit [Quit: Leaving]
invalidopcode has quit [Remote host closed the connection]
invalidopcode has joined #osdev
<mrvn> heat: real men have a LED and photodiode at an exactly known distance from each other and measure how long it takes for light to travel. :)
<geist> even if the tick rate sucks, you can still fairly accurately detect the 'edge' of a rtc tick
<geist> fairly though, depends on how fast it is to access the IO port, etc
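A sketch of the edge detection geist describes, assuming the standard PC CMOS RTC behind ports 0x70/0x71: poll the update-in-progress bit in status register A; the moment it clears marks a second boundary far more precisely than the 1 Hz counter itself.

    #include <stdint.h>

    static inline void outb(uint16_t port, uint8_t v)
    {
        __asm__ volatile("outb %0, %1" : : "a"(v), "Nd"(port));
    }

    static inline uint8_t inb(uint16_t port)
    {
        uint8_t v;
        __asm__ volatile("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    static int rtc_update_in_progress(void)
    {
        outb(0x70, 0x0A);           /* select CMOS status register A */
        return inb(0x71) & 0x80;    /* bit 7: update in progress */
    }

    /* returns just after a second boundary; useful as a calibration edge */
    static void rtc_wait_for_second_edge(void)
    {
        while (!rtc_update_in_progress()) { }  /* wait for an update to start */
        while (rtc_update_in_progress()) { }   /* ...and to finish */
    }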
genpaku has quit [Remote host closed the connection]
nur has joined #osdev
genpaku has joined #osdev
<heat> yeah sure
<heat> although you're missing the joke that x86 has N timers/clock sources and they all either suck or serve simply to calibrate the TSC :v
<heat> except in older/weird systems where they can actually sometimes serve a purpose! (in practice probably just the hpet right?)
terminalpusher has joined #osdev
<geist> yeah
<geist> well, i mean the whole 'TSC as a timebase' is relatively new
<geist> like reliably in the last 10 years, with constant/invariant TSC
<geist> otherwise you were always using some of the old stuff
<geist> HPET itself is also relatively new, 2000s
<heat> it's very annoying that the only timers/clocksources you want to use do NOT have stable/easily knowable frequencies
<geist> PIT does, so it's always the ultimate fallback
<geist> and/or the thing you calibrate everything against
<heat> all the other ones do. HPET, PIT, RTC, ACPI-pm
<geist> yep, though HPET is variable in the sense that you have to read it
<geist> PIT (and HPET i think) also have the property that overclocking doesn't fuck it up
<geist> that does affect TSC i think, and is still a hazard
<heat> really?
<heat> so 15h isn't reliable?
<geist> yah, one of the reasons overclockers will say use HPET, etc. and/or reason you may want to manually calibrate the TSC on boot even if you have the cpuid that says its at this freq
<geist> since that's based on the assumption the base oscillator is running at the particular freq
<geist> but i think it only comes into play if someone overclocks by bumping the base freq, which is much more rare
<heat> I would expect cpuid 15h to be able to understand what the bus frequency is and simply give a new value
<geist> no, i dont think it understands that the osc is moved
<heat> at the end of the day the CPU must be aware of the adjustment
<geist> but i suppose the bios could update that field, though the resolution is whole mhz iirc
<geist> yah and could always figure it out by calibrating TSC against something else
<geist> and no the cpu doesn't need to be aware of it. if the point is you're overclocking the system by bumping the base frequency, you're officially pushing it out of bounds, so the cpu can still assume it's running at 100 * N where you have now set it to 101 or 102 or whatnot
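A sketch of the PIT-as-ultimate-fallback calibration mentioned above, reusing the inb/outb helpers from the RTC sketch: gate PIT channel 2 in mode 0, time a roughly 50 ms window with rdtsc, and scale up. A real version would average several runs and sanity-check the result against CPUID leaf 15h.

    #include <stdint.h>

    #define PIT_HZ 1193182u

    static inline uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ volatile("lfence; rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    static uint64_t tsc_hz_from_pit(void)
    {
        uint16_t ticks = PIT_HZ / 20;               /* ~50 ms window */

        outb(0x61, (inb(0x61) & ~0x02) | 0x01);     /* gate ch2 on, speaker off */
        outb(0x43, 0xB0);                           /* ch2, lo/hi byte, mode 0 */
        outb(0x42, ticks & 0xFF);
        outb(0x42, ticks >> 8);

        uint64_t t0 = rdtsc();
        while (!(inb(0x61) & 0x20)) { }             /* wait for OUT2 to go high */
        uint64_t t1 = rdtsc();

        return (t1 - t0) * 20;                      /* TSC ticks per second */
    }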
<mrvn> geist: the RTC isn't all that accurate either.
<heat> 15h is also silently misleading on hypervisors as you notice with KVM :v
<heat> geist, I doubt the CPU does not have a hand in setting its base frequency as well
<geist> mrvn: debatable. it's possible at the seconds range, aside from observation error (time to observe an edge on it, etc) its PPM may be pretty good
<geist> heat: sure it doesn't. that's the input oscillator to the whole thing
<mrvn> heat: he is talking about someone desoldering the oscillator and putting in a faster one. The CPU/BIOS only knows about the selected divisor.
<geist> if you have an overclockable board, it may have a VDSO there that the bios can bump out of spec
<geist> exactly
<mrvn> geist: no, the hardware itself often has incredible errors for a clock.
<geist> but i dont think this sort of overclocking is that popular nowadays
<heat> >vdso
<heat> hehe kernel
<heat> geist, anyway so why are certain CPUs not overclockable then?
<geist> because their multipliers are probably fixed
<mrvn> geist: like second per day drift
<geist> mrvn: a) that's a very very very bad RTC, and b) that may still be better than the regular oscillator on the board
<mrvn> geist: Hey, the clock on a C64 drifts 30 minutes per day easily. :)
<geist> my point is RTCs *should* be drifting by no more than a few ms per day, and though that may be bad by atomic clock purposes, that may actually be better than the raw osc on a modern machine
<geist> depends on what their PPM rating is
<geist> but generally RTCs are run by a 32khz crystal with good PPM rating
<mrvn> geist: yeah, *should*. there are cheap companies out there
<geist> yes yes there are always outliers but they may also put an even shittier main oscillator if they're putting in shitty stuff
<geist> my point is the RTC is not just universally bad. and in many cases may actually be a *good* thing to calibrate against
<mrvn> Linux has software to correct the drift of the RTC. Does it also include this for calibrating the other timers?
<geist> i'd assume yes. once you have a modern networked machine you just synchronize to something external, and then it's just a software problem
<mrvn> i.e. when I restore the drift correction on boot does it also correct the other timers?
<geist> i'd assume
<heat> i hate time
<mrvn> A drift of 1PPM would be 1000 ticks per second on a GHz timer. Sounds like that should matter.
<mrvn> heat: there is always too much or too little of it
<geist> though the main oscillator will probably be something like 25 or 100mhz on a modern PC
<geist> re: your C64 that was probably a simple RC oscillator, which are terrible, but very easy to construct
<mrvn> geist: IRQ driven clock
<heat> 1) it's objectively bad and horrible and we'll never figure out true time (atomic clock what?) 2) it's silly 3) it's relative 4) i already had to listen to fuchsia nerds talk way too much about time and now I have to listen to #osdev nerds too
<geist> which means in this context?
<geist> (mrvn that is)
<geist> heat: haha well john, if you knew him, he'll talk to you about time in meatspace too
<heat> the superior timing technique really is the jiffy
<mrvn> geist: It's not a hardware counter. The time you spend incrementing the clock is lost or you drop IRQs or something. can't remember.
<mrvn> heat: long live the BOGOMIP
<heat> geist, ngl that "you can drop a second over a whole day" kinda scared me
<geist> yah that being said i've only seen one motherboard personally that had bad drift like that, though honestly all modern PCs you typically just clock sync with the network
<geist> so you wouldn't really know if it drifts
<heat> like, surely this means CLOCK_MONOTONIC can also drop a second over a whole day. does this mean long-range timing with CLOCK_MONOTONIC is unreliable?
<heat> what now? cry?
<heat> i was promised my TSC would work for 200 years but now Intel is also lying to me
<mrvn> geist: you can turn off saving the clock to the RTC on shutdown and then keep track of and correct the drift.
<mrvn> heat: monotonic means it doesn't go back in time. Not that it's accurate
<geist> heat: yes, that's correct
<geist> that's why modern software uses external stuff to compute the delta between monotonic and whatever wall time is
<geist> but the kernel really only deals with fairly short term timelines in general, so it's not that big of a deal
<geist> longer term stuff is probably handled in user space anyway
<heat> I had the idea linux never used the RTC except at boot
<mrvn> time only matters when you talk to other computers anyway.
<geist> suspend and whatnot throws a huge spanner in it though
<geist> or if you're doing audio/video stuff, but a pro setup would use external clock syncs anyway
<mrvn> heat: it does
<mrvn> geist: does a PPM error matter for audio/video? Is your hearing that good?
<heat> i was pretty surprised when I found out CLOCK_REALTIME had nothing to do with an RTC
<geist> it matters if you have multiple machines drifting from each other
<mrvn> 22:09 < mrvn> time only matters when you talk to other computers anyway.
<geist> or the sound card is sampling at a rate that drifts from the main cpu oscillator and you start picking up drift
<geist> and then it's the software's problem
<mrvn> geist: so your 1000Hz input turns into 1000.0001Hz. does that matter?
<geist> audio folks will go on ad-nauseum as to why clocks really matter a lot
<geist> eventually, yes.
<geist> eventually as in over the course of maybe minutes, etc
<geist> so it's not an un-real problem
<mrvn> geist: do you make all your microphone cables the same length so you don't get uneven signal delays in there too?
<mrvn> I heard they do that to the network cables in datacenters for traders so none of them has an advantage by sitting closer to the switch.
* geist shrugs
<geist> i've also heard that synchronizing audio and video playback when showing a movie/etc on a desktop is actually kinda tricky
<geist> since the audio and video pipelines are usually completely different paths, and at the end of the day the screen/cpu/sound card may have different syncs
<geist> and humans are pretty good at detecting audio/video drift
<mrvn> horribly difficult. But that's because both the audio and video have unknown and unmeasurable delays
<geist> yah, and even if the delays are known the clocks the hardware is running on are probably different
<mrvn> they are offset but don't drift unless your software screws up.
<geist> sure they drift, if your sound card says it's running at 48khz but it's really 48.001
<geist> then eventually you'll have to resample or sample stuff to synchronize
<mrvn> geist: then you will see less audio samples getting consumed than video frames and adjust the framerate.
<geist> or it's running at 48khz perfectly but your cpu is slightly off, so the systems monotonic is off, etc
<geist> indeed. obviously there are solutions
<geist> but point is its a case where time drift matters
<geist> *without* other computers involved
<mrvn> you also get cases where the audio is 48kHZ but the cheap onboard chip only does 42KHz.
<mrvn> geist: My point was that you don't notice a 48.000 or 48.001 sound card frequency.
<mrvn> The software also has to cope with 29.95Hz vs. 120Hz monitor frequency.
<geist> okay, not sure what we're arguing about now, so will leave it at that
<heat> fun rtc fact
<mrvn> My point was that if the sound card does 48.001kHz you simply go with that playing the 48kHz sound just that bit faster. You don't start resampling and correcting for the bad clock. It's not something people can hear. That is unless you have 2 computers, 2 sound cards and they start to drift away from each other.
<geist> oooh oooh what!
<heat> the efi rtc facilities are so broken on x86 that linux won't let you use them
<geist> yay
<heat> ... ok fun here wasn't the right word
<mrvn> heat: what? You can't set the RTC time and wakeup timer?
gdd has joined #osdev
<heat> no, it's just that EFI has timing facilities built in (that access the underlying CMOS or whatever) and various particular implementations on x86 have been so broken that linux just gave up
<heat> so
<heat> config RTC_DRV_EFI
<heat> tristate "EFI RTC"
<heat> depends on EFI && !X86
<heat> geist, x86_64 C code should be mostly PIC itself right?
<mrvn> heat: much more than x86 but not where it counts
<heat> even with globals?
<geist> much more so than before, but the codegen will definitely still have to opt into using PC relative address calculations
<geist> vs just slamming a global constant down
<heat> hrm
<geist> so not by default per se, but much less expensive to make it so (than x86-32)
danilogondolfo has quit [Remote host closed the connection]
<heat> I don't know how these MANIACS keep running so much C code at the wrong address
<geist> for something like riscv or arm64 it's mostly pic by default because you dont easily have the 'slam down a large constant address' path anyway
<geist> also remember C++ vtables or any sort of jump table like that is almost always un-pic by default
<mrvn> any address-of isn't pic, even outside the .data section.
<geist> it can emit a pc relative lea for that
<mrvn> geist: no, an address stored in memory. Not the address of something.
<geist> ah
<mrvn> in case of vtables you have the address of member functions stored in the struct.
<mrvn> lambdas would be another example.
<geist> yeah all basically a subset of the same thing: precomputed address ofs sitting in a variable
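The distinction this subthread is making, as a small C example: an address computed in code can use a rip-relative lea on x86_64, but an address stored in initialized data needs a relocation record no matter how position-independent the codegen is.

    static int x;

    int *addr_of_x(void)
    {
        return &x;          /* codegen: lea x(%rip), %rax; PIC for free */
    }

    static int *px = &x;    /* an address sitting in .data: the linker must
                               emit a relocation (e.g. R_X86_64_RELATIVE) and
                               something has to patch this word at load time.
                               C++ vtables and captured lambdas are this same
                               case in bulk. */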
vdamewood has joined #osdev
<geist> for fuchsia we switched to relative vtables, but that's an ABI changer
<geist> but it helps reduce the amount of fixups in the rodata/text segment
<heat> geist, how is zircon doing runtime relocation anyway? I've noticed you have a wrapper script for that
<mrvn> I wish for a compiler with position relative pointers. Store all pointers as "address - location where you store it"
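A sketch of what mrvn is wishing for, which is also the idea behind the relative vtables geist mentioned: store the target minus the address of the slot itself, so the stored word is the same no matter where the image lands and never needs a fixup. Type and helper names are invented; the 32-bit offset assumes everything stays within +/-2 GiB.

    #include <stdint.h>

    typedef int32_t relptr32;   /* self-relative pointer slot */

    static inline void *relptr_get(const relptr32 *slot)
    {
        return *slot ? (void *)((char *)slot + *slot) : 0;
    }

    static inline void relptr_set(relptr32 *slot, void *target)
    {
        *slot = target ? (int32_t)((char *)target - (char *)slot) : 0;
    }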
<heat> linux also does
<geist> heat: for the kernel? yeah that's exactly what it does
<geist> physboot is going to switch to doing a full ELF load of the kernel here failure soon
<geist> most of the machinery is already in place for that
<geist> s/failure/fairly
<geist> haha freudian skip
<heat> i'm afraid I'll need to wrap my kernel in another kernel (yo I heard you like kernels) that knows the relocations and/or can pass them
<geist> slip. damnit, i have a cat rubbing against my arm which is messing me up
<geist> for KASLR?
<heat> yeah
<geist> yah, frankly im dubious of the need for it in a hobby kernel, much less a real one
<mrvn> heat: me too. A full ELF loader in the boot.S that jumps to higher half.
<heat> geist, my kernel is 99% things dubious for a hobby kernel
<heat> like flamegraphs
<geist> 👍
<mrvn> The question though is: Do you write the ELF loader to be 100% position independent or do you add a relocation stub to it?
<geist> on that note i'm going to go hack some code and close irc
<geist> i love you all to death but i've just burned 2 hours of the weekend on the internet
<heat> hacker
<heat> <3
<heat> mrvn, writing an elf loader to load another elf is silly
<heat> at that point you just use a blob
<mrvn> heat: what else would load an ELF?
<heat> why would you load an ELF?
<mrvn> KASLR
<heat> <heat> at that point you just use a blob
<mrvn> you want to reimplement all the relocation capabilities needed for KASLR in your own blob format?
<heat> processing vmkernel's relocs and making a new wrapper image with a blob is pretty standard
<heat> particularly the "new reloc format" bit. ELF relocations are not trivial and are slow
<heat> particularly if you have stupid amounts of them like you're bound to have for a large kernel
<mrvn> I don't see speed being relevant. It's a one time thing.
<heat> it's boot time and boot complexity
<heat> KISS
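For scale, the ELF path being argued about stays small if the kernel is linked -pie so that nearly every entry is R_X86_64_RELATIVE, which just means "*(base + offset) = base + addend". A sketch, assuming hypothetical linker-script symbols bracketing .rela.dyn and a link-time base of 0:

    #include <elf.h>
    #include <stdint.h>

    extern Elf64_Rela __rela_start[], __rela_end[];   /* assumed linker-script symbols */

    void apply_relative_relocs(uintptr_t base)        /* base = actual load address */
    {
        for (Elf64_Rela *r = __rela_start; r < __rela_end; r++) {
            if (ELF64_R_TYPE(r->r_info) == R_X86_64_RELATIVE)
                *(uint64_t *)(base + r->r_offset) = base + r->r_addend;
            /* any other type here means the link didn't go as assumed */
        }
    }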
vdamewood has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
<heat> somehow here in #osdev people reach the "what if we added another boot phase" conclusion too easily
<mrvn> My linux takes 50s to boot. I'm not concerned with time spent on relocations. :)
<heat> you can boot linux under 1s
<mrvn> Either way. You have a boot.S that then loads a blob or elf or whatever.
<heat> no, it's C
<mrvn> heat: for some reason dhcp takes 35+s in the KVMs to finish.
<heat> because doing KASLR in asm must be some degrading stuff
<mrvn> heat: C wouldn't be PIC. then you need another wrapper that relocates the boot code first.
<heat> C is PIC if you're careful enough
<heat> so be careful :)
<mrvn> very fragile.
<heat> tis life
<mrvn> last I tried, building the page tables would put absolute addresses into the table instead of generating them PC relative.
<heat> i have plenty of page table building code in C, fully PIC
<mrvn> lucky you. mine didn't.
<mrvn> on the other hand I can constexpr the page table.
<netbsduser> limine nicely provides KASLR support
<netbsduser> very convenient
<mrvn> does that support multiboot?
bgs has quit [Remote host closed the connection]
<mrvn> netbsduser: is the stivale boot protocol it uses a clone of multiboot?
AFamousHistorian has quit [Ping timeout: 260 seconds]
<netbsduser> mrvn: it can do multiboot but stivale is new
<mrvn> looks very much multiboot inspired at least
<netbsduser> the newest protocol is just called 'the limine protocol' and is defined for amd64 and aarch64 now: https://github.com/limine-bootloader/limine/blob/trunk/PROTOCOL.md
<bslsk05> ​github.com: limine/PROTOCOL.md at trunk · limine-bootloader/limine · GitHub
hmmmm has quit [Quit: Leaving]
<mrvn> So after implementing stivale, then doing it again as stivale2, they decided they'd still done a bad job of it and started fresh with 'the limine protocol'?
<netbsduser> it was a no-brainer for me because it provides you with a sane and sensible state on loading (for amd64: you are in long mode, loaded in the higher-half, there are appropriate page tables set up for this + a direct map of all main memory)
<mrvn> netbsduser: sure. Sounds wonderful. My worry would be that next month they switch to yet another boot protocol and the existing one gets bitrot.
fedorafansuper has joined #osdev
fedorafa_ has joined #osdev
<netbsduser> limine protocol v.s. stivale2 for amd64 at least appears to be very similar
<mrvn> How does that direct map work for ia32?
fedorafan has quit [Ping timeout: 252 seconds]
<mrvn> netbsduser: huh? limine protocol v.s. stivale2 are completely different.
<mrvn> request/response vs. tagged structs
<netbsduser> it doesn't, limine protocol has no definition for IA32 (i presume because the benefit of it v.s. multiboot is minimal for IA32 while for amd64 the benefits are clear)
fedorafansuper has quit [Ping timeout: 256 seconds]
<mrvn> stivale(2) has x86_64, IA32 and aarch64. No ARM support though.
<mrvn> I guess limine is out. No ARM support and I need that too.
<netbsduser> mrvn: the actual tags/requests appear very similar to me
<netbsduser> i might be missing something but it looks like a straightforward mechanical process to adapt stivale2 kernel to limine protocol
<mrvn> netbsduser: unless I'm mistaken in multiboot/stivale2 you set a bitfield and the bootloader gives you the address of a blob with tags. In limine protocol you put a bunch of request structs into your kernel (or a special section if you like) and the bootloader puts the address of a reply struct into each request it finds.
<mrvn> limine might be actually easier to parse since you don't have to walk through a blob of bytes extracting the tagged structs.
<mrvn> you could write something like: for (auto &req : requests) { if (req.reply) .... }
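Roughly what that looks like in C against limine's limine.h (where the field is named response rather than reply); treat this as a sketch of PROTOCOL.md, not a verbatim copy. The bootloader scans the loaded image for the magic-tagged request structs and fills in the response pointers before jumping to the entry point.

    #include <stddef.h>
    #include <stdint.h>
    #include <limine.h>

    static volatile struct limine_hhdm_request hhdm_request = {
        .id = LIMINE_HHDM_REQUEST,
        .revision = 0,
    };

    void kmain(void)
    {
        if (hhdm_request.response != NULL) {
            /* the direct map of all main memory starts here */
            uint64_t hhdm_base = hhdm_request.response->offset;
            (void)hhdm_base;
        }
    }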
fedorafan has joined #osdev
fedorafa_ has quit [Ping timeout: 256 seconds]
remexre has quit [Remote host closed the connection]
remexre has joined #osdev
_xor has joined #osdev
justmatt has quit [Quit: ]
AFamousHistorian has joined #osdev
__xor has quit [Ping timeout: 252 seconds]
justmatt has joined #osdev
justmatt has quit [Client Quit]
justmatt has joined #osdev
xenos1984 has quit [Read error: Connection reset by peer]
justmatt has quit [Client Quit]
justmatt has joined #osdev
<energizer> could there be a dynamic language for low-level systems programming? i can't think of one
<mrvn> forth
<mrvn> The amount of low-level system stuff you need for a kernel is miniscule. So you can use pretty much anything that can call a few asm functions.
justmatt has quit [Quit: ]
<energizer> afaict most dynamic languages are garbage collected, which doesn't seem all that suitable for systems programming
<kof123> asm was reflective in a way, von neumann machine. it is the extra layers (hardware and software) that strive to eliminate that
<kof123> wrote that before i saw you explain dynamic
<mrvn> GC is a bit of a problem. But you can do a microkernel with the core in asm / C / C++ and everything else as instances of your dynamic language as separate processes.
<mrvn> or have an incremental GC that runs a bit after every interrupt.
<energizer> i don't mean to define dynamic as 'uses gc' i'm just noticing that they tend to be that way for some reason
<mrvn> kof123: check out https://mirage.io/ if you know ocaml.
<bslsk05> ​mirage.io: Welcome to MirageOS
<moon-child> energizer: common lisp
<bslsk05> ​froggey/Mezzano - An operating system written in Common Lisp (180 forks/3313 stargazers/MIT)
<mrvn> energizer: Note that a modern GC takes about the same time as a modern malloc.
<sham1> Yeah, this whole "can't use a GC'd language for systems programming" is frankly BS. Even if we discount refcounts being a type of GC and focus on tracing instead
<moon-child> mrvn: tbf
<moon-child> malloc kind of sucks
<moon-child> but yes gc is great
<sham1> It's all about the impl
<mrvn> moon-child: malloc here stands for other memory management techniques.
<mrvn> For kernel work you just have to take a little bit of care the GC doesn't block IRQs for too long at a time.
xenos1984 has joined #osdev
<mrvn> If you have non-allocating IRQ handlers you can make the GC run with IRQs enabled.
<moon-child> sham1: the interface may constrain what you can implement. For instance, malloc(3) is necessarily prone to pathological fragmentation because it can't compact
<moon-child> mrvn: yeah, absent hard realtime gc I feel the solution is to just avoid allocating in isrs
<sham1> Having thread-local or in this case even core-local heaps would be helpful, because then another CPU going through a GC cycle most likely wouldn't bother the one currently handling an IRQ
<moon-child> though still have to bound pause times to avoid filling up queues. Still
<mrvn> sham1: run one kernel per core with very little sharing between cores.
<sham1> Eh, that also works
<moon-child> any remotely good gc will have thread-local nursery collections. But I don't think shared-nothing is the solution
<moon-child> as a lot of what's attractive about a gc is making concurrent algorithms work
<moon-child> for those things that you do have to share
<mrvn> then ocamls GC is not even remotely good. It's not multithreaded at all
<moon-child> I mean, ocaml only just recently got multithreading for the mutator, no? Give it time ;P
<moon-child> :P*
<mrvn> moon-child: yes. multicore ocaml is a WIP
<sham1> Even still. It's been, what, a decade or two
<mrvn> haven't checked recently, like last few years.
<sham1> I get that there's complexity there, but still.
<moon-child> python is _still_ completely single-threaded
<mrvn> isn't perl too?
<sham1> Guido cannot into threading
<sham1> Nor Larry
Left_Turn has joined #osdev
<moon-child> but at least he can yoooooooooooooouuuuuuuniiiiiiicoooooooode
<mrvn> "
<mrvn> Multicore OCaml project has now been merged into OCaml tada. This repository is no longer developed or maintained. Please follow the updates at the OCaml Github repository.
Turn_Left has quit [Ping timeout: 256 seconds]
<sham1> Good
_xor has quit [Ping timeout: 260 seconds]
<mrvn> Last commit to the repo is Dec. 9th 2022.
<mrvn> So it could be real recent or they just did a few updates to the repo post merge anyway.
_xor has joined #osdev
_xor has quit [Ping timeout: 260 seconds]
<mrvn> My feeling is that a lot of kernel stuff you want to do per core for one reason or another and it's no big hardship to make inter-core communications explicit. So you can run one instance of the language per core and they can use a much simpler GC.
_xor has joined #osdev
<moon-child> but then you need to use rcu or some such when you communicate
<moon-child> which fucking sucks
<moon-child> and you will screw it up
<moon-child> if you already have a gc, why not use it?
<mrvn> moon-child: ringbuffers work fine, or transferring ownership of pages when sending messages.
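A minimal C11 sketch of such a ring: single producer on one core, single consumer on another, so the only shared state is two indices plus the slots, and no lock or GC is involved in the handoff. Slot payloads could be the ownership-transferred pages mrvn mentions.

    #include <stdatomic.h>
    #include <stdint.h>

    #define RING_SLOTS 256   /* power of two */

    struct ring {
        _Atomic uint32_t head;      /* written only by the producer core */
        _Atomic uint32_t tail;      /* written only by the consumer core */
        void *slot[RING_SLOTS];
    };

    static int ring_push(struct ring *r, void *msg)   /* producer side */
    {
        uint32_t h = atomic_load_explicit(&r->head, memory_order_relaxed);
        uint32_t t = atomic_load_explicit(&r->tail, memory_order_acquire);
        if (h - t == RING_SLOTS)
            return 0;                                 /* full */
        r->slot[h % RING_SLOTS] = msg;
        atomic_store_explicit(&r->head, h + 1, memory_order_release);
        return 1;
    }

    static void *ring_pop(struct ring *r)             /* consumer side */
    {
        uint32_t t = atomic_load_explicit(&r->tail, memory_order_relaxed);
        uint32_t h = atomic_load_explicit(&r->head, memory_order_acquire);
        if (t == h)
            return 0;                                 /* empty */
        void *msg = r->slot[t % RING_SLOTS];
        atomic_store_explicit(&r->tail, t + 1, memory_order_release);
        return msg;
    }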
<mrvn> A multi-core GC has to use atomics and barriers and needs bits for different cores in the metadata and such. Much more complex and potentially a lot slower.
<mrvn> The thread-local nursery alone is a big jump in complexity.
<moon-child> you need a nursery to not be crap _anyway_
<moon-child> you need atomics and barriers if you want to have more than one core _anyway_. I'm not sure what you mean by 'bits for different cores in the metadata and such'. But that seems like the same sort of complexity as you see in a good concurrent malloc
<mrvn> sure, but that's a different kind. The thread local nursery has to migrate objects when you share them
<moon-child> you are speaking in very absolutist terms
<moon-child> you have to do _something_ when you share something in the nursery and have many threads. You also have to do _something_ when you share something in the nursery and have one thread
<mrvn> Most GCs do mark & sweep in some form and have some bits to color objects. With multiple cores you need per core bits for the color to make sure every core has marked an object.
<moon-child> it's not clear that the former is significantly more complex than the latter
<moon-child> 'With multiple cores you need per core bits for the color to make sure every core has marked an object' wat
<mrvn> Haeh? With a single core there is no sharing.
<moon-child> single threaded generational gc needs a write barrier to cope with old->new pointers
<mrvn> moon-child: yes, for mutables.
<mrvn> if you have them
<moon-child> my friend
<moon-child> when you are writing a kernel
<moon-child> how do you think you are going to get away without mutables?
<mrvn> ignoring hardware registers you can make everything else purely without mutables. It might not be the nicest but it's not required.
<moon-child> if everything is referentially transparent, what does it mean to share something with another core?
<moon-child> mrvn: mmio? Page mapping? Scheduling?
<moon-child> I mean
<moon-child> a kernel's whole job is to manage mutable structures
<mrvn> On a high level sharing then means nothing. But the GC runs below the language and will mutate. Or do you know of a GC that is referentially transparent?
<moon-child> yes, the gc also has to mutate. I was ignoring that for the sake of argument
<mrvn> mmio you don't have under GC control so you can have code to mutate them. Page tables you can make referentially transparent.
<mrvn> moon-child: that kind of ignores the main point. The GC mutating suddenly turns into mutations from multiple cores and that is what makes it difficult.
<mrvn> anyway, I'm not saying you must run one instance of the language with it's own GC per core. Just that it's not a big deal to do so. And that a single core GC can be written simpler and faster because it can use cheaper memory access.
<mrvn> So no big barrier for writing your OS in python for example.
<mrvn> one python per core and the GIL problem goes away.
janemba has quit [Ping timeout: 252 seconds]
terminalpusher has quit [Ping timeout: 260 seconds]
<moon-child> ...
<moon-child> I'm not really sure what you mean by 'can use cheaper memory access'. Nor 'mutations from multiple cores'. Broadly, I think you're assuming a lot about the gc design, whereas the design space is quite large, and it's not at all clear what would be optimal in a given case
srjek has joined #osdev
epony has quit [Read error: Connection reset by peer]
<mrvn> moon-child: if the GC mutates something and that needs to be visible on other cores (i.e. outside the per core nursery) then you need atomic access or memory barriers safeguarding it
Gooberpatrol66 has quit [Ping timeout: 255 seconds]
<mrvn> and any shared object you have to mark as alive from multiple cores so you have to do that carefully and ensure all cores have done so before you free anything.
<mrvn> "it's not at all clear what would be optimal in a given case" which is the extra complexity I mentioned.
<moon-child> I stand by what I said
<mrvn> A multi core GC simply has to consider more design and architecture problems.
<moon-child> I agree with that
<mrvn> sounds like we agree in principle
<moon-child> I disagree with most of the rest of what you said, for the reasons I mentioned
<moon-child> and: in general, adding essential complexity to core infrastructure allows you to remove it from everywhere else, so it is not obviously a loss
<mrvn> didn't say it would be
<mrvn> "it's no big hardship"
<ornx> why do i get a fault when i assign ss to a non-null descriptor in long mode
<ornx> but i don't get a fault when i assign it to a null descriptor
<mrvn> wrong CPL? Not a data segment?
janemba has joined #osdev
<mrvn> setting ss to a null descriptor should fault on the next interrupt or syscall
<ornx> cpl/rpl is 0, it looks like a data segment ($3 = {limit1 = 0, base1 = 0, base2 = 0, field1 = 10010000, field2 = 0, base3 = 0})
<mrvn> or function call in generall
<ornx> in long mode?
<ornx> i thought it ignored all of the segmentation stuff
<mrvn> not the cpl/rpl and executable bit and such
<ornx> (hence why base/limit are 0 in that descriptor)
<mrvn> if it ignored all of it then why would you have to set them at all?
<mrvn> not sure what bits ss checks but the specs should say
<ornx> it's not technically necessary i suppose but it seems odd to just leave it as whatever the bootloader set it to rather than setting it to a known state
<mrvn> indeed
<mrvn> potentially the bootloader may never have used a stack and never initialized ss.
<mrvn> Check if setting ss to null is OK. I would have expected it to cause a problem on the next use. But I might be wrong.
<mrvn> did you set ds/es/fs/gs?
<ornx> it seems to work fine - my code sets all the segment registers and then calls some functions to draw a picture on the framebuffer in c, so it is probably using the stack for this
<ornx> yeah, it sets all of them and then far returns to set cs
<ornx> hm actually, i set cs to a code segment, ss to the data segment i pasted earlier, but the rest i set to null
<mrvn> The manual has a section about how to initialize long mode that should tell you in detail how to setup the segment registers.
<ornx> i'm already in long mode actually, it goes EFI -> GRUB -> multiboot2, and then i'm in EFI AMD64 mode with EFI boot services still enabled
<mrvn> if setting ss to a data segment at boot works but fails later then that sounds like you are not setting it to a data segment later. Maybe mixed up the number? Or corrupted the descriptor?
<moon-child> I suggest telling boot services to go away before doing anything
<moon-child> does it work if you set ds/es/fs/gs to that descriptor? ss is the only one that's screwed up?
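For reference, a sketch of the usual long-mode segment reload, assuming a GDT where selector 0x08 is a 64-bit code descriptor and 0x10 a writable data descriptor. Two things worth checking against the gdb dump above: in 64-bit mode a null SS is legal below CPL 3, which is why the null load doesn't fault; and field1 = 10010000 decodes to access byte 0x90, a read-only data segment, while SS requires the writable bit (0x92 rather than 0x90), which would explain the fault on the non-null load.

    static void reload_segments(void)
    {
        __asm__ volatile(
            "movw $0x10, %%ax\n\t"
            "movw %%ax, %%ss\n\t"     /* #GP unless 0x10 is writable data
                                         with DPL == CPL == RPL */
            "movw %%ax, %%ds\n\t"
            "movw %%ax, %%es\n\t"
            "pushq $0x08\n\t"         /* new cs */
            "leaq 1f(%%rip), %%rax\n\t"
            "pushq %%rax\n\t"
            "lretq\n\t"               /* far return pops rip, then cs */
            "1:"
            ::: "rax", "memory");
    }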