klange changed the topic of #osdev to: Operating System Development || Don't ask to ask---just ask! || For 3+ LoC, use a pastebin (for example https://gist.github.com/) || Stats + Old logs: http://osdev-logs.qzx.com New Logs: https://libera.irclog.whitequark.org/osdev || Visit https://wiki.osdev.org and https://forum.osdev.org || Books: https://wiki.osdev.org/Books
housemate has joined #osdev
<kof673> whether RPN is convenient for people to use is another story
raphaelsc has joined #osdev
housemate has quit [Quit: Nothing to see here. I wasn't there. I take IRC seriously.]
frkazoid333 has joined #osdev
netbsduser has joined #osdev
chiselfuse has quit [Ping timeout: 264 seconds]
chiselfuse has joined #osdev
goliath has quit [Quit: SIGSEGV]
housemate has joined #osdev
eddof13 has quit [Quit: eddof13]
housemate has quit [Quit: Nothing to see here. I wasn't there. I take IRC seriously.]
thinkpol has quit [Remote host closed the connection]
thinkpol has joined #osdev
housemate has joined #osdev
housemate has quit [Remote host closed the connection]
cloudowind has joined #osdev
troseman has joined #osdev
troseman has quit [Client Quit]
troseman has joined #osdev
troseman has quit [Client Quit]
eddof13 has joined #osdev
eddof13 has quit [Quit: eddof13]
housemate has joined #osdev
heat has quit [Ping timeout: 248 seconds]
eddof13 has joined #osdev
raphaelsc has quit [Remote host closed the connection]
housemate has quit [Quit: Nothing to see here. I wasn't there. I take IRC seriously.]
housemate has joined #osdev
housemate has quit [Max SendQ exceeded]
eddof13 has quit [Quit: eddof13]
eddof13 has joined #osdev
housemate has joined #osdev
housemate has quit [Max SendQ exceeded]
eddof13 has quit [Client Quit]
housemate has joined #osdev
housemate has quit [Max SendQ exceeded]
housemate has joined #osdev
housemate has quit [Max SendQ exceeded]
Dead_Bush_Sanpai has quit [Read error: Connection reset by peer]
eddof13 has joined #osdev
Dead_Bush_Sanpai has joined #osdev
housemate has joined #osdev
housemate has quit [Max SendQ exceeded]
housemate has joined #osdev
housemate has quit [Max SendQ exceeded]
annamalai has quit [Remote host closed the connection]
housemate has joined #osdev
housemate has quit [Client Quit]
solaare has quit [Ping timeout: 246 seconds]
eddof13 has quit [Quit: eddof13]
qubasa has quit [Ping timeout: 252 seconds]
eddof13 has joined #osdev
edr has quit [Quit: Leaving]
qubasa has joined #osdev
housemate has joined #osdev
housemate has quit [Max SendQ exceeded]
<geist> oh this looks educational: https://youtu.be/SO83KQuuZvg
<geist> (Sebastian Lague rendering TrueType fonts)
Opus has quit [Quit: .-.]
<zid> yea I liked that one
<zid> apparently microsoft have software patents on using quadratic curves
<zid> so, gj microsoft
<zid> shame your font rendering is still fuck ugly
Opus has joined #osdev
solaare has joined #osdev
eddof13 has quit [Quit: eddof13]
gildasio has quit [Remote host closed the connection]
gildasio has joined #osdev
eluks has quit [Remote host closed the connection]
eluks has joined #osdev
<cloudowind> mur would be interested with that
<geist> this one he kinda gets off in the weeds trying to do it with a shader, not sure if that's a realistic use of it
<geist> but it does beg the question how precisely do modern font renderers render
<zid> Using a shader is good, but I think trying to do it in realtime isn't
<zid> unless you wanna use SDFs or something
<geist> right
<kazinsal> I think SDFs are used for a lot of in-engine text rendering in games
<kazinsal> iirc Valve somewhat pioneered it in TF2 and it took about a decade before other engine devs went "oh, *wow*" and started doing it as well
<geist> hmm, what's a SDF?
<geist> besides a large space battleship that turns into a mecha
<zid> signed distance fields
housemate has joined #osdev
<zid> valve published a paper on it
<geist> gotcha
<kazinsal> signed distance function/field -- basically instead of pre-rendering glyphs at a given resolution and scaling from there you render a representation of the distance at a given "pixel" on a map from the boundary of a glyph
<kazinsal> it lets you pre-calculate some of the more complicated math for scaling glyphs without needing to render each glyph at some absurd size
housemate has quit [Max SendQ exceeded]
<zid> Instead of a 1bit b&w image, you encode distance to the edge in greyscales
<zid> which makes it a lot more accurate, naturally
<kazinsal> https://docs.unity3d.com/Packages/com.unity.textmeshpro@4.0/manual/FontAssetsSDF.html -- decent visual comparison of how SDF rendering works
<bslsk05> ​docs.unity3d.com: About SDF fonts | TextMeshPro | 4.0.0-pre.2
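The SDF idea zid and kazinsal describe can be sketched in a few lines. This is a hypothetical toy, not how any real renderer works: bake signed distances to the edge of a circle "glyph" into a small grid (the grayscale image zid mentions), then recover a crisp edge at any sample point by interpolating and thresholding.

```python
# Toy signed-distance-field "glyph": a filled circle of radius r.
# Real SDF text rendering (e.g. Valve's approach) bakes distances into
# a texture; this only shows the encode/sample/threshold idea.

def sdf_circle(x, y, cx=8.0, cy=8.0, r=5.0):
    # Signed distance: negative inside the shape, positive outside.
    return ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 - r

# "Bake" a 16x16 distance field (would be a grayscale texture on a GPU).
field = [[sdf_circle(x, y) for x in range(16)] for y in range(16)]

def sample(u, v):
    # Bilinear interpolation between the four surrounding texels --
    # this is what makes an SDF scale better than a 1-bit bitmap.
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    a = field[y0][x0] * (1 - fx) + field[y0][x0 + 1] * fx
    b = field[y0 + 1][x0] * (1 - fx) + field[y0 + 1][x0 + 1] * fx
    return a * (1 - fy) + b * fy

def covered(u, v):
    # Thresholding at 0 reconstructs the glyph edge at any magnification.
    return sample(u, v) <= 0.0

print(covered(8.0, 8.0), covered(1.0, 1.0))  # True False
```

Thresholding in a shader also makes effects like outlines and soft shadows nearly free, since they are just different thresholds on the same field.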
griddle has joined #osdev
gildasio has quit [Ping timeout: 264 seconds]
gildasio has joined #osdev
xenos1984 has quit [Read error: Connection reset by peer]
op has joined #osdev
xenos1984 has joined #osdev
eddof13 has joined #osdev
eddof13 has quit [Quit: eddof13]
solaare has quit [Ping timeout: 265 seconds]
solaare has joined #osdev
craigo has quit [Quit: Leaving]
griddle has quit [Quit: griddle]
housemate has joined #osdev
koon has left #osdev [#osdev]
housemate has quit [Remote host closed the connection]
op has quit [Remote host closed the connection]
hwpplayer1 has joined #osdev
GeDaMo has joined #osdev
hwpplayer1 has quit [Read error: Connection reset by peer]
housemate has joined #osdev
PapaFrog has quit [Quit: ZNC 1.8.2+deb3.1+deb12u1 - https://znc.in]
PapaFrog has joined #osdev
housemate has quit [Quit: Nothing to see here. I wasn't there. I take IRC seriously.]
kof673 has quit [Quit: q]
goliath has joined #osdev
Dead_Bush_Sanpai has quit [Quit: Dead_Bush_Sanpai]
stolen has joined #osdev
muffin has joined #osdev
muffin has quit [Quit: Reconnecting]
muffin has joined #osdev
Dead_Bush_Sanpai has joined #osdev
muffin has quit [Ping timeout: 260 seconds]
msv has quit [Remote host closed the connection]
Arthuria has joined #osdev
edr has joined #osdev
rayan has joined #osdev
<rayan> hey
rayan is now known as rayanmargham
<rayanmargham> whoops fixed my nick
housemate has joined #osdev
rayanmargham has quit [Ping timeout: 244 seconds]
<nikolar> lol
eddof13 has joined #osdev
netbsduser has quit [Remote host closed the connection]
netbsduser has joined #osdev
teardown has quit [Ping timeout: 264 seconds]
teardown has joined #osdev
eddof13 has quit [Quit: eddof13]
housemate has quit [Quit: Nothing to see here. I wasn't there. I take IRC seriously.]
foudfou has joined #osdev
heat has joined #osdev
<heat> kern
<mcrod> hi
<Ermine> kern.info
<mcrod> i haven't been here in a long while
<mcrod> this is because I usually fuck with heat in discord DMs
<Ermine> oh hi
<Ermine> say gex!
<mcrod> wtf is gex
<heat> WE DO BE FUCKIN
<heat> HARD
<ring0_starr> i think it's an n64 video game character that's an anthropomorphic gecko
<heat> sysctl is LAME, procfs is BASED
<heat> just one more file bro please please just one more file bro i swear it'll fix everything i swear it'll fix it please bro just one more file
<stolen> Me and my friend are embarking on the holy journey of creating an OS just to learn more about it. Though he uses a Mac and I use Windows, should we set up the environment in docker?
<heat> meet halfway and use linux
<heat> you're welcome
<ring0_starr> are you going to make a holy C too
eddof13 has joined #osdev
<heat> seriously windows as an osdev environment is awful unless you strictly use wsl
<ring0_starr> when i started os dev, i was using bochs on windows, lol
<mcrod> min-heaps are cool
<heat> i know buddy you told me that already
<mcrod> i didn't tell anyone else
<heat> i am everyone.
<mcrod> i dunno though, i don't like the jumping around in the array but I guess you have to
<mcrod> it's a tree after all
hwpplayer1 has joined #osdev
<heat> much nicer to jump around in an array than following pointers
<heat> even though you're still branching
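The "jumping around in the array" mcrod mentions is just index arithmetic: for a node at index i, the parent is (i - 1) // 2 and the children are 2*i + 1 and 2*i + 2, so the tree lives in one flat allocation with no pointers to chase. A minimal sketch:

```python
# Array-backed min-heap: the implicit tree heat and mcrod discuss.
# No pointer chasing; parent/child moves are pure index arithmetic.

def heap_push(h, item):
    h.append(item)
    i = len(h) - 1
    while i > 0:
        parent = (i - 1) // 2
        if h[parent] <= h[i]:
            break
        h[parent], h[i] = h[i], h[parent]  # sift up toward the root
        i = parent

def heap_pop(h):
    h[0], h[-1] = h[-1], h[0]
    smallest = h.pop()
    i = 0
    while True:
        l, r = 2 * i + 1, 2 * i + 2
        m = i
        if l < len(h) and h[l] < h[m]:
            m = l
        if r < len(h) and h[r] < h[m]:
            m = r
        if m == i:
            break
        h[i], h[m] = h[m], h[i]  # sift down toward the leaves
        i = m
    return smallest

h = []
for v in [5, 1, 4, 2, 3]:
    heap_push(h, v)
print([heap_pop(h) for _ in range(5)])  # [1, 2, 3, 4, 5]
```

The array layout is also why heaps are cache-friendlier than a pointer-based tree: siblings and near ancestors tend to share cache lines.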
<mcrod> i wish perf could be live too
<mcrod> but that'll never happen
<heat> what
<mcrod> some dude tried to make a patch for a live perf but never got through
<heat> live perf?
<bslsk05> ​lwn.net: perf: 'live mode' [LWN.net]
<heat> try perf stat
<mcrod> yeah but I can't see the results as they're going on
<Ermine> mcrod: swap s and g in 'say gex'
<mcrod> ah
<heat> COMEDY
<heat> hey chat did we do a funny
<mcrod> wait a minute
<mcrod> I wonder if I can do perf stat -o stdout
<heat> no
<mcrod> god dammit
<\Test_User> mcrod: can use -I to get it to print regularly
<mcrod> oooo
<mcrod> yeah I just think for what I'm doing it might be better to get some results in "real time"
<mcrod> give me something every second or so..
<Ermine> sysctl is LAME, procfs is BASED --- seems like sysctl is just echo $value > /proc/sys/$key
<heat> not on the BSDs
<mcrod> but
<mcrod> i am currently limited to my macbook
<mcrod> this is because the new mobo I ordered arrived with bent pins.
<mcrod> so...
<heat> Traditionally sysctl is a syscall, it also used to be a syscall on linux but got deprecated and killed off
<ring0_starr> meanwhile procfs is the optional part on bsd...
<ring0_starr> you dont even need procfs to run htop anymore
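Ermine's observation about Linux can be put in code: the sysctl key-to-path translation is just dots to slashes under /proc/sys (whereas, as heat notes, on the BSDs sysctl is a syscall and no such path exists). The helper names below are hypothetical, not a real API:

```python
# Ermine's point: on Linux, `sysctl kernel.hostname=foo` is equivalent
# to `echo foo > /proc/sys/kernel/hostname`. read_sysctl/write_sysctl
# are illustrative names, not a real library interface.

def sysctl_path(key):
    return "/proc/sys/" + key.replace(".", "/")

def read_sysctl(key):
    with open(sysctl_path(key)) as f:
        return f.read().strip()

def write_sysctl(key, value):
    # Needs root; same effect as `echo $value > /proc/sys/$key`.
    with open(sysctl_path(key), "w") as f:
        f.write(str(value))

print(sysctl_path("net.ipv4.ip_forward"))  # /proc/sys/net/ipv4/ip_forward
```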
<mcrod> oh yeah speaking of the mobo
<mcrod> i'm sure some of you remember the mobo memory training to death
<mcrod> the old one
<mcrod> the evidence is mounting that I probably just needed to use QVL listed RAM in dual-channel
<mcrod> i could've saved myself a lot of trouble
<stolen> honestly i'm just afraid to ask anything here hehe, cause i can already hear RTFM haha
<mcrod> it's IRC, 99% of the time it's a "RTFM" echo chamber
<mcrod> even though if it were always that simple, these channels wouldn't exist
<mcrod> so go ahead
<stolen> so for the OS, I am only targeting the kernel, so I can just put GNU on top of it right? or does the development need to follow some structure for GNU?
<Ermine> what does GNU mean here?
<ring0_starr> i assume he means hurd userland
<Ermine> then you need to write a kernel and port glibc and stuff to it
<ring0_starr> stolen: I'd just like to interject for a moment. What you've been referring to as Linux, is in fact, GNU/Linux, or as I've recently taken to calling it, GNU + Linux.
<Ermine> unless you're going to be binary compatible with linux
<stolen> right, Linux is just the kernel and GNU is on top of it... but was GNU there when Torvalds developed it?
<ring0_starr> no
<ring0_starr> well
<mcrod> Linux is just a kernel
<mcrod> GNU is a userland
<mcrod> that's basically it
<ring0_starr> linux's first release is after hurd's initial release
<mcrod> there is nothing stopping you from throwing a BSD userland on Linux
<ring0_starr> i doubt penguin man created it with the gnu corelibs and such in mind
<mcrod> and, in fact, https://chimera-linux.org/
<bslsk05> ​chimera-linux.org: Chimera Linux
<ring0_starr> Debian/kFreeBSD
<Ermine> when Linus made Linux, he ported glibc and utilities and stuff so they would work on linux
<mcrod> fuck glibc
<mcrod> :(
CryptoDavid has joined #osdev
<stolen> so GNU come first in the chronology hmm...
<heat> yep
<heat> but yes you can put "GNU" on top of the kernel
<heat> in various capacities
<ring0_starr> isn't it peak irony though?
<stolen> why?
<ring0_starr> GNU is supposed to be microkernel based, and here penguin man gets into a big flame war with his professor shitting all over microkernels, and then he steals GNU to use for his monolithic kernel
<heat> GNU is... not really supposed to be microkernel based
<heat> it was, at one point
<ring0_starr> meanwhile stallman is sitting there scratching his ass eating toejam still without a kernel to this day
<heat> but GNU software was AFAIK always portable
<heat> and early BSDs already had GNU stuff and used gcc and gnu ld and all that
<heat> old glibc even had several ports including svr4 and the bsds and SunOS
<heat> the choice between "portable" and "non-portable" is pretty obvious to me, if you're just looking to write a kernel and not a whole userspace
<heat> contrast that with openbsd where you need to watch out for pledge() and unveil() calls and all that shit
<ring0_starr> strlcpy too
<ring0_starr> the bastards
<bslsk05> ​blog.gnoack.org: The feasibility of pledge() on Linux · blog.gnoack.org
<heat> i've thought about pledge on linux
<heat> it can be done with seccomp but purely in the libc
<heat> and yeah as they state in the article non-direct args are problematic, but this is true for all seccomp filters
Matt|home has quit [Read error: Connection reset by peer]
<ring0_starr> when you control both the kernel and libc you can do a lot more
<ring0_starr> there's a reason why apple computers (used to, at least) be known as impressive, it's because they maintain a high degree of control over each component that makes up the entire product
<heat> uhhhhhhhhhh i disagree
<ring0_starr> many limitations that exist only exist for interoperability
<heat> i mean
<heat> i agree, but not in this case
<ring0_starr> you mean about pledge()?
<heat> yes
<heat> yes, openbsd does pledge in the kernel, no it doesn't need to be done that way
<ring0_starr> you need to do pledge in the kernel
<heat> you do not
<ring0_starr> otherwise it can be bypassed by the raw int 0x80/syscall/svc instruction
<heat> oh im not talking about *that* kind of userspace pledge
<heat> but if seccomp was a little better (and maybe with the help of an LSM), you could quite easily blit those opcodes out in the libc
<heat> because the libc knows what paths it needs and what syscalls it uses for various features
<ring0_starr> by blit out what do you mean?
<heat> codegen
<heat> libc could codegen some seccomp bpf filter, ezpz
<ring0_starr> ahh
<heat> as long as you give the libc enough tools you don't need to do a layer violation
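heat's "libc codegens a seccomp filter" idea can be sketched as pure data plumbing: map pledge(2) promise strings to syscall allowlists, union the requested sets, and (on a real system) emit a BPF program that kills the process on anything else. The promise names below come from OpenBSD, but the syscall sets are illustrative, not the real ones, and the BPF generation step is omitted:

```python
# Sketch of a userspace pledge() built on seccomp. A real libc would
# compile `allowed` into a seccomp-BPF filter comparing syscall numbers
# and returning SECCOMP_RET_KILL_PROCESS on a miss; here we just model
# the promise -> allowlist union. Syscall sets are made up for the demo.

PROMISES = {
    "stdio": {"read", "write", "close", "fstat", "exit_group"},
    "rpath": {"open", "openat", "stat", "readlink"},
    "inet":  {"socket", "connect", "sendto", "recvfrom"},
}

def pledge_allowlist(promises):
    allowed = set()
    for p in promises.split():
        allowed |= PROMISES[p]  # unknown promise -> KeyError, like EINVAL
    return allowed

def syscall_permitted(allowed, name):
    # Stand-in for the in-kernel filter consulting the compiled BPF.
    return name in allowed

allowed = pledge_allowlist("stdio rpath")
print(syscall_permitted(allowed, "read"))     # True
print(syscall_permitted(allowed, "connect"))  # False
```

As the channel notes, this covers syscall numbers and direct arguments only; path-based restrictions are exactly where plain seccomp filters fall short.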
Left_Turn has joined #osdev
<ring0_starr> what about syscall keys
<heat> syscall keys?
<ring0_starr> like you mandate that a certain register is some dynamically generated per process or per thread cookie
<ring0_starr> shellcode would need that information in order to execute raw syscalls
<ring0_starr> that would eliminate any syscalling from not-libc
<heat> what would the point of that be?
<ring0_starr> then pledge could be usermode only without the use of a sandbox
<heat> that doesn't solve the path problem
<heat> (and that's also severely limiting)
<stolen> anyone here worked with Rust for OS? didn't find much mention of rust on the wiki
<heat> some people yeah
* mcrod clears throat
<mcrod> RUUUUUUUUUUUUUST
<mcrod> do we still do that here
<heat> sometimes
griddle has joined #osdev
Turn_Left has joined #osdev
Left_Turn has quit [Ping timeout: 265 seconds]
eddof13 has quit [Quit: eddof13]
eddof13 has joined #osdev
Arthuria has quit [Ping timeout: 252 seconds]
goliath has quit [Quit: SIGSEGV]
* Ermine drinks water
<Ermine> RUUUUUUUUUUUUUUUUUUUUUUUST
<nikolar> ew
<Ermine> re pledge
<Ermine> jart iirc made a library that emulates pledge on top of seccomp
msv has joined #osdev
netbsduser has quit [Ping timeout: 248 seconds]
goliath has joined #osdev
xenos1984 has quit [Ping timeout: 248 seconds]
xenos1984 has joined #osdev
CryptoDavid has quit [Quit: Connection closed for inactivity]
craigo has joined #osdev
jjuran_ has joined #osdev
jjuran has quit [Read error: Connection reset by peer]
jjuran_ is now known as jjuran
xenos1984 has quit [Ping timeout: 252 seconds]
griddle has quit [Quit: griddle]
goliath has quit [Quit: SIGSEGV]
<zid> heat Q for you
<zid> Tangentially related, how does the rcu grace period actually work? I can't find any hard info on it
<zid> Does it just wait for all rcu readers anywhere to finish? What actually fires off rcu delayed free?
xenos1984 has joined #osdev
griddle has joined #osdev
griddle has quit [Quit: griddle]
griddle has joined #osdev
<nikolar> quiescent states and such
<nikolar> (I don't get it either)
stolen has quit [Quit: Connection closed for inactivity]
goliath has joined #osdev
eddof13 has quit [Quit: eddof13]
Arthuria has joined #osdev
X-Scale has joined #osdev
Arthuria has quit [Ping timeout: 272 seconds]
hwpplayer1 has quit [Read error: Connection reset by peer]
hwpplayer1 has joined #osdev
eddof13 has joined #osdev
hwpplayer1 has quit [Quit: see you]
frkazoid333 has quit [Read error: Connection reset by peer]
frkazoid333 has joined #osdev
<geist> i assume by default it's when nothing has a hold of anything, but then the question is what happens if some list/etc is in a perpetually busy state
<geist> i remember someone at work saying that's the general weakness of it, even in linux if the data structure is constantly in use it can basically grow to fill up all of ram
<geist> ie, it works best on data structures that are mostly read only
Arthuria has joined #osdev
<heat> zid, this is all stuff that fires on a preemption point
eddof13 has quit [Quit: eddof13]
<zid> and what's one of those? syscall and/or timer expiration?
<heat> syscall exit or schedule in/out point
<zid> so.. yes?
<heat> basically what you want to do is "this is a batch of $things_we_need_to_do for this GP. once every cpu sees this GP (i.e clears its cpu from a cpu mask), we can execute all of this"
<heat> timer expiration is completely unrelated because you can have a timer expire while under RCU
<zid> how do RCUs affect pre-emption points?
<zid> "What counts as a pre-emption point?" "Not timers, because timers can happen while under RCU" makes no sense to me
SGautam has joined #osdev
PapaFrog has quit [Quit: ZNC 1.8.2+deb3.1+deb12u1 - https://znc.in]
<heat> RCU *literally or conceptually* (answer depends on the RCU variant and your kernel and if linux, your preemption model) disables preemption
PapaFrog has joined #osdev
<heat> conceptually you can think of rcu_read_lock() and rcu_read_unlock() as preempt_disable() and preempt_enable()
<zid> right but we're talking about when nobody has an rcu thingy outstanding, *and* I wasn't asking that
<heat> you can get IRQs but you cannot get preempted, you can't hit the scheduler
<zid> I asked when pre-emption points *can* happen
eddof13 has joined #osdev
<geist> preempt_enable()
<geist> ie, when you go back to being preemptable, you can check if any preemption is queued up
<zid> right, but when is that check performed, was the question
<heat> (when some higher prio thread got woken up OR scheduler timer slice expired) AND preempt is enabled
<zid> and my guess was syscalls, and scheduler points
<heat> no you said timer expiration
<zid> that's..entirely what I meant? Note I know *nothing* about linux's architecture and won't be talking in terms of it
<geist> i guess the main reason for the preempt disable/enable is primarily to pin it on the current cpu, which is where the queue would live
<heat> but if you truly meant "scheduler is acting upon timer slice being oopsie", then yes
<zid> yes I did, thanks
<heat> geist, there's a preemptible RCU variant that's a little higher overhead on the read side but allows for preemption under read_lock
<geist> but even that, can it always make forward progress at these points?
<heat> it's what runs on CONFIG_PREEMPT=y kernels these days
<heat> >can it always make forward progress at these points?
<heat> no and that's called an RCU stall
<geist> right, and it stalls precisely when?
<geist> something still is in use somewherE?
<geist> er i mean at these points what would cause it to stall
<geist> (thanks for these answers, easier to ask someone that has worked with it practically than to try to grok reading it)
<heat> preemption is disabled for too long, irqs are disabled for too long, softirqs are disabled for too long
<geist> ah okay
<heat> on PREEMPT=n kernels preemption is always disabled so the answer there could also be "you've just looped for too long without rescheduling"
<geist> i guess my question is when it does get around to checking (preemption point, etc) does it always fully GC everything that needs to be GCed?
<heat> they try somewhat hard to sprinkle some "resched if needed" breaks in big loops for these kernels, as AFAIK most server stuff runs PREEMPT=n
<geist> or is there some condition that may cause it to skip this time around because stuff is in use?
<heat> wdym
<geist> i guess i dont fully grok the GC part of the RCU mechanism
<geist> as in what does it *do* when it gets a chance to GC (or reclaim or whatever its called)
<geist> i'm looking for cases where garbage can grow uncontrollably
<geist> telling me to RTFM is fine, i'm just trying to avoid reading the *linux source code* for RCU knowledge
<geist> for reasons
<nikolar> I don't think rcu deals with GC
<heat> it will only grow uncontrollably when you have an RCU stall
<heat> also yeah GC is a little loose term, but yes in practice it's GC
<geist> okay so i guess that means it always makes forward progress as long as you get a chance to service the cleanup phase
<heat> kfree_rcu and such
<heat> yeah
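heat's earlier description (a batch of deferred frees per grace period, plus a CPU mask that each CPU clears when it passes a quiescent state) can be modeled in a single-threaded toy. This is a conceptual sketch of the bookkeeping only, with none of the real concurrency:

```python
# Toy grace-period model: callbacks queued with call_rcu() run only
# after every CPU has passed a quiescent state (a preemption point,
# where it cannot be inside an rcu_read_lock section). If a CPU never
# reports one, the batch only grows -- that's the RCU stall case.

class GracePeriod:
    def __init__(self, ncpus):
        self.cpu_mask = set(range(ncpus))  # CPUs yet to pass a QS
        self.callbacks = []                # e.g. deferred kfree()s

    def call_rcu(self, cb):
        self.callbacks.append(cb)

    def quiescent_state(self, cpu):
        # Called from a preemption point on `cpu`.
        self.cpu_mask.discard(cpu)
        if not self.cpu_mask:
            for cb in self.callbacks:      # GP over: reclaim the batch
                cb()
            self.callbacks = []

freed = []
gp = GracePeriod(ncpus=2)
gp.call_rcu(lambda: freed.append("old_node"))
gp.quiescent_state(0)
print(freed)  # [] -- CPU 1 might still hold a reference under rcu_read_lock
gp.quiescent_state(1)
print(freed)  # ['old_node'] -- every CPU passed a QS, safe to free
```

This also shows geist's forward-progress point: as long as every CPU eventually reaches a quiescent state, each batch drains completely; garbage only accumulates while some CPU is stuck.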
eddof13 has quit [Quit: eddof13]
<geist> usually i think of things as always being preempt, because no kernel i've ever built or worked on has ever been not the equivalent of full CONFIG_PREEMPT
<heat> although AFAIK in practice I believe you kind of can get into tough spots with RCU and low memory situations, it's important to make sure none of your RCU code does any sort of allocation
<geist> that was a result of starting my kernel learning discovery with BeOS, which is a klunky kernel for a lot of modern reasons, but it was 100% fully preemptable
<heat> was XNU fully preemptible?
<geist> dunno. good question no idea
<geist> was playing with BeOS recently again on an old dual p3, triple booting between win98 and win2k. on the same hardware you really see why it shined at the time
<geist> boots in like 5 seconds and the others sit around and grind for 45 seconds
eddof13 has joined #osdev
<nikolar> I mean windows still sits around and grinds for minutes after showing you the desktop
<geist> but you see the FS is pretty slow in that it really aggressively syncs stuff to disk, you actually hear it hitting the disk on every keypress in vim, presumably because it's writing out the .swp file
<nikolar> Oh no lol
<heat> lol
griddle has quit [Quit: griddle]
<geist> it wasn't that the FS was synchronous, it just had a very short flush window
<zid> I mean, flush it if you got it
<geist> i think on purpose, idea is get your shit out to disk because that's what you want. MacOS has the same philosophy
<zid> no point waiting around with a character in a buffer if the system is idle
<geist> if it's brown, flush it down
<heat> i do 5s
<heat> linux by default is also around 5-10s
<zid> the buffer should be there so that once you try to flush it 24/7 back-to-back, the writes can just get bigger
<zid> to try keep perf
<geist> that being said, though befs is pretty sophisticated by modern standards, somewhere > ext4 feature wise but still traditional ffs style
<heat> oh there's going to be a new preadv2 flag called UNCACHED or DONTCACHE or (they haven't figured out the name quite yet)
<geist> the cache was pretty cheezy by modern standards. no file based caching, it's a traditional 'carve off 10% of memory for block cache'
<heat> that does somewhat of a cached read or write but throws away the page when the IO ends
<geist> the beos VM is extremely basic, no mmap(), etc
<heat> oh 10% of memory for the block cache is horribly small
<geist> oh i made up that number, but it's one of those sort of things
<heat> yeah
<heat> ye old unix
<geist> it computes some minimum and maximum at boot, and then lets it grow/shrink to that
<geist> i think there's some way for the VM to tell it to flush stuff in a pinch
eddof13 has quit [Client Quit]
<geist> but the point is it caches at the block level, not at the file level. there's no integrated file cache/vm
<heat> yeah that sucks
<geist> it's fairly usable if your VM doesn't do mmap, just means every time you read a file you have to drill through the fs layer though to at least compute the block in the cache
<heat> it's usable, just slower
<heat> do less work if you can
<geist> yah even at the time BSDs had a fs_strategy() call that basically requests the fs give the block map for a particular offset so the VM can directly IO the disk
<geist> i think macos still uses that style in their fs api
<heat> linux is a little bipolar here
<geist> yeah linux fs layer is unlike anything else
<heat> they're slowly doing away with the buffer cache in general
<heat> as in the old buffer_head stuff that resembles unix struct buf
<heat> it turns out maintaining a large structure for every block in the filesystem is slow and memory expensive
<heat> iomap is the swanky new thing
<geist> there's still always that problem of caching metadata though (NT of course solved that by making every piece of metadata also a VM cachable file)
<heat> yes and honestly there's no good solution yet
<heat> if you ask someone involved they'll say "uhhhhh uhhhhh uhh write your own we'll eventually port xfs's buffer cache maybe idk"
<geist> yeah i think fuchsia we also do a 'big vmo that represents the entire device' for metadata caching
<geist> reminds me i really do need to sit down and try to grok the fuchsia fs. we hired a guy out of apple in australia a few years ago that's been working on it directly the whole time
<heat> but even if you end up using the buffer cache for metadata you skip most of the disk space (used by files)
<geist> i haven't really talked to him much, but it's quite sophisticated and unlike anything else i know
<heat> there are no public docs :(
<heat> >it's quite sophisticated
<heat> oh no!
<geist> indeed. not sure there are private docs
<geist> well probably some early ones. its also written in rust
<heat> no private docs?? are you sure you didn't hire a NTFS guy accidentally
<heat> yeah i vaguely remember some horrible serdes-style thing
<geist> but i remember it is like zfs/btrfs in that it's a large data structures with snapshots and subvolumes and whatnot
<geist> but iirc it doesn't use a traditional btree thing
<geist> some sort of other data structure i hadn't heard of before
<bslsk05> ​fuchsia.dev: RFC-0136: Fxfs  |  Fuchsia
<nikolar> There was some write optimized b tree
<geist> ah log structured merge (LSM) trees
<nikolar> Oh lsm
<nikolar> Yeah
<nikolar> I can't think of any other fs using it
<heat> i'm a filesystem amish
<heat> ext4 or xfs (and that's stretching it) are ideal, god said so
<the_oz> In LSM-trees, write operations are designed to optimize performance by reducing random I/O and leveraging sequential disk writes. When a write operation is initiated, the data is first buffered in an in-memory component, often implemented using a sorted data structure such as a Skip list or B+ tree.
<geist> kinda agree. i mean i love newer stuff, but until i have a good solid generic btree data structure, which i piddle with from time to time, i wont be able to implement anything bigger
<the_oz> C0 GO FAST IN RAM K/V OMG
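The LSM write path the_oz pasted fits in a miniature sketch: writes land in an in-memory memtable; when it fills, it is flushed as an immutable sorted run (the sequential I/O that is the whole point), and reads check the memtable first, then runs newest-first. A toy model, omitting the WAL, bloom filters, and compaction that real LSM trees need:

```python
# Minimal LSM-tree write/read path. A dict stands in for the sorted
# in-memory structure (skip list / B+ tree) the C0 component would use.

MEMTABLE_LIMIT = 2

memtable = {}   # mutable in-memory component
runs = []       # immutable sorted runs "on disk", newest last

def lsm_put(key, value):
    memtable[key] = value
    if len(memtable) >= MEMTABLE_LIMIT:
        runs.append(sorted(memtable.items()))  # sequential flush
        memtable.clear()

def lsm_get(key):
    if key in memtable:
        return memtable[key]
    for run in reversed(runs):       # newest run shadows older ones
        for k, v in run:
            if k == key:
                return v
    return None

lsm_put("a", 1)
lsm_put("b", 2)      # hits the limit, triggers a flush
lsm_put("a", 99)     # newer value shadows the flushed one
print(lsm_get("a"), lsm_get("b"))  # 99 2
```

The newest-first read order is what makes updates and deletes cheap: you never rewrite an old run, you just shadow it and let compaction clean up later.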
<geist> SKIP LISTS
<geist> i remember beos used a lot of those, was a data structure in vogue at the time
<nikolar> geist yeah, btrees are annoyingly messy if you want cow
<nikolar> I'll have to figure them out at some point heh
<geist> yeah it's like a grok em, but there's not a lot of good generic btree implementations around
<nikolar> Yeah true
<geist> or the ones that are have an incompatible license
<nikolar> But if you want something for an fs, you'll probably want a custom implementation anyway
<geist> yah or at least a very specialized form
<nikolar> Yeah
<nikolar> I tried looking for btree implementations so I could cheat a bit, but I couldn't find anything usable for what I needed.
<heat> moooooooooooooooooooooo i want cow
<cow> heat: mooooooooo
<heat> mooooooooooooooooooooooooo
<nikolar> We broke heat
<childlikempress> meow
<geist> praise be the holy cow
<heat> we do be cowing
<the_oz> copy on cow-tipped moc
<childlikempress> i would never copy on my writes
<childlikempress> please dont slander
<heat> RCU on copyrighted material is illegal
<heat> follow me for more legal advice from definitely-a-lawyer
<nikolar> RCU RCU RCU
<nikolar> We read copy update
<heat> you know who cant use RCU?
<heat> zfs lol
<nikolar> I'm pretty sure zfs could use rcu
<nikolar> No clue if the current implementation does
<heat> zfs cannot use rcu
<heat> *legally*
<nikolar> Oh wait, did you need to gpl the code using rcu
<nikolar> I forgot if there was a particular license
<zid> Man cannot live in RCU alone
<heat> it needs to be GPL or LGPL compatible yeah
<zid> Try a lovely linked list
<geist> skip list!
<nikolar> Linked lists work great with rcu
<the_oz> zfs does not like being statically linked either BUT THEY DON'T TELL ME WHAT TO FUCKING DO, MAHM
<nikolar> heat: yeah fair enough then
<geist> but AFAIK (and i dont really want to know) RCUs were traditionally under some patents so i have generally stayed away from them
<geist> at least looking at the linux impl
<nikolar> Yeah that's what heat was referring to
<geist> surely it's not patented anymore, but i dont want to know
<heat> yeah thats fair
<nikolar> Some variant is still patented I think
<nikolar> Can't remember the details
<heat> some variants, some implementation details
<nikolar> heat will probably know more
<nikolar> Yeah
<geist> presumably the original, basic form is clean
<heat> which i wont go into because geist cannot and will not know
<geist> but then if you implement that you may accidentally get into patenable territory
<Griwes> some form of RCU is now not-under-patents enough that it is going to land in C++26
<geist> but then you did it at least on your own
<heat> Griwes, nah
<Griwes> what "nah"
<Griwes> it is
<heat> C++26 adds an rcu-ish API more directed towards epoch-based reclamation
<heat> EBR was never patented but it strictly sucks a little harder than RCU
<heat> QSBR (quiescent-state based reclamation) RCU
<geist> stupid patents. i wouldn't care one iota if it werent working on an OS at work
<nikolar> What sucks less than rcu
<nikolar> Well geist, patents are used for far worse things than this
<heat> WAITONADDRESS()!!!
<the_oz> make -DFUCK_PATENTS=1
<heat> why would you fuck them, are they hot?
<nikolar> Apparently
<the_oz> unlocked awesome mode do what you yourself have done against official advice, definitely don't do this
<heat> sucking less than rcu would be.... not needing it in the first place
<nikolar> Kek fair enough
<zid> I think only gog is allowed to call WAIT_ON_A_DRESS
<Griwes> heat, well, Paul McKenney calls it RCU :V
<heat> but yeah RCU is love RCU is life RCU is everything
<zid> C++26 "We have RCU at home"
<heat> RCU is garbage collection RCU is type stability RCU is a way to wait for the scheduler RCU truly is infinite
<nikolar> heat, when I get around to writing my kernel, should I use rcu for everything
<nikolar> *literally everything
<heat> hmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm no
<nikolar> Dang
<nikolar> rcu not infinite after all
<the_oz> sounds like disk enabled um um what's it called lockless operation implementation thingy
<childlikempress> YOU CAN DO ANYTHING WITH RCU
<childlikempress> THE ONLY LIMIT IS YOURSELF
<the_oz> cmnpswap
<the_oz> cmpxchg whatever the fuck
<heat> Griwes, he does call it RCU, but it is not RCU as in the patented thing, rather just the idea in general + some similar API to linux
<heat> a feature or two hint towards epoch-based reclamation as in the folly impl
<heat> i'm not sure if you can even permissibly do EBR with signals in the C++ implementation
<heat> s/EBR/QSBR/
<bslsk05> ​<heat*> i'm not sure if you can even permissibly do QSBR with signals in the C++ implementation
<heat> in general for QSBR in userspace you need to register every thread with the rcu impl + signals OR add an explicit quiescent state manually
hwpplayer1 has joined #osdev
<SGautam> I never got why you even need these strategies. Ideally, there should be a "master thread" that has the responsibility of free()ing all involved shared memories, no?
<SGautam> That said, my experience in multi threading is limited to solving homework problems
eddof13 has joined #osdev
<heat> how would you know what needs to be freed and when?
<SGautam> You're viewing it from the point of the kernel, I'm viewing it from the point of the person writing the program
<SGautam> If a program/thread shuts down, all memory associated with it should be reclaimed, no?
<heat> that's intensely useless
<heat> person writing the program could have a "magically_free_this_someday(ptr);" and it still needs an implementation behind it
X-Scale has quit [Ping timeout: 240 seconds]
eddof13 has quit [Quit: eddof13]
<SGautam> Wait, you're saying that right after I call free(), the kernel shouldn't just consider the memory up for grabs, but should actually wait until conditions are suitable?
<SGautam> This can only happen if other threads are reliant on the same memory region
<heat> >the kernel shouldn't just think that the memory allocated is now up for grabs, and actually wait when conditions are suitable?
<heat> yes that's kind of RCU in a nutshell and exactly what we try to solve with RCU
<heat> >if other threads are reliant on the same memory region
<heat> which they are, that's what we're tackling with it
<SGautam> That makes sense, but I'm thinking if there's programmatically a way to avoid this situation where a thread or CPU has freed shared memory while other threads are using it.
<heat> ofc there is
<SGautam> Because that looks wrong to me, that ideally shouldn't happen.
<heat> whether it applies to your program or not is up for grabs
<heat> it reminds me of #musl's trip about multi-threaded malloc and how it doesn't need to be scalable if you just <insert really specific program structure that doesn't apply to many cases>
GeDaMo has quit [Quit: 0wt 0f v0w3ls.]
<SGautam> Some guy published a 200 page paper about RCU, EBR and QSBR
<heat> brooooo just malloc from one thread and the other ones dont need to do that!!!!!
<SGautam> yeah that's my idea, "master thread" in charge of memory allocation
<heat> probably, EBR has a paper
<nikolar> Didn't freebsd start using ebr for some things
<heat> EBR cannot physically be faster than QSBR (at least on the read side), its main advantage is that it's not patent-encumbered/GPL-encumbered
<heat> yes
<nikolar> (not rcu for obvious reasons)
<heat> but then they realized EBR kind of sucks so they started to do some funky similar-ish stuff in the allocator
<heat> SMR I think they call it
<nikolar> Oh yeah that's what I was thinking of
<heat> the freebsd namecache is all SMR'd courtesy of our polish friend
<SGautam> I get it now, this article makes a very good case for QSBR in case of a web server connected to multiple clients. https://preshing.com/20160726/using-quiescent-states-to-reclaim-memory/
<bslsk05> ​preshing.com: Using Quiescent States to Reclaim Memory
<SGautam> Yeah this is unavoidable
<cloudowind> a-void
cloudowind has quit [Ping timeout: 260 seconds]
cloudowind has joined #osdev
housemate has joined #osdev
troseman has joined #osdev
eddof13 has joined #osdev
eddof13 has quit [Client Quit]
hwpplayer1 has quit [Quit: ERC 5.5.0.29.1 (IRC client for GNU Emacs 29.4)]
Matt|home has joined #osdev
housemate has quit [Quit: Nothing to see here. I wasn't there. I take IRC seriously.]
X-Scale has joined #osdev
troseman has quit [Quit: troseman]
remexre has quit [Remote host closed the connection]
fedaykin has quit [Quit: leaving]
X-Scale has quit [Ping timeout: 240 seconds]
fedaykin has joined #osdev
Turn_Left has quit [Read error: Connection reset by peer]
goliath has quit [Quit: SIGSEGV]
mpetch has joined #osdev