klange changed the topic of #osdev to: Operating System Development || Don't ask to ask---just ask! || For 3+ LoC, use a pastebin (for example https://gist.github.com/) || Stats + Old logs: http://osdev-logs.qzx.com New Logs: https://libera.irclog.whitequark.org/osdev || Visit https://wiki.osdev.org and https://forum.osdev.org || Books: https://wiki.osdev.org/Books
oldgalileo has quit [Ping timeout: 268 seconds]
oldgalileo has joined #osdev
oldgalileo has quit [Ping timeout: 256 seconds]
joe9 has quit [Quit: leaving]
<immibis> cloudflare is the great firewall of the united states of america
<immibis> china has their firewall, and america has theirs
gog has quit [Quit: byee]
gog has joined #osdev
navi has quit [Ping timeout: 268 seconds]
heat has quit [Quit: Client closed]
theruran has quit [Quit: Connection closed for inactivity]
rustyy has joined #osdev
voidah has quit [Ping timeout: 252 seconds]
gorgonical has joined #osdev
<gorgonical> haha oh man I have some real nightmare images here
<gorgonical> these images I'm capturing off this camera are truly nightmarish
<gorgonical> my face in the dark room, super underexposed
* kof673 gives gog excalibur
voidah has joined #osdev
teardown has quit [Remote host closed the connection]
teardown has joined #osdev
Matt|home has quit [Remote host closed the connection]
m3a has quit [Ping timeout: 246 seconds]
Arthuria has quit [Ping timeout: 260 seconds]
m3a has joined #osdev
gog has quit [Quit: byee]
gog has joined #osdev
chiselfuse has quit [Ping timeout: 260 seconds]
chiselfuse has joined #osdev
goliath has joined #osdev
node1 has joined #osdev
teardown has quit [Remote host closed the connection]
gsekulski has joined #osdev
teardown has joined #osdev
oldgalileo has joined #osdev
oldgalileo has quit [Ping timeout: 272 seconds]
agent314 has joined #osdev
node1 has quit [Quit: Client closed]
theyneversleep has joined #osdev
jjuran has quit [Ping timeout: 268 seconds]
jjuran has joined #osdev
zxrom has quit [Quit: Leaving]
gbowne1 has quit [Quit: Leaving]
node1 has joined #osdev
oldgalileo has joined #osdev
oldgalileo has quit [Ping timeout: 268 seconds]
neo has joined #osdev
Etabeta1 has quit [Read error: Connection reset by peer]
Etabeta1 has joined #osdev
remexre has quit [Ping timeout: 264 seconds]
remexre has joined #osdev
oldgalileo has joined #osdev
gog has quit [Quit: byee]
gog has joined #osdev
node1 is now known as tikuc
GeDaMo has joined #osdev
tikuc has quit [Quit: Client closed]
gog has quit [Quit: byee]
gog has joined #osdev
gog has quit [Ping timeout: 264 seconds]
chiselfuse has quit [Ping timeout: 260 seconds]
chiselfuse has joined #osdev
zetef has joined #osdev
zetef has quit [Ping timeout: 252 seconds]
zetef has joined #osdev
zetef has quit [Remote host closed the connection]
Left_Turn has joined #osdev
navi has joined #osdev
bauen1 has quit [Ping timeout: 264 seconds]
bauen1 has joined #osdev
bauen1 has quit [Ping timeout: 260 seconds]
zetef has joined #osdev
netbsduser has joined #osdev
netbsduser has quit [Ping timeout: 264 seconds]
scaleww has joined #osdev
m257 has joined #osdev
netbsduser has joined #osdev
edr has joined #osdev
foudfou has joined #osdev
MiningMarsh has quit [Ping timeout: 252 seconds]
MiningMarsh has joined #osdev
agent314 has quit [Ping timeout: 255 seconds]
bauen1 has joined #osdev
node1 has joined #osdev
agent314 has joined #osdev
m257 has quit [Ping timeout: 250 seconds]
m257 has joined #osdev
agent3141 has joined #osdev
agent314 has quit [Ping timeout: 260 seconds]
agent3141 is now known as agent314
Turn_Left has joined #osdev
agent314 has quit [Quit: Ping timeout (120 seconds)]
agent314 has joined #osdev
Left_Turn has quit [Ping timeout: 272 seconds]
agent314 has quit [Ping timeout: 264 seconds]
netbsduser has quit [Ping timeout: 256 seconds]
foudfou_ has joined #osdev
foudfou has quit [Remote host closed the connection]
netbsduser has joined #osdev
m257 has quit [Ping timeout: 250 seconds]
scaleww has quit [Quit: Leaving]
goliath has quit [Quit: SIGSEGV]
Left_Turn has joined #osdev
Turn_Left has quit [Ping timeout: 255 seconds]
zetef has quit [Remote host closed the connection]
Matt|home has joined #osdev
foudfou_ has quit [Remote host closed the connection]
foudfou has joined #osdev
linear_cannon has joined #osdev
bliminse has joined #osdev
MiningMarsh has quit [Read error: Connection reset by peer]
MiningMarsh has joined #osdev
zxrom has joined #osdev
Arthuria has joined #osdev
node1 has quit [Quit: Client closed]
theyneversleep has quit [Remote host closed the connection]
xenos1984 has quit [Ping timeout: 268 seconds]
xenos1984 has joined #osdev
bauen1 has quit [Ping timeout: 264 seconds]
rom4ik has quit [Quit: Ping timeout (120 seconds)]
rom4ik has joined #osdev
dalme has joined #osdev
gildasio has joined #osdev
freakazoid332 has quit [Read error: Connection reset by peer]
frkazoid333 has joined #osdev
gog has joined #osdev
xenos1984 has quit [Ping timeout: 240 seconds]
bauen1 has joined #osdev
xenos1984 has joined #osdev
shikhin has quit [Quit: Quittin'.]
spareproject has joined #osdev
zetef has joined #osdev
<mjg> yo
<mjg> who wants to see one weird trick which massively improves ILLUMOS scalability in a ubench
<kof673> yes, but do you have any more marcus aurelius quotes
<mjg> they have ABSOLUTELY SHITE stat collection which violates fundamental scalability laws
<mjg> as in goes against them and they pay for it
<mjg> all cpus using a given mount point (and a fs type in general!) globally serialize on fucking lol updates
<mjg> this is gated with a flag test
<gorgonical> kof673: I really like the one where he scolds himself for not wanting to get out of bed
<Ermine> sun engineering ethos?
<mjg> using their kernel debugger i flip the flag on and off
<mjg> > 0xfffffe21fea41438::write -l 4 0x2420
<mjg> 0xfffffe21fea41438: 0x420 = 0x2420
<mjg> > 0xfffffe21fea41438::write -l 4 0x0420
<mjg> 0xfffffe21fea41438: 0x2420 = 0x420
<mjg> stock: min:691383 max:740996 total:14513141
<mjg> patch: min:2483235 max:2723264 total:52640698
<mjg> :d
<mjg> with 20 cores
<kof673> small world mjg j/k > Merlin, after educating the boy, gave Arthur to > Sir Ector already had two foster-sons > the orphaned sons of the late British-Roman general Marcus Aurelius
<mjg> Ermine: sun engineering ethos indeed
<Ermine> i guess the bigger number is the better
<Ermine> ?
<mjg> ops/second
<mjg> you should all recognize will-it-scale output by now :(
<mjg> here i'm doing fstat on *separate* files
<mjg> this should scale perfectly, but does not
<mjg> cause suntard
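For anyone who doesn't recognize it, the shape of the ubench mjg is running is roughly the following. This is a sketch, not the actual will-it-scale source; the harness pins one such loop per core, each worker on its own file, and reports min/max per-worker and total ops/second.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    /* the harness points each worker at a distinct file, e.g. /tmp/f.0 ... /tmp/f.19 */
    const char *path = argc > 1 ? argv[1] : "/tmp/f.0";
    int fd = open(path, O_CREAT | O_RDONLY, 0644);
    if (fd < 0)
        return 1;

    struct stat st;
    unsigned long ops = 0;
    for (;;) {
        fstat(fd, &st);   /* touches only this file's vnode... in theory */
        if (++ops % (1u << 20) == 0)
            printf("%lu ops\n", ops);
    }
}
```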
zetef has quit [Remote host closed the connection]
oldgalileo has quit [Ping timeout: 260 seconds]
gbowne1 has joined #osdev
<nikolapdp> ILLUMOS
<geist> OpenVMS
<nikolapdp> is it actually open
<nikolapdp> as in open source
<geist> nah, open meant something else back then
<nikolapdp> what did it mean
<geist> lots of stuff threw around open as in 'interoperable with other stuff' i think
<geist> ie, publish specs to protocols, etc
<nikolapdp> eh yeah that still makes sense for open
<nikolapdp> openvms was heavily networked no?
<geist> i can't think off the top of my head of other stuff that used open, but i remember there being a fair amount of products or whatnot that were open something
<geist> yeah
<nikolapdp> with distributed storage and what not
<geist> brb, meeting
<gog> opengog
<nikolapdp> gogpen
<GeDaMo> OpenGL
<nikolapdp> opengl was actually open though, no?
<GeDaMo> Not OpenSource, I don't think
oldgalileo has joined #osdev
<kof673> "open source" surely came after "free software" (although "information" people disagree, i mean that is surely where the software version took off, as a contrast, not making any argument here about FOSS )
<kof673> *"information" people meaning "open source information" etc. <strike out "disagree", replace with "in another context")
<kof673> there's still openvms hobbyist programs i believe, not sure which architectures
<kof673> "was" pshaw
<nikolapdp> is it not a thing anymore?
<kof673> i'm just nitpicking "was"
<nikolapdp> i mean i guess you can always run ancient versions, no one is coming after you for those
goliath has joined #osdev
gsekulski has left #osdev [#osdev]
<nortti> < kof673> there's still openvms hobbyist programs i believe, not sure which architectures ← aiui, the old-style hobbyist program has been terminated, and nowadays the only thing available for hobbyists is an amd64 virtual machine image that's been pre-provisioned to self-destruct in a couple years
<nikolapdp> rude
<nortti> https://www.theregister.com/2024/04/09/vsi_prunes_hobbyist_prog/ oh apparently the vax hobbyist licensing was killed off even earlier
<bslsk05> ​www.theregister.com: VMS Software prunes OpenVMS hobbyist program • The Register
<nikolapdp> why
<nikolapdp> do they care about vax so much
<nortti> I'd presume the exact opposite, only hobbyists cared about the vax port
<nikolapdp> so just let them have it?
<nortti> alas I don't believe businesses in this space believe in "giving away your old stuff for free"
<nikolapdp> it literally doesn't cost them anything and also brings good will
<nikolapdp> like they only gain from it
<nortti> I mean, yeah. but openvms for vax is far from the only retro OS that's no longer legally available
<GeDaMo> I assume there are still people using VMS and paying for support
Arthuria has quit [Ping timeout: 260 seconds]
<nikolapdp> yeah but the thing is that it was available recently and there was no reason to not keep on with that
<zid> is it 10pm yet
<nikolapdp> no
<zid> how about now?
<nikolapdp> no
j00ru has quit [Ping timeout: 272 seconds]
PublicWiFi has quit [Quit: WeeChat 4.0.3]
<gorgonical> for a brief while I had access to openvms, the hobbyist version
PublicWiFi has joined #osdev
<nikolapdp> what did you do with it
<gorgonical> I just poked around. I got access to it by total serendipity when I met a guy from the australian user group who offered to give me access so I could play with it
<gorgonical> Who I met in a hostel in japan
<gorgonical> this was before I started my phd and was still pretty peripherally interested in OS development only
<nikolapdp> so a completely random encounter
<nikolapdp> always fun
<dinkelhacker> is anyone familiar with Zephyr?
<bslsk05> ​www.zephyrproject.org: Zephyr Project – A proven RTOS ecosystem, by developers, for developers
<dinkelhacker> yes, I was reading through their documentation about memory protection (https://docs.zephyrproject.org/latest/kernel/usermode/memory_domain.html). They say: "The kernel ensures that any user thread will have access to its own stack buffer, plus program text and read-only data." I was wondering how they can separate threads' code segments from each other.
<bslsk05> ​docs.zephyrproject.org: Memory Protection Design — Zephyr Project Documentation
<nortti> presumably through usage of either an MMU, or on targets that lack that, an MPU
<nortti> or how do you mean?
<zid> GeDaMo I have mini sausage rolls and garlic and herb dip
<zid> Just needs a fried mars bar to wash it down
ecs has quit [Remote host closed the connection]
<dinkelhacker> Looking through the samples it seems like you can just have some arbitrary functions in the same file, call a thread creation API and pass the respective function. However, if they really separated threads' code from each other, what would happen if I had a common function called from both thread functions? To configure the MMU or the MPU they would need to somehow link all the functions
<dinkelhacker> from each thread to a section.
ecs has joined #osdev
<zid> the point of threads is that they're not seprated though?
<nikolapdp> also i assume the text section is read only so you can share it no problem
<zid> separated threads are called processes
<dinkelhacker> nikolapdp: yeah I also thought that but it kind of reads like they would also separate that... but that would be tricky I guess
<nikolapdp> well if it's read only, they are separated
<nikolapdp> they can't affect each other
<nikolapdp> also what zid said, if the threads aren't sharing a memory space, it's process
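Concretely, this is what Zephyr's memory domains look like. A minimal, untested sketch based on the docs linked above; macro and function names such as K_APPMEM_PARTITION_DEFINE and k_mem_domain_add_thread follow recent Zephyr documentation and may differ across versions. The point for dinkelhacker's question: .text and rodata are mapped read-only for every user thread, so a common function needs no per-thread linking; only writable data is partitioned.

```c
#include <zephyr/kernel.h>
#include <zephyr/app_memory/app_memdomain.h>

K_APPMEM_PARTITION_DEFINE(app_part);
K_APP_DMEM(app_part) int shared_counter = 0;   /* writable only via the domain */

static struct k_mem_domain app_domain;
K_THREAD_STACK_DEFINE(user_stack, 1024);
static struct k_thread user_thread;

/* a common function: .text is read-only and shared by all user threads,
 * so any thread may call it without per-thread linker sections */
static void common_helper(void) { shared_counter++; }

static void user_entry(void *a, void *b, void *c) { common_helper(); }

int main(void)
{
    struct k_mem_partition *parts[] = { &app_part };
    k_mem_domain_init(&app_domain, 1, parts);

    /* create suspended (K_FOREVER), attach to the domain, then start,
     * so the MPU/MMU is programmed before the thread runs */
    k_tid_t tid = k_thread_create(&user_thread, user_stack,
                                  K_THREAD_STACK_SIZEOF(user_stack),
                                  user_entry, NULL, NULL, NULL,
                                  5, K_USER, K_FOREVER);
    k_mem_domain_add_thread(&app_domain, tid);
    k_thread_start(tid);
    return 0;
}
```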
<dinkelhacker> Is that something agreed on also in the context of RTOSes? Zephyr seems to only have Threads, no Tasks.
<dinkelhacker> I kind of get confused by that at times. Seems like some people use these terms interchangeably.
<nikolapdp> well you'll have to read the documentation then
<nikolapdp> to know what they mean
<zid> sounds like it has threads, and implements threads which it calls threads
<nikolapdp> zid, rtos calls them tasks
<zid> tough titties
<nikolapdp> kek
<zid> just less than an hour to gooo
<nikolapdp> it's 10pm here, i don't know what you're talking about
<zid> it's 8/8 on the penultimate volume, presumably we get SPOILER perspective
<zid> which will reveal a whole bunch of stuff that the MC missed because she's an unreliable AF narrator
<zid> then next week onwards.. final volume
<geist> huh TIL that in C++17 you can declare an inline variable in a header
<nikolapdp> and it works??
<zid> From what I've seen of the inline keyword in C++, it seems very very strange
<zid> and you have to fix it all up afterwards with -fno-common or -fcommon no matter what you do :P
<geist> yah C++17 allows you to declare an inline variable, which lets you put something in a header
<geist> and then the linker picks one copy
<bslsk05> ​en.cppreference.com: inline specifier - cppreference.com
<geist> just saw a CL at work and was like whaaa?
<nikolapdp> yeah definitely a whaaa? type of situation
<zid> gcc switched from -fcommon to -fno-common default, which stopped it merging some of those 'lol declare everything in a header and let the linker sort it out' things
<zid> and broke a whole bunch of C++ code
<zid> lots of traffic in the support channels
<zid> It was like.. gcc 11?
<zid> I think you're supposed to add inline, which does.. the opposite of what it says, or something? It's weeird.
<geist> yeah from that link it looks like it's more like it matches the existing properties of inline functions
<geist> which, though the function is inline, can be stamped out in multiple TUs, and the linker promises to dump all but one version
<geist> so though an inline variable isn't really inline, or whatnot, it has the same linkage property
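Concretely, the header pattern zid describes is the one below; the default flipped to -fno-common in gcc 10. C++17's `inline int counter = 0;` in a header is the sanctioned way to get the keep-one-copy linkage geist mentions.

```c
/* shared.h -- included from several .c files */

/* tentative definition: with -fcommon the linker merges the copies into
 * one; with -fno-common (the default since gcc 10) every TU that
 * includes this emits a strong symbol, and the link fails with
 * "multiple definition of 'counter'" */
int counter;

/* the portable C fix: declare here... */
extern int fixed_counter;
/* ...and define in exactly one .c file:  int fixed_counter; */
```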
<gorgonical> I have an actual osdev question, for once
* geist waits with bated breath
<gorgonical> processes that do an anonymous mmap get a chunk of their own heap space starting from the top. But right now we have a sort of assumption that processes won't be frequently mapping/munmapping, so we don't clean up spaces. So munmap leaves holes that aren't reclaimed
<zid> does it? which os is this?
<gorgonical> The obvious fix for this is to stick a buddy allocator in there so those spaces can be reclaimed when munmap happens
<gorgonical> a kernel designed for long-running hpc processes
<gorgonical> kitten
<zid> I'd *prefer*, tbh, perf wise, if it just.. on failure, tried to compact
<nikolapdp> so you have almost a gc pause then
<zid> rather than having to eat the minor overhead all of the time in the vast vast majority of cases where it doesn't happen
<nikolapdp> how often do you unmap though
GeDaMo has quit [Quit: 0wt 0f v0w3ls.]
<gorgonical> the hpc part was its first life. I'm now adapting it for more general purposes
<zid> you'd need it in the mmap path wouldn't you
<gorgonical> yes
<gorgonical> nikolapdp: well seemingly musl optimizes (?) file accesses with mmap/munmap
<zid> making mmap slow so that munmap is fast seems counterproductive to me, nikolapdp
<geist> hmm, what do you mean 'cleanup spaces'?
<nikolapdp> i mean yeah but how much of an overhead a buddy allocator would be
<gorgonical> geist: we currently unmap regions. so you munmap(some_rgn) and we delete the pages from the process page tables. But that physical memory range is left in-place
<netbsduser> gorgonical: i implement the "vmem" resource allocator from solaris for tracking vm map entries
<geist> that i don't get. what do you mean it's left in place?
<gorgonical> And the mmap_brk is just a single point that grows toward the heap beginning
<geist> like, you don't return the physical page to the free list?
<gorgonical> there isn't a free list is my point
<geist> ah i see
<gorgonical> because we never needed one
<geist> so you leak the physical memory basically
<gorgonical> yes
<geist> that's probably a better way to describe it
<gorgonical> because hpc apps don't really ever munmap stuff
<netbsduser> oh
<netbsduser> i thought you were on about the virtual address map
<zid> same
<gorgonical> sorry for the confusion
<zid> I thought you had fragmented your memory regions internally
<geist> basically your PMM (physical memory manager) is a single pointer that moves forward with no way to return free pages?
<nikolapdp> yeah i thought so too lol
<gorgonical> yes
<gorgonical> currently
<geist> think of it as there's a PMM and a VMM. usually that's a reasonable abstraction
<zid> best way for a hobby OS to work
<gorgonical> and for normal applications that like write to files and free buffers and shit that's not acceptable
<geist> the whole virtual memory system involves, among other things, a pmm and a vmm
oldgalileo has quit [Ping timeout: 246 seconds]
<gorgonical> i'm just trying to figure out if there's a better solution than a buddy allocator
<netbsduser> gorgonical: many kernels sufficed with a simple freelist of pages as allocating physically contiguous pages is a rare operation almost solely done in early boot and occasionally depending on what devices can be hotplugged
<nikolapdp> and you can keep a small pool for that
<gorgonical> free list gets much better memory compactness at the cost of performance though. a contiguous mapping should have higher performance because of prefetching right
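A minimal sketch of the page free list netbsduser suggests, assuming the kernel direct-maps physical memory; the names pmm_alloc_page/pmm_free_page/bump_alloc_page and the phys/virt helpers are hypothetical, not kitten's actual API.

```c
#include <stddef.h>
#include <stdint.h>

/* hypothetical helpers, assuming a direct physical map */
extern void     *phys_to_virt(uintptr_t pa);
extern uintptr_t virt_to_phys(void *va);
extern uintptr_t bump_alloc_page(void);   /* the existing single moving pointer */

/* free pages are threaded through an intrusive list stored in the pages
 * themselves, so the allocator needs no metadata of its own */
struct free_page { struct free_page *next; };
static struct free_page *free_head;

/* munmap path: return the physical page instead of leaking it */
void pmm_free_page(uintptr_t pa)
{
    struct free_page *p = phys_to_virt(pa);
    p->next = free_head;
    free_head = p;
}

/* mmap path: prefer a recycled page, fall back to the bump pointer */
uintptr_t pmm_alloc_page(void)
{
    if (free_head != NULL) {
        struct free_page *p = free_head;
        free_head = p->next;
        return virt_to_phys(p);
    }
    return bump_alloc_page();
}
```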
<netbsduser> nowadays some kernels try to use large pages (a strong case against fork() which forces you to make tough decisions since it CoWs private memory)
<netbsduser> i don't know whether prefetch is done across page boundaries on typical processors
<gorgonical> yeah I was just thinking that perhaps this only helps if the page mapper eagerly maps regions at the largest chunk size
<netbsduser> large pages are an optimisation in terms of reducing TLB pressure
<gorgonical> which, ours does fwiw
<gorgonical> contiguous regions saves you on pfns in the region descriptors I guess
<dostoyevsky2> isn't fork so difficult to implement efficiently?
<Ermine> it is
<Ermine> copying the whole process is hard
<dostoyevsky2> so I was wondering if one could "scan" for "exec" following immediately and then just skip the whole fork thing
<Ermine> Cow alleviates that, but then you get smp and your cow becomes dirty if you're not careful enough
<Ermine> define "immediately"
<gorgonical> dostoyevsky2: isn't this the point of like execve or whatever
<gorgonical> I can't remember which one, but one of the forking syscalls was designed just for this
<nikolapdp> vfork?
<dostoyevsky2> Ermine: Just let fork be slow and check for the next instructions after the fork syscall, does it look like the typical fork/exec? If yes, just skip the fork, done
<gorgonical> Yes I guess the point is that clone does all this
<dostoyevsky2> or maybe let fork just be a syscall that returns a new pid, and if the next syscall isn't exec it executes the rest of the fork
<dostoyevsky2> but I guess that'll don't work with memory accesses inbetween
<nikolapdp> that'll don't work indeed
spareproject has quit [Remote host closed the connection]
<geist> dostoyevsky2: that's a terrible idea
<geist> the checking the next instructions
<geist> but yeah that's what vfork/etc is all about
<geist> and of course posix_spawn
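For reference, the posix_spawn route geist mentions avoids fork entirely; this is the standard POSIX API, spawning `ls -l` as an example.

```c
#include <spawn.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>

extern char **environ;

int main(void)
{
    pid_t pid;
    char *argv[] = { "ls", "-l", NULL };

    /* no fork: the child is created and exec'd in one step (typically
     * via vfork/clone under the hood), so no address space is copied */
    int rc = posix_spawnp(&pid, "ls", NULL, NULL, argv, environ);
    if (rc != 0) {
        fprintf(stderr, "posix_spawnp: %s\n", strerror(rc));
        return 1;
    }

    int status;
    waitpid(pid, &status, 0);
    return 0;
}
```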
<zid> Okay book read, back to my cave for a week
<netbsduser> dostoyevsky2: it is impossible to implement fork really efficiently if the goal is a way to get new processes that aren't clones of the parent
<netbsduser> cow lets you avoid copying the entire address space and doubling memory use, but still requires a painful process to establish the copy-on-writes, including not just tlb shootdowns; it's worse than that, it's a tlb massacre
SGautam has joined #osdev
* Ermine looks at nagios
<dostoyevsky2> geist: idk, checking the instructions is also done by ebpf... and realistically you want e.g. shell scripts to be reasonably fast in a hobby os, so wouldn't be too hard to recognize a couple of fork/execs to have a noticeable performance gain
<dostoyevsky2> I guess when you want to actually implement a realistic OS you need to implement most of the fork optimizations like the major OSes do
<dostoyevsky2> But I think in the Linux kernel a significant amount of complexity is probably due to fork optimizations
goliath has quit [Quit: SIGSEGV]
<netbsduser> you could also leave it as a slow path and patch it out in favour of vfork wherever you find it
<netbsduser> i think this is more reasonable
<dostoyevsky2> netbsduser: Would COW typically copy on each fault, or would that also become slow very quickly? So you'd rather need to avoid more page faults, and copy ahead?
<dostoyevsky2> I guess when you use huge pages it makes the faults less frequent
<netbsduser> dostoyevsky2: i personally would regard it as quite unacceptable to do some sort of "cow clustering" like this, since it increases the memory consumed potentially needlessly
oldgalileo has joined #osdev
<netbsduser> so i think people who fork on top of my kernel should eat the consequences of their obscenity
<netbsduser> dostoyevsky2: that's a big issue i think, the done thing is to break up the large pages on fork
<netbsduser> so you get granular sharing of pages instead of having to go from sharing 2mib to sharing 0mib in one fault
<dostoyevsky2> netbsduser: what programs actually use a fork->COW work-flow? I can only think of something arcane like apache/nginx passing sockets to worker processes in a fork-less web server...
bradd has quit [Ping timeout: 256 seconds]
<netbsduser> dostoyevsky2: old servers, the sort that used inetd. it's not so bad there since they will be small
<dostoyevsky2> netbsduser: couldn't one just rewrite those couple of programs instead of implementing all the fork optimizations in the kernel? haha
<netbsduser> dostoyevsky2: realistically programs doing fork-per-connection don't need optimisation
<netbsduser> it's good to uphold what's accepted and traditional but for good reason this is no longer the accepted and traditional practice
<netbsduser> threads and/or epoll/kqueue are now the accepted practice and this has been the done thing for over 20 years so it's now more venerable than the old forking-per-connection approach was at that time
dude12312414 has joined #osdev
<dostoyevsky2> netbsduser: if one were to write some linux-compatible kernel without support for fork and just have exec, would one still need to implement most of the COW stuff for the likes of mmap()?
<netbsduser> dostoyevsky2: the fork that is used for mmap() with MAP_PRIVATE is of an altogether different character
<netbsduser> it's asymmetrical while the fork of cow is symmetrical
bradd has joined #osdev
<netbsduser> that is, the shared page is only copied in the MAP_PRIVATE case when you write to a view of it within a private mapping
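A small userspace demonstration of that asymmetry; assumes a POSIX system, 4 KiB pages, and an existing file ./data at least a page long.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data", O_RDONLY);
    if (fd < 0)
        return 1;

    /* PROT_WRITE on a read-only fd is fine with MAP_PRIVATE: writes
     * never reach the file */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    p[0] = 'X';           /* write fault: kernel copies just this page */
    printf("%c\n", p[0]); /* our private view sees 'X'; the file on disk is untouched */

    munmap(p, 4096);
    close(fd);
    return 0;
}
```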
<dostoyevsky2> netbsduser: I guess one could implement mmap() of a file with just huge pages, and never really want to refactor it into smaller pages... unless e.g. it was MAP_SHARED with many processes writing?
oldgalileo has quit [Ping timeout: 240 seconds]
dude12312414 has quit [Remote host closed the connection]
netbsduser has quit [Ping timeout: 268 seconds]
dude12312414 has joined #osdev
dude12312414 has quit [Remote host closed the connection]
dude12312414 has joined #osdev
nikolapdp has quit [Ping timeout: 260 seconds]
Left_Turn has quit [Ping timeout: 255 seconds]
Matt|home has quit [Quit: Leaving]