<kof673>
small world mjg j/k > Merlin, after educating the boy, gave Arthur to > Sir Ector already had two foster-sons > the orphaned sons of the late British-Roman general Marcus Aurelius
<mjg>
Ermine: sun engineering ethos indeed
<Ermine>
i guess the bigger the number, the better
<Ermine>
?
<mjg>
ops/second
<mjg>
you should all recognize will-it-scale output by now :(
<mjg>
here i'm doing fstat on *separate* files
<mjg>
this should scale perfectly, but does not
<mjg>
cause suntard
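For reference, a will-it-scale-style worker is roughly the following: every thread stats its own private file in a tight loop and the harness reports total ops/second, so with no logically shared state the number should grow linearly with thread count. A minimal sketch (not the actual will-it-scale harness):

```cpp
// Sketch of a will-it-scale-style fstat test: each thread fstat()s its
// *own* file, so nothing is logically shared and ops/sec should scale
// linearly with thread count; a flat line past N threads means the
// kernel is serializing on something (locks, shared cachelines).
#include <atomic>
#include <cstdio>
#include <cstdlib>
#include <string>
#include <thread>
#include <vector>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv) {
    int nthreads = argc > 1 ? atoi(argv[1]) : 4;
    std::atomic<bool> stop{false};
    std::vector<long> counts(nthreads, 0);
    std::vector<std::thread> threads;

    for (int i = 0; i < nthreads; i++) {
        threads.emplace_back([&, i] {
            // separate file (and so separate inode) per thread
            std::string path = "/tmp/wis-fstat-" + std::to_string(i);
            int fd = open(path.c_str(), O_CREAT | O_RDWR, 0600);
            struct stat st;
            while (!stop.load(std::memory_order_relaxed)) {
                fstat(fd, &st);
                counts[i]++;
            }
            close(fd);
        });
    }
    sleep(5);
    stop = true;
    for (auto &t : threads) t.join();
    long total = 0;
    for (long c : counts) total += c;
    printf("%d threads: %ld ops/sec\n", nthreads, total / 5);
}
```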
zetef has quit [Remote host closed the connection]
oldgalileo has quit [Ping timeout: 260 seconds]
gbowne1 has joined #osdev
<nikolapdp>
ILLUMOS
<geist>
OpenVMS
<nikolapdp>
is it actually open
<nikolapdp>
as in open source
<geist>
nah, open meant something else back then
<nikolapdp>
what did it mean
<geist>
lots of stuff threw around open as in 'interoperable with other stuff' i think
<geist>
ie, publish specs to protocols, etc
<nikolapdp>
eh yeah that still makes sense for open
<nikolapdp>
openvms was heavily networked no?
<geist>
i can't think off the top of my head of other stuff that used open, but i remember there being a fair amount of products or whatnot that were open something
<geist>
yeah
<nikolapdp>
what distributed storage and what not
<geist>
brb, meeting
<gog>
opengog
<nikolapdp>
gogpen
<GeDaMo>
OpenGL
<nikolapdp>
opengl was actually open though, no?
<GeDaMo>
Not OpenSource, I don't think
oldgalileo has joined #osdev
<kof673>
"open source" surely came after "free software" (although "information" people disagree, i mean that is surely where the software version took off, as a contrast, not making any argument here about FOSS )
<kof673>
*"information" people meaning "open source information" etc. <strike out "disagree", replace with "in another context")
<kof673>
there's still openvms hobbyist programs i believe, not sure which architectures
<kof673>
"was" pshaw
<nikolapdp>
is it not a thing anymore?
<kof673>
i'm just nitpicking "was"
<nikolapdp>
i mean i guess you can always run ancient versions, no one is coming after you for those
goliath has joined #osdev
gsekulski has left #osdev [#osdev]
<nortti>
< kof673> there's still openvms hobbyist programs i believe, not sure which architectures ← aiui, the old-style hobbyist program has been terminated, and nowadays the only thing available for hobbyists is an amd64 virtual machine image that's been pre-provisioned to self-destruct in a couple years
<bslsk05>
www.theregister.com: VMS Software prunes OpenVMS hobbyist program • The Register
<nikolapdp>
why
<nikolapdp>
do they care about vax so much
<nortti>
I'd presume the exact opposite, only hobbyists cared about the vax port
<nikolapdp>
so just let them have it?
<nortti>
alas I don't believe businesses in this space believe in "giving away your old stuff for free"
<nikolapdp>
it literally doesn't cost them anything and also brings good will
<nikolapdp>
like they only gain from it
<nortti>
I mean, yeah. but openvms for vax is far from the only retro OS that's no longer legally available
<GeDaMo>
I assume there are still people using VMS and paying for support
Arthuria has quit [Ping timeout: 260 seconds]
<nikolapdp>
yeah but the thing is that it was available recently and there was no reason to not keep on with that
<zid>
is it 10pm yet
<nikolapdp>
no
<zid>
how about now?
<nikolapdp>
no
j00ru has quit [Ping timeout: 272 seconds]
PublicWiFi has quit [Quit: WeeChat 4.0.3]
<gorgonical>
for a brief while I had access to openvms, the hobbyist version
PublicWiFi has joined #osdev
<nikolapdp>
what did you do with it
<gorgonical>
I just poked around. I got access to it by total serendipity when I met a guy from the australian user group who offered to give me access so I could play with it
<gorgonical>
Who I met in a hostel in japan
<gorgonical>
this was before I started my phd, when I was still only peripherally interested in OS development
<bslsk05>
www.zephyrproject.org: Zephyr Project – A proven RTOS ecosystem, by developers, for developers
<dinkelhacker>
yes, I was reading through their documentation about memory protection (https://docs.zephyrproject.org/latest/kernel/usermode/memory_domain.html). They say: "The kernel ensures that any user thread will have access to its own stack buffer, plus program text and read-only data." I was wondering how they can separate threads' code segments from each other.
<nortti>
presumably through usage of either an MMU, or on targets that lack that, an MPU
<nortti>
or how do you mean?
<zid>
GeDaMo I have mini sausage rolls and garlic and herb dip
<zid>
Just needs a fried mars bar to wash it down
ecs has quit [Remote host closed the connection]
<dinkelhacker>
Looking through the samples it seems like you can just have some arbitrary functions in the same file and call a thread creation API and pass the respective function. However, if they really separated threads from each other, what would happen if I have a common function called from both thread functions? Like to configure the MMU or the MPU they would need to somehow link all the functions
<dinkelhacker>
from each thread to a section.
ecs has joined #osdev
<zid>
the point of threads is that they're not separated though?
<nikolapdp>
also i assume the text section is read only so you can share it no problem
<zid>
separated threads are called processes
<dinkelhacker>
nikolapdp: yeah I also thought that but it kind of reads like they would also separate that... but that would be tricky I guess
<nikolapdp>
well if it's read only, they are separated
<nikolapdp>
they can't affect each other
<nikolapdp>
also what zid said, if the threads aren't sharing a memory space, it's a process
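For what it's worth, a rough sketch of how that looks with Zephyr's memory-domain API (k_mem_domain_init/k_mem_domain_add_thread and the partition macros are recalled from the docs linked above; treat exact signatures as unverified): the common function lives in shared read-only text that every user thread is granted, and only writable data needs an explicit partition.

```cpp
// Rough sketch against Zephyr's memory-domain API (check the linked docs
// for exact signatures; macros here are from memory). Program text is
// read-only and granted to every user thread, so a common function called
// from both threads needs no per-thread section tricks; only writable
// data must live in a partition the threads' domain includes.
#include <zephyr/kernel.h>
#include <zephyr/app_memory/app_memdomain.h>

K_APPMEM_PARTITION_DEFINE(shared_part);
K_APP_DMEM(shared_part) int shared_counter;   // writable data both threads touch

static struct k_mem_domain shared_domain;

static void bump(void) { shared_counter++; }  // common code: shared .text

static void worker(void *a, void *b, void *c) {
    while (1) {
        bump();                                // fine from either thread
        k_sleep(K_MSEC(100));
    }
}

// 100 ms start delay so main() can set up the domain first (sketch-level
// synchronization only)
K_THREAD_DEFINE(t0, 1024, worker, NULL, NULL, NULL, 5, K_USER, 100);
K_THREAD_DEFINE(t1, 1024, worker, NULL, NULL, NULL, 5, K_USER, 100);

int main(void) {
    struct k_mem_partition *parts[] = { &shared_part };
    k_mem_domain_init(&shared_domain, 1, parts);
    k_mem_domain_add_thread(&shared_domain, t0);
    k_mem_domain_add_thread(&shared_domain, t1);
    return 0;
}
```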
<dinkelhacker>
Is that something agreed upon also in the context of RTOSes? Zephyr seems to only have Threads, no Tasks.
<dinkelhacker>
I kind of get confused by that at times. Seems like some people use these terms interchangeably.
<nikolapdp>
well you'll have to read the documentation then
<nikolapdp>
to know what they mean
<zid>
sounds like it has threads, and implements threads which it calls threads
<nikolapdp>
zid, rtos calls them tasks
<zid>
tough titties
<nikolapdp>
kek
<zid>
just less than an hour to gooo
<nikolapdp>
it's 10pm here, i don't know what you're talking about
<zid>
it's 8/8 on the penultimate volume, presumably we get SPOILER perspective
<zid>
which will reveal a whole bunch of stuff that the MC missed because she's an unreliable AF narrator
<zid>
then next week onwards.. final volume
<geist>
huh TIL that in C++17 you can declare an inline variable in a header
<nikolapdp>
and it works??
<zid>
From what I've seen of the inline keyword in C++, it seems very very strange
<zid>
and you have to fix it all up afterwards with -fno-common or -fcommon no matter what you do :P
<geist>
yah C++17 allows you to declare an inline variable, which lets you put something in a header
<nikolapdp>
yeah definitely a whaaa? type of situation
<zid>
gcc switched from -fcommon to -fno-common default, which stopped it merging some of those 'lol declare everything in a header and let the linker sort it out' things
<zid>
and broke a whole bunch of C++ code
<zid>
lots of traffic in the support channels
<zid>
It was like.. gcc 11?
<zid>
I think you're supposed to add inline, which does.. the opposite of what it says, or something? It's weird.
<geist>
yeah from that link it looks like it's more like it matches the existing properties of inline functions
<geist>
which, though the function is inline, can be stamped out in each TU, and the linker promises to discard all but one copy
<geist>
so though an inline variable isn't really inlined, or whatnot, it has the same linkage property
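Concretely, a minimal sketch of the C++17 feature: the variable is defined in the header, each TU stamps out a copy, and the linker keeps exactly one. It's also the sanctioned replacement for the tentative-definitions-merged-by-the-linker habit that stopped working when gcc flipped the default from -fcommon to -fno-common (that was gcc 10, for what it's worth):

```cpp
// counter.h -- C++17 inline variable: safe to define in a header.
// Every translation unit that includes this emits a definition, and the
// linker is required to fold them into one object -- the same linkage
// rule inline functions have always had. Before C++17 this would be a
// multiple-definition error at link time.
#pragma once

inline int counter = 0;                  // one object program-wide

inline int bump() { return ++counter; }  // inline function, same rule

// a.cpp and b.cpp can both #include "counter.h" and still link;
// &counter is the same address in every TU.
```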
<gorgonical>
I have an actual osdev question, for once
* geist
waits with bated breath
<gorgonical>
processes that do an anonymous mmap get a chunk of their own heap space starting from the top. But right now we have a sort of assumption that processes won't be frequently mapping/munmapping, so we don't clean up spaces. So munmap leaves holes that aren't reclaimed
<zid>
does it? which os is this?
<gorgonical>
The obvious fix for this is to stick a buddy allocator in there so those spaces can be reclaimed when munmap happens
<gorgonical>
a kernel designed for long-running hpc processes
<gorgonical>
kitten
<zid>
I'd *prefer*, tbh, perf wise, if it just.. on failure, tried to compact
<nikolapdp>
so you have almost a gc pause then
<zid>
rather than having to eat the minor overhead all of the time in the vast vast majority of cases where it doesn't happen
<nikolapdp>
how often do you unmap though
GeDaMo has quit [Quit: 0wt 0f v0w3ls.]
<gorgonical>
the hpc part was its first life. I'm now adapting it for more general purposes
<zid>
you'd need it in the mmap path wouldn't you
<gorgonical>
yes
<gorgonical>
nikolapdp: well seemingly musl optimizes (?) file accesses with mmap/munmap
<zid>
making mmap slow so that munmap is fast seems counterproductive to me, nikolapdp
<geist>
hmm, what do you mean 'clean up spaces'?
<nikolapdp>
i mean yeah, but how much of an overhead would a buddy allocator be
<gorgonical>
geist: we currently unmap regions. so you munmap(some_rgn) and we delete the pages from the process page tables. But that physical memory range is left in-place
<netbsduser>
gorgonical: i implement the "vmem" resource allocator from solaris for tracking vm map entries
<geist>
that i don't get. what do you mean it's left in place?
<gorgonical>
And the mmap_brk is just a single point that grows toward the heap beginning
<geist>
like, you don't return the physical page to the free list?
<gorgonical>
there isn't a free list is my point
<geist>
ah i see
<gorgonical>
because we never needed one
<geist>
so you leak the physical memory basically
<gorgonical>
yes
<geist>
that's probably a better way to describe it
<gorgonical>
because hpc apps don't really ever munmap stuff
<netbsduser>
oh
<netbsduser>
i thought you were on about the virtual address map
<zid>
same
<gorgonical>
sorry for the confusion
<zid>
I thought you had fragmented your memory regions internally
<geist>
basically your PMM (physical memory manager) is a single pointer that moves forward with no way to return free pages?
<nikolapdp>
yeah i thought so too lol
<gorgonical>
yes
<gorgonical>
currently
<geist>
think of it as there's a PMM and a VMM. usually that's a reasonable abstraction
<zid>
best way for a hobby OS to work
<gorgonical>
and for normal applications that like write to files and free buffers and shit that's not acceptable
<geist>
the whole virtual memory system is among other things involving a pmm and vmm
oldgalileo has quit [Ping timeout: 246 seconds]
<gorgonical>
i'm just trying to figure out if there's a better solution than a buddy allocator
<netbsduser>
gorgonical: many kernels have sufficed with a simple freelist of pages, as allocating physically contiguous pages is a rare operation, done almost solely in early boot and occasionally for device hotplug
<nikolapdp>
and you can keep a small pool for that
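A minimal sketch of that kind of freelist PMM (hypothetical names, not kitten's actual code): the links live inside the free pages themselves, so it costs no memory beyond the pages being tracked, and alloc/free are O(1).

```cpp
// Intrusive page freelist: the first word of each free page holds the
// pointer to the next free page, so the allocator needs no metadata of
// its own. Single-threaded sketch -- a real kernel wants a lock or
// per-CPU lists. What this can't do is return physically *contiguous*
// runs; keep a small reserved pool (or a buddy allocator) for those.
struct FreePage { FreePage *next; };

static FreePage *free_list = nullptr;

// munmap path: return a page (mapped at `va` here for the sketch)
// instead of leaking it
void pmm_free(void *va) {
    auto *p = static_cast<FreePage *>(va);
    p->next = free_list;
    free_list = p;
}

// mmap / page-fault path: pop a page, falling back to the existing
// bump pointer only when the list is empty
void *pmm_alloc() {
    if (free_list == nullptr)
        return nullptr;  // caller falls back to bump allocation
    FreePage *p = free_list;
    free_list = p->next;
    return p;
}
```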
<gorgonical>
free list gets much better memory compactness at the cost of performance though. a contiguous mapping should have higher performance because of prefetching right
<netbsduser>
nowadays some kernels try to use large pages (a strong case against fork() which forces you to make tough decisions since it CoWs private memory)
<netbsduser>
i don't know whether prefetch is done across page boundaries on typical processors
<gorgonical>
yeah I was just thinking that perhaps this only helps if the page mapper eagerly maps regions at the largest chunk size
<netbsduser>
large pages are an optimisation in terms of reducing TLB pressure
<gorgonical>
which, ours does fwiw
<gorgonical>
contiguous regions save you on pfns in the region descriptors I guess
<dostoyevsky2>
isn't fork so difficult to implement efficiently?
<Ermine>
it is
<Ermine>
copying the whole process is hard
<dostoyevsky2>
so I was wondering if one could "scan" for "exec" following immediately and then just skip the whole fork thing
<Ermine>
Cow alleviates that, but then you get smp and your cow becomes dirty if you're not careful enough
<Ermine>
define "immediately"
<gorgonical>
dostoyevsky2: isn't this the point of like execve or whatever
<gorgonical>
I can't remember which one, but one of the forking syscalls was designed just for this
<nikolapdp>
vfork?
<dostoyevsky2>
Ermine: Just let fork be slow and check for the next instructions after the fork syscall, does it look like the typical fork/exec? If yes, just skip the fork, done
<gorgonical>
Yes I guess the point is that clone does all this
<dostoyevsky2>
or maybe let fork just be a syscall that returns a new pid, and if the next syscall isn't exec it executes the rest of the fork
<dostoyevsky2>
but I guess that'll don't work with memory accesses inbetween
<nikolapdp>
that'll don't work indeed
spareproject has quit [Remote host closed the connection]
<geist>
dostoyevsky2: that's a terrible idea
<geist>
the checking the next instructions
<geist>
but yeah that's what vfork/etc is all about
<geist>
and of course posix_spawn
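i.e. rather than having the kernel pattern-match instructions after fork, the program states its intent up front. A minimal sketch using plain POSIX:

```cpp
// fork+exec stated explicitly: posix_spawn lets libc/the kernel skip
// duplicating the parent's address space (many libcs implement it via
// vfork() or clone(CLONE_VM | CLONE_VFORK) for exactly that reason).
#include <spawn.h>
#include <sys/wait.h>
#include <cstdio>

extern char **environ;

int main() {
    pid_t pid;
    char *argv[] = { (char *)"/bin/echo", (char *)"hello", nullptr };

    int err = posix_spawn(&pid, "/bin/echo",
                          nullptr /* file actions */, nullptr /* attrs */,
                          argv, environ);
    if (err != 0) {
        fprintf(stderr, "posix_spawn failed: %d\n", err);
        return 1;
    }
    int status;
    waitpid(pid, &status, 0);  // reap the child, same as after fork()
    return 0;
}
```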
<zid>
Okay book read, back to my cave for a week
<netbsduser>
dostoyevsky2: it is impossible to implement fork really efficiently if the goal is a way to get new processes that aren't clones of the parent
<netbsduser>
cow lets you avoid copying the entire address space and doubling memory use, but still requires a painful process to establish the copy-on-writes, including not just tlb shootdowns; it's worse than that, it's a tlb massacre
SGautam has joined #osdev
* Ermine
looks at nagios
<dostoyevsky2>
geist: idk, checking the instructions is also done by ebpf... and realistically you want e.g. shell scripts to be reasonably fast in a hobby os, so it wouldn't be too hard to recognize a couple of fork/execs to have a noticeable performance gain
<dostoyevsky2>
I guess when you want to actually implement a realistic OS you need to implement most of the fork optimizations like the major OSes do
<dostoyevsky2>
But I think in the Linux kernel a significant amount of complexity is probably due to fork optimizations
goliath has quit [Quit: SIGSEGV]
<netbsduser>
you could also leave it as a slow path and patch it out in favour of vfork wherever you find it
<netbsduser>
i think this is more reasonable
<dostoyevsky2>
netbsduser: Would COW typically copy on each fault, or would that also become slow very quickly? So you'd rather need to avoid repeated page faults and copy ahead?
<dostoyevsky2>
I guess when you use huge pages it makes the faults less frequent
<netbsduser>
dostoyevsky2: i personally would regard it as quite unacceptable to do some sort of "cow clustering" like this, since it increases the memory consumed potentially needlessly
oldgalileo has joined #osdev
<netbsduser>
so i think people who fork on top of my kernel should eat the consequences of their obscenity
<netbsduser>
dostoyevsky2: that's a big issue i think, the done thing is to break up the large pages on fork
<netbsduser>
so you get granular sharing of pages instead of having to go from sharing 2mib to sharing 0mib in one fault
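A toy model of that demotion (hypothetical structures, not any real kernel's page tables): on fork, the one 2 MiB entry is rewritten as 512 write-protected 4 KiB entries over the same physical range, so a later CoW fault copies 4 KiB instead of ending the sharing of the whole 2 MiB.

```cpp
// Toy model of large-page demotion on fork. One 2 MiB mapping becomes
// 512 read-only 4 KiB entries covering the same physical range; a later
// write fault then copies a single 4 KiB page rather than losing all
// sharing of the 2 MiB region in one fault.
#include <cstdint>
#include <vector>

constexpr uint64_t SMALL_PAGE = 4096;
constexpr uint64_t LARGE_PAGE = 2 * 1024 * 1024;

struct Pte { uint64_t phys; bool writable; bool large; };

std::vector<Pte> demote_for_fork(const Pte &big) {
    std::vector<Pte> small;
    small.reserve(LARGE_PAGE / SMALL_PAGE);
    for (uint64_t off = 0; off < LARGE_PAGE; off += SMALL_PAGE)
        small.push_back({big.phys + off, /*writable=*/false, /*large=*/false});
    return small;  // 512 entries; a real kernel must also shoot down TLBs
}
```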
<dostoyevsky2>
netbsduser: what programs actually use a fork->COW work-flow? I can only think of something arcane like apache/nginx passing sockets to worker processes in a fork-less web server...
bradd has quit [Ping timeout: 256 seconds]
<netbsduser>
dostoyevsky2: old servers, the sort that used inetd. it's not so bad there since they will be small
<dostoyevsky2>
netbsduser: couldn't one just rewrite those couple of programs instead of implementing all the fork optimizations in the kernel? haha
<netbsduser>
dostoyevsky2: realistically programs doing fork-per-connection don't need optimisation
<netbsduser>
it's good to uphold what's accepted and traditional but for good reason this is no longer the accepted and traditional practice
<netbsduser>
threads and/or epoll/kqueue are now the accepted practice and this has been the done thing for over 20 years so it's now more venerable than the old forking-per-connection approach was at that time
dude12312414 has joined #osdev
<dostoyevsky2>
netbsduser: if one were to write some linux-compatible kernel without support for fork and just have exec, would one still need to implement most of the COW stuff for the likes of mmap()?
<netbsduser>
dostoyevsky2: the CoW that is used for mmap() with MAP_PRIVATE is of an altogether different character
<netbsduser>
it's asymmetrical while the cow of fork is symmetrical
bradd has joined #osdev
<netbsduser>
that is, the shared page is only copied in the MAP_PRIVATE case when you write to a view of it within a private mapping
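That asymmetry is visible from userspace; a minimal demo (writes its own scratch file under /tmp; error checks omitted for brevity):

```cpp
// Demonstrates MAP_PRIVATE's asymmetric CoW: the first write through the
// private mapping faults in a private copy of the page, so the file (and
// every other mapping of it) never sees the modification.
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    int fd = open("/tmp/cow-demo", O_CREAT | O_RDWR | O_TRUNC, 0600);
    write(fd, "original", 8);

    char *priv = (char *)mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE, fd, 0);
    priv[0] = 'X';  // CoW fault: the kernel copies this page just for us

    char buf[9] = {0};
    pread(fd, buf, 8, 0);
    printf("mapping: %.8s  file: %s\n", priv, buf);  // "Xriginal" vs "original"

    munmap(priv, 4096);
    close(fd);
    return 0;
}
```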
<dostoyevsky2>
netbsduser: I guess one could implement mmap() of a file with just huge pages, and never really want to refactor it into smaller pages... unless e.g. it was MAP_SHARED with many processes writing?
oldgalileo has quit [Ping timeout: 240 seconds]
dude12312414 has quit [Remote host closed the connection]
netbsduser has quit [Ping timeout: 268 seconds]
dude12312414 has joined #osdev
dude12312414 has quit [Remote host closed the connection]