<mjg>
utter nonsense seemed to be their thing (one of them anyway)
<heat>
no, 4chan is offensive humor (sometimes with air quotes around humor)
<mcrod>
i have it blackholed
<milesrout>
as Richard Stallman once said of /g/ "I tried to look at that page but saw only inane comments."
<mjg>
if stallman says that it may go either way man
<immibis>
mcrod: great, now i have to change the combination on my luggage
<mjg>
what is /g/ about
<heat>
linux i think?
<mjg>
aptly named
<mjg>
after GNU
<heat>
ah no, technology
<mjg>
technoloGy
<milesrout>
technology
<mjg>
i get it
<milesrout>
they probably just ran out of letters
<mjg>
some unix vibes
<milesrout>
when they creat'd it
<mjg>
i wonder how many syscalls got misnamed because the teletype malfunctioned
<mjg>
"well fuck it"
<mjg>
"who cares"
<milesrout>
there's a thread on /g/ right now about templeos
<mjg>
the really funny bit here is that the binary is named umount
<milesrout>
well about whether the CIA killed Terry actually
<mjg>
but syscall is unmount
<mjg>
like wtf man
<mcrod>
immibis: what
<heat>
milesrout, did you see the pinned post? rms, ballmer and terry + an anime girl
<mjg>
wait, are you active 4chan users?
* mjg
hands out some bans
<heat>
no, but i have a web browser
<milesrout>
very occasionally browse it but it's very low quality these days
<milesrout>
half the threads on /g/ are just about muh ai
<heat>
mjg, also it's umount on linux
<milesrout>
desperate nerds hoping for an AI chatbot girlfriend
<heat>
pog
<immibis>
mcrod: 8675309, that's the combination on my luggage
<mcrod>
you're kidding me
<mcrod>
you do realize that's also a well known song
<immibis>
mcrod: you do realize that's also a well known meme
<mcrod>
yes
<immibis>
oh hey, my cellphone internet is down. do you think they finally banned me for using multiple terabytes per month for the last several months? probably not at 2am
<immibis>
although that's midnight utc so who knows
<bslsk05>
gist.github.com: x86 is an octal machine · GitHub
<kazinsal>
also since all eight of the PDP-11's registers are addressable using all addressing modes (iirc) there's three bits of addressing mode selection and three bits of register selection
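(Concretely: PDP-11 double-operand instructions read naturally in octal as opcode / source mode+register / destination mode+register, so, if memory serves, MOV R2,R4 assembles to 010204, with 01 for MOV, 02 for mode 0 register 2, and 04 for mode 0 register 4.)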
<sham1>
I've played around with Z80 (well technically the Sharp CPU that's a stripped down Z80, found in GameBoys) and there the instructions also group neatly by octal
<heat>
in fact, you can return the scoped_lock in C++, no idea about the C hilarity here
<mjg>
i'm saying that's bad
<heat>
why?
<Griwes>
...automatic unlock on scope exit is good
<mjg>
instead, if one is concerned with locks leaking when they should not, add annotations of intended behavior
<Griwes>
it's how you avoid forgetting to unlock on some or all branches
<heat>
have you considered that scoped locks effectively clean up exit and error paths?
<mjg>
see above
<sham1>
I suppose the pessimal part about this is that locking probably shouldn't correlate with scope necessarily
<heat>
btw clang at least does not support conditional locking mate
<Griwes>
even people writing real world C have realized that and started making sure they get automatic cleanup with compiler attributes
<mjg>
by real world c you must mean systemd here
<sham1>
This is why I enforce one-place return religiously
<Griwes>
systemd is actually doing this right vOv
<heat>
mjg, qemu, glib
<sham1>
Well, not really. But reducing the amount of return paths is useful for this. Esp if you goto to an error path
<Griwes>
one-place return is silly
<mjg>
by relying on automagic unlock you are most likely extending hold time
<Griwes>
just have automatic cleanup and then you don't need to deal with bullshit of goto error;
<Griwes>
mjg, have you heard... about braces
<Griwes>
I routinely introduce scopes just to limit resource lifetime, I'm sure the C extensions for this also support that (because anything else would be insanity)
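A minimal sketch of the attribute-based cleanup being described, as it looks in GNU C (`__attribute__((cleanup))`, the mechanism behind systemd's `_cleanup_` macros and glib's `g_autoptr`); the helper and variable names here are made up for illustration:

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int counter;

    /* cleanup handler: receives a pointer to the annotated variable */
    static void unlockp(pthread_mutex_t **m)
    {
        if (*m)
            pthread_mutex_unlock(*m);
    }

    int bump(int limit)
    {
        int val;

        /* an extra brace scope bounds the hold time, per the point above */
        {
            __attribute__((cleanup(unlockp))) pthread_mutex_t *guard = &lock;
            pthread_mutex_lock(guard);
            if (counter >= limit)
                return -1;          /* unlock still runs on this early return */
            val = ++counter;
        }
        /* the lock has already been dropped here */
        return val;
    }

The unlock fires whenever `guard` goes out of scope, so every return path is covered without a `goto err` ladder.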
<sham1>
Well the C extensions exist because the compilers are also C++ compilers, so they follow that semsntic
<sham1>
Semantic
<sham1>
I almost typo'd that twice in the same manner
<sham1>
Phones are terrible
<heat>
why are you on IRC on your phone? bad move
<sham1>
Because I'm going to bed, very, very slowly
<heat>
IRC is best used on 90s hardware or a shitty libreboot thinkpad with an FSF sticker
<sham1>
> FSF
<sham1>
I'd rather not
<heat>
also that said thinkpad does not run any conventional distribution due to BLOBS
<sham1>
*Maybe* FSFE but yeah
<sham1>
And yes, of course you'd run Parabola BTW
<mjg>
Griwes: are you for real mate
<Griwes>
Yes, and it's been working for me perfectly for years
<gog>
hi
<sham1>
hi
<heat>
you should try out java's synchronized
<heat>
that can lock ANYTHING
<heat>
int a; synchronized (a) {System.out.println("mjg <3 Sun Microsystems");}
<sham1>
It's old-fashioned to just have an Object around you can synchronize with
* gog
locks and pops
<heat>
synchronized(this)!!
<sham1>
Of course modern Java can use a try-with-resources and a real lock type
<sham1>
Even works with the virtual threads
<mcrod>
hi
<sham1>
Didn't think I'd have the opportunity to rant about Java and some of the strange decisions by Sun
eddof13 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<sham1>
So, you can use synchronized on any instance of Object, i.e. any non-primitive type. You can also use any Object as a condition variable, wait on said object until another thread somewhere calls either notify or notifyAll. But this means that you have to put all this condvar stuff into every. Single. Object header
<sham1>
I suppose it's neat that you can technically lock on the resource itself instead of needing to lock on an external mutex or something, but it's still a strange choice, which can't even be removed anymore
zxrom__ has joined #osdev
zxrom_ has quit [Ping timeout: 245 seconds]
danilogondolfo has quit [Quit: Leaving]
eddof13 has joined #osdev
dutch has quit [Ping timeout: 246 seconds]
<heat>
sham1, doesn't mean that
<heat>
you could have a large futex-like hashtable
eddof13 has quit [Client Quit]
<heat>
for all of this stuff
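A rough sketch of heat's point in C: instead of reserving monitor/condvar space in every object header, hash the object's address into a fixed table of wait structures, futex-style. Everything here (`monitor_bucket`, `object_lock`, the bucket count) is invented for illustration and is not a claim about how any real JVM does it; colliding objects simply share a bucket, which costs contention but not correctness for plain locking.

    #include <pthread.h>
    #include <stdint.h>

    #define NBUCKETS 256

    struct monitor_bucket {
        pthread_mutex_t lock;
        pthread_cond_t cond;
    };

    static struct monitor_bucket table[NBUCKETS];
    static pthread_once_t table_once = PTHREAD_ONCE_INIT;

    static void table_init(void)
    {
        for (int i = 0; i < NBUCKETS; i++) {
            pthread_mutex_init(&table[i].lock, NULL);
            pthread_cond_init(&table[i].cond, NULL);
        }
    }

    /* map any object address to some bucket */
    static struct monitor_bucket *monitor_for(const void *obj)
    {
        pthread_once(&table_once, table_init);
        return &table[((uintptr_t)obj >> 4) % NBUCKETS];
    }

    void object_lock(const void *obj)   { pthread_mutex_lock(&monitor_for(obj)->lock); }
    void object_unlock(const void *obj) { pthread_mutex_unlock(&monitor_for(obj)->lock); }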
dutch has joined #osdev
heat has quit [Remote host closed the connection]
[itchyjunk] has joined #osdev
vdamewood has joined #osdev
Turn_Left has quit [Read error: Connection reset by peer]
<mcrod>
tell me something people
<mcrod>
and bear with me
<mcrod>
there is a reason I am asking
<mcrod>
(i am not this silly)
<mcrod>
if you set your stack size to `unlimited`, does that mean you never have to use malloc()?
<mcrod>
i am waiting for the response that I expect
<mcrod>
i am being intentionally cryptic for a reason
<kof123>
ram is ram. barring cpu instructions for "stack" stuff... "it's all the same to me." but this of course is going to depend on what a language and "program" does to a stack/how it defines a stack. i mean, you could "simulate" either with either, write/recreate language x inside of language y, etc.
<gog>
no
<gog>
depending on the language
<gog>
presuming a c-like where you mean a call stack where local variables with automatic storage duration live
<kof123>
^ there's "stack" and then there's "Stack"
<gog>
but
<vdamewood>
Anyone want to exchange stacks?
<gog>
if we're talking about a hypothetical language
<mcrod>
this is C/C++
<gog>
ok
<gog>
then no, the stack is volatile
<mcrod>
so if something stupid occurs
<vdamewood>
I'd rather program in Hypothetical.
<mcrod>
e.g., double gog[2048][2048][200]
<mcrod>
first off, under normal circumstances *duh*, yes, this will cause a stack overflow
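(For scale, assuming 8-byte doubles: 2048 * 2048 * 200 * 8 bytes is roughly 6.7 GB, or 6.25 GiB, against a typical 8 MiB default stack limit.)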
<kof123>
that makes more sense, just make one giant global variable lol
<mcrod>
but the stack is not infinite
<gog>
yes
<gog>
your stack is limited by address space
<vdamewood>
Isn' hat how Quake 2 worked?
<vdamewood>
that*
<gog>
i think a lot of games still work this way
<mcrod>
vdamewood: you're getting closer
<mcrod>
much closer to why I'm asking
heat has joined #osdev
<vdamewood>
One reason you still can't use the stack only, though, is that you may need to allocate memory in one stack frame, and release that stack frame before you dispose of the allocated memory.
<heat>
sorry, had to pop back in just to say something
<kof123>
sounds like AOS versus SOA maybe
<heat>
mcrod: what's up with the socratic questioning you doofus
<mcrod>
gog: so, limited by address space, and on x86-64 we have 48 bits that matter only (as far as I know, I am not an x86-64 wizard)
<gog>
57 if you have 5-level paging
<mcrod>
heat: i am trying to understand if `ulimit -s unlimited` is actually somehow unlimited, which I doubted in the first place
<mcrod>
and goggers over here appears to have answered my question
<heat>
obviously not unlimited
<mcrod>
yes
<heat>
but you just don't have a limit
<heat>
as in rlimit
<gog>
it's unlimited in the sense that the kernel will do no enforcement
<gog>
but you can still OOM
<heat>
what's likely to happen is that the stack will collide with an mmap and SEGV your ass
<gog>
yes
<mcrod>
right
<heat>
or obviously exhaust mem
<mcrod>
the reason I ask
<gog>
which is more likely to happen first on most computers
<mcrod>
in deeper terms
<mcrod>
I am investigating memory allocation schemes, and there's a bunch that basically just eat the shit out of the stack
<mcrod>
note: not for OS stuff
<vdamewood>
Sounds like fun on systems where the stack only holds 356 bytes.
<vdamewood>
256*
<gog>
i decided i'm going to learn fortran
<heat>
are u stopid
<gog>
i am in the named audience
<heat>
of stopid ppl?
<gog>
for tran
<vdamewood>
gog: Yay-ish.
<heat>
hahahahaha
<heat>
ha
<heat>
hahaha
<heat>
haha
<heat>
ha
<mcrod>
i'm just looking through game code
<heat>
are u tran?
<heat>
a single tran?
<gog>
yes
<mcrod>
and even for some playstation games malloc() is never called
<heat>
mcrod bw what allocation schemes
* vdamewood
assigns gog a fishy at birth
<mcrod>
read: my understanding is most games will call malloc() once to allocate a shit ton of 'x', then sell off slices to the application to "allocate"
<gog>
blub
<heat>
yes
<vdamewood>
mcrod: Naw, they use .bss
<vdamewood>
or the equivalent
<heat>
same shit, really
<mcrod>
i am trying to understand why this doesn't seem to be done by any playstation game I've looked at
<vdamewood>
but muh details!
<gog>
historically there are programs that implement their own malloc because the available one sucks
<mcrod>
first off the playstation BIOS was garbage
<mcrod>
so that's probably a good reason
<gog>
so they either just sbrk() and use that, mmap() and use that or malloc() a large chunk and use that
<gog>
so yeah
<gog>
definitely the reason for the playstation
<heat>
AIUI the cutting edge idea is to have a per-frame pool and allocate on that, then when the frame ends every allocation is implicitly fucked-off'd and you start from a clean slate
<gog>
also because a lot of games overwrote the firmware shadow to get a little extra memory
<mcrod>
dammit. they stole my idea.
<gog>
at least the areas that were safe to do
<mcrod>
this is what I was thinking about on my way home from work
<vdamewood>
heat: That sounds like how I implemented my first command interpreter.
<gog>
yeah crash bandicoot pioneered this technique iirc
<vdamewood>
You run a command, it allocates things, then when the command finishes, the allocator is reset and all the memory is implicitly free'd.
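A minimal sketch of that pattern in C: grab one big block up front, hand out slices by bumping an offset, and reset the offset at end-of-frame (or end-of-command) so everything is implicitly freed at once. Names and the alignment choice are made up for illustration:

    #include <stddef.h>
    #include <stdlib.h>

    struct arena {
        char *base;
        size_t cap;
        size_t used;
    };

    /* grab the whole budget once; a static .bss buffer works the same way */
    int arena_init(struct arena *a, size_t cap)
    {
        a->base = malloc(cap);
        a->cap = cap;
        a->used = 0;
        return a->base ? 0 : -1;
    }

    /* "allocate" by bumping an offset; no per-allocation bookkeeping */
    void *arena_alloc(struct arena *a, size_t size)
    {
        size = (size + 15) & ~(size_t)15;   /* keep 16-byte alignment */
        if (a->cap - a->used < size)
            return NULL;                    /* out of budget */
        void *p = a->base + a->used;
        a->used += size;
        return p;
    }

    /* end of frame/command: everything handed out is implicitly freed */
    void arena_reset(struct arena *a)
    {
        a->used = 0;
    }

The per-frame scheme heat mentions is then just `arena_alloc()` in the hot path and one `arena_reset()` when the frame ends.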
<mcrod>
so, my understanding is correct now: `ulimit -s unlimited` means "the kernel will not enforce any bullshit, but you are at risk of being killed at any time, and you're limited by address space"
<gog>
yes
<heat>
yes
<mcrod>
now
<heat>
$address_space being limited by x86 mm layout and aslr
<mcrod>
each process has its own virtual address space
<mcrod>
however, this virtual address space is "shared" with the rest of the system implicitly, seeing as how you can still go OOM if you decided to allocate the entirety of it all
<mcrod>
or close to it, whatever, at some point you get canned, you get the idea
<heat>
btw, just checked, x86 linux gives you 128MiB of stack
<mcrod>
the fuck?
<mcrod>
it's not 8MB anymore?
<heat>
that's the ulimit
<mcrod>
hm, I see
<vdamewood>
No one will ever need more than 640k.
<heat>
i was just looking at the mmap base placement code
<mcrod>
terminology check: ulimit == soft limit
<mcrod>
and then the stack is just... the stack, the hard stack maximum
* gog
stacks
<mcrod>
gog: may I tuck you in and give you a plushie
<kof123>
i thought ulimit stuff had soft and hard limits, but maybe that is bsd
<kof123>
which is not to deny that, but those terms might already be in use
<heat>
did you pick the wrong terms? xD
<gog>
mcrod: yes actually it's almost bedtime
* mcrod
tucks gog in and gives her a plushie
<mcrod>
heat: the fuck you say
* gog
prr and snuggle plushie
<kof123>
posix me harder. just call the other one harder limit
<gog>
i do have a plushie, it's a unicorn
<heat>
soft limit and hard limit are rlimit and ulimit terms already
<mcrod>
ok
<mcrod>
so ulimit is the soft limit which is 8MB on Linux
<heat>
no
<gog>
POSIX_ME_HARDER_DADDY
<mcrod>
ok I see
<heat>
ulimit -H looks at hard limits
<mcrod>
"Each call to either getrlimit() or setrlimit() identifies a specific resource to be operated upon as well as a resource limit. A resource limit is represented by an rlimit structure. The rlim_cur member specifies the current or soft limit and the rlim_max member specifies the maximum or hard limit. Soft limits may be changed by a process to any value that is less than or equal to the hard limit. A process may (irreversibly) lower its hard limit to any value that
<mcrod>
is greater than or equal to the soft limit. Only a process with appropriate privileges can raise a hard limit. Both hard and soft limits can be changed in a single call to setrlimit() subject to the constraints described above."
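For completeness, those two fields are easy to inspect from C; this is plain POSIX getrlimit, roughly what `ulimit -s` (soft) and `ulimit -Hs` (hard) report, except the shell prints kilobytes:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_STACK, &rl) != 0)
            return 1;

        /* rlim_cur is the soft limit, rlim_max the hard limit */
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("soft: unlimited\n");
        else
            printf("soft: %llu bytes\n", (unsigned long long)rl.rlim_cur);

        if (rl.rlim_max == RLIM_INFINITY)
            printf("hard: unlimited\n");
        else
            printf("hard: %llu bytes\n", (unsigned long long)rl.rlim_max);

        return 0;
    }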
<heat>
that being another mmap region or OOM exhaustion
<mcrod>
right
<mcrod>
so what I'm hearing is
<heat>
and mmap can be as close as 128MiB from the stack
<mcrod>
using a stack based allocator for a game (example, and of course it depends on the game) may go sideways
<heat>
depends on how much memory you're allocating lol
<mcrod>
well that's why i said it may go sideways :(
<mcrod>
basically, imagine allocating I don't know, 30MB
<mcrod>
that I would imagine would _not_ shit the bed
gog has quit [Quit: byee]
<heat>
at this point we're speculating on architectural and kernel details
<mcrod>
so it's safer to use malloc()
<mcrod>
until proven otherwise
<heat>
sure
<heat>
malloc a whole pool if you'd like, and allocate like that
<mcrod>
this is my questioning of you because I am but a wee fly in the land of memory allocations at a die hard implementation level
<heat>
anyway, why do you care?
<mcrod>
i just said why
<heat>
no you didn't
<heat>
are you writing a game?
<mcrod>
no
<mcrod>
this is "i want to understand more about this"
<heat>
oh ok
<heat>
so yeah we've been discussing the stack but memory allocators are different
<mcrod>
but there _are_ stack based memory allocators
<mcrod>
yknow', alloca()
<heat>
those are just pool based memory allocators with "char buf[0x2000000]; pool_alloc a{buf}"
<heat>
alloca() is not a memory allocator
<mcrod>
well, it _allocates_ space in the stack frame of the caller
<heat>
(in fact, the C++ standard library has the pmr allocators that can do exactly this, point them at a buf and a size and they alloc from that)
<heat>
sure, but it's not /really/ memory allocation
<heat>
like, you can't return, ever
<heat>
if alloca is memory allocation, int a; is memory allocation
<mcrod>
ok fair point
<heat>
do you want a brain dump about memory?
<mcrod>
yes
<heat>
ok so UNIXes traditionally laid out memory like "program .text - .data - brk (right after .data) - large gap - stack - top of the addr space"
<heat>
then mmap got retrofitted in somewhere in the middle of brk and stack
<heat>
linux's modern x86 addr space layout kind of doesn't do this and just makes mmap allocate downwards, mmap base being really high up with some distance to the top of the stack to allow it to grow
<heat>
and things like the brk and stack and mmap base are all ASLR'd
<heat>
also they randomly picked the ELF PIE executable base to like 2/3 of the address space's size because whatever
<heat>
so on a typical distro you can look at your default PIE /bin/bash with pmap $$ and see "0x5500.....(64-bit number) - program .text - .data - some random offset to brk ... mmaps - stack (with random offset to the top)" with mmaps being in reverse-ish order
eck has quit [Ping timeout: 240 seconds]
<heat>
also no one really uses the brk anymore so that's not really important these days
<heat>
but, you know, the stack grows downwards on page faults, brk goes upwards on brk(2), whatever, it's pretty standard shit
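One way to eyeball that layout without pmap is to print an address from each region and compare; a quick sketch, assuming a typical Linux x86-64 PIE build (the exact numbers move around between runs because of ASLR):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void)
    {
        int on_stack;
        void *heap = malloc(64);                    /* small mallocs usually come from the brk region */
        void *map = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        printf(".text  ~ %p\n", (void *)main);      /* PIE base */
        printf("heap   ~ %p\n", heap);              /* brk, a random offset above .data */
        printf("mmap   ~ %p\n", map);               /* mmap base, high up, allocating downwards */
        printf("stack  ~ %p\n", (void *)&on_stack); /* near the top of user space */

        munmap(map, 4096);
        free(heap);
        return 0;
    }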
eck has joined #osdev
linearcannon has joined #osdev
<heat>
anyway you have multiple allocation schemes, they're all decently documented in their own respective allocator docs
<heat>
power-of-2 allocators are not very good due to lots of internal fragmentation
<heat>
so that makes buddy allocation for the heap kinda cringe
<heat>
slab allocators are good tho, press 1 for more on that
<mcrod>
uh
<mcrod>
ok speak about slab allocators
linear_cannon has quit [Ping timeout: 252 seconds]
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
<heat>
ok so the geniuses up at Sun microsystems thought of a way to allocate objects in such a way that you could easily cache em or whatever, with minimal fragmentation, in a fast way. so slab uses caches which are essentially a struct slab_cache {size_t obj_size; list partial_slabs; list free_slabs; list used_slabs;}. what's a slab? more or less a
<heat>
page, where you allocate from
linear_cannon has joined #osdev
<heat>
turns out this object allocation thing can be fit pretty easily into a standard malloc implementation
<heat>
(use caches of various sizes as bins, and you'll be able to get the slab from the pointer on free() anyway)
<heat>
and this is all pretty great as allocating from a slab is dumb-easy, allocating from a cache is dumb-easy (allocate from partials first (to try to fill them up and reduce fragmentation), allocate from free next, else allocate a new slab)
linearcannon has quit [Ping timeout: 246 seconds]
<heat>
and you reduce lock contention cuz each cache has its own locking
<heat>
then the VMEM paper from ~the same guys came out a few years later, going on about percpu magazines and how you should have a per-cpu cache of objects as that makes it effectively scale on many CPUs
<heat>
so you end up allocating batches of objects and putting them on your percpu cache, so the allocation fast path doesn't need to lock
<heat>
and yeah, that's pretty much it. some userspace allocators do things kind of differently but the per-cpu/per-thread cache thing is common
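A toy sketch of the slab shape described above: fixed-size objects carved out of page-sized slabs, with partial slabs tried first so they fill up. It deliberately skips locking, the full-slab list, freeing and the per-CPU magazines, and the names are invented rather than taken from any real allocator:

    #include <stddef.h>
    #include <stdlib.h>

    #define SLAB_BYTES 4096

    struct slab {
        struct slab *next;
        void *freelist;              /* singly linked list of free objects */
        size_t in_use;
    };

    struct slab_cache {
        size_t obj_size;             /* assumed >= sizeof(void *) and smallish */
        struct slab *partial;        /* slabs with some objects in use */
        struct slab *empty;          /* slabs with every object free */
    };

    /* carve a fresh slab into obj_size chunks threaded onto its freelist */
    static struct slab *slab_new(size_t obj_size)
    {
        struct slab *s = malloc(SLAB_BYTES);
        if (!s)
            return NULL;
        size_t nobj = (SLAB_BYTES - sizeof(*s)) / obj_size;
        char *obj = (char *)(s + 1);

        s->next = NULL;
        s->in_use = 0;
        s->freelist = NULL;
        for (size_t i = 0; i < nobj; i++, obj += obj_size) {
            *(void **)obj = s->freelist;
            s->freelist = obj;
        }
        return s;
    }

    void *cache_alloc(struct slab_cache *c)
    {
        struct slab *s = c->partial;

        if (!s) {
            /* no partial slab: reuse an empty one, else allocate a new slab */
            if (c->empty) {
                s = c->empty;
                c->empty = s->next;
            } else {
                s = slab_new(c->obj_size);
                if (!s)
                    return NULL;
            }
            s->next = c->partial;
            c->partial = s;
        }

        void *obj = s->freelist;     /* pop one object */
        s->freelist = *(void **)obj;
        s->in_use++;

        if (!s->freelist)
            c->partial = s->next;    /* slab is now full; a real cache tracks these too */
        return obj;
    }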
<heat>
also worth noting that kernel allocators tend to be a lot more greedy with unused memory as page reclamation can reclaim them pretty effortlessly
<heat>
whereas userspace could get swapped to death or OOM-kill'd