<geist>
kof123: i think NT inherited some of that with the whole system vs user variables stuff
netbsduser` has quit [Ping timeout: 256 seconds]
<geist>
i betcha those get expanded at a lower level than a posixy person would expect
dutch has joined #osdev
freakazoid332 has quit [Ping timeout: 246 seconds]
Yoofie has quit [Read error: Connection reset by peer]
<kof123>
that just leads to: daaaaaaaaaave cutler <unix person airballs> </baseketball>
Yoofie has joined #osdev
<mjg>
heat: openbsd has (or at least had until recently) a globally-locked kernel malloc
<heat>
what
<heat>
how?
<mjg>
?
<heat>
i'm starting to think literally no one uses openbsd
<geist>
hell, zircon does. it's a bottleneck in certain operations, but not most
<mjg>
they use it on their 4-cored laptops
<heat>
that's going to be a big contention point
<geist>
there's always a bottleneck, it's bottlenecks all the way down
<geist>
for openbsd that may not be the biggest one
<mjg>
*some* shit scales geist
<mjg>
like linearly
<mjg>
multicore-wise
<geist>
omg you're right!
<geist>
shit!!!!!111
<heat>
SHIT!
<heat>
poopy
<geist>
CRAAAAAAAP
<heat>
fart fart fart
<mjg>
applause MOTHERF^W
<geist>
ia64
<heat>
rut
<geist>
QED
<mjg>
ia64 is the shit innit
<mjg>
fuchsia port when
<mjg>
did you ask Larry if this can be your 20%
<heat>
big tech doesn't want itanium to succeed!!!!
<mjg>
FUCKING CONSPIRACY INNIT
<geist>
gosh i'd love to port to ia64
<geist>
i still should do an LK port
<geist>
if i didn't just waste time with vidjagames and irc
<heat>
everyone stfu
<heat>
let geist concentrate
<geist>
nah it's all my fault
<heat>
mjg, no way to do lockless/atomic-less fd allocation right?
<mjg>
with unix semantics?
<mjg>
no
<mjg>
(unix semantics == *always* pick lowest fd)
<mjg>
otherwise totally doable
frkazoid333 has joined #osdev
<qookie>
atomicless?
<mjg>
you assign fd ranges to threads and are done with it
<mjg>
"big enough"
<mjg>
this assumes some other constraints tho
<heat>
yeah i thought so too
<qookie>
okay yeah, i was thinking one ever-increasing counter per process for allocating fds
<mjg>
for example now you CAN NOT have lolthread1 open and lolthread2 close it
<mjg>
without atomics
<mjg>
but if you can suffer atomics, while achieving scalability for the most part, ranges are the way to go
<mjg>
but then, if you have a prog where you can consistently guarantee no fd overlap
<mjg>
... then maybe spawn a bunch of processes and maybe share some memory with mmap?
<mjg>
:]
<mjg>
ultra win
CaCode has quit [Ping timeout: 256 seconds]
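A minimal sketch of the fd-range idea mjg outlines, assuming the lowest-fd rule is dropped entirely; every name below is hypothetical, and per his caveat a thread closing another thread's fd would still need synchronization:

    #include <stdatomic.h>

    #define FD_RANGE_SHIFT 16   /* "big enough": 65536 fds per thread */

    static _Atomic unsigned next_range;        /* shared, touched rarely */
    static _Thread_local unsigned range_base;  /* this thread's block */
    static _Thread_local unsigned range_next;  /* bump cursor, private */

    /* one atomic per thread lifetime, not one per allocation */
    static void fd_range_init(void)
    {
        range_base = atomic_fetch_add(&next_range, 1) << FD_RANGE_SHIFT;
        range_next = 0;
    }

    /* no other thread allocates from this range: plain stores suffice */
    static int fd_alloc(void)
    {
        return (int)(range_base + range_next++);
    }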
<geist>
yeah unix semantics of fd allocation has to be one of the top 3 things that ended up being a bad idea
<geist>
its so limiting and a huge source of bugs/exploits
<mjg>
another fd-related funzy is processes blindly assuming they got 0/1/2 open
<mjg>
to rust's credit, they sanity check this state on binary startup
<mjg>
and the way they do it is least bad i can think of on unix (poll of the 3)
<heat>
musl does that on AT_SECURE
<mjg>
now that i wrote it i have a better idea, but it is kind of a hack
<heat>
(when suid, sgid, or special caps are involved)
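For illustration, a hedged sketch of that startup check, close in spirit to what musl does under AT_SECURE: make sure fds 0/1/2 are open before running anything, pointing any missing one at /dev/null (the exact failure policy here is an assumption):

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    static void sanitize_std_fds(void)
    {
        for (int fd = 0; fd <= 2; fd++) {
            /* F_GETFD fails with EBADF iff the descriptor is closed */
            if (fcntl(fd, F_GETFD) == -1 && errno == EBADF) {
                /* open() returns the lowest free fd, i.e. exactly fd */
                if (open("/dev/null", O_RDWR) != fd)
                    _exit(127);  /* refuse to run with broken std fds */
            }
        }
    }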
<mjg>
brah suid is another great unix shitter innit
eddof13 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<mjg>
did you know that fucking SOLARIS respected LD_PRELOAD for suid binaries? :D
<mjg>
non plus ultra kek
<mjg>
in general if you were at least a teenager in 2005, were not an idiot and wanted to do some security
<mjg>
facepalm-worthy bugs were everywhere
<mjg>
quite frankly embarrassing
<moon-child>
lol
<kof123>
re. env vars i think anything with multiple "personalities" (or subsystems) is going to have to decide a hierarchy: which ones get passed on to a process of a specific kind, which takes priority, whether different types of processes pull from different places to start out, etc.
<GreaseMonkey>
i wonder which versions did LD_PRELOAD on suid binaries because part of me wants to try exploiting that... but another part of me is aware that doing such an exploit is borderline trivial and not that interesting
MiningMarsh has quit [Ping timeout: 256 seconds]
<mjg>
it was something just after they released opensolaris
<mjg>
one of the components they *did not* open was the linker
<mjg>
that was sus so people started poking around and BAM
<heat>
mjg, fstat1_threads scales very well under RCU :v
<heat>
6M single threaded, 22M 4 threads
arminweigl has joined #osdev
<mjg>
we have been over this
<mjg>
lookups with different terminal entries avoid any cacheline ping pong even on freebsd
<mjg>
everything goes to shit if you have the same target
<mjg>
interestingly this can also be made to scale perfectly in the common case
<mjg>
i even had a prototype for freebsd
<heat>
what
<heat>
mate i had a spinlock before, and added rcu
<mjg>
oh you mean in your kernel?
<heat>
yes
<mjg>
you really should have specified IDI^W
<mjg>
are you sure you are leapfrogging correctly
<heat>
leapfrogging what?
<mjg>
between components mofer
<mjg>
yoloing from one to the next is wrong
<heat>
i'm not yoloing
<mjg>
what are you doing
<heat>
adding RCU, and now i added fd_table RCU support similar to how linux does things
<mjg>
what are you doing to secure the crossing you fuck
<heat>
wdym secure the crossing?
<mjg>
so you are yoloing
<heat>
pal
<heat>
first im testing the RCU impl for bugs
<heat>
then i'll enable KASAN and look at what pops up
<mjg>
i know nobody named heat
<heat>
there's nothing else I can do
<mjg>
do you understand what sequence counters are for in this shite
<heat>
what shite?
<mjg>
i'm going to sleep
<heat>
you're not making any sense
<mjg>
read "Fast path lookup protected with SMR and sequence counters" in vfs_cache.c on freebsd
<heat>
erm
<heat>
dude
<heat>
this is not path lookup
<mjg>
you are scaling fd lookup? :d
<heat>
yes
<mjg>
brah
<heat>
<heat> mate i had a spinlock before, and added rcu
<mjg>
that's so 2004
<mjg>
you did have spinlocked lookup previously?
<heat>
yep
<mjg>
i mean path lookup
<mjg>
look man, my brain is high on vfs
<mjg>
so
<mjg>
you gotta be more specific
<heat>
oh no
<heat>
lookup is rwspin locked
<mjg>
SHITE
<heat>
for paths
<heat>
it's not good but, again, i just needed a quick testbench for RCU
<heat>
and this is a super great improvement on its own
<heat>
my fd lookup scaled backwards with contention :v
<mjg>
chances are your locks suKKK
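For context, the usual shape of an RCU'd fd lookup (close in spirit to Linux's fget; heat's actual code may differ, and the struct layout here is invented):

    struct file *fd_lookup(struct fd_table **tablep, unsigned fd)
    {
        struct fd_table *table;
        struct file *f;

        rcu_read_lock();                    /* readers take no lock at all */
        table = rcu_dereference(*tablep);   /* table may be resized under us */
        if (fd >= table->nr_entries) {
            rcu_read_unlock();
            return NULL;
        }
        f = rcu_dereference(table->entries[fd]);
        /* the file can be closed concurrently; only take a reference if
           the refcount is still nonzero, otherwise report "no such fd" */
        if (f && !atomic_inc_not_zero(&f->refcount))
            f = NULL;
        rcu_read_unlock();
        return f;
    }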
vdamewood has joined #osdev
Yoofie has quit [Read error: Connection reset by peer]
Yoofie has joined #osdev
<linkdd>
well, it was in fact way easier to put my logical cpu index in the gs register, add swapgs to my ISRs, and then use that index to return the correct percpu data structure in my get_percpu_data() function
<heat>
oh ew
<heat>
why
<linkdd>
so, i don't really use the gs register as a segment with gs:0 pointing to the structure
<heat>
that's far worse
<linkdd>
how so? it works
<heat>
having gs_base = percpu data means %gs:0 gives you a direct pointer (no yucky codegen), %gs:2 gives you the data member at offset 2, etc
nvmd has joined #osdev
<linkdd>
maybe it's because it's 4am, and i haven't manipulated segmented memory layouts in the last 15 years, but it's currently unclear to me how to put a 64-bit pointer into a 16-bit register
xenos1984 has quit [Read error: Connection reset by peer]
joe9 has quit [Quit: leaving]
<heat>
haha
<heat>
%gs hasn't worked like that for 20 years
<heat>
you have a GS_BASE msr to set the segment's base
<heat>
and a KERNEL_GS_BASE for a banked copy for the kernel
<zid>
that's just how *gs* has worked for 20 years, segments haven't worked like that for 40
<linkdd>
swapgs swaps the content of the gs register with the msr one
<Cindy>
what's gs?
<linkdd>
so i do have to write to gs, or read from gs at some point
<qookie>
it swaps the contents of the gsbase and kernelgsbase msrs
<linkdd>
Cindy: the x86_64 register
<heat>
what qookie said
<linkdd>
qookie: oh, i must have misread the doc then
<linkdd>
4am again
<linkdd>
i should go to sleep
<zid>
(protected mode made those nice 16bit integers an index into a table instead, so 40 years since you put an actual address directly into a segment reg)
<linkdd>
so the gs register is totally useless, and i should use the apic to read from/write to the msr?
<heat>
why the apic?
<heat>
the APIC has nothing to do with MSRs
<linkdd>
https://bpa.st/URAA that's because i have this code somewhere, but maybe it's for a completely different purpose and my brain is as useful as a rock
<bslsk05>
bpa.st: View paste URAA
<linkdd>
4am is not a good time for my brain
<heat>
that has nothing to do with %gs or its base
<zid>
you probably wrote your rdmsr functions
<zid>
so you could mess with the apic
<linkdd>
zid: yes
<zid>
so your mind has conflated the two
<Cindy>
oh i see, fs and gs
<Cindy>
the 2 useless registers in x86_64
<heat>
what
<heat>
???
<heat>
is everyone on crack tonight
<qookie>
well the selector registers are quite useless :^)
<zid>
no, it's perfectly cromulent
<Cindy>
no seriously
<Cindy>
they should have called it address register number 8 or 9
<linkdd>
sooooooo, gs_value = rdmsr(0xC0000101) ?
<linkdd>
kernel_gs_value = rdmsr(0xC0000102) ?
<zid>
#define IA32_KERNEL_GS_BASE 0xC0000102
<zid>
can confirm
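Putting the above together, a sketch of the setup heat and zid describe, assuming a 64-bit kernel with inline-asm MSR access; the percpu layout is hypothetical:

    #include <stdint.h>

    #define IA32_GS_BASE        0xC0000101
    #define IA32_KERNEL_GS_BASE 0xC0000102

    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
        asm volatile("wrmsr" :: "c"(msr), "a"((uint32_t)val),
                                "d"((uint32_t)(val >> 32)));
    }

    struct percpu {
        struct percpu *self;   /* at %gs:0: lets us read our own base */
        uint32_t cpu_index;    /* at %gs:8 */
    };

    static void percpu_init(struct percpu *pc)
    {
        pc->self = pc;
        wrmsr(IA32_GS_BASE, (uint64_t)(uintptr_t)pc);  /* kernel-side base */
    }

    static inline struct percpu *get_percpu_data(void)
    {
        struct percpu *pc;
        /* one load through the segment base, no index lookup needed */
        asm volatile("mov %%gs:0, %0" : "=r"(pc));
        return pc;
    }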
<zid>
does linux let you do anything fun with userspace gs heat
<qookie>
wine needs it so yeah
<heat>
erm
<heat>
fsgsbase (the instructions) will work
<zid>
ah yea wine has a *lot* of one off behaviors like that
<heat>
but userspace sets it using arch_prctl(2)
<zid>
afaik they added an entirely new syscall mechanism for them fairly recently
<zid>
so that they could trap kernel32.dll doing `syscall` and redirect it back to wine
<heat>
in fact you'll see arch_prctl ARCH_SET_FS (iirc?) on every strace, that's where they set up TLS
<zid>
yea I knew fs was TLS
<zid>
wasn't sure if gs had anything
<heat>
ARCH_SET_GS also works
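The userspace side heat mentions, as a small hedged demo (Linux x86-64 only; there is no traditional glibc wrapper, hence syscall(2)):

    #include <stdint.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <asm/prctl.h>   /* ARCH_SET_GS */

    static uint64_t slot = 0xdeadbeef;

    int main(void)
    {
        if (syscall(SYS_arch_prctl, ARCH_SET_GS, &slot))
            return 1;
        uint64_t v;
        asm volatile("mov %%gs:0, %0" : "=r"(v));  /* reads `slot` */
        return v == 0xdeadbeef ? 0 : 1;
    }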
<zid>
I think that syscall thing ended up going through the security engine or something? The thread was fun
<zid>
but had lots of ideas in it and I stopped following it
<heat>
i vaguely know what you're talking about but i don't know what ended up happening
<zid>
same
<zid>
I read the first 20 or so posts cus I happened across it, then never saw the end
<zid>
usually someone just drops a patchset later on and it ends up in someone's tree
<zid>
there's no "this is the resolution to all that discussion!" alert
<bslsk05>
github.com: linux/arch/x86/Kconfig at 13b9372068660fe4f7023f43081067376582ef3c · torvalds/linux · GitHub
<klange>
I think all that says is it's a boolean and defaults to yes.
<heat>
x86 does not support nommu i'm pretty sure
CaCode has joined #osdev
stolen has joined #osdev
<AmyMalik>
i had a very stupid idea
<sham1>
why
<sham1>
why would you do that
<AmyMalik>
the idea is to start writing a... sidecar OS. so you see, I want to do some fuckery with DOS. and I want to do that fuckery on multi-core computers. and i want to do multi-core fuckery. this, as long as basically all routines that could conflict with hardware the sidecar OS needs to use are trapped, and a block of memory is shaded from view of DOS programs, could be interesting to muck about with.
<AmyMalik>
it's likely impossible, without effectively implementing an emulator as the OS. but i have to try it
<AmyMalik>
even though what I want to write is not an OS and would take great pains to not call itself an OS, the problem space is essentially the OSdev problem space, because I have to do things only OSes have to do.
<kof123>
well, "bytecode" or similar is another way to basically control what instructions can be executed
<kof123>
but i would say that falls under "emulator"
<Mutabah>
AmyMalik: sounds interesting...
<Mutabah>
like a very thin hypervisor with DOS-era devices?
Terlisimo has quit [Quit: Connection reset by beer]
gildasio1 has joined #osdev
gildasio has quit [Ping timeout: 240 seconds]
stolen has quit [Quit: Connection closed for inactivity]
Terlisimo has joined #osdev
stolen has joined #osdev
air has quit [Ping timeout: 246 seconds]
CaCode has quit [Ping timeout: 246 seconds]
zxrom has quit [Quit: Leaving]
zxrom has joined #osdev
danilogondolfo has joined #osdev
CaCode has joined #osdev
air has joined #osdev
austincheney_ has joined #osdev
austincheney has quit [Ping timeout: 260 seconds]
phoooo has joined #osdev
phoooo has quit [Client Quit]
air has quit [Ping timeout: 256 seconds]
air has joined #osdev
phoooo has joined #osdev
Burgundy has joined #osdev
phoooo has quit [Ping timeout: 246 seconds]
gog has joined #osdev
austincheney_ is now known as austincheney
GeDaMo has joined #osdev
Left_Turn has joined #osdev
stolen has quit [Quit: Connection closed for inactivity]
phoooo has joined #osdev
TheCatCollective has quit [Ping timeout: 246 seconds]
ebb has quit [Ping timeout: 240 seconds]
Cindy has quit [Ping timeout: 246 seconds]
hl has quit [Ping timeout: 252 seconds]
j`ey has quit [Ping timeout: 250 seconds]
<phoooo>
hello, when switching to supervisor mode in my riscv kernel while having paging enabled, i seem to get a fault with an mcause of 1 (instruction access fault), any idea why this happens?
<phoooo>
if i allow supervisor mode to access the entire physical address space as is, it simply works
<Mutabah>
Do you have the current PC pointing to a page that will be present once paging is on?
<phoooo>
it should, i have checked with the info mem command
hl has joined #osdev
<phoooo>
note that im switching to supervisor with mret, if that has anything to do with it
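For reference, the usual shape of that M-to-S handoff, written with hypothetical csr_* helpers (thin wrappers over csrw/csrc/csrs). Mutabah's point is that the very first fetch after mret already goes through the new satp translation, so the code at mepc must be mapped (identity-mapped is simplest):

    /* assumes Sv39; root_pt_phys and s_mode_entry are placeholders */
    csr_write(satp, (8UL << 60) | (root_pt_phys >> 12)); /* MODE=Sv39, PPN */
    asm volatile("sfence.vma");               /* drop stale translations */

    csr_clear(mstatus, 3UL << 11);            /* MPP field ... */
    csr_set(mstatus, 1UL << 11);              /* ... = S-mode */

    csr_write(mepc, (uintptr_t)s_mode_entry); /* target: must be mapped R+X */
    csr_write(stvec, (uintptr_t)s_trap);      /* so any fault is observable */
    asm volatile("mret");                     /* next fetch is translated */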
TheCatCollective has joined #osdev
j`ey has joined #osdev
bnchs has joined #osdev
ebb has joined #osdev
eck has quit [Ping timeout: 260 seconds]
eck has joined #osdev
sortie has joined #osdev
gareppa has joined #osdev
<phoooo>
Mutabah: wait, one question regarding pmpaddr... if i set it to the entire physical address space, will the supervisor mode be able to access physical memory as is? or will every memory access be through virtual memory?
* Mutabah
is away (Misc)
<phoooo>
i think i'm misunderstanding something
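On the pmpaddr question: PMP and paging compose rather than substitute. An S-mode access is translated through satp first, and the resulting *physical* address is then checked against the PMP, so opening the whole range does not bypass virtual memory. The xv6-style "grant everything" entry, using the same hypothetical csr_write helper as above:

    csr_write(pmpaddr0, 0x3fffffffffffffUL);  /* top of range, addr >> 2 */
    csr_write(pmpcfg0, 0x0f);                 /* A=TOR | R | W | X */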
phoooo68 has joined #osdev
phoooo68 has quit [Client Quit]
phoooo10 has joined #osdev
phoooo10 has quit [Client Quit]
phoooo15 has joined #osdev
phoooo15 is now known as phoooo_
phoooo has quit [Ping timeout: 246 seconds]
phoooo_ is now known as phoooo
Burgundy has quit [Ping timeout: 256 seconds]
<linkdd>
https://bpa.st/73RQ using this snippet to wait (in the bootstrap processor) for the AP processors to finish something. is it dumb?