Turn_Left has quit [Read error: Connection reset by peer]
heat has quit [Read error: Connection reset by peer]
heat has joined #osdev
wlemuel has quit [Quit: Ping timeout (120 seconds)]
wlemuel has joined #osdev
heat_ has joined #osdev
heat has quit [Ping timeout: 248 seconds]
heat_ has quit [Remote host closed the connection]
xenos1984 has quit [Read error: Connection reset by peer]
heat_ has joined #osdev
xenos1984 has joined #osdev
SGautam has quit [Quit: Connection closed for inactivity]
linearcannon has joined #osdev
elastic_dog is now known as Guest4508
Guest4508 has quit [Killed (iridium.libera.chat (Nickname regained by services))]
elastic_dog has joined #osdev
linear_cannon has quit [Ping timeout: 250 seconds]
vdamewood has quit [Remote host closed the connection]
elastic_dog has quit [Ping timeout: 260 seconds]
vdamewood has joined #osdev
elastic_dog has joined #osdev
nyah has quit [Quit: leaving]
[itchyjunk] has quit [Ping timeout: 264 seconds]
elastic_dog has quit [Ping timeout: 250 seconds]
[itchyjunk] has joined #osdev
elastic_dog has joined #osdev
heat has joined #osdev
heat_ has quit [Ping timeout: 260 seconds]
elastic_dog has quit [Excess Flood]
wlemuel has quit [Ping timeout: 248 seconds]
wlemuel has joined #osdev
elastic_dog has joined #osdev
elastic_dog is now known as Guest9814
elastic_dog has joined #osdev
Guest9814 has quit [Ping timeout: 265 seconds]
epony has quit [Remote host closed the connection]
elastic_dog has quit [Read error: Connection reset by peer]
heat has quit [Remote host closed the connection]
heat has joined #osdev
vdamewood has quit [Read error: Connection reset by peer]
vdamewood has joined #osdev
elastic_dog has joined #osdev
epony has joined #osdev
skipwich has quit [Quit: DISCONNECT]
skipwich has joined #osdev
goliath has quit [Quit: SIGSEGV]
vdamewood has quit [Remote host closed the connection]
vdamewood has joined #osdev
gog has quit [Ping timeout: 264 seconds]
bradd has joined #osdev
heat has quit [Remote host closed the connection]
yyp has quit [Ping timeout: 265 seconds]
heat has joined #osdev
yyp has joined #osdev
<heat>
geist, do you actually make use of D and A in zircon?
<geist>
A yes, D no
<heat>
i find A and D really bothersome to use IMO, would rather just take a fault
<heat>
and that's exactly what I do in the case of e.g MAP_SHARED dirtying (for writeback)
<heat>
oh yes I also use the A bit for funny TLB shootdown tricks, which I'm not entirely sure are worth it anyway
zaquest has joined #osdev
[itchyjunk] has quit [Remote host closed the connection]
<geist>
the A bit or the A fault?
<Griwes>
how do you use the A bit for funny tlb shootdown tricks without a possible toctou somewhere?
frkazoid333 has quit [Ping timeout: 240 seconds]
wlemuel has quit [Ping timeout: 255 seconds]
wlemuel has joined #osdev
vdamewood has quit [Read error: Connection reset by peer]
vdamewood has joined #osdev
<heat>
Griwes, xchg
<heat>
geist, originally i was talking about the bits yeah
<heat>
the problem with D and A bits is that you need a background thread to sweep those bits into something more usable, which i'm not sure can be done efficiently compared to faulting up front (the A/D fault, or doing stuff manually by WPing the page)
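A minimal sketch of the background sweep heat is describing, assuming a plain 64-bit x86-style PTE and ignoring TLB behaviour entirely; the atomic read-modify-write (the same property heat's "xchg" answer to Griwes relies on) clears the bit and observes its old value in one step, so a hardware set of the bit can't slip in between the read and the clear:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define X86_PTE_ACCESSED (1ULL << 5)

    typedef _Atomic uint64_t pte_t;

    /* Clear the accessed bit and report whether the page was touched since the
       last sweep; the caller feeds the result into its age/LRU bookkeeping. */
    static bool harvest_accessed(pte_t *pte)
    {
        uint64_t old = atomic_fetch_and(pte, ~X86_PTE_ACCESSED);
        return (old & X86_PTE_ACCESSED) != 0;
    }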
slidercrank has joined #osdev
<geist>
yeah you can *always* simulate the a bit the hard way and just constantly unmap pages and watch them get faulted back in
<geist>
the A bit lets you do it in a slightly less expensive way
<geist>
and that way keeps you from having to regenerate mappings on demand, if that's an issue with your design
<geist>
if your A bit is simulated with an actual accessed fault then it gets kinda debatable if it's worth the trouble
<heat>
yeah but i'm not entirely sure if it's worth the trouble to use any sort of hw A bit because of the background page bit collection step
<geist>
but presumably it's still faster to just deal with the accessed fault rather than a full page fault where you have to regenerate the mapping
<geist>
but yeah, that assumes a background collection step no matter what
<heat>
at least linux doesn't think so, for MAP_SHARED. I don't know what it does for LRU, etc
<geist>
if you're not actively collecting age data then yeah you dont need it
<geist>
sure, and that's a given: you can simply start with pages marked A and then forget about it
<geist>
with and without hardware assist
<geist>
for zircon we set it for certain types of VMOs that we care about age data for, but then things like all kernel mappings: A bit is set at map time and never harvested
<geist>
and most of the time user pages start with the A bit set, since you're mapping it because you touched it
<geist>
or at least pages that were demand faulted
<geist>
which is most of the time
<zid`>
heat: What does this have to do with AMD Ryzen's new product lines though?
<heat>
i don't know
bgs has joined #osdev
ThinkT510 has quit [Quit: WeeChat 3.8]
ThinkT510 has joined #osdev
heat has quit [Read error: Connection reset by peer]
heat has joined #osdev
epony has quit [Remote host closed the connection]
wlemuel has quit [Ping timeout: 252 seconds]
epony has joined #osdev
wlemuel has joined #osdev
danilogondolfo has joined #osdev
bnchs has quit [Remote host closed the connection]
vdamewood has quit [Remote host closed the connection]
vdamewood has joined #osdev
bgs has quit [Remote host closed the connection]
GeDaMo has joined #osdev
<heat>
geist, how did you tackle PCID support?
<geist>
what do you mean?
<heat>
what was your solution for x86 pcid
<sortie>
sortix.org is now running Sortix straight from the master branch using my official nginx port, anyone can now set up such a website using my OS :D
<heat>
i see 2 obvious ways of doing it: 1) keep "hw address spaces" tagged by pcid, with an active mask of CPUs that may have it active, and on tlb shootdown you shoot down every possibly-active cpu 2) keep a tlb generation counter, only do shootdowns on cpus actively running the address space, and when loading a PCID the cpu checks the gen counter and does a full invalidation if it's stale
<heat>
i think FreeBSD does 1)
<geist>
1
<geist>
though clearly it's not a totally scalable solution
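A minimal sketch of heat's option 1, with hypothetical types and helpers rather than zircon's or FreeBSD's actual code: each address space owns a PCID plus a mask of CPUs that may still cache translations tagged with it, and the unmap path only IPIs the CPUs in that mask:

    #include <stdatomic.h>
    #include <stdint.h>

    struct hw_aspace {
        uint16_t pcid;                  /* tag loaded into CR3 with this aspace */
        _Atomic uint64_t active_cpus;   /* CPUs that may cache entries for pcid */
    };

    /* Hypothetical helper: IPI every CPU in 'mask' to invalidate (pcid, va). */
    void send_invlpg_ipi(uint64_t mask, uint16_t pcid, uintptr_t va);

    /* Context switch onto 'as': this CPU may now cache its translations. */
    static void aspace_activate(struct hw_aspace *as, unsigned cpu)
    {
        atomic_fetch_or(&as->active_cpus, 1ULL << cpu);
    }

    /* Unmap path: shoot down every CPU that might still hold the mapping. */
    static void aspace_shootdown(struct hw_aspace *as, uintptr_t va)
    {
        uint64_t mask = atomic_load(&as->active_cpus);
        if (mask)
            send_invlpg_ipi(mask, as->pcid, va);
    }

The scalability issue falls out of this layout: every live aspace needs its own tag, hence the cap on active processes geist mentions below.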
<heat>
did you measure the 2nd option?
<geist>
linux does something like 3 that's hard to unsee
<geist>
2 is much more complicated, so 1 is a good starting point
<heat>
like 3?
<geist>
as in another solution
<heat>
ah
<geist>
linux solution is completely grody but pretty clever, and still makes you a little sad
<heat>
it's hard to figure out which of these solutions is better, without doing a full implementation plus measurement
<geist>
yeah, the problem with 1 is you run out of PCIDs
<geist>
so for linux it's unacceptable. for now it's okay limiting zircon to 4k active processes
<heat>
ah yeah that kind of sucks
<geist>
though it's noted that it isn't a good long term solution
<geist>
but it also meshes well with AMD's new global TLB shootdown mechanism
<zid`>
Did someone say amd
<geist>
which is very close to ARM. that mechanism however relies intrinsically on all cores seeing the same PCID for the same process
<heat>
if you want a silly idea you could reserve a single PCID for "couldn't find one" as a catch-all
<geist>
ie, at least at one point in time, PCID X must refer to the same process
<geist>
on all cores, or you can't do a global TLB shootdown without IPIs
<geist>
riscv effectively has the same thing if you use the SBI mask-based shootdown
<heat>
yeah i would love to play around with the AMD shootdown thing
<heat>
but alas, no support here
<geist>
the linux x86 PCID solution (for intel at least) is completely different. basically each cpu tracks independently the last 6 PCIDs it's assigned to the last 6 processes it has run
<heat>
it's bizarre that linux doesn't support it yet. i wonder if there are any hidden drawbacks that AMD has seen
<geist>
each cpu has a completely different set of PCIDs, and only really uses like 1-7
<geist>
why 6? because there seem to be diminishing returns after that
<geist>
and since every context switch needs a linear search through the last 6 processes, a list of 6 pcids fits in a cache line and is thus effectively O(1), pretty much measurably
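A minimal sketch of the scheme geist is describing, with assumed structure rather than the actual Linux code: each CPU privately remembers the last few address spaces it ran and which PCID slot it gave each one, the context switch does a linear scan of that small array, and a miss steals a slot round-robin and flushes it:

    #include <stdint.h>

    #define PCIDS_PER_CPU 6

    struct mm;                          /* opaque address-space handle */

    struct pcid_slot {
        struct mm *mm;                  /* address space that last used this slot */
        uint64_t   tlb_gen;             /* generation seen when it was loaded */
    };

    struct cpu_pcid_cache {
        struct pcid_slot slot[PCIDS_PER_CPU];
        unsigned next_victim;
    };

    /* Pick the PCID (slot index + 1 here) to load for 'mm'; *need_flush says
       whether its TLB entries are stale or the slot was freshly stolen. */
    static unsigned pick_pcid(struct cpu_pcid_cache *c, struct mm *mm,
                              uint64_t cur_gen, int *need_flush)
    {
        for (unsigned i = 0; i < PCIDS_PER_CPU; i++) {
            if (c->slot[i].mm == mm) {
                *need_flush = (c->slot[i].tlb_gen != cur_gen);
                c->slot[i].tlb_gen = cur_gen;
                return i + 1;
            }
        }
        unsigned v = c->next_victim;
        c->next_victim = (v + 1) % PCIDS_PER_CPU;
        c->slot[v] = (struct pcid_slot){ .mm = mm, .tlb_gen = cur_gen };
        *need_flush = 1;
        return v + 1;
    }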
<heat>
haha ok so you just "can't unsee"'d me
<geist>
yeah. it's pretty clever but then it has a bunch of subtle complexity to it
<geist>
and then that seems to be highly intel specific. or at least that design is not compatible with AMD's solution (though you can use an AMD cpu like an intel cpu, you just can't do the global tlb shootdown)
<geist>
so it's kinda a clever but very x86 specific solution
<geist>
a "you have N bits of ASID you must assign to M processes" scheme is the more general solution for ARM, AMD, riscv, etc
<heat>
i think 2) is workable
<geist>
iirc what i didn't like about the freebsd solution is at some point it has a [MAX_CPUS] array per pmap
<heat>
for the ARM64, AMD stuff
<geist>
i forget precisely what. a gen counter maybe
<geist>
but for every pmap (aspace) it has to track per cpu a word of data, so that gets kinda size expensive
<heat>
picking between 1) and 2) is hard for me cuz I can't really know what would be more expensive. overly zealous IPI shootdowns may suck, whereas 2) avoids that at the cost of possibly having to flush the whole PCID on a context switch
<geist>
what i did for zircon is track a dirty bit per cpu per aspace, set if the aspace has changed since that cpu last loaded it
<geist>
so it does a full shootdown if a cpu switches back to an aspace
<geist>
that had been modified while it wasn't active
<geist>
but it avoids the extra IPIs
<geist>
so i guess that's your #2?
<heat>
yes
<geist>
some informal testing showed that for at least a regular workload i was running on it, it was having to flush the TLB maybe 30% of the time
<geist>
so like 70% of the time it was strictly better than before
<geist>
but that's a highly specific zircon level workload
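A minimal sketch of the scheme geist describes for zircon, with assumed names and layout rather than the actual code: the unmap path just marks remote CPUs as stale instead of IPIing them, and a CPU that switches back onto a marked aspace does one full flush of its PCID:

    #include <stdatomic.h>
    #include <stdbool.h>

    #define MAX_CPUS 64

    struct aspace {
        unsigned short pcid;
        _Atomic bool tlb_stale[MAX_CPUS];  /* set when mappings changed behind a CPU's back */
    };

    /* Unmap path: mark every CPU in 'cpu_mask' (the CPUs not currently running
       this aspace) instead of sending them an IPI. */
    static void aspace_mark_stale(struct aspace *as, unsigned long long cpu_mask)
    {
        for (unsigned cpu = 0; cpu < MAX_CPUS; cpu++)
            if (cpu_mask & (1ULL << cpu))
                atomic_store(&as->tlb_stale[cpu], true);
    }

    /* Context switch path: true means this CPU must fully flush the aspace's
       PCID before running it. */
    static bool aspace_needs_flush(struct aspace *as, unsigned cpu)
    {
        return atomic_exchange(&as->tlb_stale[cpu], false);
    }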
foudfou has quit [Remote host closed the connection]
<heat>
i guess on stuff like this, scheduler etc probably has a big difference
<heat>
if your scheduler is thread-migration happy, etc
<geist>
actually not so much migration, more like if you have a lot of multithreaded processes that are running more than one thread at a time
xenos1984 has quit [Read error: Connection reset by peer]
<geist>
and if they modified their mappings during that window
foudfou has joined #osdev
xenos1984 has joined #osdev
<geist>
actually it doesn't really even need to be running simultaneously. say you have a process with 20 threads and they're evenly distributed across like 8 cores when they do get a chance to run.
<geist>
if one of them modifies the memory map then the next time the next 19 threads run they may run on the other 7 cores, and need a TLB shootdown
<geist>
or at least the first time one of the 7 other cores gets a chance to run one of the 19 other threads it'll see that the PCID is dirty and shoot it down
<geist>
but that's still strictly better than aggressively, effectively broadcast IPIing to all the other cores all the time
<geist>
on RISCV that's debatable. actually an interesting question mark
<heat>
does current riscv even have ASIDs?
<heat>
as in real hw, not ISA
<geist>
good question, the vision five 2 doesn't, but i'm fairly certain the sifive cores in the unleashed/unlimited do
<geist>
actually dunno what the C906/C910 based cpus do
<geist>
i know about some future riscv stuff but cannot comment
<heat>
TOP SECRET
<geist>
but basically the ISA says you can implement anywhere from 0..16 bits. and i can tell you i've seen a wide variety
<geist>
linux riscv currently requires a minimum of 2*NR_CPUS + 1. it implements some sort of dynamically assign and rotate through PCIDs thing
<geist>
the 2 * thing is i think to let it always find one while it's releasing the last one
<geist>
and the +1 is i think for the kernel
<heat>
oh yeah how does any of this play with meltdown?
<geist>
for x86 pcid i think it always assigns each process a pair
<geist>
something like an even/odd pair, or bit 12 set for the other one or something
<geist>
so it effectively halves the number
<geist>
i do think the major problem with meltdown and PCID is you can't really use global pages anymore because reasons
<geist>
and then it gets a lot more complicated to deal with kernel mappings
<geist>
and i dont completely grok that. have not implemented KPTI for zircon yet
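A minimal sketch of the pairing geist describes, with a hypothetical choice of distinguishing bit (the real KPTI layout may differ): each process gets a kernel-side PCID and a user-side PCID derived from it, so kernel entry and exit can switch page tables without discarding either set of TLB entries, at the cost of halving the usable PCID space:

    #include <stdint.h>

    #define PCID_USER_BIT 11   /* hypothetical: top bit of the 12-bit PCID */

    static inline uint16_t kernel_pcid(uint16_t asid)
    {
        return asid;                          /* used with the kernel page tables */
    }

    static inline uint16_t user_pcid(uint16_t asid)
    {
        return asid | (1u << PCID_USER_BIT);  /* used with the user (PTI) page tables */
    }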
<heat>
ah
<heat>
so in theory fuchsia is completely vulnerable to random reading of memory?
<bslsk05>
drewdevault.com: Writing Helios drivers in the Mercury driver environment
<sham1>
// FIXME: God damn shite code. Fix this, future me
<sham1>
It's amazing how much I've had to do that over at work when doing technology migration stuff
selve has quit [Remote host closed the connection]
selve has joined #osdev
dude12312414 has joined #osdev
Bonstra_ has quit [Read error: Connection reset by peer]
vdamewood has quit [Remote host closed the connection]
vdamewood has joined #osdev
Turn_Left has joined #osdev
Left_Turn has quit [Ping timeout: 250 seconds]
<Ermine>
ddevault: good job!
heat has quit [Remote host closed the connection]
heat has joined #osdev
<ddevault>
Ermine: thanks!
crankslider has joined #osdev
crankslider is now known as slidercrank
wlemuel has quit [Ping timeout: 265 seconds]
wlemuel has joined #osdev
goliath has quit [Quit: SIGSEGV]
vdamewood has quit [Remote host closed the connection]
vdamewood has joined #osdev
xenos1984 has quit [Ping timeout: 265 seconds]
xenos1984 has joined #osdev
weinholt` has joined #osdev
gog` has joined #osdev
zhiayang_ has joined #osdev
stefanct__ has joined #osdev
weinholt has quit [Ping timeout: 255 seconds]
zhiayang has quit [Ping timeout: 255 seconds]
gog has quit [Ping timeout: 255 seconds]
stefanct has quit [Ping timeout: 255 seconds]
stefanct__ is now known as stefanct
zhiayang_ is now known as zhiayang
goliath has joined #osdev
danilogondolfo has quit [Remote host closed the connection]
bnchs has joined #osdev
dutch has joined #osdev
DynamiteDan has quit [Excess Flood]
DynamiteDan has joined #osdev
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
<heat>
is there an easy way to get a bunch of commits into separate branches and pushed?
<heat>
context: I'm doing separate commits of unrelated stuff I had in my staging area. would like to check out a branch for each, based on origin/master, and push
DynamiteDan has quit [Excess Flood]
<heat>
usually you should use feature branches but I didn't
DynamiteDan has joined #osdev
<sortie>
heat, use git rev-list to iterate each commit, then in a for loop checkout a branch (using e.g. the commit message for the name, see --format= plus some sed), and cherry-pick the commit (and hope it applies)
<heat>
oh gosh
<heat>
thanks but that sounds way harder than just doing manually I guess, lol
<heat>
why do something manually for 10 minutes if you can struggle automating it for 1 hour
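For reference, a rough shell sketch of the recipe sortie outlines (hypothetical branch naming; assumes the commits sit on the current branch on top of origin/master and cherry-pick cleanly):

    git fetch origin
    for c in $(git rev-list --reverse origin/master..HEAD); do
        name=$(git log -1 --format=%f "$c")   # slugified subject line as branch name
        git checkout -b "$name" origin/master
        git cherry-pick "$c" || { git cherry-pick --abort; git checkout -; continue; }
        git push -u origin "$name"
        git checkout -                        # back to the original branch for the next one
    done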
<sortie>
The git is not strong with this one
<heat>
sortie, have you switched to CVS like a good BSD?
<sortie>
heat, I use a got-like system called git
<heat>
>got-like system called git
<heat>
thanks i hate it
selve has quit [Remote host closed the connection]
selve has joined #osdev
troseman has joined #osdev
troseman has quit [Client Quit]
selve has quit [Remote host closed the connection]
selve has joined #osdev
<heat>
0xffffffff8100d9b1 <+17>: jne 0xffffffff8100d9b4 <__spin_lock(spinlock*)+20>
<Griwes>
I'd suspect the tail call is just done too late, maybe even in the codegen phase?
<heat>
there's also a funny detail in clang when specifying "rm" in inline asm constraints where it always picks memory for internal lack-of-information/compiler-pass reasons
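For reference, a minimal standalone example (not from heat's tree) of the "rm" constraint in question; it tells the compiler it may pass x in either a register or a memory slot, and the quirk heat mentions is that clang tends to pick the memory form even when a register would be cheaper:

    /* add an r/m32 source into the register holding acc; valid whether %1 ends
       up as a register or a stack slot */
    static inline int add_rm(int acc, int x)
    {
        __asm__("add %1, %0" : "+r"(acc) : "rm"(x));
        return acc;
    }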
troseman has quit [Client Quit]
<Griwes>
I'd try to chase tail call optimizations in the gcc source code, but I'd rather not look at gpl code if I can avoid it :P
<heat>
yeah i'm not looking at gcc either
<heat>
LLVM does seem much more readable to me, a non-compiler-engineer
troseman has joined #osdev
<Griwes>
oh yeah, that too
<heat>
whereas GCC is the funniest piece of Old-C-code-turned-C++-ish
troseman has quit [Client Quit]
<heat>
on the other hand, rebuilding llvm is depressing. so it balances out
troseman has joined #osdev
<Griwes>
heh
<Griwes>
I want to get back to my OS work but for my own sanity I first need to port my patches from llvm 14 to 16 and I just get deflated whenever I think of it lol
<heat>
I know it's not your area so to speak, but it's very annoying that I can't build compiler-rt with GCC and vice-versa (libgcc with clang)
<Griwes>
I'm pretty sure it'll be better than what I'm dreading, but still
<heat>
so, erm, fix it magic compiler man
troseman has quit [Client Quit]
<heat>
my patches are ezpz to port between LLVM versions *except* compiler-rt sanitizer stuff, which usually takes a while to sort out
<heat>
but the TC code is solid
<Griwes>
there's something that changed in the target triple code and I haven't looked at what exactly yet
<Griwes>
I know one file moved but that's easy enough, but I remember seeing something more
<Griwes>
like, my patches are fairly trivial so far too, but it's just annoying
troseman has joined #osdev
<heat>
the funniest bit is getting my sanitizer code into my gcc patches too
troseman has quit [Client Quit]
<heat>
because they don't exactly sync up at LLVM release commits, soooooOOOOOOOOOooooooooooo
<heat>
what version is GCC libsanitizer based on? usually not the one I want
<Griwes>
the other thing that's stopping me is that I've started prototyping my library for managing IPC objects aaaaaaand I can't find an ownership model that makes me happy yet, so there's that
troseman has joined #osdev
troseman has quit [Client Quit]
<heat>
i've been super unproductive lately with a bunch of random changes locally plus some hard, annoying path walking refactoring. so i'm pushing everything that's semi-ok and finished-ish
<heat>
so I can get to fun stuff I want to do
<heat>
i've had the urge to do a bunch of work on VM, scheduling, PCID, but couldn't because I have a bunch of patches I need to finish the fuck up
troseman has joined #osdev
troseman has quit [Client Quit]
* sortie
points heat to the sign
troseman has joined #osdev
troseman has quit [Client Quit]
<heat>
what's the sign sortie
<heat>
stash everything on a staging branch for 7 years?
<sortie>
“Don't fuck up like sortie did”
<heat>
my 1.1 will also never come out, cuz I don't do releases
<sortie>
Neither do I
<heat>
facts. no one in this channel has seen a sortix release
<sortie>
assert(sortix_version < 1.1);
<sortie>
/* will never happen */
<sortie>
/* unreachable */
<heat>
floating point version numbers?
<heat>
releasing onyx 0.30000000000000004
<heat>
actually this is a cute theme isn't it? a project whose releases are weird IEEE floating point numbers
<sortie>
But yeah don't bundle up a bunch of unfinished work
<sortie>
Instead have the mental fortitude to finish it or at least merge it when it's good enough
<sortie>
That way people can contribute and you can build on top of it
troseman has joined #osdev
<heat>
hell, merge it even if its not good enough
troseman has quit [Client Quit]
<heat>
that's my mantra
<heat>
facts: 1) only releases should be stable 2) I never release ----> it's never stable
<sortie>
I have a no regression / no making things worse rule
<heat>
i'm fairly sure I'll never release a 1.0 because I'll never be happy enough tbh
<heat>
1.0 is such a large milestone that I'd want things to be perfect
troseman has joined #osdev
<sortie>
yolo
<sortie>
git tag onyx-1.0
<sortie>
DOO ETTT
troseman has quit [Client Quit]
<dzwdz>
if 1.0 seems too scary you could bump the major by a bit less
<dzwdz>
½.0
<sortie>
Except I screwed up 1.1, 1.0 was totally freeing btw
<sortie>
I mean I did 1.0 so no one else cares about what 1.0 is and isn't
<Griwes>
0.(9)
GeDaMo has quit [Quit: That's it, you people have stood in my way long enough! I'm going to clown college!]
<dzwdz>
(1-ε).0 for release candidates
<heat>
getting 1.0 is, erm, scary cuz i'd like to post it around and get some attention and that would make my hobby project more real
<sortie>
The big reason to do regular releases is the natural cycle of stabilization
<sortie>
If you're always stable, then you're never stable
<sortie>
Or rather that when you do a release, then a lot more people will try it out, and many more bugs will be found than normal
<sortie>
So it's really important to do occasional releases to get more people to try out your software and find new problems
<sortie>
It's also just healthy to stop implementing features and fix the known bugs
Arthuria has joined #osdev
<sortie>
It's healthy to cycle between features and stabilization
troseman has joined #osdev
troseman has quit [Client Quit]
vdamewood has quit [Read error: Connection reset by peer]
<heat>
that's good advice, thank you
<heat>
maybe one day i'll do a feature freeze
<sortie>
Same
troseman has joined #osdev
vdamewood has joined #osdev
troseman has quit [Client Quit]
troseman has joined #osdev
<heat>
do as I say, not as I do
troseman has quit [Client Quit]
<heat>
great idea: i'll release 1.1 tonight and beat you to the punch. I successfully release, skip the 1.0 anxiety, and you lose
troseman has joined #osdev
Left_Turn has joined #osdev
troseman has quit [Client Quit]
<sortie>
I approve
Turn_Left has quit [Ping timeout: 240 seconds]
troseman has joined #osdev
troseman has quit [Client Quit]
bnchs has quit [Remote host closed the connection]
troseman has joined #osdev
troseman has quit [Client Quit]
troseman has joined #osdev
troseman has quit [Client Quit]
troseman has joined #osdev
troseman has quit [Client Quit]
dutch has joined #osdev
wlemuel has quit [Ping timeout: 240 seconds]
wlemuel has joined #osdev
epony has quit [Remote host closed the connection]
Arthuria has quit [Remote host closed the connection]
dutch has quit [Quit: WeeChat 3.8]
Arthuria has joined #osdev
ppmathis has joined #osdev
dutch has joined #osdev
ppmathis has left #osdev [We do what we must because we can]
ppmathis has joined #osdev
slidercrank has quit [Ping timeout: 264 seconds]
Gooberpatrol_66 has quit [Ping timeout: 240 seconds]
<Griwes>
heat, so I sat down to actually update my patches, and... uh... right now I'm getting annoyed just by the clone step, not even by the build itself yet
<heat>
haha
<heat>
i use tarballs :))
<Griwes>
...I may actually switch to grabbing a tarball for just llvm, because this is just silly
<Griwes>
yeah I was just typing that lol
<Griwes>
for most of the deps I'm happy with just clones
<Griwes>
but for llvm this is, well, as I said, just silly
<heat>
I do think there's a distinct advantage in getting a fork where you just queue patches, so it depends on what floats your boat really
<heat>
actually helps rebasing too, I guess
<Griwes>
by doing this change properly (ie turning patch into commit, cherry picking it on top of current tree) I just needed to fix one file and retarget changes to a different one in one other file
<Griwes>
well. at least to get the patch to apply. ;P
<Griwes>
I'm sure there'll be further changes needed
<heat>
why did you send me a CTCP time
<heat>
am I getting IRC-haxxor'd
<Griwes>
to avoid needing to ask you what timezone you are in lol
<Griwes>
clearly didn't work
<heat>
BST
<heat>
why tho
<Griwes>
I thought it was more into the night for you and was surprised you replied immediately