klange changed the topic of #osdev to: Operating System Development || Don't ask to ask---just ask! || For 3+ LoC, use a pastebin (for example https://gist.github.com/) || Stats + Old logs: http://osdev-logs.qzx.com New Logs: https://libera.irclog.whitequark.org/osdev || Visit https://wiki.osdev.org and https://forum.osdev.org || Books: https://wiki.osdev.org/Books
<clever> geist: new owner decided to silently try and add analytics to it
<geist> whoops
<clever> the story i heard, is that they wanted to know how many users were using it, so they could decide how much to invest into it
<klange> "silently" no they were at least nice enough to tell us all they were doing this so we could prepare some forks ahead of time
_mrlemke_ has quit [Read error: Connection reset by peer]
_mrlemke_ has joined #osdev
silverwhitefish has joined #osdev
<ZetItUp> is it weird to do: #define insw(port, buffer, count) __asm__ __volatile__("rep insw;" : "D" (buffer), "+c" (count) : "d" (port) : "memory") instead of making a static function? :P
<clever> ZetItUp: i like the idea of a static function better, the types are more defined, and it can give a more sane error msg when mis-used
<ZetItUp> true, i could be setting myself up for headaches in the future
<clever> ive got some examples...
<bslsk05> ​gist.github.com: simple-test.cpp · GitHub
<clever> ZetItUp: in this case, its using templates because T can vary, and its using if statements to pick the right opcode for the args
<clever> and the optimizer will just delete the paths that cant happen
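The pattern clever describes — one template with the opcode picked per operand width, dead paths deleted — can be sketched without the actual in/out instructions (port I/O can't run here, so the dispatch is stubbed with a suffix string; the names below are made up for illustration):

```cpp
#include <cstdint>

// Illustrative sketch: `if constexpr` discards the branches that can't
// apply for a given T, so only one "opcode" path is ever instantiated.
// A real port-I/O version would emit insb/insw/insl in inline asm here.
template <typename T>
constexpr const char* width_suffix() {
    if constexpr (sizeof(T) == 1)      return "b";  // byte  -> insb
    else if constexpr (sizeof(T) == 2) return "w";  // word  -> insw
    else                               return "l";  // dword -> insl
}
```

With `if constexpr` the discarded branches aren't even type-checked for the given T, which is what lets one template cover all widths cleanly.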
<ZetItUp> hmm looks kinda clean anyway
<superleaf1995> defining stuff is bad
<superleaf1995> no wait, i mean it's ok
<superleaf1995> i just wanted to add that gcc will automatically inline the single-instruction function by declaring it as a static inline header function
<superleaf1995> and it will do the same on defined macros too, but the types are not strong enough and can lead to weird situations
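The "weird situations" superleaf1995 mentions are easy to demonstrate (this example is illustrative, not from the channel): a macro expands textually, so precedence leaks through, while the static inline function behaves as written:

```cpp
// Classic macro pitfall: the expansion of SQR_MACRO(1 + 2) is
// 1 + 2 * 1 + 2, which is 5 — not the 9 you'd expect.
#define SQR_MACRO(x) x * x

// The function version evaluates its argument once, with real types.
static inline int sqr_func(int x) { return x * x; }
```

Parenthesizing the macro (`((x) * (x))`) fixes precedence but still double-evaluates side effects, which is why the static inline wins for anything non-trivial.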
<ZetItUp> yeah true
<geist> right, even in C. in C++ it's definitely much more in style to do the static
<geist> since inlines are also more strongly defined there
mrlemke has joined #osdev
<geist> also note that for something like an in/out instruction it's probably not that important that it's inlined. it's an intrinsically slow instruction on a real machine, generally speaking
<geist> so kinda doesn't matter
<moon-child> c has weird rules about non-static inline, but isn't static inline effectively the same in both?
<geist> and on a VM it's probably going to vmexit which is at the minimum thousands of cycles
johnjay has quit [Ping timeout: 268 seconds]
<geist> moon-child: *basically*. i think there are subtle linkage details which i've sussed out before but forgotten
<geist> but that's everything to do with whether or not it can elide all but one copy *if* the compiler decides to go ahead and emit a local version, which it can
<moon-child> ugh, I forgot that literally _all_ of c++ is about rvo
_mrlemke_ has quit [Ping timeout: 265 seconds]
<geist> i think in this case C++ just does the right thing, but C may end up with multiple copies
<geist> but that only matters if its declared in a header
<geist> there's a way to work around it in C, where you declare the header as extern inline, and then stamp out a copy inside a .c file somewhere. so that if the compiler decides not to inline it, it assumes the linker will find it
<moon-child> oh you mean copying the function body, not copying the function params
<moon-child> I see
<geist> yah
<geist> C++ can put it in the 'elide all but one copy' section/bits/etc i dunno (however that works at the .o file level)
_mrlemke_ has joined #osdev
<geist> what i've never gotten a good answer for is whether or not simply 'inline' is sufficient in a pure C++ environment, or if static inline is slightly different
<geist> i think there's still some subtle difference, but it's unclear
<geist> and AFAICT they both do basically the same thing
<clever> i think static means you cant link to it from outside that unit, which can also act as a hint to inline more aggressively
<clever> ive seen rather large static functions get inlined, because it was only ever called from 1 place
<clever> and being static, the compiler knew that it had seen every possible reference
mrlemke has quit [Ping timeout: 268 seconds]
<geist> static for sure, but static inline is a different beast
<geist> and also C vs C++ is different
<geist> though static functions and static inline i think are effectively the place where they intersect between languages nicely
<geist> though in C++ you're supposed to use namespace {} nowadays for the same effect
<geist> anyway, bbiab
<superleaf1995> C++ static keyword can confuse C developers at unknown levels
<superleaf1995> for example, a static method on a class isn't only usable in that module alone, it just detaches it from the namespace included before the function
isaacwoods has quit [Quit: WeeChat 3.2]
superleaf1995 has quit [Quit: Client closed]
zoey has quit [Ping timeout: 256 seconds]
tacco has quit []
freakazoid333 has joined #osdev
<doug16k> static variables inside functions are different in C++ as well. they are run-once constructed for you, automatically, by the language
<doug16k> thread safe
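What doug16k is describing are C++11 "magic statics": a function-local static is constructed exactly once, even if several threads hit the first call simultaneously. A small sketch (names here are made up) that counts how often the constructor actually runs:

```cpp
#include <atomic>

// Counts how many times Tracked's constructor executes.
static std::atomic<int> constructions{0};

struct Tracked {
    Tracked() { constructions.fetch_add(1); }
};

// The local static is initialized on the first call only; the compiler
// emits a thread-safe guard around the construction automatically.
int touch() {
    static Tracked t;
    return constructions.load();
}
```

Repeated calls to `touch()` never reconstruct `t`, so the count stays at 1 — in C this would require an explicit pthread_once or equivalent.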
iorem has joined #osdev
nsmb has joined #osdev
johnjay has joined #osdev
sts-q has joined #osdev
ElectronApps has joined #osdev
nyah has quit [Ping timeout: 272 seconds]
ElectronApps has quit [Ping timeout: 258 seconds]
ElectronApps has joined #osdev
CryptoDavid has quit [Quit: Connection closed for inactivity]
pony has quit [Quit: WeeChat 2.8]
pony has joined #osdev
mrlemke has joined #osdev
_mrlemke_ has quit [Ping timeout: 272 seconds]
ZombieChicken has joined #osdev
<doug16k> so weird for a C++ program to have a Dictionary type that uses strcmp to binary search a vector for the key, and elsewhere, does this: for(auto &&attr : set<string>{"outfit space", "cargo space", "weapon capacity", "engine capacity"}). wut?
<pony> I only code in C :I
<pony> don't know C++
<pony> maybe I don't want to know it
ElectronApps has quit [Read error: Connection reset by peer]
<moon-child> doug16k: in a general sense, people are gonna write bullshit code, and there's NOTHING YOU CAN DO ABOUT IT
<moon-child> panic!
<bslsk05> ​www.whatyearisit.info: What Year Is It?
<doug16k> only in 2021 can drawing 2600 ships be easy, and the code bottleneck is in strcmp reading object properties :P
<moon-child> reminds me of the gta sscanf fiasco
<moon-child> oddly enough the issue there was in libc not in the actual program. You'd hope the people writing your standard library are competent ¯\_(ツ)_/¯
<doug16k> I never use scanf
<moon-child> well, ok, good point, using sscanf in the first place is not a good sign
<doug16k> it would be fascinating to give a bunch of really strong programmers an exam on all the details of scanf. it would be hilarious to see the results
<doug16k> my reasoning is, the more amazing they are, the longer it has been since they even called scanf
<moon-child> unless they were forced to implement it
<doug16k> unless something compelled them to use it
<doug16k> yeah
<moon-child> I bet rich felker knows all the scanf edge cases, and has the gray hairs to show it
<doug16k> I remember first seeing unit test code. I was like, wut? people actually use cin? :P
<doug16k> homework like
<doug16k> it's ok to use it, but usually nobody uses it
<doug16k> same with strstream. tons of it in tests, everyone else is scared to death to call it because of code size superstitions
<ZetItUp> if qemu crashes and you do info registers, are those values before the crash or after?
<doug16k> depends
<ZetItUp> crashes = halts
air has quit [*.net *.split]
wgrant has quit [*.net *.split]
yuu has quit [*.net *.split]
gruetzkopf has quit [*.net *.split]
bleb has quit [*.net *.split]
kkd has quit [*.net *.split]
LittleFox has quit [*.net *.split]
maksy has quit [*.net *.split]
kazinsal has quit [*.net *.split]
eau has quit [*.net *.split]
kanzure has quit [*.net *.split]
bleb_ has joined #osdev
air has joined #osdev
<doug16k> ZetItUp, add this to your command line: -d cpu_reset
kazinsal_ has joined #osdev
<doug16k> it will dump the context before reset
<ZetItUp> cause either my exception dump is bad or something else :D
<doug16k> also use -no-reset -no-shutdown
<ZetItUp> yeah
<bslsk05> ​gyazo.com: Screenshot - 78d0e17d6a1d0f351c5bdf52c6e7d491 - Gyazo
<doug16k> what?
<bslsk05> ​gyazo.com: Screenshot - 8607f69081d60ed09de71694e8d4c32c - Gyazo
<ZetItUp> qemu :D
j00ru has quit [*.net *.split]
edr has quit [*.net *.split]
dragestil has quit [*.net *.split]
ad__ has quit [*.net *.split]
CompanionCube has quit [*.net *.split]
renopt has quit [*.net *.split]
koon has quit [*.net *.split]
klange has quit [*.net *.split]
gjnoonan has quit [*.net *.split]
corecode has quit [*.net *.split]
<doug16k> what does your screen content have to do with resetting the machine?
<doug16k> you show that as if that is printing the reset context
<ZetItUp> the question was if the registers shown is after the invalid opcode
<ZetItUp> or before
<doug16k> you didn't try anything I said right?
moon-child has quit [*.net *.split]
j`ey has quit [*.net *.split]
augustl has quit [*.net *.split]
geist2 has quit [*.net *.split]
sginsberg has quit [*.net *.split]
travisg has quit [*.net *.split]
Stary has quit [*.net *.split]
HeTo has quit [*.net *.split]
gorgonical has quit [*.net *.split]
jinn has quit [*.net *.split]
sham1 has quit [*.net *.split]
catern has quit [*.net *.split]
mid-kid has quit [*.net *.split]
k4m1 has quit [*.net *.split]
jbg has quit [*.net *.split]
tyler569_ has quit [*.net *.split]
XgF has quit [*.net *.split]
__sen has quit [*.net *.split]
<doug16k> sorry, -no-reboot
kazinsal_ is now known as kazinsal
<ZetItUp> well the -d cpu_reset was good :D
<doug16k> yes, see qemu-system-x86_64 -d help
<doug16k> catastrophic crashes are hard to debug on qemu though. be glad it even crashed, qemu emulation doesn't crash for lots of stuff that should
ChanServ has quit [*.net *.split]
mctpyt has quit [Remote host closed the connection]
ChanServ has joined #osdev
gruetzkopf has joined #osdev
<doug16k> you should always find it being the other way around. doesn't work on real machine, works in qemu. usually kvm will make it not work in qemu if it is appearing to work from not crashing when it should
gorgonical has joined #osdev
dragestil has joined #osdev
<ZetItUp> yeah i noticed, tried it on virtual box, it crashed, but stuff works in qemu
XgF has joined #osdev
<geist> doug16k: haha re: better programmers dont know scanf
<geist> that's super true
sham1 has joined #osdev
<ZetItUp> gdb
<ZetItUp> meh wrong windwo
<ZetItUp> window
klange has joined #osdev
j00ru has joined #osdev
Stary has joined #osdev
j`ey has joined #osdev
k4m1 has joined #osdev
tyler569 has joined #osdev
jjuran has joined #osdev
thaumavorio_ has quit [Quit: ZNC 1.8.2 - https://znc.in]
HeTo has joined #osdev
thaumavorio has joined #osdev
ElectronApps has joined #osdev
CompanionCube has joined #osdev
vin has quit [Remote host closed the connection]
tenshi has joined #osdev
vdamewood has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
jbg has joined #osdev
ZombieChicken has quit [Remote host closed the connection]
ZombieChicken has joined #osdev
koon has joined #osdev
<doug16k> would be neat if std::string bool operator== had an overload for char(&literal)[N] that instantly returns false if N != size()-1
<doug16k> er, size() + 1
eau has joined #osdev
<doug16k> it constructs a temporary and actually compares them, right? geez
<doug16k> this program is just a bunch of dictionary and string key abuse lol
<doug16k> fun exercise to see if I can make it use hashes to speed it way up
<doug16k> without blowing up
<doug16k> multiplying a lot of 9's is starting to sound pretty easy right now
<doug16k> I should be able to sneak a constexpr hash computation in to make the string literal comparisons instant
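The constexpr-hash trick doug16k sketches can look something like this (the function names are made up for illustration): hash the literal at compile time, so a runtime key comparison is an integer compare first, with one verification pass on a hash match to guard against collisions:

```cpp
#include <cstdint>
#include <string_view>

// FNV-1a: trivially constexpr, so hashes of string literals fold to
// integer constants at compile time.
constexpr std::uint64_t fnv1a(std::string_view s) {
    std::uint64_t h = 1469598103934665603ull;   // FNV offset basis
    for (char c : s) {
        h ^= static_cast<unsigned char>(c);
        h *= 1099511628211ull;                  // FNV prime
    }
    return h;
}

// Cheap integer compare first; full string compare only on hash match,
// so collisions can't produce a false positive.
bool key_equals(std::string_view key, std::string_view literal,
                std::uint64_t literal_hash) {
    return fnv1a(key) == literal_hash && key == literal;
}
```

In a hot loop over dictionary keys the mismatches (the common case) never touch the string bytes of the literal at all, which is where the strcmp time was going.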
<doug16k> also noticed chromium creating a stupid amount of threads. once in a while, chromium hammered pthread_create so many times that it was top function in my machine for seconds straight
<doug16k> imagine how many threads you have to create to even notice pthread_create on linux
<doug16k> a lot
<clever> lol
<clever> ive seen chromium race condition so hard when loading fonts, that it exploded the max-fd counter
<clever> basically, if a font isnt in the ram cache, open the file and load it
<clever> if 2 threads race, both open the file and load it
<clever> if 1000 threads race, you run out of file handles and crash
<clever> also, that crash is in the master process, so the entire thing crashes and burns
<mjg> lol
<mjg> nice3
johnjay has quit [Ping timeout: 258 seconds]
LittleFox has joined #osdev
<clever> mjg: i had to adjust the ulimit settings in pam, to even be able to start chrome that week
tacco has joined #osdev
tacco has quit [Client Quit]
sortie has joined #osdev
<kingoffrance> strange, is that on x? i would think x even without font server would handle that, and they would have to go out of their way to do things "manually" (and doing so, would presumably try to get it right then)
<kingoffrance> like, if they purposely avoided whatever x mechanism(s), youd think that means they spent time on their "NIH" thing
<kingoffrance> not that i would guess the x mechanism(s) are necessarily any brighter
<kingoffrance> if you got tons of fonts, they might behave very poorly too
<clever> kingoffrance: yep, under X
<kingoffrance> i mean, links2 has its own fonts. but thats on purpose. what are they manually loading special fonts or something? "portability" maybe.
<kingoffrance> i could believe they just dont like whatever x mechanism(s)
<clever> kingoffrance: ive heard of font bugs on windows before, that lead to ring0 control
<clever> because at one point, MS tried to cheat benchmarks by putting font parsing in the kernel
<clever> and then turing complete fonts come along....
<clever> the less untrustworthy data you throw at the kernel, the better
<kingoffrance> i wouldnt blame them if they thought x was lacking or something, its just like if you cant afford to fix x, who can
<clever> and in the case of things like X, how do you deal with loading a font from an http url?
<clever> does the standard font server allow passing a font blob you fetched from somewhere non-standard?
<kingoffrance> yeah good point, they will have to do some manual things no matter what
<HeTo> kingoffrance: noöne uses X for rendering anything nowadays. everything is rendered either client-side, or with newer applications and especially with web browsers, through direct rendering with OpenGL and Vulkan
<HeTo> and X is just used to get a bitmap on screen or to get the OpenGL context to draw on
<HeTo> as well as various interprocess communication like talking to the window manager
<clever> HeTo: i think chromium renders everything within a tightly restricted sandbox, and basically just throws bitmaps at the upper layers
<kingoffrance> HeTo, sure, but does that mean you effectively need 3d acceleration to run chromium?
<clever> kingoffrance: if 3d accel is missing, it will probably just fallback to software rendering
<doug16k> afaik it has full fallback, all the way to software
<doug16k> if you have to add way more code to do that, that is what chromium did :P
<doug16k> in all cases apparently
<doug16k> the size of it is impressive
johnjay has joined #osdev
gmodena has joined #osdev
<doug16k> yay! got strcmp from 25% of cpu down to 0.8%
<doug16k> lol I can't believe the drastic changes that all worked
sortie has quit [Ping timeout: 252 seconds]
<doug16k> nice, big speedup
<doug16k> awesome
<klange> It's only doing really basic compound glyphs, and I haven't done any of the instruction processing, but I think it's ready to start replacing the ol' sdf lib.
<doug16k> do you need to do special stuff to handle combining characters, or is it just kerning trick?
<klange> So there's a few possibilities; for the single-codepoint composed characters they're either compound glyphs or even a single glyph.
sortie has joined #osdev
<klange> For the actual combining characters, they're not so much kerning tricks as the glyphs are just plopped at negative coordinates.
flx has quit [Ping timeout: 256 seconds]
<doug16k> ah
<klange> I would imagine there's kerning pairs to get things perfectly aligned in there, but I'm not touching that yet, just basic x-advance.
<doug16k> Į̴̐ ̵̩̎m̸̡͂ȇ̷̻á̸̤ṉ̸̄ ̶̳̋T̵̜̀h̵̺͒ǐ̵̩s̴̻̑
<doug16k> people abusing combining like that reminds me of valve's brilliant face engine. valve makes a really nice face engine, and everyone can't resist making the silliest faces possible
<doug16k> combining characters are extremely neat. a program could be oblivious about them and accept them, as long as the string renderer does it correctly
<klange> Unicode is full of that, from UTF-8 to combining characters to ligatures and grapheme sequences - if you treat text as opaque sequences of bytes, the rendering backend can do the rest.
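klange's "opaque sequences of bytes" point can be made concrete: "é" written as e + U+0301 COMBINING ACUTE MARK is three bytes and two codepoints but one on-screen grapheme. A program can pass the bytes through untouched; counting UTF-8 lead bytes (anything that isn't a 0b10xxxxxx continuation byte) is enough to recover the codepoint count:

```cpp
#include <string>

// Count codepoints in a UTF-8 string by counting lead bytes.
// Continuation bytes have the form 0b10xxxxxx and are skipped.
std::size_t utf8_codepoints(const std::string& s) {
    std::size_t n = 0;
    for (unsigned char c : s)
        if ((c & 0xC0) != 0x80)
            ++n;
    return n;
}
```

Note this counts codepoints, not graphemes: the combining mark still counts as its own codepoint, and only the renderer decides that it draws at zero advance on top of the base glyph.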
moon-child has joined #osdev
<Griwes> so I've gotten back to the point where I can just run a function across all cores... but for some reason a simple thing that just prints text to screen and serial is really slow - but only with tcg, and seemingly in my text scrolling code...
<Griwes> but the fun part is that the text scrolling code works Perfectly Fine when only invoked from a single cpu
<doug16k> prints to text mode? that's a vmexit for every video memory access
<Griwes> am I going to go on a wild chase of caching behavior now?
<doug16k> the way memory is mapped in text mode is wild
<Griwes> doug16k, my text code, lfb
<doug16k> ok then you should see high speed memory access
<doug16k> it's already a backbuffer
<doug16k> in kvm caching matters
<Griwes> I mean it's still slower, and I still do double-buffering myself, and I still had to do some write coalescing in memcpy for it to not be dogshit slow
<Griwes> but that's all in tcg
<doug16k> in tcg, it hardly even heard of a memory type
<Griwes> well, see, kvm is fast without most of what I'm doing here, and tcg is slow as hell without all of it
<Griwes> so I guess the bottom line here is that YMMV?
<doug16k> define "as hell". expect 20x
<Griwes> visibly vs not visibly
<Griwes> i.e. whether you are able to notice the delay while it's scrolling or not
<doug16k> I can do 1920x1080 clipped bitblt at 1200fps
<doug16k> 32bpp
<doug16k> which is basically my memory bandwidth
<Griwes> and again, all the scrolling is reasonably fast... until I hit the multi-core printing
<clever> that reminds me, i accidentally did 10000 fps once, not rendering but actual output rate
<Griwes> I wonder if I'm just hitting some sort of really dumb edge case in tcg regarding caching across cores and whatnot
<clever> because i configured the display at 100x100
<clever> and didnt use a low enough pixel clock
<Griwes> heh
<clever> and the hardware just went "uhhh, ok" and did it, lol
<Griwes> reminds me of that beautiful off-by-one while doing my b.eng. thesis project
<doug16k> try --accel tcg,single
<Griwes> it was to implement a text mode display over hdmi from an fpga
<doug16k> turn off the multithreaded tcg tricks
<Griwes> and I off-by-one'd my line and column comparisons
<doug16k> it will run each cpu in a burst and round robin
dragestil has quit [Ping timeout: 252 seconds]
<Griwes> got really wonky result on the screen, and took me a minute to notice that the screen is reporting that it is displaying a... 1281x721 signal
<clever> Griwes: i also had some off by one errors when i was bringing ntsc online
<clever> internally, the ntsc generation core on the rpi knows what the w/h should be, and enforces that on the generated signal
<Griwes> > qemu-system-x86_64: --accel tcg,single: Property 'tcg-accel.single' not found
<clever> but, the PV that generates the image data, needs to agree on that
<clever> if they dont agree, then the video data fed into the ntsc encoder, will be rolling relative to the ntsc vsync pulse
<doug16k> yeah, I find it amusing how intolerant a modern display is of the timing. it demands exactly the right timings, not even one cycle out is tolerated
<clever> so the ntsc signal itself is perfectly to spec, but the image is still rolling
<clever> there was also a secondary bug, due to fractional clock division being off just slightly
<clever> the ntsc timings, are all generated from a master 108mhz clock
<clever> due to a different bug, i was giving it a 108.1mhz clock
<clever> that resulted in the color burst freq being just ever so slightly too fast
<clever> so my tv went "uhh, that aint a color signal", and defaulted to b&w mode
<Griwes> doug16k, I mean you are supposed to be up to spec when sending a signal lol
<doug16k> yes, but whose signal was in spec between 1950 and 2005?
<doug16k> NTSC and PAL imply out-of-spec
<Griwes> by "up to spec" I mean "up to whatever the screen expected"
<doug16k> tv had the same issue with locking onto vertical sync. they had a mile of tolerance
<doug16k> horizontal too
<doug16k> being like a tv wasn't a requirement. I always guessed it would be
<doug16k> I mean analog crt tv
<doug16k> I mentioned blowing my C128D monitor, pushing the 80-column CRT to as high resolution as I could get it
<doug16k> at the very limit of sync, it started to make a coil whine, then a loud pop and it shut off :P
<doug16k> blew up horizontal
<clever> i have heard about invalid timings damaging things
<kingoffrance> yes, x modeline generators used to warn
<doug16k> 𝒯𝒽𝒾𝓈 𝓉𝑒𝓍𝓉 𝓌𝑜𝓊𝓁𝒹 𝓇𝑒𝒶𝓁𝓁𝓎 𝓈𝓉𝓇𝑒𝓈𝓈 𝓉𝒽𝑒 𝒸𝓊𝓇𝓋𝑒 𝒸𝑜𝒹𝑒
<doug16k> hardly any straight lines
<doug16k> 𝕺𝖗 𝖙𝖍𝖎𝖘
<kingoffrance> 🝐 i cant see it, but that should have curves :) ☤ i can see :)
flx has joined #osdev
dennis95 has joined #osdev
dragestil has joined #osdev
dragestil has quit [Ping timeout: 252 seconds]
GeDaMo has joined #osdev
<doug16k> why doesn't it call the array one? https://www.godbolt.org/z/Pd3rjj1Mz
<doug16k> but this works? https://www.godbolt.org/z/MnY7nWE7v
<doug16k> is that just builtin strlen?
ZombieChicken has quit [Remote host closed the connection]
<klange> it's rather slow because there's no glyph caching and it's constantly seeking, but hey, terminal: https://klange.dev/s/Screenshot%20from%202021-07-05%2019-09-02.png
<GeDaMo> Did you see the video from Casey Muratori on writing a simple terminal?
<GeDaMo> It's mainly a rant about performance :P
<bslsk05> ​'[EPILEPSY WARNING] How fast should an unoptimized terminal run?' by Molly Rocket (00:51:03)
<j`ey> GeDaMo: i dont like the ranting style videos
<j`ey> was it any good?
<GeDaMo> It was interesting enough, apparently he reported slow terminal performance to MS and they were giving all sorts of excuses as to why
<GeDaMo> So he wrote a simple terminal emulator in a couple of days to demonstrate the sort of thing that was possible
<clever> Griwes: that title reminds me, many many years ago, i had a very anoying problem on my laptop
<j`ey> yeah, but it's missing so many features etc
<clever> if i ran `ls -ltrh`, the cmd i type in reply, would have keys sticking
<GeDaMo> This is his code https://github.com/cmuratori/refterm
<bslsk05> ​cmuratori/refterm - Reference monospace terminal renderer (8 forks/275 stargazers/GPL-2.0)
<clever> 1: gnome-terminal used a lot of cpu to render that dir listing
<clever> 2: the cpu freq would ramp up to meet that demand
<clever> 3: then ramp back down, as i'm typing a reply cmd
<clever> 4: a hw/kernel bug, results in a 300ms hang, every time the freq changes
<GeDaMo> j`ey: that was one of the excuses, yes :P
<clever> 5: that hang causes the ps2 fifo to overflow, and lose key release events!
<clever> GeDaMo: because of that whole sequence of events, i am using xterm, still to this day!
<clever> it rendered with far less cpu usage, compared to gnome-terminal
<GeDaMo> :/
<clever> the old cpufreq was also "better" for fixing such bugs
<clever> the on-demand governor didnt exist at the time; freq scaling was managed by a userland daemon, with CONFIG!
<clever> so i could just raise the averaging window up a bit, and then it was far less jumpy
<clever> modern cpufreq is all in-kernel, and has far less config
<clever> but is also far more stable, and doesnt need such work-arounds
<doug16k> GeDaMo, that video is correct
<doug16k> their terminal is awful
nyah has joined #osdev
<klange> Lacks a crispness the SDF renderer was achieving, but we'll get that fixed with hinting eventually... draws all the necessary glyphs for a weather report: https://klange.dev/s/Screenshot%20from%202021-07-05%2020-07-31.png
Arthuria has joined #osdev
<doug16k> I wish there were warnings that could detect silly use of shared_ptr when its refcount is never altered from 1
Arthuria has quit [Ping timeout: 240 seconds]
<doug16k> ah I see, they are using weak_ptr
flx has quit [Ping timeout: 252 seconds]
dragestil has joined #osdev
isaacwoods has joined #osdev
CryptoDavid has joined #osdev
<doug16k> std::list has a .sort method? wtf?
<doug16k> cuckoo!
<j`ey> hehe
<Mutabah> selection sort I gues?
<klange> it's soooo sloooow
<klange> it takes three seconds for the 'About ToaruOS' window to show up
<doug16k> you have a glyph cache yet?
<klange> nope
<sortie> klange, awesome work
<sortie> It, uh, looks like text to me.
<sortie> But I know there's sooo much detail and complexity underneath
<klange> It's doing the title bars as well in that
<doug16k> the antialiasing is really good
<sortie> This has full ttf font support?
<sortie> Or at least enough?
<klange> not "full", there's a lot to go with compound glyphs with transformations, kerning pairs, and of course the whole instruction interpreter, but it's enough for a good coverage of Deja Vu
<sortie> Awesome, so you can already support a lot of languages
<klange> That is the idea, plus the whole vector rasterizer should prove useful for other things, like maybe a rudimentary SVG implementation, and the obvious case of filling some gaps in my graphics lib from when I ditched Cairo...
<NieDzejkob> Mutabah: merge sort can be implemented pretty comfortably on linked lists
<Mutabah> explains the complexity mentioned on cppreference.com
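NieDzejkob's point is why `std::list` carries its own member `.sort()` at all: `std::sort` requires random-access iterators, while merge sort on a linked list is just pointer splicing, stable, and O(n log n) comparisons. A minimal usage sketch (helper name made up here):

```cpp
#include <list>

// std::list::sort — member function because std::sort can't take
// bidirectional iterators; typically implemented as a merge sort,
// which needs no random access and no auxiliary array.
std::list<int> sorted(std::list<int> l) {
    l.sort();  // stable, approximately N log N comparisons
    return l;
}
```

So it isn't a cuckoo API so much as the only sort that fits the container's iterator category.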
flx has joined #osdev
mrlemke has quit [Ping timeout: 240 seconds]
catern- has joined #osdev
pony has quit [Quit: WeeChat 2.8]
pony has joined #osdev
onering is now known as Beato
kanzure has joined #osdev
<klange> okay i figured out why it's so slow
<klange> time to write a better qsort()
<klange> with my qsort: 4fps for this demo
<klange> with glibc's qsort: 150, seem to be capped by terrible X connection
<Mutabah> Your qsort is not very quick
<Mutabah> *is not very q
<Mutabah> :)
<klange> it's super dumb bubble sort, lol, worked fine for sorting a couple of shell commands
<Mutabah> very not q then :)
<klange> but sorting hundreds~thousands of edges, repeatedly, no bueno
<Mutabah> That'd do it
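The interface klange is reimplementing is just libc `qsort`: sort an array through a caller-supplied comparator. A sketch of a correct comparator (illustrative, not klange's code) — the point being that any O(n log n) implementation behind this interface beats a bubble-sort stand-in once the input is thousands of edges rather than a few shell commands:

```cpp
#include <cstdlib>

// Three-way comparator for int, branchless via the (a>b)-(a<b) idiom.
// Returning x - y directly would risk signed overflow; this doesn't.
int cmp_int(const void* a, const void* b) {
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}
```

The comparator contract (negative/zero/positive, consistent ordering) is the whole API; the sort algorithm behind it is the implementation's business, which is exactly where the 4fps-to-150fps difference lived.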
<Mutabah> Aaah, the glory of rust - `list.sort_unstable()` comes for free with the compiler (well, libcore)
<Mutabah> and with `alloc` you get a stable sort
<j`ey> hmmm how does that work?
<j`ey> I mean, how does alloc add a stable sort
<Mutabah> The implementation requires some temporary memory in order to be efficient
<Mutabah> (not sure what it is off the top of my head)
<j`ey> Mutabah: not in that sense
<j`ey> lemme look at the docs
<j`ey> Mutabah: i was thinking you meant https://doc.rust-lang.org/std/primitive.slice.html#method.sort
<bslsk05> ​doc.rust-lang.org: slice - Rust
<Mutabah> That's it
<j`ey> and not sure how including the alloc crate added that func
<Mutabah> click on `[src]`
<Mutabah> It's defined in the `alloc` crate
<Mutabah> while `sort_unstable` is defined in `core`
<j`ey> oh derp, didnt look in the url
<Mutabah> kinda useful
* Mutabah vanishes
flx has quit [Ping timeout: 252 seconds]
Vercas9 has joined #osdev
Vercas has quit [Ping timeout: 244 seconds]
Vercas9 is now known as Vercas
night has quit [Quit: goodbye]
night has joined #osdev
ElectronApps has quit [Read error: Connection reset by peer]
flx has joined #osdev
nur has quit [Remote host closed the connection]
nur has joined #osdev
heat has joined #osdev
dragestil has quit [Ping timeout: 252 seconds]
flx has quit [Ping timeout: 252 seconds]
nsmb has quit [Ping timeout: 272 seconds]
johnjay has quit [Ping timeout: 246 seconds]
nsmb has joined #osdev
<heat> yo try out my ext4-enabled ovmf: https://github.com/heatd/edk2-ext4
<heat> It Works(TM)
wootehfoot has joined #osdev
bleb_ has quit [Ping timeout: 272 seconds]
bleb has joined #osdev
smarton has quit [Quit: ZNC 1.7.2+deb3 - https://znc.in]
smarton has joined #osdev
smarton is now known as brown121407
vdamewood has joined #osdev
vdamewood has quit [Remote host closed the connection]
vdamewood has joined #osdev
vinleod has joined #osdev
vdamewood has quit [Killed (zirconium.libera.chat (Nickname regained by services))]
vinleod is now known as vdamewood
flx has joined #osdev
johnjay has joined #osdev
bas1l has joined #osdev
basil has quit [Quit: ZNC 1.7.2+deb3 - https://znc.in]
netbsduser``` has quit [Read error: Connection reset by peer]
netbsduser` has joined #osdev
Oshawott has quit [Ping timeout: 258 seconds]
Oshawott has joined #osdev
bas1l is now known as basil
asymptotically has joined #osdev
zoey has joined #osdev
mahmutov has joined #osdev
dutch has quit [Quit: WeeChat 3.0.1]
dutch has joined #osdev
wootehfoot has quit [Ping timeout: 252 seconds]
freakazoid333 has quit [Read error: Connection reset by peer]
Skyz has quit [Quit: Client closed]
LittleFox has quit [*.net *.split]
pony has quit [*.net *.split]
CompanionCube has quit [*.net *.split]
iorem has quit [*.net *.split]
gog has quit [*.net *.split]
springb0k has quit [*.net *.split]
pieguy128 has quit [*.net *.split]
richbridger has quit [*.net *.split]
dormito has quit [*.net *.split]
mjg has quit [*.net *.split]
cultpony has quit [*.net *.split]
Mikaku has quit [*.net *.split]
dennisschagt has quit [*.net *.split]
gmodena has quit [*.net *.split]
sahibatko has quit [*.net *.split]
isaacwoods has quit [Quit: WeeChat 3.2]
pieguy128 has quit [Max SendQ exceeded]
richbridger has quit [Max SendQ exceeded]
pieguy128 has joined #osdev
richbridger has joined #osdev
dennis95 has quit [Quit: Leaving]
<geist> heat: woot
<geist> GeDaMo: just skimmed through the fast terminal thing. pretty interesting
flx has quit [Ping timeout: 246 seconds]
<GeDaMo> Yeah, it was klange talking about glyph caching that reminded me of it
<geist> also interesting that the conio system does vt parsing and whatnot
<geist> but i guess it has to at least to try to deal with line editing bits and handling details like back arrow and whatnot
<geist> possibly the terminal interface to conio is higher level, like it can tell you precisely what to draw where
<geist> i generally assume the conio stuff in windows was some service on the other side of a pipe that basically acts like a pty on posix
tenshi has quit [Quit: WeeChat 3.2]
xenos1984 has quit [Ping timeout: 256 seconds]
mrlemke has joined #osdev
_mrlemke_ has joined #osdev
xenos1984 has joined #osdev
mrlemke has quit [Ping timeout: 240 seconds]
mrlemke has joined #osdev
flx has joined #osdev
_mrlemke_ has quit [Ping timeout: 246 seconds]
<gog> geist: conhost or something
<gog> or an instance of svchost? i can't remember
<moon-child> I think the deal with con stuff is that sometimes it translates between different types of escape sequences?
<moon-child> or like, sometimes it handles escape sequences on one end and logical descriptions on the other?
MiningMarsh has quit [Ping timeout: 272 seconds]
MiningMarsh has joined #osdev
freakazoid333 has joined #osdev
<sortie> * Server Up 5 days, 0:43:20
<sortie> As is very very obvious from the uptime, I have a debugger attached :|
wootehfoot has joined #osdev
<sortie> (Or rather I fixed qemu VNC so I can just inspect the registers for once)
<geist> oh is there a way to punch the monitor through VNC as well?
<geist> i've been wondering about that but never lookd into it
<sortie> Yeah it just works with the same escape codes
<geist> wait, by default? even if it's a gui?
<sortie> It's just that.. -nodefaults accidentally turned it off so I needed to manually turn it back on
<sortie> Yeah the qemu monitor is accessible by control-alt-2 or something by default even in the gui modes
<geist> reason is i use qemu to run about 8 different VMs permanently on my box, and i'd like to be able to get to the monitor so i could at least use the built in snapshotting with savevm
<geist> oh no kidding, lemme see
<sortie> -nodefaults -monitor vc
<geist> aaaah that's the key there
<sortie> That's what I do to restore the default behavior
<sortie> The -name option is also handy if you got several VMs
<sortie> Yeah depending on what I'm doing and the time of day, my server got 2-4 qemu VMs running
<sortie> I can VNC into them and also ssh into them
<sortie> Sortix freezes after a few days of uptime for some reason when I use it as a server, so I set all of this up properly so I can inspect what's going on when irc.sortix.org crashes next time
GeDaMo has quit [Quit: Leaving.]
<NieDzejkob> Is there some list of things one can explicitly do differently than Unix, to notice all the biases one has when coming from a Unixy background?
<heat> no files
<heat> no directories
<sortie> no gods
<geist> tons of things, or also stuff like no fork, no parent/child process relationship
<geist> or, no concept of 'user id' in the kernel
<geist> ie, root is not implicitly uid 0, etc
<geist> devices maybe not files (if you do do files)
<geist> actually everything i just listed is a property of fuchsia/zircon. we really tossed most of those concepts from the core bits
<heat> no processes
_mrlemke_ has joined #osdev
catern- is now known as catern
mrlemke has quit [Ping timeout: 252 seconds]
dormito has quit [Ping timeout: 252 seconds]
dormito has joined #osdev
qookie has joined #osdev
asymptotically has quit [Quit: Leaving]
Arthuria has joined #osdev
<NieDzejkob> geist: so instead of fork, you have one syscall that creates a process and execs a binary into it?
<NieDzejkob> I'm thinking about a capability-based microkernel myself, so that implies no user id in the kernel
* NieDzejkob should read up on Mach ports...
<NieDzejkob> heat: how would you organize data without files? Some kind of key-value store?
<NieDzejkob> by "no processes", you mean unikernel?
<heat> i don't know
<heat> think about it
<heat> you're the one who asked ;)
<heat> certainly there are multiple answers
<NieDzejkob> I suppose I should clarify that I have a workstation usecase in mind, not anything embedded
<geist> NieDzejkob: well if you follow what we've done in zircon, for example, it's even more broken apart than that
<NieDzejkob> processes as in task + address space doesn't seem like something that has alternatives
<geist> basically to create a new process it's a series of calls
<geist> a) create new process phandle = zx_create_process();
<geist> now you have an empty process, with an empty address space
<geist> then you start mapping something into it
<NieDzejkob> ah, yeah, that makes more sense
<geist> b) zx_map_foo(phandle, stuff);
<geist> now you create a thread
<geist> c) thandle = zx_create_thread(phandle);
<geist> then you start filling in the threads state
<geist> d) zx_set_thread_state(thandle, registers...)
<geist> then you start it
<geist> e) zx_thread_start(thandle)
<NieDzejkob> is process = address space + bag for threads, or is there something more to it?
<NieDzejkob> the state of the capabilities, I suppose?
<NieDzejkob> or is that per thread
<geist> in zircon at least a process is a bag of threads, an address space (you can separately get a handle to this, but it's still bound to a process), and a handle table that's private to that process
Arthuria has quit [Read error: Connection reset by peer]
<geist> the handle table means any syscalls that involve handles by any thread in that process get a consistent view of handles
<geist> and that's basically it
Arthuria has joined #osdev
<geist> no users, no parent process, etc
<geist> any of that is entirely a user space construct
<geist> i only use this as an example because it's fresh on my mind since i work on it, but i dont think it's particularly exotic in the realm of capability based things, or microkernels in general
<NieDzejkob> do you need to give handles to the new process?
<geist> yes. i glossed over some details
<geist> actually the way you start a process is something like
<geist> zx_process_start(phandle, thandle, handle_to_initial_thing);
<geist> the last thing is the one and only way you can transfer a handle to a process directly
<geist> what you usually do is push a single endpoint to a channel (an IPC mechanism)
<NieDzejkob> is that a list of handles, or just one?
<geist> and the first thing the process does is start reading messages from it
<geist> and the messages can contain new handles
<geist> and there's basically a care package set up there so the process can get its own bearings
<geist> or not at all. you can create a process that has no handles at all and just runs some code and dies
<geist> totally up to you
<moon-child> NieDzejkob: In general, a process is a security domain. Address space, capabilities, CPU time, ... are shared resources that you can limit access to
<geist> but i guess my point is the strategy is to think about the high level things of what you want to do, and then build the kernel as a series of atomic pieces you can construct something larger from
<moon-child> (obviously you can avoid this conception of it if you want to deviate)
<geist> it's kinda like a RISC like philosophy to kernel design. a fun mind space
<NieDzejkob> is the care package something like a simple struct with each location allocated for a specific thing (stdout, fs server, etc), or more like a key-value thing (like Linux's getauxval mechanism)?
<geist> the latter
<geist> but again the kernel doesnt' care, that's entirely up to the run time
<NieDzejkob> yeah, got that
<geist> in our case we have some sort of key value thing that has slots
<NieDzejkob> > or not at all. you can create a process that has no handles at all and jsut runs some code and dies < how do you get the result back?
<geist> the process for example doesn't even get a handle to *itself* unless it was handed one
<NieDzejkob> you peek at the memory of the 'zombie'?
<geist> shared memory
<NieDzejkob> ah
<geist> it's entirely possible you can construct a process that has shared memory set up ahead of time, and if you give the process no handles it can't even allocate and map any more
<geist> so you can basically create a 'runs this code but cannot interact with the outside world' process
<geist> we use that in some cases for critical decompressors or whatnot. create hermetic boxes
<geist> barring a bug in the kernel the process has no way of interacting with anything
<NieDzejkob> how big (kloc) is the kernel itself?
<clever> how do you get any useful result out of the neutered proc?
<NieDzejkob> shmem
<clever> and then you still need to safely manipulate that result
<clever> NieDzejkob: ah, that works, no need to even have a fifo then
<NieDzejkob> do you handle paging to disk? is that kernel or userspace?
<geist> clever: shared memory
<geist> generally we dont completely seal it off. you could hand the process a write-only endpoint to an IPC channel, or an event object that the process signals when its done
<geist> but you *can* create a process with some code mapped and say an input and output buffer mapped, shared
<geist> and then say two events or an IPC to tell it to do something
<geist> and no FS access, etc
<clever> i can see that being useful for things like the chrome render proc
<clever> and JS
<geist> but that's the point. the kernel doesn't care, there's no intrinsic capability of the process unless given it
<geist> and no ambient authority to do anything
<geist> anyway not saying it's the end all of anything, just giving a recent example of how you could build a capability based thing that really doesnt look like posix at all
<geist> NieDzejkob: re: pagers and stuff, yes that's all in user space
<geist> that's actually a very complicated part of the design, since you have to be able to have the kernel work with a user pager process to handle things like mmap() of a file and whatnot
<vdamewood> Did all DOS-based versions of Windows lack a kernel?
<geist> the kernel has no concept of memory mapped files, but it has the concept of a virtual memory object (VMO) that you can map that has its data sourced/sinked from an external user space pager
<geist> and based on that you build a proxy mechanism for the kernel to send pager requests to a user process
<geist> which is generally a FS server
<clever> vdamewood: i think the pre-mmu versions of win3 blurred the lines, where the "kernel" was more of a shared library that dealt with context switching and common services, and every proc was in the same address space
<geist> vdamewood: right, it gets into blurry territory as to whether you consider something like DOS (or its predecessor CP/M) to be a 'kernel' or more of a monitor
<geist> i would say it's a kernel simply because it's the place the cpu starts in and then switches to a process and lets the process call back into it
<vdamewood> Oh, wow. I didn't realize Windows 3 even had a quasikernel.
<geist> ie, it's not a set of libs that the process starts with that it calls into
<NieDzejkob> so, what decides which pages to swap out? the pager process? does it need to get told about each new process, then?
<HeTo> Windows 386 to 95 blurs the line even further
<geist> NieDzejkob: in zircon the kernel does. a conscious decision was made to put the VM in the kernel, so it has a holistic view of the entire system
<geist> so user pagers in zircon are just told 'do X with page Y'
<HeTo> unless you're really unreasonable, Windows 95 is a real OS with a real pre-emptive kernel with its own drivers
<geist> windows 3.1 *definitely* was kernel centric
<geist> even more, later windows 3.xes (3.11) even ran its kernel in protected mode, 32bit even
<HeTo> Windows in real mode is pretty easy to define as not a real OS
<geist> it just maintained a fiction of real mode processes
<geist> i totally disagree, later windows 3.x got more and more preemptive and whatnot
<HeTo> but where exactly you draw the line where it turns from not a real OS to a real OS is a lot murkier
<vdamewood> Oh, wow. I didn't realize it was so complicated.
<geist> they just at the end of the day kept the same programming model
<geist> yah
<HeTo> geist: AFAIK Windows 3.x was always cooperatively multitasked with Windows processes, even 32-bit ones, just the MS-DOS virtual machines were preemptively multitasked
<geist> but like i said in the simpler stuff, i'd still consider plain single tasking DOS to be a kernel
<geist> only because the cpu 'starts' there and then runs a program
<geist> vs the other way around
<NieDzejkob> so for something like swap, you'd need to tell the kernel "here's the process to talk to if you need to ship something to disk"?
mahmutov_ has joined #osdev
<geist> HeTo: it got complicated in the later ones
<geist> since they started mixing 16bit protected mode and whatnot
<NieDzejkob> and there can be at most one?
<geist> NieDzejkob: lots of ways to do that
<geist> we dont currently swap in zircon so it hasn't been defined yet
<geist> but... demand paging from files on disk is actually not that conceptually different
<NieDzejkob> can demand paged things get evicted back to disk already?
<geist> think of 'swapping' to disk as just a dynamic mapping of 'anonymous' memory to a backing swap file
<geist> yes
<NieDzejkob> yes
<geist> what would most likely fall out of a generic swapper design in zircon would be something like a generic swapper process registers with the kernel
<geist> 'hey i can handle anonymous swapping'
<geist> and then the kernel would come along and give it directions like 'here's a page for VMO X, offset Y'
<geist> and then the reverse would happen
<geist> ie, the swapper is responsible for indexing X:Y to a page somewhere in secondary storage
<geist> and the kernel may ask for it back, or say something like 'everything owned by X is now trash because i deleted VMO X'
mahmutov has quit [Ping timeout: 252 seconds]
<NieDzejkob> does zircon have any other global process handle variables like that?
<geist> what do you mean specifically there?
immibis has quit [Ping timeout: 258 seconds]
<NieDzejkob> does this relationship where processes provide services for the kernel exist elsewhere in zircon yet?
<geist> yes, the user pager
<geist> but thats.... pretty much it
<NieDzejkob> how's the user pager different from the swapper?
<geist> it's pretty much the only place where we actually allow a critical user process to claim that it's going to take on some responsibility like that
<geist> the user pager operates the other way around
<geist> it'll say 'i'm making a VMO X and i'm going to be responsible for the data in it'
<NieDzejkob> also, if the IPC "here's a page" is sent to a process, does it actually put the page in question in the target process's address space, or is it like a handle that can be mapped manually?
<geist> and then it creates the vmo (usually 1:1 a 'file')
<geist> NieDzejkob: implementation detail.
<geist> ie, (has't been designed)
<NieDzejkob> :D
<geist> alternatively 'too complicated and esoteric to describe here'
<geist> but back to the user pager
<geist> in this case it's a FS server
<NieDzejkob> does the pager process for VMOs keep the pages in their own address space?
<geist> hang on. one set of questions at a time
<geist> lemme finish before
<NieDzejkob> also, are you saying that there can be only one user pager?
<geist> hang on.
<geist> you're asking very very complicated questions that i need to give you a paragraph or two to answer
<geist> so here's how it works: a user pager in zircon is a FS server (nothing else has found a use for it)
<geist> a FS server would create a user-paged-VMO when some process opens a file
<geist> say, it opens foo.txt and it's 1MB in size. the user pager (FS server) would create a 1MB VMO and register itself as the user pager for it
<geist> then it hands a copy of the VMO to some random process
<geist> the random process starts using it, and thus demand faults pieces of it in
<geist> the kernel then sees the faults and instead of zero filling the VMO like it normally would (with anonymous VMOs) it starts redirecting user pager requests to the FS, which was the creator of the VMO and had previously registered itself
<geist> then the FS provides data for it (using a private mechanism) and then the user process that demand faulted continues
<geist> on writeback the kernel has noticed pages are dirty and as it scans pages it starts generating messages to the user pager (the FS) for that particular VMO saying 'hey page X is dirty' and the FS writes back and marks them clean
<geist> so it's a continuous relationship. in the case of user pagers it's for every single VMO the FS created, but it's per VMO
<NieDzejkob> are disk drivers in userspace too? can the disk driver provide a page on behalf of the FS server, without a context switch back to the FS server?
<geist> so you can have 5 FSes (say 5 different mount points) each with their own set of VMOs but the kernel tracks which user pager is registered with each of them
<geist> does that makes sense?
<NieDzejkob> yeah, it does
<geist> yes disk drivers are in user space
<geist> it *could*, the FS could forward the request to the disk driver saying 'hey just provide the data here'
Arthuria has quit [Read error: Connection reset by peer]
<geist> since it's all capability based the kernel isn't tracking that any one given process has authority to handle pager requests, for example
Arthuria has joined #osdev
<geist> only that the process has a handle to the pager endpoint (user pagers are their own distinct object that you can have a hande to)
<geist> so the FS could easily hand the user pager handle to a disk driver and say 'service this plox'
<NieDzejkob> is a new capability minted for "can answer that specific paging request"?
Arthuria has quit [Read error: Connection reset by peer]
<geist> the capability is in the form of 'you have a handle to this thing that gives you the ability to make these particular syscalls'
Arthuria has joined #osdev
<geist> 90% of 'capability' in zircon is essentially 'do you have a handle to this with sufficient rights'
<NieDzejkob> what's the other 10%? :)
<geist> well, i only said that because im sure there's an edge case :)
<geist> there's an additional safety thing that lets you put additional restrictions on a process that supercedes rights on handles
<geist> but it's most of an additional safety rail, and only works to restrict even further
<geist> like you can set a bitmask on a process that says 'disallow all use of handles of type X'
<geist> and it's immutable, that way even if the process got a handle to a thing somehow it can't actually *do* anything with it
<geist> that you can argue is the 10%
<NieDzejkob> so the handles are like fds, right? does each type of capability get a separate table? i.e. can you have handle=1 of type X and handle=1 of type Y, being disambiguated by what type of handle a syscall expects?
<geist> no, single table
<NieDzejkob> or is that undesirable because some syscalls are polymorphic
<geist> each process has its own handle table. handles are 32bit entries, handles are completely unique per process and allocated pseudo randomly
<geist> and furthermore it's completely impossible to 'look' at another handle table
<geist> (basically prng allocated per process and each process has it's own random seed)
<NieDzejkob> ah, so like ASLR but for capabilities
<geist> we did that so even if you got control of another process it's hard to guess what the handle ids are for anything 'critical' like the kernel process's handle
<geist> and secondly there's an attribute you can set per process that we generally run enabled that makes it instantly fatal per process to use an invalid handle
<geist> like, instant termination, no questions
<geist> it's all defense in depth, trying to really layer on the restrictions. we wanted to do that to avoid posix style FD table exploits, which there's a whole universe of existing issues there
<NieDzejkob> is there any reason to ever disable it?
<geist> not really
<geist> there's two places you can call a syscall with an invalid handle: the 0 handle (like free(0)) basically, which is always invalid
<NieDzejkob> debugging-wise, does a message get sent to the process handle saying "terminated because X"?
<geist> and there's one syscall for 'is this a valid handle'
mahmutov_ has quit [Quit: WeeChat 3.1]
<geist> debugging wise there's a whole structured exception model, and yes it's one of the termination reasons
<geist> but cannot be intercepted by the process itself.
mahmutov has joined #osdev
<HeTo> can you disable the "is this a valid handle" syscall? can't take that much time to go through a few billion handles asking that if you manage to get arbitrary code execution in a process
<geist> possible
<NieDzejkob> hmm, does it also terminate no questiions asked if you provide the wrong type of handle?
<geist> yes
<geist> HeTo: yeah i forget what we do about that
<heat> this reminds me, i should add seccomp + ebpf to my OS
<moon-child> make your handle a 128-bit type
Skyz has joined #osdev
<heat> moon-child: cursed
<bslsk05> ​fuchsia.dev: zx_object_get_info  |  Fuchsia
<moon-child> (aside: there was some cool stuff done on some arm processors where they used the high bits of a pointer for a signature)
<geist> doesn't look like we actually mask that off
srjek|home has joined #osdev
<geist> yah honestly i think we should have made handles 64bits for this reason
<geist> but there was some pushback about it early on, especially since there are places where we have large tables of handles, and doubling the size is non trivial
gioyik has joined #osdev
* NieDzejkob looks at the fuchsia docs
<heat> how would the handle table double in size?
<NieDzejkob> what's the difference between channels, sockets, streams and fifos?
<moon-child> heat: moving from 4-byte to 8-byte handles, each handle takes up twice as much space
<geist> yeah, that's all
<bslsk05> ​fuchsia.dev: Channel  |  Fuchsia
<heat> ah tables of handles
<heat> I was thinking of handle tables lol
<geist> yah though that would double as well
<geist> since internally it's a big hash table and you have to store the key
<heat> wouldn't a vector make more sense?
<heat> O(1) lookup and all that
<geist> NieDzejkob: yah they're all different. honestly we have at least one too many IPC mechanisms, but they dont completely overlap
<NieDzejkob> they're randomly allocated
<geist> heat: yah the full space of the handle table is 32bits (31 actually, i think we dont use negative space)
dragestil has joined #osdev
<heat> my idea was that you could stick a cookie/random cookie on the high 32-bits and then verify that based on the handle you end up indexing to
<NieDzejkob> ooh, neat
<heat> you would still only use the lower 32 bits for the actual indexing part
<NieDzejkob> you can even try to fit all that in 32 bits
<NieDzejkob> if a process needs >64K handles, it basically degenerates to a hash table with 64K buckets
<NieDzejkob> hmm, but then the array of handles gets more complicated :/
shikhin has joined #osdev
<geist> well, what we've done for zircon is specced it such that a user programmer should not assume anything about how the handle ids are allocated
<geist> and we also try not to reuse a handle in a reasonable time frame
<geist> implementation detail may change over time
<NieDzejkob> is the randomness for handle generation cryptographically secure?
<geist> at the moment its *actually* one single global hash table, allocated not so randomly since the table grows over time, but then hashed per process, plus some of the bits are actually a generation counter (in case the same object goes and comes back)
<geist> so it's not *that* random but implementation may change
<geist> no. nor do we claim it is
<geist> we simply claim 'it's random enough that you should not assume anything from it'
<heat> ah
<heat> global?
<geist> that's what we really wanted
<geist> again the docs dont say that. and since each process hashes the true global ID you dont 'see' the globalness of it
mahmutov has quit [Ping timeout: 240 seconds]
<geist> and the global table is just an implementation detail. we may reimplement it later to be per process
<heat> don't you get nasty lock contention?
<geist> it's lock free
<geist> (mostly)
<NieDzejkob> so Instead of HashMap<Process, HashMap<Handle, _>>, it's HashMap<(Process, Handle), _>
<geist> right
gog has quit [Ping timeout: 252 seconds]
<heat> ah that's better
<geist> it's not perfect, but it's pretty fast
<heat> still doesn't sit right with me though :P
<geist> sure. it's on the list of things to redo some day
<NieDzejkob> what's the motivation for the one global table design?
<geist> simple, fast, got the job done at the time we needed it done (very very early)
<geist> perfect is the enemy of good, after all
* NieDzejkob realizes how cool it is that an OS with a novel design like that shipped to customers
<geist> but since it's an implementation detail, and we absolutely didn't spec how the handle ids work outside of IMPLEMENTATION DETAIL, we can change it
<Skyz> Is this an embedded only OS?
<geist> there's lots of that. consider this v1. we erred on the side of doing things simply and 'good enough' to ship a product
Arthuria has quit [Read error: Connection reset by peer]
<Skyz> or is embedded only atm?
<geist> but generally erred on the side of being not permissive, since it's easier to add than to take away over time
Arthuria has joined #osdev
<geist> and an emphasis on trying to test the shit out of it and shoot holes in the design
<geist> hell, we still have a Big Scheduler Lock that is a huuuuge source of contention
<NieDzejkob> Skyz: is android an embedded OS?
<geist> but not so bad for 4 way machines. that's definitely in the works to replace
<Skyz> I guess it's mobile
geist2 has joined #osdev
Arthuria has quit [Read error: Connection reset by peer]
<NieDzejkob> (it's not a phone OS atm, but rather things like TVs and... whatever google nest is)
Arthuria has joined #osdev
<Skyz> I see
<geist> i think so too (how cool it is). i easily forget how much of a privilege it is to be able to write a new OS and ship it nowadays
<geist> but when you're down in the trenches all you see is the mud and rats
<NieDzejkob> whew, fuchsia sure does have a lot of syscalls for a microkernel :P
<NieDzejkob> I'm sure there are good reasons for that, but I still find it slightly... hmm, is funny the right word?
<geist> well, that's one of the downsides of a capability based thing with handles
<geist> you need a lot of distinct syscalls so you can make a distinction of 'this can only be done with that'
<geist> and we very specifically didn't go in the direction of 'kernel is another IPC endpoint' which tends to shrink the syscalls greatly, but really just moves the explosion of where the large switch statement is to another layer (processing IPC)
<geist> at some point you have N things you can do, so either the switch is the syscall layer or it's later on when you handle an IPC opcode
<geist> but also think of microkernels as a continuum
<geist> kinda like risc too. there are particular signatures of microkernels (drivers in user space, IPC based, etc) that zircon has
<geist> but then it specifically didn't choose to put the VM in user space, and didnt choose to make the kernel an IPC endpoint itself
<moon-child> 'didn't want to go in the direction of "kernel is another IPC endpoint"'
<geist> and we had no real issue rolling multiple types of IPC that overlap in different ways
<moon-child> :/
<geist> honestly i'm glad we didnt, because in the last 5 years i think we've switched user space IDL like 4 times
<NieDzejkob> IDL = interface definition language?
<geist> having to keep redoing the kernel every time would be annoying. this way we have a stable syscall API without needing to worry about what random IDL user space was using that week
<geist> yah. to describe whatever RPC format is du jour
<geist> also similarly i wanted the kernel to be run time agnostic. to have the kernel itself be an IPC endpoint you have to at least bake into it the message format, etc
<geist> not an insurmountable problem by any means
sortie has quit [Quit: Leaving]
<Skyz> The documentation is epic
<geist> and finally we really had no interest in doing some sort of fully distributed system where some interposing server is pretending to be a kernel, etc
<geist> but yeah we have 100 something syscalls, though for the most part they're clusters of methods on particular handle types
<moon-child> I guess you just hate fun :)
gioyik_ has joined #osdev
<heat> you can do that by adding breakpoints on the calls to the API
<geist> at the end of the day we had to focus on doing stuff
gioyik has quit [Remote host closed the connection]
<geist> heat: yep and we actually do that
<NieDzejkob> hmm. what is the motivation for "stream" being a kernel object? what does that bring over mapping the VMO and keeping a pointer?
<geist> since we force *all* syscalls to go through the vdso we can trap on the vdso boundary
<geist> NieDzejkob: single cross thread/cross process mechanism to handle the cursor
<NieDzejkob> can you do stuff like jump in the middle of a vdso's instruction?
<geist> that came out of some need for additional posix compatibility
<geist> NieDzejkob: you can, but the syscall instruction *must* be at a particular spot
<geist> so it's pretty hard to defeat
<NieDzejkob> so the kernel knows the offsets of all syscall instructions in the vdso?
<NieDzejkob> or is there just one?
<geist> mostly we did the vdso thing not as a security thing but as a way for us to 'own' the syscall interface
<geist> ie, the zircon syscall interface is C, not sysenter/SVC/etc
<moon-child> NieDzejkob: probably just checks the syscall originated _somewhere_ in the vdso
<geist> oh no. it checks that it originated *precisely* in one spot
<geist> er i mean per syscall
<geist> ie, this call must have an instruction at exactly this address
<moon-child> right, of course, rop
<geist> and since the vdso is mapped randomly per process...
<geist> the vdso is neat: we can compile more than one (to provide different syscall interfaces) and we can dynamically choose to implement some syscalls entirely in user space (time-based ones) and we can also choose to marshall args across the syscall boundary however we want
<moon-child> ooh, I guess that also lets you break compat in the kernel layer and patch it over in userspace
<geist> we can (though we haven't yet) provide a nerfed vdso that only has a narrow set of syscalls if we wanted
<NieDzejkob> how about csprng syscalls? do they get implemented entirely in userspace for cpus that have rdrand?
<moon-child> rdrand is not 'secure'
<geist> moon-child: right
<geist> it gives us wiggle room for switching some things in the future
<geist> only real downside is it kinda enforces ELF and C on processes. but push comes to shove we could define a simpler VDSO binary format for PE or whatnot
<geist> syscalls themselves are described in an IDL and we actually compile time generate both sides of the assembly in the VDSO and kernel
<moon-child> geist: I think c is probably the contentious point, rather than elf. See go
* NieDzejkob notices the threads/processes/jobs/tasks split O_o
<geist> moon-child: actually yes. and the go runtime had to do a bit of work to provide the thunk
<geist> downside of course is you always have to go through at least a function call to get to a syscall
<geist> but so it goes.
heat has quit [Ping timeout: 252 seconds]
<moon-child> ehh the context switch is slow no matter what you do
<moon-child> function call is negligible
<moon-child> and on most OSes that aren't linux, the primary kernel interface is the libc. So not really a big deal
<geist> yah. and we're basically in the right territory. last benchmark i saw we're not much slower than linux
<geist> like maybe 10ns
<geist> what *does* hurt us a lot is shit like meltdown and KPTI
<geist> now suddenly the fact that we may need to do 3 or 4 syscalls where you only needed 1 on linux is a thing
<geist> but a fun thing we have left ourselves open to do in the future is allow combining of syscalls into short sequences
<NieDzejkob> perhaps an interface like io_uring would be better? store a list of syscalls in memory, and have svc take a pointer and length
<geist> note that most of the syscalls have a pretty consistent interface. args on the right, almost always handle based, result code on the left, etc
<geist> you can fairly easily imagine a thing like { handle = create_foo(); do_foo(handle); close_foo(handle); } multi-syscall
<geist> and have a fairly simple set of rules about what happens if any of them return error, abort and close all open handles, etc
<geist> right, what NieDzejkob is describing. that has been on the table as a future thing to explore
<geist> hence why we designed the syscall layer to be fairly consistent about this sort of thing
freakazoid333 has quit [Read error: Connection reset by peer]
<geist> really it'd look more like
<NieDzejkob> perhaps it would be good to reserve some portion of the handle-space for indices into the results of previous syscalls in a batch
<geist> { on error abort; create_foo(&handle); do_foo(handle, ...) /* handle is implicitly closed */ }
<geist> yah
<NieDzejkob> I guess the sign bit would work well for that
Arthuria has quit [Read error: Connection reset by peer]
<moon-child> on error resume next?
<geist> could even create local variables to hold things, build up a simple interpreted thing. but obviously that has hella security implications
Arthuria has joined #osdev
<geist> right
<moon-child> geist: if it works for ebpf...
mrlemke has joined #osdev
<geist> anyway, that might help for actual real world sequences like
<NieDzejkob> "works"
Arthuria has quit [Read error: Connection reset by peer]
Arthuria has joined #osdev
<geist> { create_vmo(&vmo...); map_vmo(&vmo); close_vmo(vmo) }
<NieDzejkob> a friend of mine posts about having found a vuln in ebpf what seems like every other week
<geist> that's basically mmap("/dev/zero") right now
<geist> and we have been hesitant to create any syscall that could be accomplished by combining more than one smaller syscall
<geist> very much a risc-like philosophy
mrlemke_ has joined #osdev
<geist> NieDzejkob: what do you think about the thread/process/job stuff?
<geist> that was mostly my doing
<geist> my real stamp was that hierarchy and most of the VM. stuff like vmos and vmars are mostly my design
_mrlemke_ has quit [Ping timeout: 252 seconds]
<NieDzejkob> still reading up on that
mrlemke has quit [Ping timeout: 240 seconds]
<NieDzejkob> "All the jobs on a Fuchsia system form a tree, with every job, except the root job, belonging to a single (parent) job." how does that fit in with "no parent/child process relationship"?
<NieDzejkob> anyway, having now read the description, I like this design
<raggi> A process "parent" is always a job
<raggi> Not another process
<NieDzejkob> I suppose it's more of a subset thing
<NieDzejkob> there isn't even an equivalent of getppid afaics
<NieDzejkob> why the separation between ports and channels, though?
<geist> they're different
<geist> mostly because ports can be generated by the kernel. fixed sized message, etc
<geist> and channels are more of a big heavy 'need to allocate memory, transport handles, etc'
<geist> which the kernel shouldn't be doing when it needs to synthesize a message
<NieDzejkob> okay, then why not fifos?
<geist> so for example the kernel uses ports in the user pager to send notifications
<geist> and it also uses it for things like waiting on multiple handles
<geist> the kernel synthesizes a port message saying 'X changed on handle Y'
<geist> honestly i dont remember what a fifo does
<geist> like i said i think we have at least one too many ipc mechanisms
<NieDzejkob> the docs are quite sparse on this
<geist> fifos and sockets are kinda specialized
<geist> they're not all the same thing, but they kinda have this venn diagram that overlaps a bit much in my opinion
<geist> ah yes. fifos. checked the docs. yeah it is i think generally used for something like pushing a HEAD/TAIL pointer between processes on some shared memory buffer
<geist> why you can't do that with a port? good question
<geist> but as it points out in the zx_fifo_create() call it's intended to be more efficient
<geist> certainly right off the top the fact that it would be able to accomplish the task without needing to allocate any memory in the kernel puts it in a different class
<geist> it ought to be about as efficient as fiddling with an event or whatnot
freakazoid333 has joined #osdev
pyzozord has joined #osdev
<pyzozord> hey will sigttin "break" sleep?
<pyzozord> meaning I have process that's currently sleeping. The process receives sigttin. After handling sigttin, will the process go back to sleep or resume normal execution?
<geist> hmm, what was it sleeping on?
<clever> [clever@amd-nixos:~]$ man nanosleep
<clever> EINTR The nanosleep() function was interrupted by a signal.
<moon-child> pyzozord: nanosleep manpage says it can fail with EINTR if interrupted
<moon-child> ah clever beat me
<geist> yah i wanted to make sure we're really talking about sleep in the waiting for some period of time
<geist> instead of sleeping as in blocked on something
<clever> select() can also return EINTR
<clever> man page says even read() can
<clever> related, i have had problems with libpam considering EINTR to be a fatal error, even when it was SIGALRM
<doug16k> see also SA_RESTART
<clever> haskell uses SIGALRM for context switching, and that causes pam to randomly crash
<pyzozord> oh perfect! that means that nanosleep is broken on all signals?
<clever> pyzozord: yep
<pyzozord> amazing, thanks
pyzozord has left #osdev [#osdev]
<geist> yah i dunno what this means for a multithreaded app
<clever> > If the rmtp argument is non-NULL, the timespec structure referenced by it is updated to contain the amount of time remaining in the interval
<geist> well okay. they got their answer i guess
Arthuria has quit [Read error: Connection reset by peer]
Arthuria has joined #osdev
<NieDzejkob> geist: does zircon have any equivalent for UNIX signals, or does everything need to be waited on explicitly?
<clever> geist: signals can be directed to a single thread, and i have had surprising success `kill -stop`'ing a single thread in WoW before, that seemed to be using a lot of cpu, but not doing any useful task
<geist> no equivalent
Arthuria has quit [Read error: Connection reset by peer]
<moon-child> so if a process faults, it can't recover or handle that?
<doug16k> geist, how do you handle ctrl-c then?
Arthuria has joined #osdev
<moon-child> what if I wanna do the hotspot static branch prediction trick
<geist> that's the job of whatever the pty system that's inspecting your tty bits to do
<geist> it can intercept it and kill your process/etc
<NieDzejkob> what if you as a process want to do things before exiting?
<NieDzejkob> do you spawn a thread for this?
<geist> you can register for your own exception handler and roll a thread, yes
Skyz has quit [Quit: Client closed]
<geist> there's a whole structured exception handler mechanism, basically. with the ability for a thread, process, job to get delivered an exception to handle
<moon-child> but still no ability to recover?
<geist> sure. you can recover
<geist> you take the exception and continue it
<geist> though there's situations where the exception is non recoverable
<geist> so if your pty thing just straight killed you, that would be not as good
<geist> vs doing something like suspending the process
<moon-child> so, thread a segfaults. Thread b is told about it. What can thread b do to get thread a running again?
<geist> it can manipulate the saved state of the thread and resume it
<geist> set its PC to something else or... implement posix threads if it wanted to
<geist> posix signals
<moon-child> hmmm
<geist> but note we're simply not interested in posix style signals, so this area hasn't been plumbed out yet
<geist> the whole syscall model is not built around stuff getting interrupted and whatnot like this
<moon-child> I like that a lot, actually. Basically equivalent to signals, but with cleaner execution/reentrance model
<geist> yah mind you it's complicated and that area definitely is a PITA for debuggers and whatnot
<Griwes> geist: so in fuchsia, does every IPC message necessarily go through the kernel? I've been thinking about it for some time now and I know I want ones that go through the kernel (for handle passing, like in fuchsia), but also ones that don't *have* to (i.e. like a futex but for messages, just normal shmem ringbuffer stuff), and I keep going back and forth between those two being the same abstraction or two separate ones
<geist> but posix style signals are also a super drag
<geist> Griwes: it's entirely possible that a user space library builds its own intra-process IPC model that uses its own schemes
<geist> or even transparently falls back to the kernel based IPC
<moon-child> maybe apps should be implemented similarly to video games, explicitly suspending themselves. If they segfaulted on one run, just let them know on the next one
<geist> i wouldn't be surprised if the Dart VM for example does something like this
<geist> or even go runtime
<Griwes> But the OS provided one is always actually a syscall?
<geist> correct
<Griwes> Gotcha
<geist> well actually to be more precise, it's a VDSO call
<geist> and that *probably* will immediately use a syscall instruction to trap into the kernel
<Griwes> Right
<geist> but it's possible we could come up with some future scheme that figures out how to short circuit it
<Griwes> Does all the futex decision making also live in the vdso? I assume it does?
<geist> 'oh the top bit of the handle is set, this is a virtual handle, lets try some alternative scheme'
<geist> *all* syscalls go through the vdso without exception
<geist> futex ones also have to as well
<geist> local implementations of mutexes and whatnot have their own logic that may bottom out in a futex call
<clever> that kinda reminds me of windows syscalls
<clever> where all syscalls go thru a dll, and the kernel interface isnt documented and is subject to change without warning
<geist> yah i dunno how they work, but the idea of forcing syscalls through a lib is definitely handy for future expansion, etc
<geist> correct
<geist> i think macos does something similar
<geist> linux really just is fed a shit sandwich here because they have all this backwards compatibility they have to maintain
<clever> i think backwards compat is also why windows did it that way?
<geist> even the BSDs generally have a notion of dropping old compatibility, which seems sound to me
<clever> in the pre-mmu days, that DLL was the "kernel"
<Griwes> backwards compat is such an annoying thing
<geist> well, i dont think they had to go pre-NT
<clever> and when they switched to having a real kernel, the dll became a compat shim
<geist> oh i guess not, because win95 and win32s on win3.1
<geist> yah
<clever> and when they went to the NT era, they kept the design, when they could have changed it
<geist> sounds like they were just smart and abstracted it
<geist> FWIW x86 changing its syscall mechanism 2 or 3 times also kinda forces this to change
<geist> linux *could* have mandated that, once x86 moved past the 'int' instruction, syscalls must go through the vdso
Arthuria has quit [Read error: Connection reset by peer]
<geist> but most likely someone pointed out it was .001% slower so no can do
Arthuria has joined #osdev
<clever> has linux always had a vdso?
<geist> i dont think so, also it's per arch anyway
<geist> so it's not even a clear cut answer
Arthuria has quit [Read error: Connection reset by peer]
Arthuria has joined #osdev
<clever> geist: xen also had a form of vdso, which they call the hypercall page, and it deals with the various ways of waking up the hypervisor and marshalling args from one calling convention to another
<geist> yah in classic BSD it was called a commpage
<clever> i think in xen, its also fully PIC code, so the guest can copy it to a convenient location
<ZetItUp> i hate reading documentation where they use shorthand terminology before they explain it waaay later
<clever> and its just a flat array of fixed-size functions, so you have no binary parsing to do
<clever> you could even generate some fake symbols, and the linker can just call them directly
<geist> clever: yeah, i think we thought about that for the zircon vdso but decided ELF was fine enough for now
<clever> for the RP2040 MCU, there is an array mapping a char[2] id code to a function pointer, in the boot rom
<clever> and you then need to search that array for a given util
<geist> downside is of course you can only extend that, etc
averetzi has joined #osdev
zoey has quit [Ping timeout: 246 seconds]
Arthuria has quit [Ping timeout: 240 seconds]
gog has joined #osdev
kingoffrance has quit [Ping timeout: 240 seconds]
wootehfoot has quit [Ping timeout: 246 seconds]
nsmb has quit [Quit: WeeChat 3.2]
nsmb has joined #osdev