klange changed the topic of #osdev to: Operating System Development || Don't ask to ask---just ask! || For 3+ LoC, use a pastebin (for example https://gist.github.com/) || Stats + Old logs: http://osdev-logs.qzx.com New Logs: https://libera.irclog.whitequark.org/osdev || Visit https://wiki.osdev.org and https://forum.osdev.org || Books: https://wiki.osdev.org/Books
nyah has quit [Quit: leaving]
heat_ has joined #osdev
heat has quit [Read error: Connection reset by peer]
heat_ is now known as heat
heat_ has joined #osdev
heat has quit [Read error: Connection reset by peer]
<geist> kazinsal: thought you wanted to do it for a 286?
<geist> you need a 5170 for that
<kazinsal> nah, decided to go with the machine with no form of memory protection just to see how bare bones I could go with it
<geist> yah at the end of the day the thing that ultimately turns me off to doing any sort of non flat memory programming is the lack of good compiler support
<kazinsal> really aiming for that early research unix flavour of yelling "A.OUT!" down the hallway so people save their work in case you crash the machine
<geist> or at least needing to either do everything in assembly, or some sort of esoteric compiler
<kazinsal> I've definitely hit some pain points with openwatcom that's for sure
<kazinsal> it doesn't seem to understand the concept of restoring segment registers after modifying them, so I had to write some specific far pointer memory operation functions in assembly that actually restore ds/es
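A sketch of the kind of helper kazinsal describes, using Open Watcom's #pragma aux to wrap the segment save/restore; the function name, register choices, and memory-model assumptions here are illustrative, not taken from the log:

    /* hypothetical helper: read a byte through seg:off while explicitly
     * preserving DS, since the compiler won't restore it for you */
    unsigned char peekb_far( unsigned short seg, unsigned short off );
    #pragma aux peekb_far =     \
        "push  ds"              \
        "mov   ds, dx"          \
        "mov   al, [bx]"        \
        "pop   ds"              \
        parm [dx] [bx]          \
        value [al];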
<mrvn> Maybe I can get my ChaOS done tomorrow night for the RPi4 (AArch64). It only has one app that renders a Mandelbrot fractal on the framebuffer so that should be doable.
heat_ has quit [Remote host closed the connection]
heat_ has joined #osdev
<mrvn> well, tonight, not tomorrow. but after sleep and work.
<mrvn> speaking of which, I better get to it.
smeso has quit [Quit: smeso]
smeso has joined #osdev
elastic_dog has quit [Read error: Connection reset by peer]
[itchyjunk] has quit [Ping timeout: 252 seconds]
<clever> mrvn: how fast is your mandelbrot?
elastic_dog has joined #osdev
<clever> with some VPU asm and help from another user on discord, i got it down to ~90ms for a full frame
bnchs has joined #osdev
gog has quit [Ping timeout: 248 seconds]
Arthuria has joined #osdev
Arthuria has quit [Remote host closed the connection]
Arthuria has joined #osdev
m5zs7k has quit [Ping timeout: 250 seconds]
xenos1984 has quit [Read error: Connection reset by peer]
m5zs7k has joined #osdev
bradd has joined #osdev
xenos1984 has joined #osdev
vdamewood has joined #osdev
AttitudeAdjuster has left #osdev [WeeChat 3.7]
Arthuria has quit [Remote host closed the connection]
bgs has joined #osdev
slidercrank has joined #osdev
heat_ has quit [Remote host closed the connection]
frkzoid has quit [Ping timeout: 265 seconds]
bgs has quit [Remote host closed the connection]
Gooberpatrol_66 has quit [Ping timeout: 252 seconds]
asarandi2 has joined #osdev
asarandi has quit [Ping timeout: 265 seconds]
asarandi2 is now known as asarandi
jtbx has joined #osdev
<geist> starting to get pretty handy with riscv asm now
<geist> once you get the hang of it it's pretty easy to bust out a lot of code
xenos1984 has quit [Ping timeout: 248 seconds]
Gooberpatrol_66 has joined #osdev
<clever> reading more of the a53 docs, it looks like the TLB can hold any size page, up to 512mb
<clever> which differs from the article linked yesterday, where there were many separate TLBs, each only holding a single page size, and each holding a different count
<clever> so on a53, 1gig pages seem like a negative, it would use up 2 TLB slots, and 512mb pages are better, if the paging tables allow it
zxrom_ has joined #osdev
<geist> separate TLB thing for different sizes is an implementation detail
<geist> mostly intel cpus
<geist> AMD cpus have had unified TLBs like ARM do for some time now
<geist> for whatever reason intel cpus have stuck with the separate TLBs for different size pages. presumably for efficiency purposes (simpler hardware to only search a single page size) at the expense of not being as generically useful
zxrom has quit [Ping timeout: 255 seconds]
<moon-child> why would it be more efficient?
<geist> presumably the hardware takes more levels of gates to simultaneously match a TLB entry while dealing with the entry's size
<moon-child> I mean, presumably if a tlb entry has a bigger page, it can just store a mask to get rid of the low bits of the address being looked up?
<geist> a few more layers of muxes probably
<moon-child> so just one extra mux
<geist> that costs layers of logic, which adds time
<geist> sure, but presumably they've decided that that is not worth it, etc
<moon-child> and not for all the bits, so you can mask the low bits in parallel w/reduction of the high bits' comparisons
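A toy model of the mask-per-entry match moon-child is describing (purely illustrative C, not from the log): each entry stores a tag plus a mask sized to its page, and a hit is (va & mask) == tag, so one array can hold a mix of page sizes.

    #include <stdint.h>
    #include <stdbool.h>

    struct tlb_entry {
        uint64_t vtag;   /* VA bits above the page offset            */
        uint64_t mask;   /* ~(page_size - 1) for this entry's size   */
        uint64_t paddr;  /* base physical address of the page        */
        bool     valid;
    };

    static bool tlb_lookup(const struct tlb_entry *tlb, int n, uint64_t va, uint64_t *pa)
    {
        for (int i = 0; i < n; i++) {        /* hardware compares all entries in parallel */
            if (tlb[i].valid && (va & tlb[i].mask) == tlb[i].vtag) {
                *pa = tlb[i].paddr | (va & ~tlb[i].mask);
                return true;
            }
        }
        return false;
    }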
<geist> hey look, i know, just saying i'm guessing that's why they did
<geist> and one of those probably existing libraries of logic they just reuse over and over again and no one wants to bring it up
<moon-child> mmmm yeah
<geist> one of those cases where the implementation informs the software which informs the implementation
<moon-child> i wonder if it's the same with popcnt
<moon-child> which is 3 cycles latency on intel for a while now; 1 cycle on amd
<geist> OSes dont use it as much as they could because there's limited TLBs so then theres no reason for intel to go with a unified one, etc
<geist> the division hardware on AMD is also clearly different too, last i checked
jtbx has quit [Quit: jtbx]
zxrom_ is now known as zxrom
<geist> it's also interesting to see what sort of different tradeoffs you may make if you dont intend to run at high frequencies
<geist> see the huge size of the L1i cache on the apple silicon, for example. it's probably pretty slow to match that many ways (i think it's pretty associative) but then M1s dont run faster than 3.2Ghz at the max
<geist> so you can probably build deeper, more complex hardware
<geist> whereas AMD and intel have to design their cores to scale up to 6ghz or so
<moon-child> m1 also has a really big rob
<geist> my experience is ARM cores have ridiculously fast dividers too, but probably a completely different design since they're designed for lower mhz
<moon-child> which presumably allows them to deal better with higher latency in their caches
<moon-child> cf little
<geist> yah
<geist> lower mhz, very very wide
<geist> this is where i wonder if someone can/will pull it off with riscv, since it's even simpler to deal with decoding wise
<moon-child> idk if simple decoding helps
<moon-child> if anything the other way around
<moon-child> simple decoding lets you have smaller queues and reduce latency
<geist> you can still do a bunch of instruction fusing. lots of that already getting talked about in riscv
<geist> i saw something sifive was talking about where they have some cute instruction fusing dealing with short branches around register movs
<geist> ie, basically a fused compare+branch+mv+branch_target thing
<geist> ie a cmov instruction
<moon-child> fusion is so stupid
<moon-child> :P
<moon-child> especially for riscv
<geist> tell that to all the performance you get out of it, it's super important on risc hardware
<moon-child> the 'performance you get' is that you need extra circuitry to detect the fusion opportunities
<geist> what are you talking about? it's extremely useful, especially for compare/branch or load top+load bottom sequences
<moon-child> vs supporting the instruction directly in the first place
<moon-child> pure overhead
<geist> sure, but risc has limitations. the fixed size instructions mean there are some sequences that you just cant do
<geist> so it works around it
<geist> but it's not like risc is the only thing that does it. looots of fusion on x86 too
<moon-child> right--riscv is actually a variable-length isa
<geist> still a fuckload simpler to decode than 1-15 byte instructions
<moon-child> not disagreeing there
<moon-child> x86 encoding is a shitshow
<moon-child> fusion is stupid when x86 does it too. But there's not much; just arithmetic+jcc on modern parts afaik
<geist> but yeah maybe one day folks will fiddle with the 48 and 64 bit instruction formats in riscv then we're getting back into EPIC territory
<geist> shrug. if it gets it done, i dont think it's stupid
<moon-child> I mean
<geist> the obvious sequence that both riscv and ARM do a lot is the multi instruction sequence to get a full 64bit constant in a register
<geist> that's just a limitation of having fixed(ish) sized instructions
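As a rough illustration of the multi-step constant build geist mentions: ARM64's movz/movk chain (and the analogous lui/addi/slli chains on RISC-V) assembles a 64-bit value 16 bits at a time, which in C terms is just successive shift-and-OR steps. The function below is a sketch, not taken from the log.

    #include <stdint.h>

    static uint64_t build_const(uint16_t a, uint16_t b, uint16_t c, uint16_t d)
    {
        uint64_t v = a;                 /* movz x0, #a            */
        v |= (uint64_t)b << 16;         /* movk x0, #b, lsl #16   */
        v |= (uint64_t)c << 32;         /* movk x0, #c, lsl #32   */
        v |= (uint64_t)d << 48;         /* movk x0, #d, lsl #48   */
        return v;
    }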
<moon-child> you might as well say, x86 decoders get it done, so x86 encoding isn't stupid
<moon-child> which ... sure, maybe, but it could be done a lot better
<geist> i dont think it's stupid. i think given what they have to deal with, it's pretty smart
<geist> the cards you're dealt is 'decode this and do a good job'
<moon-child> the decoders are smart. The encoding is stupid
<geist> the solution is to get creative
<geist> well, sure, but that's like getting angry at the clouds
<moon-child> riscv was designed from the ground up to be a clean isa (though I heard it 'borrows' heavily from mips...); doesn't have the same excuse
<moon-child> they planned in fusion
<geist> it's all tradeoffs. compromises. i think the general idea of having relatively small number of instruction sizes is a *massive* win, which then has some downsides
<moon-child> x86 was supposed to be simple to decode on tiny in-order chips, and then spiraled out of control
<sakasama> It should be noted that providing backwards compatibility over decades is a very heavy burden.
<geist> so you use fusion to fix that. you get some complexity back, but i think the alternatives are probably worse
<geist> riscv is designed to be clean. not perfect
<moon-child> I think being easy to decode is a win too
<geist> it's very much about simplicty first
<moon-child> but surely there is a middle ground (see eg forwardcom, which seems not unreasonable)
<geist> almost too much, which is why all these extensions are in the works to fix the missing gaps
<geist> well, they left in space for 48/64/etc bit instructions, which is probably what you're looking for
<geist> that's just some ways off
* moon-child nods
<geist> riscv is not elegant, or particularly pretty. it is not particularly pretty
<moon-child> 48-bit? I thought everything has to be 4-byte aligned; do they have 48+16 or something?
<geist> eep repeated myself
<geist> yes they do. from the ground up the bottom N bits describe the length of the instruction. that part is pretty clever
<geist> it's basically a utf8 style encoding scheme where the bit pattern at the bottom bits tells you the length
<geist> ie, ...011 is a 32bit instruction. ....00 ....01 ...10 is a 16 bit instruction
<geist> .... 01111 is a 48 bit, ....011111 is a 64bit, etc
<geist> so it burns bits, but mostly in the larger instructions
<geist> and burns more bits as the instructions get bigger, but the decoder can trivially figure the size of the instruction
<geist> (the 48 and 64 bit patterns actually each need one more 1 than i typed there)
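For reference, the low-bit patterns geist is describing, as a minimal C sketch following the 16/32/48/64-bit encodings in the RISC-V spec (longer encodings are left out):

    #include <stdint.h>

    /* the first 16-bit parcel of an instruction is enough to find its length */
    static int rv_insn_len(uint16_t low)
    {
        if ((low & 0x03) != 0x03) return 2;   /* bits[1:0] != 11          -> 16-bit (C ext) */
        if ((low & 0x1c) != 0x1c) return 4;   /* ...11, bits[4:2] != 111  -> 32-bit         */
        if ((low & 0x3f) == 0x1f) return 6;   /* ...011111                -> 48-bit         */
        if ((low & 0x7f) == 0x3f) return 8;   /* ...0111111               -> 64-bit         */
        return -1;                            /* longer / reserved encodings                */
    }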
<moon-child> I mean like
<moon-child> for alignment
<geist> 16 bit alignment
<moon-child> does a 48-bit instruction have to be paired with a 16-bit one?
<moon-child> oh!
<moon-child> I thought 32-bit alignment
<moon-child> ok
<geist> no, unless the cpu doesn't support the 'c' extension, which is the 16 bit extension
<geist> but the c instructions actually really work pretty well. in practice the compiler really gets pretty good use of it, and the code density is substantially higher than ARM64, and pretty close to thumb2
<moon-child> oh so do 16-bit instructions not have to be paired?
<moon-child> I thought they did
<geist> except it can switch between 16 and 32 at will
<moon-child> maybe confusing with another arch
<geist> nope. so that *does* mean the decoder has to handle unaligned 32 bit instructions
<geist> *if* the implementation supports compressed instructions. you *could* try to build a super high performance impl that only does 32bit instructions
<geist> and then burn code density for mega fast decoder. time will tell if the variable sized instructions are a burden with really high speed implementations
<geist> but at the moment it's simply 16 and 32 bit instructions, with a trivial way to determine the boundary. so not *too* bad to build an N way parallel decoder over a 16 or 32 byte decode buffer
<geist> and i wouldn't be surprised if some optimization manuals end up saying things like 'dont have a 32bit instruction that crosses a page boundary as a branch target' or something
<geist> or similarly with instruction fusion stuff, try to keep fused instructions on the same cache line, etc
pmaz has joined #osdev
<kazinsal> gotta be honest, the best part about this stupid 5150 unix gimmick project is that the IBM technical manuals have fully annotated BIOS source listings
<geist> nice
<kazinsal> "how does this work" usually becomes less of a case of hoping someone wrote something down somewhere on usenet in the 80s and more often just looking at the source code
<geist> and there are full schematics too
<kazinsal> so far the most confounding thing has been trying to figure out exactly what this compiler spits out
<clever> looking at armv8 mmu stuff more, and translation granule is more than just the base unit size of pages changing, it redoes how big every level of the paging table is, and how the VA bits spread over levels
gog has joined #osdev
<clever> but reading more, it seems that "block" (aka huge pages?) sizes vary with granule size
<clever> 4k granules, only allow 2mb and 1gig blocks, 16k only allows 32mb, and 64kb only allows 512mb and 4tb!
<clever> makes some sense, a block is just stopping the walk early, so you're limited to however big a range that level and slot cover
GeDaMo has joined #osdev
<clever> but 16kb granules have 64gig slots in level1, and thats not supported
<clever> and armv7 had a bit of a hack, while slots were 1mb, it supported 2mb pages, and you just had to put the same details into 2 slots
<moon-child> 16k only allows 32m? Why?
<clever> moon-child: thats what i want to know
<moon-child> and why does 64k have 16g thingies?
<moon-child> and does 4k have 512g thingies?
<moon-child> why doesn't*
<moon-child> this all seems very inconsistent
<clever> moon-child: from the official armv8 docs: https://i.imgur.com/MMpZEZH.png
<moon-child> bizarre
<bslsk05> ​gist.github.com: gist:8c22de090c5b012ee55062fd07f7b7d8 · GitHub
<clever> where you can see how large a single pagetable slot is, for each granule size
<clever> so 1gig blocks with 4k granules, are just stopping at level 1, and the entire 1gig range that slot covered, is just 1 block
<clever> so logic would say, that 128g and 64g blocks are possible with 16k granules
<clever> might have a unit typo in some of that table
<clever> yeah, lines 11/12, fixed
<clever> typo fixed: so logic would say, that 128tb and 64g blocks are possible with 16k granules
<clever> ah, and i just found the tables i made, in the official docs
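The sizes clever quotes fall out of simple arithmetic: a descriptor is 8 bytes, so a table holds granule/8 entries, and a block at a given level covers granule * entries^(levels below the leaf). A small illustrative C program (mine, not from the log) reproduces the table:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t granules[] = { 4096, 16384, 65536 };
        for (int i = 0; i < 3; i++) {
            uint64_t g = granules[i];
            uint64_t entries = g / 8;              /* 8-byte descriptors per slot */
            printf("%3lluK granule: %4llu entries/table, L2 block %3lluM, L1 block %4lluG\n",
                   (unsigned long long)(g >> 10),
                   (unsigned long long)entries,
                   (unsigned long long)((g * entries) >> 20),
                   (unsigned long long)((g * entries * entries) >> 30));
        }
        return 0;
    }

This prints 2M/1G for 4K granules, 32M/64G for 16K, and 512M/4096G for 64K; as discussed above, the architecture only permits a subset of those level-1 values as real block mappings.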
<kof123> what is scary is the 5150 looks to have been "open" (minus bios code, but perhaps didnt stop anyone)
<kof123> wikipedia says they assembled a team "do the opposite of what ibm normally does" lol
<kazinsal> yeah these machines are quite nice to work with
<kof123> you have a real one ?
<kazinsal> unfortunately no, though I plan on picking one up soon
<kazinsal> I've got most of june off work so I'll likely end up spending a bit of that in seattle, which is generally a good place to pick up some vintage machines
<kazinsal> drive down, get some computers, say hi to some folks, maybe see a mariners game
<kof123> im setting up a silly...'kernel' finally. this just means "reflective c" have function pointers and describe signatures, ditto for "datums" (for c, just call it a non-local (inside function) variable). dont know how far i will get, but i will allow near/far/huge lol optional. i think i want 1M-2M at least though
<kof123> so 5051 looks to max out at 384 + 256 motherboard ram == 640k
<kof123> *50
<kazinsal> yeah, the later motherboards can take 256K on the board
<nortti> you can also get plug-in cards that allow you to bank-switch memory in the I/O memory area
<kazinsal> the early ones are 16-64K
<clever> moon-child: ahhhh!, bit 52 of any pagetable entry, is a hint that this is part of a contiguous block of pages
<clever> moon-child: and the CPU is free to merge TLB entries for that
<clever> so, while you may not directly have 1gig pages, you can still map a 1gig aligned chunk of physical memory with smaller pieces
<clever> and the cpu might take your hint, and still turn it into a 1g TLB entry
<moon-child> oh neat
<moon-child> do any uarches actually take advantage of that?
slidercrank has quit [Remote host closed the connection]
slidercrank has joined #osdev
<clever> reading more....
<kazinsal> the main differences between the boards on the 5150 and 5160 are that the 5160 has more expansion slots (with slot 8 doing some weird stuff on some signal lines for the expansion unit), the ability to use larger roms, and a bit of rerouting of some pins for peripherals to always go to the external bus instead of the internal bus
<clever> with 4k granules, you can only set that bit when you have 16 pages? blocks? in a row (and 16 aligned within the table), that are contiguous in both physical and virtual
<kazinsal> the 5160 came with a xebec disk controller and accompanying fixed disk
<clever> so a 64kb chunk can be mapped as 16*4kb, but the cpu can fuse it back into a 64kb chunk
<clever> assuming it works on blocks as well, 16*2mb->32mb, and 16*1g->16g
<clever> not as flexible as i assumed
<clever> with a 16kb granule, in level2(32mb slots) it requires 32 contiguous blocks, giving a 1gig page
<clever> moon-child: i think what is happening, is that bit 52 is a promise to the cpu, that while this slot points to a 32mb block, it's part of a 1gig contiguous region, so you can immediately upgrade this 32mb block to a 1gig TLB entry
<moon-child> ohh
<moon-child> so it just ignores the other entries
<clever> yeah
<clever> so you can get 1gig pages, even though it covers 32 slots in the paging table
<clever> but its just a hint, and the cpu is free to ignore this, and just treat it as 32 * 32mb pages
<clever> armv7 had something similar, where you could create 2mb pages, despite having only 1mb slots in the paging table, and having to map the upper and lower halves
<clever> you're just giving the cpu a promise, that the other half is easy to guess, and it can skip reading the entries
<clever> and with just bit masking, it can expand the block/page, because you promised that it's aligned
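A rough sketch of what setting that hint looks like for the 4K-granule case discussed above (16 adjacent level-3 entries; the bit positions follow the ARMv8-A VMSA descriptors, everything else is illustrative):

    #include <stdint.h>

    #define PTE_VALID    (1ULL << 0)
    #define PTE_PAGE     (1ULL << 1)   /* level-3 "page" descriptor type */
    #define PTE_AF       (1ULL << 10)  /* access flag                    */
    #define PTE_CONTIG   (1ULL << 52)  /* contiguous hint                */

    /* Map a 64K-aligned, physically contiguous range as 16 x 4K pages with the
     * contiguous hint set, so the TLB is allowed to fuse them into one entry.
     * idx and pa must both be aligned to the 16-entry / 64K boundary. */
    static void map_contig_64k(uint64_t *l3, unsigned idx, uint64_t pa, uint64_t attrs)
    {
        for (unsigned i = 0; i < 16; i++)
            l3[idx + i] = (pa + (uint64_t)i * 4096) | attrs
                        | PTE_AF | PTE_PAGE | PTE_VALID | PTE_CONTIG;
    }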
bnchs has quit [Remote host closed the connection]
xenos1984 has joined #osdev
bnchs has joined #osdev
pmaz has quit [Quit: Konversation terminated!]
danilogondolfo has joined #osdev
<kof123> well im not requiring an mmu at the moment...so will try rwxrwxrwx uid and gid for "functions" and "datums". so i am not too worried about function size or datum size, just overhead .... and whatever mechanisms to offer some level of protection, however slow (software). on a big machine...will eventually "jit" :D
<kof123> need to get further and see what mechanisms develop or look natural :D
<kof123> i mean, in practice, will probably just "god mode" at first, but fleshing things out
<kof123> dont want to litter ifdefs for "small mode" at the moment
<kof123> need to see what concept of "process" presents itself in line with such things
slidercrank has quit [Quit: Why not ask me about Sevastopol's safety protocols?]
danilogondolfo has quit [Ping timeout: 250 seconds]
danilogondolfo has joined #osdev
zxrom has quit [Quit: Leaving]
bradd has quit [Ping timeout: 268 seconds]
rurtty has joined #osdev
<mrvn> geist: With a 64bit instruction there could be a ld <reg>, <imm44> or so
<mrvn> The length encoding burns a lot of bits (6) already.
<mrvn> Would be nice to load a 48bit constant. That would cover all of the address space (for 4 level page tables)
<mrvn> clever: On ARM7 the 16k page support works by mapping 4*4k pages contiguously and the cpu may or may not turn that into a 16k tlb entry. Same thing.
<clever> armv7 also had confusing names like section and stuff, to refer to differently sized blocks of ram
<clever> while armv8 seems to just refer to it as "1gig block" or "32mb block", and its always a single slot in a paging table
orccoin has joined #osdev
<mrvn> section/page, block/page. same difference
<clever> but i think armv7 had multiple names, each for a different size
<clever> but armv8 just uses block for anything that terminates the walk early
Left_Turn has joined #osdev
freakazoid332 has joined #osdev
clever has quit [Ping timeout: 260 seconds]
clever has joined #osdev
orccoin has quit [Read error: Connection reset by peer]
rurtty has quit [Quit: Leaving]
slidercrank has joined #osdev
gareppa has joined #osdev
gareppa has quit [Client Quit]
_xor has joined #osdev
bgs has joined #osdev
xenos1984 has quit [Ping timeout: 265 seconds]
xenos1984 has joined #osdev
zxrom has joined #osdev
xenos1984 has quit [Ping timeout: 260 seconds]
xenos1984 has joined #osdev
qubasa has joined #osdev
heat has joined #osdev
heat has quit [Remote host closed the connection]
heat has joined #osdev
<heat> kernal
* lav hands heat a cob of maize
<Ermine> kernal kernel
<lav> i hate mondays
<zid`> not petrol
<sakasama> s/mondays/reality/
<lav> hehe i made a keybind to send a random cat sound
<heat> can i be that guy that says kernal now
dude12312414 has joined #osdev
node1 has joined #osdev
<geist> typing on your C64
<zid`> heat: You can do it as much as you like, the question is how much respect will anybody have left for you?
<mjg> hu sayz kernal
<mjg> MOFER
<bnchs> heat: can i be the guy who says colonel
<mjg> maybe i'm gonna lift the scheduler from onyx
<geist> actually am kinda interested in when and where the term originated
<mjg> kernel?
<geist> yah
<kof123> well it is covered with a shell
<geist> i mean obviously it's a word
<mjg> i'm not going to do my usual geezer bit
<mjg> i will note there has been funny ideas how to name stuff
<geist> and obviously it applies, but presumably someone started using it as a term for the core system bits at some point
<mjg> so it's probably nothing sane
<nikolar> doesn't c64 use kernal
<geist> nikolar: yeah that's why i was mentioning c64 earlier
<mjg> personally i like that 'coredump' is still the term today
<geist> the KERNAL rom has the ore bits
<geist> core
<mjg> do you remember how 'slab' came to be?
<mjg> 's all jokes and childhood stories all the way down
<geist> probably just bonwick's article
<geist> iirc it also uses a bunch of gun turret terminology
<geist> magazines, etc
<mjg> and rounds
<mjg> i heard bonwick has strong political views on gun ownership
<zid`> I own strong political views on people who have strong political views on gun ownership
<mjg> i own guns
<zid`> I have access to guns, I do not own any
<geist> lets not go there
<zid`> mjg: Got any AA?
<mjg> only landmines, sorry!
<zid`> bah
<zid`> people never have the exact hw you need
<mjg> well you can try to throw them
<zid`> "got a 10mm wrench?" "No, just half inch"
<mjg> do you even lift?
<zid`> "got any anti-aircraft canons?" "No just land mines"
<zid`> always the same
<mjg> you should have asked me last week mate
<mjg> and a plasma rifle
<zid`> If I asked an american for a 50/127ths wrench would he give me the right thing
<geist> just means we have to have both sets
<zid`> I want a specially marked 50/127ths now
heat has quit [Remote host closed the connection]
heat has joined #osdev
GeDaMo has quit [Quit: That's it, you people have stood in my way long enough! I'm going to clown college!]
node1 has quit [Quit: Client closed]
radens has joined #osdev
<radens> hello! Is there an irc channel which has good advice about makefile best practices?
<zid`> workingset is about the only posixy channel I know
<radens> I have a bunch of targets like libx86.a and libcrt.a which have different inputs from variables like OBJ_LIBPC and OBJ_LIBCRT but run the same ar command
<radens> thanks
danilogondolfo has quit [Quit: Leaving]
<klange> radens: make them use the same variable and then set it for each target with `libx86.a: ARFLAGS=whatever`
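A minimal sketch of klange's suggestion, reusing the variable names from radens' description (the flags and recipe are placeholders): per-target variables plus $^ let one ar rule serve every archive.

    libx86.a: ARFLAGS = rcs
    libcrt.a: ARFLAGS = rcs

    libx86.a: $(OBJ_LIBPC)
    libcrt.a: $(OBJ_LIBCRT)

    libx86.a libcrt.a:
    	$(AR) $(ARFLAGS) $@ $^

Since $^ expands to each target's own prerequisites, the single recipe archives the right object list for whichever library is being built.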
FreeFull has joined #osdev
<heat> makefile sucks
<heat> and this is a fact
<heat> Deal With It
<zid`> quick, write a new build tool
<zid`> it's the only way
<heat> you mean ninja?
<heat> it's already written!
<sham1> Why do that when djb already solved builds with redo
<moon-child> clearly the solution is cmake
<zid`> why use ninja when we already have antcmakemeson
<heat> meson is nice
<zid`> bmake!
<heat> bmake is nicer than GNU make apparently
<mjg> lol
<zid`> someone needs to make 'omake'
<mjg> i'm not gonna rant today
<zid`> that's 'freebie' in japanese
<heat> watch mjg write a 200 page essay on how BSD make is pessimal
<sham1> gradle, anyone? It can build C stuff, after all
<mjg> heat: nope!
<sham1> Or on that note, maven
<mjg> heat: it is slow af tho, but maybe i'll rant about it some other day
<heat> meson is nice, cmake is pain but tolerable pain at that
<mjg> 'geezer' would make a great name for a build system
<heat> gn is ok if you have a whole team to handle your build system
<heat> bazel is ok but it's in fucking java and way too large
<sham1> Oh, so it's a build system that only works for Google
<heat> yes
<moon-child> idk pessimal--doesn't matter at my scale--but gmake is nicer to use than bsd make
<sham1> Nothing wrong with a bit of java every once in a while
<mjg> jessimal
<heat> like gn is super versatile and you can do whatever with it, but it requires you to actually write the useful stuff
bgs has quit [Remote host closed the connection]
<heat> the base targets are not very useful if you don't stack them with hundreds of lines of boilerplate
<heat> and doing things like "install onto /" is pretty much impossible
moberg has quit [Ping timeout: 255 seconds]
<Ermine> Btw do bsds all have their own makes?
moberg1 has joined #osdev
<mjg> Ermine: no
<heat> what if we merge all the BSDs into one BSD
<heat> good idea yes?
<mjg> you like going back in time
<heat> instead of 4 crap systems we get 1 super crappy system
<Ermine> cube the pessimality
<Ermine> heat: y 4?
<mjg> dfly
<Ermine> Ah indeed
<Ermine> forgor that one
<heat> i am counting all the BSDs with at least 1 user
<mjg> in that case there is at least 5
<heat> that said, it's just 3, freebsd doesn't count here
<mjg> my dog would use it if i had one
heat has quit [Read error: Connection reset by peer]
heat has joined #osdev
jtbx has joined #osdev
<heat> god dang it is my router running BSD
<heat> <heat> yes, your cat uses RHEL
<heat> <heat> your turtle uses Linux Mint
<heat> <heat> (both are freebsd core team members)
xenos1984 has quit [Read error: Connection reset by peer]
<mjg> this would not happen on discord
<Ermine> heat: how come?
<heat> how come what
<Ermine> heat: router with BSD
<Ermine> heat: replace it with sortix or onyx and you'll be better off
<mjg> he is making a poor taste joke that it had to be bsd to cause him to disconnect
<mjg> i'm gonna call my homies to cut off your netflix
<Ermine> And I do poor advertisements
<heat> mjg, nuh uh, you can't, i know linux torval, he'll cut off your amazon
<heat> mjg, btw what's the origin on slab?
vdamewood has quit [Remote host closed the connection]
vdamewood has joined #osdev
<mjg> heat: call bonwick or read his paper
<heat> reading? booooooriiiiiiiiiing
<heat> can you make a tiktok explaining it?
<mjg> dancing before i start talking will take up the whole video tho
<mjg> so you wont find out from it
<Ermine> TLDR version when
<heat> it doesn't say shit mjg
<geist> the paper?
<geist> i assume it just simply introduces it
<heat> yeah the paper doesn't really talk about why slabs are called slabs
<geist> here's a data structure, it's called a slab
<geist> probably just the notion that it's like some chunk of memory that you cut up
<geist> like a slab of rock
* Ermine slaps a slab
<geist> i suppose it could have some foundation connotation, like how slab is also used
<geist> a concrete slab you build a house on, etc
<mjg> huh
<mjg> i distinctly remember reading the story by bonwick, maybe i mistakenly thought it was in that paper
<mjg> it is some bullshit from his childhood tho
<mjg> some kid was referring to something as a slab and he thought it would be a great fit
<heat> y'all want to see the best strchr and strrchr implementations ever?
<heat> it's reallllly high quality stuff here, OPTIMAL as mjg would say
<bslsk05> ​gist.github.com: genius_strchr.c · GitHub
<zid`> the best strchr implementation is to write it naively and have the compiler replace it with its builtin
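The naive version zid` has in mind looks something like the sketch below (plain C; a compiler with builtins will typically recognize the pattern or substitute its own strchr anyway):

    #include <stddef.h>

    char *naive_strchr(const char *s, int c)
    {
        for (;; s++) {
            if (*s == (char)c)
                return (char *)s;   /* also finds the terminating NUL when c == '\0' */
            if (*s == '\0')
                return NULL;
        }
    }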
<heat> AsciiStrStr = strstr as you may expect
<zid`> where did you get this from heat
xenos1984 has joined #osdev
<geist> well, slab is nice because it's an easy short word to type
<geist> so in general if you can name something you're gonna type a million times after something short, that's a win
<heat> zid`, tianocore mailing list
<mjg> heat: ok mjg@
<heat> i do guess this is maximally safe
<heat> maybe openbsd should pick this up
<mjg> timingsafe_strchr
[itchyjunk] has joined #osdev
Left_Turn has quit [Read error: Connection reset by peer]
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
slidercrank has quit [Ping timeout: 265 seconds]
[itchyjunk] has quit [Remote host closed the connection]