klange changed the topic of #osdev to: Operating System Development || Don't ask to ask---just ask! || For 3+ LoC, use a pastebin (for example https://gist.github.com/) || Stats + Old logs: http://osdev-logs.qzx.com New Logs: https://libera.irclog.whitequark.org/osdev || Visit https://wiki.osdev.org and https://forum.osdev.org || Books: https://wiki.osdev.org/Books
knusbaum has quit [Ping timeout: 248 seconds]
knusbaum has joined #osdev
pretty_dumm_guy has quit [Quit: WeeChat 3.5]
gildasio has quit [Quit: WeeChat 3.5]
Ali_A has joined #osdev
knusbaum has quit [Ping timeout: 256 seconds]
Vercas has quit [Remote host closed the connection]
Vercas has joined #osdev
dh` has joined #osdev
knusbaum has joined #osdev
Likorn has quit [Quit: WeeChat 3.4.1]
knusbaum has quit [Ping timeout: 276 seconds]
gxt has quit [Remote host closed the connection]
gxt has joined #osdev
knusbaum has joined #osdev
h4zel has quit [Ping timeout: 246 seconds]
nyah has quit [Ping timeout: 248 seconds]
knusbaum has quit [Ping timeout: 246 seconds]
knusbaum has joined #osdev
<klys> geist, what's the latest on your threadripper instability
knusbaum has quit [Ping timeout: 260 seconds]
* klys was out hiking this evening
skipwich has quit [Quit: DISCONNECT]
knusbaum has joined #osdev
skipwich has joined #osdev
knusbaum has quit [Ping timeout: 248 seconds]
knusbaum has joined #osdev
h4zel has joined #osdev
Jari-- has joined #osdev
knusbaum has quit [Ping timeout: 276 seconds]
smeso has quit [Quit: smeso]
knusbaum has joined #osdev
gog has quit [Ping timeout: 248 seconds]
smeso has joined #osdev
Vercas6 has joined #osdev
Vercas has quit [Ping timeout: 240 seconds]
Vercas6 is now known as Vercas
Jari-- has quit [Ping timeout: 276 seconds]
Ali_A has quit [Quit: Connection closed]
sebonirc has quit [Read error: Connection reset by peer]
sebonirc has joined #osdev
meisaka has quit [Ping timeout: 256 seconds]
meisaka has joined #osdev
Ali_A has joined #osdev
sebonirc has quit [Ping timeout: 276 seconds]
<Ali_A> Just wondering how do I test if the processor successfully got into protected mode? I did load a gdt, enabled PE in cr0, and did a far jump, is there any way to test if this was successful or something bad  happened (like an entry in gdt was wrong or something)
<kazinsal> if your code is executing after the far jump, it worked
sebonirc has joined #osdev
<kazinsal> if the GDT entry is invalid then it'll most likely triple fault
<Ali_A> the problem is I can not be sure how to test if I can execute the code after that (since after getting into 32 bit mode, I can not use bios
<Ali_A> so I can not print to the screen
<Mutabah> You can use the serial port
<Mutabah> or, if VGA mode/emulation is present, you can just draw directly to 0xB8000
<Ali_A> I will try and see if I can draw directly to the screen
<Ali_A> thanks!
<kazinsal> yeah blatting a few characters to the top left of the screen is usually my first test for something like that
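A minimal sketch of the 0xB8000 test Mutabah and kazinsal describe, assuming the standard 80x25 VGA text mode (one character byte plus one attribute byte per cell); the function name is just illustrative:

    #include <stdint.h>

    /* write a string to the top-left of the VGA text buffer -- if this shows
     * up on screen, the far jump into 32-bit protected mode worked */
    static void vga_print(const char *s)
    {
        volatile uint16_t *vga = (volatile uint16_t *)0xB8000;
        for (int i = 0; s[i] != '\0'; i++)
            vga[i] = (uint16_t)(uint8_t)s[i] | (0x0F << 8); /* white on black */
    }

    /* e.g. call vga_print("PM OK"); right after the far jump lands in 32-bit code */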
<kingoffrance> bochs and qemu have 0xE9, you can just "out" a byte to it
<Mutabah> Oh, does qemu have E9 now?
<Mutabah> It didn't last time I looked for it (... granted that was AAAGES ago)
<kingoffrance> qemu has a "-debugcon dev" option, no idea if that is ancient (option name/syntax changed) or what the "defaults" are etc. :)
metabulation has quit [Ping timeout: 256 seconds]
<kingoffrance> i wonder why no one wires it so you can "read" from somewhere :D
<kingoffrance> i mean, seems a simple patch to code if you wanted to
<kazinsal> yeah, it's been there a while but it's an optional isa device
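For the 0xE9 hack kingoffrance mentions, something along these lines should work under bochs or qemu (e.g. with -debugcon stdio on the qemu command line); the outb helper assumes GCC/Clang inline asm:

    #include <stdint.h>

    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ __volatile__("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    /* bytes written to port 0xE9 appear on the emulator's debug console */
    static void e9_puts(const char *s)
    {
        while (*s)
            outb(0xE9, (uint8_t)*s++);
    }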
<Mutabah> ah.
<Mutabah> Eh, serial port is only a small amount of extra effort
<Mutabah> with the advantage of working on real hardware
<kazinsal> yeah
<kazinsal> a quick serial driver isn't much code and it's something that'll work on any platform
<kazinsal> and any hypervisor
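A rough sketch of the kind of "quick serial driver" kazinsal means, for COM1 at I/O base 0x3F8 with the standard 16550 register layout (reusing the outb helper above, with inb added here):

    #include <stdint.h>

    static inline uint8_t inb(uint16_t port)
    {
        uint8_t val;
        __asm__ __volatile__("inb %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    #define COM1 0x3F8

    static void serial_init(void)
    {
        outb(COM1 + 1, 0x00); /* disable UART interrupts */
        outb(COM1 + 3, 0x80); /* DLAB on to set the baud divisor */
        outb(COM1 + 0, 0x01); /* divisor low byte: 1 -> 115200 baud */
        outb(COM1 + 1, 0x00); /* divisor high byte */
        outb(COM1 + 3, 0x03); /* 8 data bits, no parity, 1 stop bit */
        outb(COM1 + 2, 0xC7); /* enable and clear FIFOs */
    }

    static void serial_putc(char c)
    {
        while (!(inb(COM1 + 5) & 0x20)) /* wait for transmit holding reg empty */
            ;
        outb(COM1, (uint8_t)c);
    }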
<Ali_A> okay so it does work, after a far jump
<Ali_A> which I assume means I can execute 32-bit code
<Ali_A> however, I noticed when I try to load the SS segment with mov ax, 0x16 ; data segment offset in the gdt is 0x16 followed by `mov ss, ax` gdb somehow crashes or something
<kazinsal> selectors are aligned to 8 byte offsets -- 0x16 is not divisible by 8
<Ali_A> kazinsal, u r a genius, thanks!
<Ali_A> that was just meant to be 0x10 (16....)
<kazinsal> 👍
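To spell out kazinsal's point: a selector is the descriptor's byte offset into the GDT plus the table-indicator and RPL bits, so valid code/data selectors land on multiples of 8. With the usual null + code + data layout:

    /* selector = (descriptor index << 3) | (table indicator << 2) | RPL */
    #define GDT_SELECTOR(index, rpl)  (((index) << 3) | (rpl))

    #define KERNEL_CODE_SEL  GDT_SELECTOR(1, 0)  /* 0x08 */
    #define KERNEL_DATA_SEL  GDT_SELECTOR(2, 0)  /* 0x10 -- not 0x16 */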
xenos1984 has quit [Quit: Leaving.]
Likorn has joined #osdev
<mrvn> there are 10 kinds of people
<clever> those that understand binary, and those that dont
<Ali_A> mrvn 0b10 kind of people
gog has joined #osdev
<Griwes> and those who didn't know this joke was in ternary
Ali_A has quit [Quit: Connection closed]
GeDaMo has joined #osdev
Celelibi has quit [Ping timeout: 248 seconds]
SGautam has joined #osdev
mctpyt has joined #osdev
nyah has joined #osdev
xenos1984 has joined #osdev
xenos1984 has quit [Remote host closed the connection]
xenos1984 has joined #osdev
xenos1984 has quit [Remote host closed the connection]
xenos1984 has joined #osdev
xenos1984 has quit [Remote host closed the connection]
xenos1984 has joined #osdev
xenos1984 has quit [Remote host closed the connection]
xenos1984 has joined #osdev
xenos1984 has quit [Remote host closed the connection]
xenos1984 has joined #osdev
pretty_dumm_guy has joined #osdev
xenos1984 has quit [Client Quit]
srjek has quit [Ping timeout: 240 seconds]
Burgundy has joined #osdev
fwg_ has joined #osdev
fwg has quit [Ping timeout: 260 seconds]
fwg_ has quit [Quit: .oO( zzZzZzz ...]
gildasio has joined #osdev
fwg has joined #osdev
fwg has quit [Quit: .oO( zzZzZzz ...]
fwg has joined #osdev
fwg has quit [Quit: .oO( zzZzZzz ...]
Dyskos has joined #osdev
fwg has joined #osdev
dude12312414 has joined #osdev
Vercas has quit [Quit: buh bye]
fwg has quit [Quit: .oO( zzZzZzz ...]
Vercas has joined #osdev
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
codez has joined #osdev
<sbalmos> been spending an interesting past few days reading some of the redox 0.7 code. just wish there was better architectural design documentation (I know, same can be said of all hobby OSs)
Piraty has quit [Quit: -]
Piraty has joined #osdev
srjek has joined #osdev
fwg has joined #osdev
mctpyt has quit [Ping timeout: 240 seconds]
Vercas has quit [Remote host closed the connection]
Vercas has joined #osdev
gxt has quit [Remote host closed the connection]
gxt has joined #osdev
fwg has quit [Quit: .oO( zzZzZzz ...]
metabulation has joined #osdev
fwg has joined #osdev
metabulation has quit [Ping timeout: 276 seconds]
Mutabah has quit [Ping timeout: 276 seconds]
Teukka has quit [Read error: Connection reset by peer]
Teukka has joined #osdev
h4zel has quit [Ping timeout: 276 seconds]
Celelibi has joined #osdev
nur has quit [Quit: Leaving]
nur has joined #osdev
h4zel has joined #osdev
Mutabah has joined #osdev
<mrvn> you need to invest in some documentation driven design :)
SGautam has quit [Quit: Connection closed for inactivity]
Ali_A has joined #osdev
bradd has quit [Ping timeout: 248 seconds]
flx-- has quit [Ping timeout: 272 seconds]
bradd has joined #osdev
FatalNIX has quit [Quit: Lost terminal]
<Griwes> I'm in a love/hate relationship with the osdev cycle between "things you wrote work so well you make much faster progress than you expected" and "the progress you've made reveals extremely fundamental bugs in the core of the OS"
<Griwes> For the past few sessions I've been at the former, now I'm at the latter
* Griwes shakes fists at how sysret leaves the segment registers in a hugely messy state requiring irq handling to adapt
<Griwes> (it also turns out that I still have some bugs in my avl tree, oopsie)
ptrc has quit [Remote host closed the connection]
xenos1984 has joined #osdev
ptrc has joined #osdev
ptrc has quit [Remote host closed the connection]
ptrc has joined #osdev
dude12312414 has joined #osdev
Likorn has quit [Quit: WeeChat 3.4.1]
genpaku has quit [Ping timeout: 240 seconds]
dude12312414 has quit [Remote host closed the connection]
dude12312414 has joined #osdev
genpaku has joined #osdev
Likorn has joined #osdev
Likorn has quit [Client Quit]
srjek has quit [Ping timeout: 248 seconds]
wootehfoot has joined #osdev
Likorn has joined #osdev
Ali_A has quit [Quit: Connection closed]
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
<geist> klys: re: ryzen instability. I did the first step: move the machine to a place where i can work on it, unplug the 10gbe (but leave it in). ran memtest for a few hours
<geist> then booted it and let it run. ran `watch -n1 'dmesg | tail -40'` to see if something showed up on the log just before it crashed
<geist> nope. lasted about a day
<geist> so next thing i'll do is start pulling out cards
<geist> i am suspecting the off brand mega cheap vid card
<geist> that i had to install because of the 10gbe card being pci-e x4 which used up the x16 slot that the old vid card was in
<clever> geist: ive started to notice the effects of your c++ support, vc4-elf-c++filt takes up a large chunk of my build times now!
<clever> while generating lk.elf.debug.lst
<bslsk05> ​github.com: Release Kuroko v1.2.5 · kuroko-lang/kuroko · GitHub
<clever> it seems to be in the all: target, and the only way to skip it is to specify just the .elf as a target?
<klange> i finally got around to implementing steps in slices in kuroko, alongside switching over to slice objects and matching python3 on dropping __getslice__, et al. in favor of passing slice objects to __getitem__.
<geist> clever: really? like how long, seconds?
<clever> geist: 7 seconds
<geist> ah
<geist> well, suppose you can add a switch to turn it off (or on)
<clever> `make PROJECT=bootcode-fast-ntsc build-bootcode-fast-ntsc/lk.bin` works around it, but now i have to specify the project twice
* geist nods
<geist> well like i said it'd be easy to remove it from the all, or make it a separate target that is then optionally included (or not) based on a switch
<geist> iirc you're using a pretty old bulldozer cpu right?
<clever> yeah, fx-8350
<clever> but i'm also not using any c++ code currently, so the c++filt is pointless on my builds
<clever> oh, what if you just scanned the list of sources, and auto-disabled it?
<geist> do you use the .lst files or whatnot?
<clever> plus a flag to force it off anyways
<clever> i do use the .lst files any time i need to debug a fault
<geist> i see
<clever> and .debug.lst sometimes
<geist> well anyway you're a smart person. go disable it
<geist> i'm surprised it's substantially slow, usually that part is a blip compared to the cost of the disassembly in the first place
<geist> but... dunno
<geist> i do have very fast machines here so i tend to not see it
<clever> yeah, i can always just edit engine.mk or build.mk directly
<geist> does it only show up on VC sources?
<geist> possible your toolchain was built -O0 or something?
Mutabah has quit [Ping timeout: 256 seconds]
<geist> also surprised it runs slower with C++ symbols present, vs just the need for it to scan the file in the first place
<geist> seems that the piping and the scanning would be the slow part, and thus proportional to the size of the input
<clever> PROJECT=qemu-virt-arm64-test rebuilds in 2 seconds, from touching arch.c
<clever> so its probably a flaw in the vc4 gcc
<geist> yah might want to double check it's not compiled with -O0. i've had that problem once before
<geist> ran for like 6 months on a project at work before discovering that whoever built the toolchain did it -O0 -g
<zid> modern -g doesn't actually slow down binaries does it?
<j`ey> do you even need a specific vc4 c++filt?
<j`ey> oh, to read the object files you do
<j`ey> (maybe?)
<geist> probably not. actually really -C i think on binutils is all you actually need for most of this
<geist> i think the notion of always piping output of objdump through c++filt as a separate step is just old habit of mine
<zid> we have -Og now though
<zid> which does what you 'want' when you think of -g slowing down binaries
<geist> primarily because i dont think -C always existed and i had some theory that piping allows for a little bit of parallelism
<clever> i could probably also cheat, and use the host c++filt instead of the vc4 cross compiler c++filt
<clever> they cant differ that much?
<geist> j`ey: probably not, i just included that for a complete description
<geist> probably not at all
<geist> or just dont include it. add a switch to the build system to turn it off or something and push up a patch
<clever> yeah
<zid> what is c++filt, anyway
<geist> again though, if it's substantially slower than an arm build either a) your toolchain is not compiled properly, you should look at that or b) you're building something substantially different
<geist> like a 2GB image file or something
<j`ey> zid: demangler
<geist> zid: basically whatever you pipe through it, it looks for c++ mangled names and replaces them inline
<zid> ah
<geist> so it's nice to take the output of a disassembly, or symbol file, etc to demangle stuff
<zid> The upgraded version is called IDA, it looks at the machine code and spits out C
<geist> the LK build system basically generates a full suite of secondary files for this after linking, and runs all of them through c++filt
<geist> i was sad when we turned it off for zircon, but the much larger build there was really starting to take a substantial amount of time to disassemble/demangle
<geist> and basically zero people cared about the files but me, which IMO is frankly an issue
<geist> but i can't force folks to look at disassembly
<clever> lol
<zid> I'd troll C++ more but I'm having trouble concentrating with how hot these noodles are
<geist> i can only teach them the virtues of following along
<geist> zid: i'm aware of your trolling. i could sense you as a shark, circling around the conversation, trying to find the right spot to strike
<mrvn> clever: c++ name mangling isn't standardized (or at least prior to modules it isn't).
<zid> I'm doing lamaze breathing
<geist> mrvn: yeh but for a given version of gcc using a given c++abi version it probably is at least the same across arches
<geist> ie, no arch specific parts to it
<mrvn> I wouldn't bet on it. And for vc4 you don't have gcc output.
<geist> and it is at least standardized *enough* that it's not been a problem recently. there were a few ABI breaking changes in the past but i think that's mostly gone
<geist> well, it can't change much or you'd have actual name resolution linking issues
<mrvn> worst case you are probably left with a few non-demangled parts.
<bslsk05> ​itszor/gcc-vc4 - Port of GCC to the VideoCore4 processor. (6 forks/4 stargazers/NOASSERTION)
<mrvn> clever: is that what the firmware uses?
<geist> anyway you can generally tell if something is -O0 by just looking at the disassembly of it
<clever> mrvn: nope
<mrvn> clever: see
<geist> if it seems to be extraneously moving things to the stack and back that's a sure sign
<clever> the official firmware uses a metaware compiler from synopsys, that is behind NDA
<clever> but i only plan to use c++filt on code produced by the open gcc fork
<clever> the official firmware has symbols stripped anyways, so it wouldnt help
Likorn has quit [Quit: WeeChat 3.4.1]
<geist> clever: side note when you compile are you using -j switches?
<geist> ie, make -j4 or whatnot? it should parallelize these things
<clever> i forget about that often
<clever> and 99% of the time, only 1 file has changed, so there is little benefit
<mrvn> geist: My issue is that ABIs for different archs can mangle differently. Within an ABI it has to be standard or as you say linking fails.
<clever> i only remember they exist when i change a MODULES +=, and it takes a while
<mrvn> geist: so using the host c++filt might give different result than the cross c++filt
<geist> mrvn: well, you might be right, but i have seen no proof of this
<geist> if it were different per arch it seems like each arch's ABI guide would have a section for it
<geist> maybe it does though, i'll ask folks at work today
<geist> i suppose it could for things like AVX512 registers or whatnot: ie, specialized types based on arch
<mrvn> geist: can't say that I have an example
<mrvn> some windows and unix formats could differ
<geist> *thats* absolutely true
<geist> i should have been more clear: within the same ABI family (sysv, etc) it shouldn't change
<geist> but anyway, moot point. i wouldn't recommend doing it anyway
<mrvn> c++filt probably has the rules for all the known formats in it. It doesn't see if the binary was elf or pe or whatever and still has to work.
<geist> yah. and anyway like i said objdump and whatnot has a -C flag for it now, which runs the filtering inline
<geist> would be interesting to have clever time that
<geist> an external c++filt vs adding -C
<clever> *looks*
<zid> does -C just pipe it through popen to c++filt though?
<clever> [nix-shell:~/apps/rpi/lk-overlay]$ time vc4-elf-objdump -dC build-bootcode-fast-ntsc/lk.elf > /dev/null
<clever> real 0m0.093s
<geist> dunno, presumably internally since it's built out of the same thing
<clever> its virtually the same runtime as without -C
<clever> real 0m0.092s
<zid> If it changes the timing what do you even conclude there
<clever> down into the noise
<mrvn> can't see how writing and reading back the name could save time really.
<geist> interesting, double check that it's actually demangling things
<clever> let me pop a .cpp file into the build...
<mrvn> demangling should be linear time
<zid> not unless your name lookups are O(1)
<geist> mrvn: yeah that's my thought, mostly linear based on the input size too
<geist> which is why it feels highly odd that it'd take like 7 seconds
<mrvn> zid: what name lookups?
<geist> clever: time the pipe too
<geist> double verify it's actually taking 7 seconds
<zid> it's doing a string lookup on input tokens to output tokens isn't it
<geist> or make sure you're not misreading one of them. also try the debug version of it, that usually takes substantially longer
<zid> I'd expect log n anyway though which will basically be O(1)
<zid> for less than a few hundred thousand symbols
<mrvn> zid: my guess would be reading char by char till it finds something that could be a mangled name. Then it demangles and if it works it outputs the demangled string, otherwise the original.
<geist> alas i gotta go. the meetings are starting. will be occupied for most of the rest of the afternoon
<geist> MEETINGS ARE THE BEST
<geist> (been watching the show Severance lately, it's *fantastic*)
<zid> oh yea could be doing that, I don't know enough about how reversible the mangling is to know if it can do that
<mrvn> zid: O(input size). Can't be faster than touching every char once.
<zid> or if it has to LUT them
<clever> 800044da: ff 9f ea ff bl 800044ae <test::foo(int)>
<clever> that is in the default lk.elf.debug.lst
<clever> [nix-shell:~/apps/rpi/lk-overlay]$ time vc4-elf-objdump -dC build-bootcode-fast-ntsc/lk.elf | grep test::
<clever> 800044ae <test::foo(int)>:
<clever> and its also in the -dC output
<clever> 800044da: ff 9f ea ff bl 800044ae <test::foo(int)>
<mrvn> what's the mangled name?
<clever> real 0m0.101s
<geist> cool, so now time it piped
<clever> 800044da: ff 9f ea ff bl 800044ae <_ZN4test3fooEi>
<clever> without -C, it turns into this
<mrvn> $ echo foo bar _ZN4test3fooEi baz | c++filt
<mrvn> foo bar test::foo(int) baz
<geist> yah at some point i actually grokked the format. basically _ZN is i think the return part, then each thing after that is i think a length, name, and code to modify it
<clever> [nix-shell:~/apps/rpi/lk-overlay]$ time vc4-elf-objdump -d build-bootcode-fast-ntsc/lk.elf | vc4-elf-c++filt | grep test | grep foo
<clever> 800044ae <test::foo(int)>:
<clever> geist: ok, so at least with this, its still fast...
<clever> 800044da: ff 9f ea ff bl 800044ae <test::foo(int)>
<clever> real 0m0.098s
<zid> yea the mangle format actually looks fairly simple for gcc at least
<geist> yah that's why i'm suspecting your initial hypothesis is off
<clever> i checked top multiple times, and c++filt was at the top of the charts
<mrvn> you could just strace it to see if it forks c++filt
<clever> let me shove a time into your makefiles...
<geist> right, add an echo of date or whatnot before and after
Mutabah has joined #osdev
<geist> also it runs a lot of things through c++filt, it might not be the disassembly
<geist> there's a symbol table dump, etc
<geist> maybe one of the other things is really slow
<clever> yeah
<bslsk05> ​github.com: lk/build.mk at master · littlekernel/lk · GitHub
<geist> and the rules below
<geist> though the .debug.lst should be the largest by far
<geist> make sure you benchmark *that*
<clever> 42 $(info generating listing: $@)
<clever> 43 time $(OBJDUMP) $(ARCH_OBJDUMP_FLAGS) -S $< | $(CPPFILT) > $@
<geist> which generates the full debug listing files
<clever> real 0m4.450s
<geist> yes that's the -S one
<geist> that's the slow one, rerun your tests with that (as i think i keep saying)
<clever> time vc4-elf-objdump -S build-bootcode-fast-ntsc/lk.elf | vc4-elf-c++filt > build-bootcode-fast-ntsc/lk.elf.debug.lst
<clever> oh
<clever> i just realized something, *checking*
<clever> [nix-shell:~/apps/rpi/lk-overlay]$ time vc4-elf-objdump -S build-bootcode-fast-ntsc/lk.elf | vc4-elf-c++filt > /dev/null
<clever> dev null, shaves 4 seconds off it....
<clever> because i have zfs set to lz4 everything on that filesystem
<geist> and it's a huge file
<geist> that should make any overhead of c++filt even *less* important
<geist> anyway, alas gotta go. figure it out!
<clever> yeah
<geist> JUST DO IT as the great Shia LaBeouf once said
<clever> its clearly not c++filt now that i check that
<clever> its the disk io
<geist> (but this does point out that i can probably switch these to -C with no real ill effect)
<zid> replace 'task manager' with 'time ./'
<geist> zid: hahaha
<clever> geist: also, this answers a second puzzling thing
<clever> i havent updated my lk reference in a while
<clever> so i was slightly puzzled as to how your recent c++ work got into my worktree, lol
<clever> it didnt!
<geist> nah there's been C++ code in LK for a long time. like 10 years
<geist> just not used that much
<geist> but doing more drivers/subsystems in it, but will remain C as the abi between modules
<clever> yeah
<clever> so this is code that ive been running since the day i started using LK
<clever> and its nothing you changed recently
<zid> Programmers hate this one weird trick to avoid C++ ABI issues: gcc -x c
<clever> my fs is just being wonky
<clever> [nix-shell:~/apps/rpi/lk-overlay]$ ls -lhs build-bootcode-fast-ntsc/lk.elf.debug.lst
<clever> 2.4M -rw-r--r-- 1 clever users 1.1M May 3 17:28 build-bootcode-fast-ntsc/lk.elf.debug.lst
<clever> oh god, gang blocks!
<clever> its my fragmentation coming back to bite me!
GeDaMo has quit [Remote host closed the connection]
Ali_A has joined #osdev
<geist> between meetings: i just had a thought
<geist> if the writing out of the file is very expensive because of zfs and lz4 then the last process in the pipe chain is charged the cost
<geist> ie, foo | bar > baz
<geist> bar gets all the kernel time accounted for it since it's left writing to the FD
<geist> hence why c++filt maybe seems to be expensive
<clever> i think its not lz4, because turning that off didnt help
<clever> i think its the severe fragmentation
<clever> and the cpu cost, to just find free blocks
<geist> word.
<geist> `filefrag -v` is a great tool for this too
<geist> though i dunno if zfs is wired through for this
<clever> zfs isnt compatible with filefrag
<clever> you have to use the zdb cmd instead
<clever> filefrag assumes your fs is backed by a single block device
<clever> same reason LK isnt able to mount zfs so easily
wootehfoot has quit [Quit: pillow time]
<geist> well, works fine with btrfs
<geist> but btrfs goes through one level of translation, so the addresses filefrag returns are logical
immibis has quit [Read error: Connection reset by peer]
<geist> (i guess)
<clever> for zfs, every block is a tuple of: the vdev#, the block#, the hash of the block, and more!
<mrvn> clever: how little ram do you have that writing c++filt output to disk flushes the contents?
<clever> mrvn: 32gigs
<geist> there might be some sort of encoding to the block addresses it returns that's non obvious, but i think btrfs has an intermediate translation where the FS operates in logical address mode and there's a layer that allocates rather large chunks out of the underlying physical devices
<geist> nice thing is it can move those physical slices around without modifying the higher level FS data structures
<geist> and/or duplicate (raid1, etc)
<geist> the large chunks are usually on the order of 1GB or so, so you dont need a very expensive translation table
<bslsk05> ​gist.github.com: gist:ed74fbeda71c8e7339c49a72f26e8918 · GitHub
* klys arrives from work
<clever> in this case, each block is 128kb, and it is stored as 2 blocks (the L0's), and then there is a single L1 containing pointers to the L0's
* geist nods
<mrvn> Does c++filt have a check for /dev/null like tar does and skip outputting anything?
<clever> and i'm not sure how that totals up right, because it claims to be 469kb on-disk
<geist> mrvn: i dont think so, i think you can basically pipe a raw binary through it and it'll dutifully translate whatever it sees
<clever> [nix-shell:~/apps/rpi/lk-overlay]$ time vc4-elf-objdump -S build-bootcode-fast-ntsc/lk.elf | vc4-elf-c++filt > /dev/shm/lk.elf.debug.lst
<clever> real 0m0.162s
<geist> oh i see what you mean, on the output
<geist> probably not
<clever> mrvn: and when writing to a tmpfs, its instant
<geist> well, good to know at least
<clever> so its not c++filt being cheaty, its the fs being slow
<mrvn> clever: don't tell me c++filt does an fsync
<geist> also yet another reason i dont use ZFS
<geist> sounds like you need to defrag your disk, and/or get a bigger disk
<clever> aha, and i see part of the problem
<clever> 0 L0 DVA[0]=<0:3b0ff38000:67000> DVA[1]=<0:2fc4647000:4000> [L0 ZFS plain file] fletcher4 lz4 unencrypted LE gang
<clever> the word "gang" there, means it failed to find a free space chunk that was 128kb
<clever> so it had to instead use up 3+ chunks of free space, 1 to hold a list of chunks, and then 2 (or more) chunks of actual data
<geist> geez. is your disk totally full?
<clever> 4.8gig free
<geist> or just totally fragmented? seems like zfs would have some sort of online defragmentation
<geist> out of?
<mrvn> WTF? c++filt doesn't close any FDs or free any memory. it just calls exit_group(0)
<geist> mrvn: haha
<bslsk05> ​gist.github.com: gist:a10ae8fac93d1f55ca1dac09923a3360 · GitHub
<mrvn> clever: 4.8g free out of how many TB?
<zid> destructors are annoying :P
<clever> geist: this is a histogram of the size of free space chunks
<kazinsal> "fragmentation 91%"
<clever> so i have 2909 blocks, that are 32kb in length each
<clever> and then 25k holes, that are 16kb long each
<geist> yah i assume those are in powers of 2, so looks like most of your free blocks are 4K or 8K
<clever> and down and down
<mrvn> Filesystems get really really really slow approaching 100% full
<geist> and yeah that super sucks. you better fix it
<clever> exactly
<zid> I ran out of inodes once that was fun
<geist> yah that's what i was asking about. 4.8GB out of what?
<clever> and because zfs is immutable, you have no real option to defrag
<zid> I filled a small disk with lots of small files
<zid> and it just went "Nope, too full", at half capacity
<clever> 320gig total size
<clever> your only defrag choice is to basically move the data elsewhere, delete, then move it back
<geist> well, sounds like time to move it off to another disk, recreate, and add back
<mrvn> soo <2% space. That really doesn't work with zfs
<clever> exactly
<clever> mrvn: there is also a special slop-space thing, one min
<zid> you can actually do a bunch of tricks on zfs though
<zid> One of my friends is a fs nerd and he does all sorts of things to it
<clever> [root@amd-nixos:~]# cat /sys/module/zfs/parameters/spa_slop_shift
<clever> 5
<clever> this reserves a certain amount of space for internal usage
<clever> so zfs doesnt deadlock when you "run out", much like ext4 has a reserve
<mrvn> At some point zfs becomes real carefull with the remaining free space because as a COW FS if it ever runs out it's dead.
<geist> sometimes moving lots of stuff to tar files, compressing the shit out of it to free up space might help it get some better runs on free space
srjek has joined #osdev
<clever> if i pop a 15 into that file, i suddenly have 15gig free
<clever> so i have more free space than df claims, just because zfs is forcing me to not be that low
<clever> i forget the exact math behind slop space
<mrvn> clever: yeah, but for such a small disk you want quite a bit reserve
<clever> yep
<geist> the fact that it says 'shift' implies it's some power of something
<clever> yeah
<mrvn> full space >> slop shift?
<clever> smaller numbers mean more is reserved
<geist> mrvn: oooh yeah thats probably what it is
<clever> to the source!
<bslsk05> ​github.com: zfs/spa_misc.c at master · openzfs/zfs · GitHub
<geist> so smaller numbers would be larger and larger, yeah
<bslsk05> ​github.com: zfs/spa_misc.c at master · openzfs/zfs · GitHub
<clever> this comment explains it exactly
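As a rough model of that comment (the clamping constant is from the openzfs versions I've read, so treat it as approximate): the reserve is the pool size shifted right by spa_slop_shift, with a floor of roughly 128 MiB. For a 320 GiB pool, shift 5 holds back 10 GiB, while shift 15 computes 10 MiB and gets clamped up to the floor, which lines up with roughly 10 GiB reappearing when the shift is bumped to 15:

    #include <stdint.h>

    /* approximate slop-space reserve, per the linked spa_misc.c comment */
    static uint64_t slop_space(uint64_t pool_bytes, int spa_slop_shift)
    {
        const uint64_t min_slop = 128ULL << 20;  /* ~128 MiB floor (assumed) */
        uint64_t slop = pool_bytes >> spa_slop_shift;
        return slop < min_slop ? min_slop : slop;
    }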
<mrvn> Anyway, go and buy a harddisk.
<geist> yah
<clever> [root@amd-nixos:~]# fdisk -l /dev/nvme0n1
<clever> Disk /dev/nvme0n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
<clever> mrvn: i'm on a 470gig nvme drive
<mrvn> It's 20E well spent to double your disk space.
<Ermine> Did you see modern devices running m68k?
<kazinsal> this is why I don't use zfs
<geist> ah 320GB i thought you were using some old spinny disk
<geist> since that was a standard size for a while
<clever> geist: 320gig partition on a 470gig nvme
<mrvn> kazinsal: nothing to do with zfs. try ext4 or btrfs or any other. They all go exponential when reaching 100% full
<geist> well, that's good. if you had a spinny disk this fragmented, it'd be a shit show on reading
<clever> with a 64gig swap partition for chrome to burn a hole in the nvme, lol
<geist> OTOH you would have noticed it much faster
<geist> mrvn: yes except COW fses will probably fragment the free space faster, on the average
<geist> but yes, running any fs that low is a bad idea
<mrvn> geist: depends on the FS design
<clever> let me double-check things...
<geist> indeed.
<clever> yep, there is a 94gig hole between zfs and swap
<clever> so i could just expand zfs by another 94gig on the spot
<kazinsal> I can honestly say I've never been (tangentially) involved in a conversation about ZFS that didn't involve a pile of esoteric troubleshooting and/or consulting the source code
<clever> lets do it!
<geist> if your root is not ZFS you can switch to a /swap file and set it a little smaller/resize it
<geist> then reclaim that space too
<geist> kazinsal: haha
<geist> clever: back yer shit up first
<geist> alwayyyyyys do that
<clever> na!
* geist shrugs and goes back to meetings
<mrvn> geist: zfs allocates bigger chunks and uses them for data or metadata so they don't get interleaved and you can defrag too
<clever> ive done this once before, in the middle of a screen sharing session :P
<geist> because the bear didn't attack you before doesn't mean it's a good idea to sleep in the bear cave
<mrvn> Is swap on compressed zfs stable now?
<geist> yah in general i've moved to /swap files, as have a lot of distros. much nicer to not have to dork with the partition table in a fairly static way
<kazinsal> introducing the Leopards Eating Peoples Faces File System
<klange> _I didn't think the leopards would eat _my_ files/faces!_
<geist> hah on a related note i noticed in one of the recent netbsds it actually mentions LFS
<clever> geist: i did it the scary way, i just deleted the partition, then remade it! https://gist.github.com/cleverca22/4ffa587af2bfe0dc283f9ff4afa44368
<bslsk05> ​gist.github.com: gist:4ffa587af2bfe0dc283f9ff4afa44368 · GitHub
<geist> like 'LFS got some stability improvements' in netbsd 9 i think
<geist> like. wow someone uses LFS?
<clever> and the device node is just magically bigger, and still contains an fs
<mrvn> geist: linux filehirachy standard or large file support?
<geist> mrvn: oh silly. log based file system
<kazinsal> log-structured file system
<geist> the *old* one, from BSDs, back in the 80s
<mrvn> .oO(how is that still unstable?)
<geist> interesting idea, didn't go anywhere, has serious downsides, but one can argue that lots of the modern stuff is based on the idea
<clever> gist updated
<clever> i now have 3 holes, that are 2^28 bytes long
<geist> or at least it was potentially a source of ideas
<clever> 256mb each
<geist> though i hear DEC had some sort of log based fs at some point. Spiralog i think?
<mrvn> clever: have you defraged the fs lately?
<clever> mrvn: you cant really, zfs is immutable
<clever> your only option is to move+delete, then copy it back
<mrvn> clever: zfs has a defrag
<clever> what is the cmd called?
<mrvn> zdb something something
Likorn has joined #osdev
<clever> sounds like an offline operation
<geist> so take it offline and defrag it
<clever> oh, yeah, now i remember why i wasnt expanding it the last ~100gig
<clever> i had intentionally run a blkdiscard on that 100gig partition, to force the nvme to have more free blocks internally
<clever> so its wear leveling had more room to flex
<geist> yah makes sense, but you can accomplish the same thing by just not using up the last of your zfs and making sure it trims things
<geist> OTOH, given your presumed nature, you'll probably now just run it down to the last bit
<clever> at the time, zfs didnt support trim
<geist> side note: i noticed that the `nvme list` command will show you the internal concept of how much the drive thinks it's in use
<geist> the 'namespace usage' column appears to track with recent trims
<clever> Node SN Model Namespace Usage Format FW Rev
<clever> /dev/nvme0n1 BTPY652506Q0512F INTEL SSDPEKKW512G7 1 512.11 GB / 512.11 GB 512 B + 0 B PSF109C
<clever> pretty useless in my case
<geist> yes. that means you have *zero* trimming going on
<clever> but i remember running a blkdiscard on a 100gig partition in the past
<clever> to create a 100gig hole in the device
<clever> its possible the firmware doesnt support things?
<geist> that is interesting, indeed
<mrvn> clever: I did it a few years ago and it's fully online. It just goes through the zfs data and copies data and metadata around that's fragmenting
<clever> /dev/nvme0n1 S3EUNB0J506630H Samsung SSD 960 EVO 500GB 1 498.80 GB / 500.11 GB 512 B + 0 B 2B7QCXE7
<clever> on my laptop, it reports this instead
<geist> and note you're also running it right to the edge
heat has joined #osdev
<geist> what fs are you using there?
<clever> zfs on both desktop and laptop
<geist> i think i'm starting to see a common pattern here
<geist> (zfs aint trimming, yo)
<clever> the laptop is also zfs ontop of luks
<clever> so i would need to get luks to also trim
<heat> TRIM is disabled on certain ssd's im pretty sure
<mrvn> zpool can trim
<geist> yah but not on those SSDs
<kazinsal> mmm, nested block device abstractions
<geist> i have actually i think that exact model
<clever> kazinsal: and lvm too!
<mrvn> zpool-trim — Initiate immediate TRIM operations for all free space in a ZFS storage pool
<geist> lsblk -D should show you if it is supported
<heat> hmm, maybe not trim. there was a common command that was disabled on a bunch of ssds
<clever> reports 512 byte block size for the nvme on both machines, but 0 for the lvm nodes that zfs sits ontop of
<heat> oh wait, yeah
<geist> theres your problem clever
<heat> queued trim, that's what it is
<clever> zfs ontop of lvm ontop of luks ontop of nvme
<geist> need to figure out how to let LVM punch that through
<geist> ah it's luks for sure
<mrvn> or just not use lvm
<geist> but iirc there's a mechanism to tell luks to allow punch-through discards
<geist> though you hypothetically lose a bit of security that way
<kazinsal> does zfs not do encryption?
<geist> but it's an opt in, since by default you just fill the drive with garbage
<mrvn> totally, can't trim a luks or everyone can see where you have unused space.
<geist> so even your 100GB wasn't doing anything because you did it on top of a luks that wasn't letting you punch it through
<geist> but like i said there's a flag or whatnot to allow it, if you're willing to accept the punch through
<clever> kazinsal: zfs encryption came around after i installed this laptop
<clever> geist: that 100gig hole was on a non-luks system
<geist> fine, anyway
<mrvn> clever: I forgot what exactly it was but zfs encryption has faults in the design.
<clever> it was a 100gig bare partition, that i directly ran blkdiscard on, and then deleted
<clever> so that range of the nvme was just not mapped to any blocks
<geist> okay, anyway, for your laptop i'd personally punch discards through
<kazinsal> kitchen sink systems tend to end up with design faults
<clever> mrvn: yeah, i trust luks more than zfs
<clever> geist: yeah, checking the man pages for how
<mrvn> zfs encryption has the problem of being glued on after the fact
<geist> i think you add 'discard' to the crypttab
<geist> i see it here at least
<clever> but i dont think i'm using a crypttab
<clever> *checks*
<bslsk05> ​github.com: nixpkgs/luksroot.nix at master · NixOS/nixpkgs · GitHub
<clever> aha, its just plain cryptsetup luksOpen, and there is an allowDiscards option right there
<kazinsal> ha. now we're doing a "consult the source code" for the whole fucking operating system
<geist> yah and lsblk -D should tell you upon reboot if it stuck
<geist> they're using that nix thing which is <shrug>
<clever> kazinsal: and even the source for a specific install! https://github.com/cleverca22/nixos-configs/blob/master/system76.nix#L25-L27
<bslsk05> ​github.com: nixos-configs/system76.nix at master · cleverca22/nixos-configs · GitHub
<clever> kazinsal: nixos lets you use source to define how the entire machine is configured
<clever> so i just have to add allowDiscards=true; to line 26 and rebuild
<heat> kazinsal, the linux way
<heat> open source isn't broken, you just need to check the source code
<geist> heh, it's now the linux way huh?
<geist> sheesh
<heat> because everyone needs to know how to program
<heat> and use terminals
<CompanionCube> zdb is not a defrag
<heat> GUIs are for noobs
<CompanionCube> zdb is dumpe2fs
<geist> i was just mostly thinking how generally polished linux distros have been compared to what existed at the time
<geist> ie, installing slackware linux in 1995 was downright ez compared to a BSD
<geist> and that trend has generally continued
<heat> right, but bsd is bsd
<heat> you don't need to check windows's source code, it just works
<heat> (tm)
<clever> geist: and once i flipped on allowDiscards and rebooted, i see discard support clean thru lvm to the block dev zfs uses
<clever> so lvm just passes trim on automatically, and luks was the only problem
<kazinsal> I think the idea of "infrastructure as code" has started causing people to slide back towards that older era of things not working out of the box
Ali_A has quit [Quit: Connection closed]
<geist> and the intel ssd probably just doesn't report it right
<clever> started a `zpool trim tank`, and i can see the usage in `nvme list` ticking down
<kazinsal> when you're declaratively describing your environment at every level in a manner that is then used to "compile" that to a working system there's so many different aspects that can be huge pain points
<kazinsal> you wouldn't use terraform to put together a desktop environment
<clever> kazinsal: i do!
<heat> well, that's a problem with linux
<clever> its called nix, not terraform, but same idea applies
<heat> so much choice that 75% of the combinations end up broken
<kazinsal> and you're the only one having issues with what should be an extremely solved problem
<clever> kazinsal: what issues?
<heat> also instead of a great desktop environment you end up with 5 crap ones because "erm, muh choice"
<kazinsal> the past hour and a half of janitoring your filesystems
<clever> kazinsal: thats not because of nixos, thats because ive got data-hoarding problems :P
<clever> and have been killing the drive with 0% free for months
<clever> i'm doing the same thing to a gentoo system :P
<clever> Filesystem Size Used Avail Use% Mounted on
<clever> /dev/sda1 73G 61G 13G 84% /
<clever> /dev/sdb1 3.7T 3.7T 4.9G 100% /media/videos/4tb
<clever> the free space has just vanished due to rounding errors, lol
<mrvn> data4 21.8T 43.8M 21.7T - - 0% 0% 1.00x ONLINE -
<mrvn> rpool 1.80T 1.44T 365G - - 15% 80% 1.00x ONLINE -
<mrvn> clever: and that is my desktop
Dyskos has quit [Quit: Leaving]
<clever> NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
<mrvn> data hoarding, lol
<clever> amd 414G 305G 109G - - 70% 73% 1.12x ONLINE -
<clever> my desktop, after expanding the partition to fill out the rest of the drive
<clever> Data Units Read: 235,638,176 [120 TB]
<clever> Data Units Written: 685,035,746 [350 TB]
<clever> and what smartctl reports
<clever> Percentage Used: 71%
<clever> ive read elsewhere, that this is a percentage of the lifetime
<mrvn> clever: all those smart values and lifetime estimates are pretty much fiction. According to specs my m.2 key has an expected lifetime of a few hours under load.
<bauen1> i have a question for cross compiling to arm-none-eabi, libm (math.h, ...) isn't defined as freestanding, but it only seems to reference __errno, so how bad of an idea is it to just link with libm in a freestanding env ?
<mrvn> bauen1: just check the license
<heat> geist, btw your printf tests are pretty cool
<heat> really comprehensive
<geist> yah was thinking of putting those in the unit tests too by sprintfing to a buffer, etc
<mrvn> Do they check bit correct float, double and long double scanf/printf?
<heat> no, kernel tests
<heat> ahh wait these come from lk?
<heat> i was looking at the fuchsia ones xD
<heat> they seem pretty decently unit-testy
<bauen1> mrvn: good thing I don't care about that, so I guess there aren't any other hidden surprises apart from __errno
<heat> bauen1, which libm?
<mrvn> bauen1: are you sure it only links __errno? You might get more symbols when something actually uses some functions
<bauen1> heat: mrvn: libm from newlib I _think_ but yes, I will probably push to get it replaced with a header that just does a bunch of `#define atan2 __builtin_atan2` or something like that
<mrvn> bauen1: you can just link any lib in freestanding as long as the ABI, hard/soft float, red-zone, ... matches.
<heat> bauen1, -fbuiltin does that by default
<mrvn> bauen1: I don't think arm has a lot of builtin trig functions
<heat> -fbuiltin even lets you optimise a sin + cos to a sincos
<mrvn> Does aarch64 have trig functions in the fpu?
<bauen1> mrvn: https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html makes it sound like gcc provides builtins for all of those math functions listed there
<geist> no
<bslsk05> ​gcc.gnu.org: Other Builtins (Using the GNU Compiler Collection (GCC))
<mrvn> bauen1: hmm, are they in libgcc then?
<heat> builtins in gcc may just call the library function
<bauen1> ah
<heat> __builtin_sin() is just a way to tell the compiler that you want the compiler-optimised version, if it exists
Ali_A has joined #osdev
<mrvn> The builtin might just be so gcc can assume a function called "cos" is the cos function and optimize it
<heat> if you do -fbuiltin, __builtin_sin() is implicit wrt sin()
<mrvn> e.g. cos(0) == 1
<heat> ^^this is also why every libc and libm needs to be compiled with -fno-builtin, it will realise you're calculating the sin() and optimise it to a sin call - boom, stack blew up
<klange> it will absolutely not realize you are calculating sin, but for a lot of other stuff sure
<mrvn> heat: really? it's that smart? I've only had that happen for memcpy/strcpy so far.
<heat> idk
<heat> but i've seen a sincos implementation recurse onto itself by accident in #llvm
<heat> (that's a pretty simple example tho)
<mrvn> I really would hope that gcc would not optimize a function named memcpy to call memcpy
<heat> sin probably won't, but that's just an example
<bauen1> heat: do you have the documentation where it says that e.g. __builtin_cos could just call cos ?
<klange> mrvn: unfortunately, the optimizer has no idea of the name of the function it's optimizing, it seems
<klange> bauen1: there is no documentation, but I can tell you very plainly that it absolutely will just do that
<mrvn> klange: so make it push "builtin=off" when recursing into a builtin function
<bslsk05> ​godbolt.org: Compiler Explorer
<heat> __builtin_<standard C library function>() is pretty redundant if you're compiling normally though
<bauen1> thanks, i guess i will just continue to (ab)use the cross compiled newlibs libm
<mrvn> __builtin_abs() makes sense
<mrvn> fabs even
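A sketch of the self-recursion pitfall mrvn and heat describe, using the memcpy case: compiled at higher optimization levels without -fno-builtin or -ffreestanding, GCC may recognize the copy loop as the memcpy idiom and emit a call to memcpy, which is this very function, so it recurses until the stack blows up. That is why a libc/libm builds its own string/math routines with -fno-builtin:

    #include <stddef.h>

    /* a libc's own memcpy: the byte-copy loop below is exactly the idiom
     * the compiler is allowed to replace with a call to memcpy() */
    void *memcpy(void *dst, const void *src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;
        for (size_t i = 0; i < n; i++)
            d[i] = s[i];
        return dst;
    }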
<Ali_A> on intel's manual, VOL3 section 9.9 (switching modes) it says I need to provide IDT in order to switch to protected mode from real mode
<Ali_A> I need to do the following: load IDT using the LIDT instruction, execute LTR to load the TASK segment and execute STI, however, I only loaded GDT and enabled cr0.PE followed by far jmp and it did switch to 32-bit mode and I verified that, by compiling 32-bit C code and it did run it, so what were those 3 steps for? or did I misunderstand the steps
<Ali_A> from manual?
<bslsk05> ​godbolt.org: Compiler Explorer
<heat> Ali_A, that's not true
<heat> you don't need an IDT to switch to protected mode
<mrvn> you only need an IDT if you want to do anything interesting
<bslsk05> ​godbolt.org: Compiler Explorer
<mrvn> heat: -fno-builtin
<bslsk05> ​godbolt.org: Compiler Explorer
<heat> -fno-builtin is stupid don't use it unless you must
<heat> usually you don't need to
<mrvn> If you don't use -fno-builtin then all the __builtin_* are implicit
<heat> yes
<heat> "if you're compiling normally" <-- that's normally
<Ali_A> heat
<heat> Ali_A, well, that's a lie. you only need a GDT and paging structures if you're enabling paging (bet you're not right now)
<kazinsal> "to support reliable operation of the processor" is the key phrase there
<kazinsal> I would not call "any interrupt causes an immediate triple fault due to no IDT" to be reliable operation
<mrvn> kazinsal: works 100% reliable. Just don't turn on interrupts or fault
<Ali_A> No, it is okay, I will attempt to enable paging today, but I just wanted to be sure that I read the manual right and I was not missing something.
<heat> tip: don't
<psykose> simply run zero code, and then so it will be perfectly run
<kazinsal> no operation is more reliable than disabling interrupts and NMIs and then halting
<heat> paging is totally non-trivial
<mrvn> Ali_A: as soon as you want to do something interesting you will need the IDT. But you can set that up in 32bit code.
<heat> in fact, it's hard
<kazinsal> paging is math, and math is hard, let's go shopping
<mrvn> kazinsal: can't disable NMIs. :)
<heat> do not rush paging, just take your time in 32-bit mode
<Ali_A> I have to enable at least 4 level paging to get to 64-bit mode so it is a must for me '=D
<heat> well, you've got your hands full then
<mrvn> Ali_A: you can map 2MB pages or even 1GiB pages if your CPU supports that. Much fewer levels.
<mrvn> Ali_A: Most people just map the first 2GB of memory to 0 and -2GB.
<mrvn> or even just 1
<Ali_A> I was expecting it to be something as simple as getting into 32-bit mode (turns out that was not simple at all, I wasted 6 hours to get it to work) + I read in the manual that to switch to 64-bit mode, u have to have at least 4 level paging (not sure what advantage I will get from 4-level paging or 5 level paging but it is just a step required by the
<Ali_A> processor)
<kazinsal> mrvn: if your machine has just booted then it's in legacy mode and you're using an XT PIC and can disable external NMI routing on it!
<kazinsal> now, I don't know what happens if a cosmic bitflip occurs while the processor is in a HALT state in a manner that causes it to resume from HALT state...
<mrvn> kazinsal: oh, I'm never in that mode, that's pre UEFI
<mrvn> Ali_A: 5 level page tables are for servers with tons and tons and tons of memory.
<heat> Ali_A: 4+ level paging is the only paging you have in 64-bit mode
<heat> the easier 2-level 32-bit paging won't work
<mrvn> 1GB pages only needs 2 levels, 2MB pages needs 3 levels.
<Ali_A> mrvn I don't really understand what u mentioned (I will need to read more theory about paging, because I just read the chapters from the intel's manual and it didn't say a lot about the structure, I just know I have to load specific data structure in specific format and so on, will probably read AMD manual about paging as well to see if I can
<Ali_A> understand)
<heat> yes, it's hard
<mrvn> Ali_A: in the 2nd level page table there is a bit that says the address it points to is a 3rd level page table or a 1GB physical page. Same for the 3rd level table but with 2MB pages.
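A minimal sketch of what mrvn describes: identity-map the first 1 GiB with 2 MiB pages so only three levels get populated (PML4 -> PDPT -> PD, with the PS bit set in the PD entries). Table names are illustrative and the code assumes it runs before paging is enabled, so pointers are already physical addresses; everything written into an entry must be a physical address:

    #include <stdint.h>

    #define PTE_PRESENT  (1ULL << 0)
    #define PTE_WRITE    (1ULL << 1)
    #define PTE_PS       (1ULL << 7)   /* "this entry maps a large page" */

    /* page-table pages must be 4 KiB aligned */
    static uint64_t pml4[512] __attribute__((aligned(4096)));
    static uint64_t pdpt[512] __attribute__((aligned(4096)));
    static uint64_t pd[512]   __attribute__((aligned(4096)));

    static void setup_identity_map(void)
    {
        for (uint64_t i = 0; i < 512; i++)   /* 512 * 2 MiB = 1 GiB */
            pd[i] = (i * 0x200000ULL) | PTE_PRESENT | PTE_WRITE | PTE_PS;

        pdpt[0] = (uint64_t)(uintptr_t)pd   | PTE_PRESENT | PTE_WRITE;
        pml4[0] = (uint64_t)(uintptr_t)pdpt | PTE_PRESENT | PTE_WRITE;

        /* then: load &pml4 into CR3, set CR4.PAE and EFER.LME, enable CR0.PG,
         * and far-jump into 64-bit code */
    }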
<Ali_A> yeah I read that 5-level paging allows u to address a lot larger address space, something like 4 zettabytes or something (I did the calculation, just don't remember the number)
<heat> you probably should take a quick dip in 32-bit mode
<heat> you can safely-ish learn paging from there without the confusion of raw assembly
<heat> a lot of it is trial and error, really
<mrvn> Ali_A: when you read the paging stuff draw it out on paper. It's really confusing in words but as diagrams it's much easier to learn.
<heat> paging is one of those concepts that are completely alien to you unless you've done it before
<mrvn> Ali_A: and keep in mind: it's just a (radix) tree and you look up an address.
<Ali_A> heat what do u mean by safely learn it in 32-bit ? oh, do u mean like enable level 2 paging before trying to enable level 4? makes sense
<heat> like play around in 32-bit x86 C
<heat> get your basic printf going, do whatever, then do paging
<heat> easier to debug if you've got a printf for instance
<Ali_A> well, I implemented a hacky printf through VGA just by writing to memory location 0xb8000
<heat> x86_64 paging is just 2-level paging with extra steps (and levels :P)
fwg has quit [Quit: .oO( zzZzZzz ...]
<heat> i mean like an actual printf, with %x and everything
<heat> for instance, you could build a function that dumps your page tables
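A rough cut of the page-table dump heat suggests: walk the four levels down from CR3 and print every present entry. It assumes the tables can be dereferenced directly (identity-mapped or phys==virt early on), a printf-style kprintf(), and it glosses over canonical sign-extension of upper-half addresses:

    #include <stdint.h>

    #define ENTRY_ADDR(e)  ((e) & 0x000FFFFFFFFFF000ULL)
    #define ENTRY_PS(e)    ((e) & (1ULL << 7))

    void kprintf(const char *fmt, ...);  /* assumed to exist */

    static void dump_level(uint64_t table_phys, int level, uint64_t va_base)
    {
        const uint64_t *table = (const uint64_t *)(uintptr_t)table_phys;
        uint64_t span = 1ULL << (12 + 9 * (level - 1));  /* VA covered per entry */

        for (int i = 0; i < 512; i++) {
            uint64_t e = table[i];
            if (!(e & 1))
                continue;                          /* not present */
            uint64_t va = va_base + (uint64_t)i * span;
            kprintf("L%d[%d] va=%p -> phys=%p flags=%x\n", level, i,
                    (void *)va, (void *)ENTRY_ADDR(e), (unsigned)(e & 0xFFF));
            if (level > 1 && !ENTRY_PS(e))         /* descend into the next table */
                dump_level(ENTRY_ADDR(e), level - 1, va);
        }
    }

    /* usage, assuming a read_cr3() helper: dump_level(read_cr3() & ~0xFFFULL, 4, 0); */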
fwg has joined #osdev
<heat> of course, you can try to sniff around with qemu's info tlb and info mem and 'x' if you so desire
<Ali_A> make sense, thanks! will definitely try this before attempting the paging thing (I am surprised people here called it hard, because here people call many of the hard stuff easy)
<heat> this is just my take, of course
Likorn has quit [Quit: WeeChat 3.4.1]
<heat> big tip: *EVERYTHING* in page table land uses physical addresses
<heat> this is a common pitfall for newbies
<zid> page tables are easy to do, hard to conceptualize for the first time
<heat> hard to debug too
<zid> It's effectively a sparse 9bit trie
<zid> with interesting tricks like loops
<zid> (recursive paging)
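A sketch of the "loop" trick zid is referring to: point one PML4 slot back at the PML4 itself (slot 511 here, purely as an example), and the PTE for any virtual address becomes reachable at a fixed, computable virtual address, which is also the address you have to invlpg when you change the tables, as discussed below:

    #include <stdint.h>

    #define RECURSIVE_SLOT  511ULL   /* pml4[511] = physical address of pml4 itself */

    /* virtual address of the level-1 entry (PTE) mapping 'va': shifting va right
     * by 9 pushes each index down one level and the recursive slot takes over the
     * top-level index; assumes 4-level paging with 4 KiB pages */
    static inline uint64_t *pte_vaddr(uint64_t va)
    {
        uint64_t addr = (RECURSIVE_SLOT << 39)
                      | ((va >> 9) & 0x0000007FFFFFFFF8ULL);
        addr |= 0xFFFF000000000000ULL;   /* canonical sign extension (slot >= 256) */
        return (uint64_t *)(uintptr_t)addr;
    }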
<heat> i think recursive paging is really hard in practice because of tlb shootdowns and whatnot
<zid> good job nobody needs tlb shootdowns
<heat> well, not shootdowns, just TLB invalidation
<zid> howso? if you unmap/restrict a page, use that addr in invlpg gg
sonny has joined #osdev
papaya has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
<heat> yeah but you get extra pages mapped because of the recursive mapping
<heat> if you need to change the paging structures, invlpg you go
<zid> 'you get extra pages mapped' ?
<heat> yeah, as part of the recursive mappig
<heat> mapping*
<zid> yea what do you mean
<heat> your page tables also get mapped
<heat> if you remove a page table, you need to invlpg that recursive mapping as well
<heat> otherwise, boom
<zid> right
<zid> I was talking about invlpg'ing the page tables
<zid> as the extra step
<zid> because you're always invlpging the mapping you're unmapping (I hope)
<heat> not always but sure
* heat looks at the A bit first
dude12312414 has joined #osdev
Vercas has quit [Quit: buh bye]
Vercas has joined #osdev
blei has joined #osdev
Burgundy has quit [Ping timeout: 256 seconds]
blei has quit [Quit: Client closed]
srjek has quit [Ping timeout: 260 seconds]