Vercas has quit [Remote host closed the connection]
Vercas has joined #osdev
dh` has joined #osdev
knusbaum has joined #osdev
Likorn has quit [Quit: WeeChat 3.4.1]
knusbaum has quit [Ping timeout: 276 seconds]
gxt has quit [Remote host closed the connection]
gxt has joined #osdev
knusbaum has joined #osdev
h4zel has quit [Ping timeout: 246 seconds]
nyah has quit [Ping timeout: 248 seconds]
knusbaum has quit [Ping timeout: 246 seconds]
knusbaum has joined #osdev
<klys>
geist, what's the latest on your threadripper instability
knusbaum has quit [Ping timeout: 260 seconds]
* klys
was out hiking this evening
skipwich has quit [Quit: DISCONNECT]
knusbaum has joined #osdev
skipwich has joined #osdev
knusbaum has quit [Ping timeout: 248 seconds]
knusbaum has joined #osdev
h4zel has joined #osdev
Jari-- has joined #osdev
knusbaum has quit [Ping timeout: 276 seconds]
smeso has quit [Quit: smeso]
knusbaum has joined #osdev
gog has quit [Ping timeout: 248 seconds]
smeso has joined #osdev
Vercas6 has joined #osdev
Vercas has quit [Ping timeout: 240 seconds]
Vercas6 is now known as Vercas
Jari-- has quit [Ping timeout: 276 seconds]
Ali_A has quit [Quit: Connection closed]
sebonirc has quit [Read error: Connection reset by peer]
sebonirc has joined #osdev
meisaka has quit [Ping timeout: 256 seconds]
meisaka has joined #osdev
Ali_A has joined #osdev
sebonirc has quit [Ping timeout: 276 seconds]
<Ali_A>
Just wondering, how do I test whether the processor successfully got into protected mode? I loaded a GDT, enabled PE in cr0, and did a far jump. Is there any way to test whether this was successful or whether something bad went wrong (like a wrong entry in the GDT or something)?
<kazinsal>
if your code is executing after the far jump, it worked
sebonirc has joined #osdev
<kazinsal>
if the GDT entry is invalid then it'll most likely triple fault
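The sequence being discussed can be sketched roughly like this (NASM syntax; `gdt_descriptor`, the 0x08/0x10 selectors, and the label names are assumptions about one common GDT layout, not anyone's actual code):

```nasm
cli                     ; no IDT yet, so keep interrupts off
lgdt [gdt_descriptor]   ; 6-byte limit+base structure describing the GDT
mov eax, cr0
or  eax, 1              ; set CR0.PE
mov cr0, eax
jmp 0x08:pm_entry       ; far jump reloads CS and flushes the prefetch queue

[bits 32]
pm_entry:
    mov ax, 0x10        ; data selector (GDT entry 2)
    mov ds, ax
    mov ss, ax
    ; if execution reaches here, the switch worked
```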
<Ali_A>
the problem is I'm not sure how to test whether I can execute code after that (since after getting into 32-bit mode, I can't use the BIOS)
<Ali_A>
so I can not print to the screen
<Mutabah>
You can use the serial port
<Mutabah>
or, if VGA mode/emulation is present, you can just draw directly to 0xB8000
<Ali_A>
I will try and see if I can draw directly to the screen
<Ali_A>
thanks!
<kazinsal>
yeah blatting a few characters to the top left of the screen is usually my first test for something like that
<kingoffrance>
bochs and qemu have port 0xE9; you can just "out" a byte there
<Mutabah>
Oh, does qemu have E9 now?
<Mutabah>
It didn't last time I looked for it (... granted that was AAAGES ago)
<kingoffrance>
qemu "-debugcon dev" option no idea if that is ancient (option name/syntax changed) or what "defaults" are etc. :)
metabulation has quit [Ping timeout: 256 seconds]
<kingoffrance>
i wonder why no one wires it so you can "read" from somewhere :D
<kingoffrance>
i mean, seems a simple patch to code if you wanted to
<kazinsal>
yeah, it's been there a while but it's an optional isa device
<Mutabah>
ah.
<Mutabah>
Eh, serial port is only a small amount of extra effort
<Mutabah>
with the advantage of working on real hardware
<kazinsal>
yeah
<kazinsal>
a quick serial driver isn't much code and it's something that'll work on any platform
<kazinsal>
and any hypervisor
<Ali_A>
okay so it does work, after a far jump
<Ali_A>
which I assume means I can execute 32-bit code
<Ali_A>
however, I noticed that when I try to load the SS segment with `mov ax, 0x16` (the data segment offset in the gdt is 0x16) followed by `mov ss, ax`, gdb somehow crashes or something
<kazinsal>
selectors are aligned to 8 byte offsets -- 0x16 is not divisible by 8
<Ali_A>
kazinsal, u r a genius, thanks!
<Ali_A>
that was just meant to be 0x10 (16....)
<kazinsal>
👍
xenos1984 has quit [Quit: Leaving.]
Likorn has joined #osdev
<mrvn>
there are 10 kinds of people
<clever>
those that understand binary, and those that dont
<Ali_A>
mrvn 0b10 kind of people
gog has joined #osdev
<Griwes>
and those who didn't know this joke was in ternary
Ali_A has quit [Quit: Connection closed]
GeDaMo has joined #osdev
Celelibi has quit [Ping timeout: 248 seconds]
SGautam has joined #osdev
mctpyt has joined #osdev
nyah has joined #osdev
xenos1984 has joined #osdev
xenos1984 has quit [Remote host closed the connection]
xenos1984 has joined #osdev
xenos1984 has quit [Remote host closed the connection]
xenos1984 has joined #osdev
xenos1984 has quit [Remote host closed the connection]
xenos1984 has joined #osdev
xenos1984 has quit [Remote host closed the connection]
xenos1984 has joined #osdev
xenos1984 has quit [Remote host closed the connection]
xenos1984 has joined #osdev
pretty_dumm_guy has joined #osdev
xenos1984 has quit [Client Quit]
srjek has quit [Ping timeout: 240 seconds]
Burgundy has joined #osdev
fwg_ has joined #osdev
fwg has quit [Ping timeout: 260 seconds]
fwg_ has quit [Quit: .oO( zzZzZzz ...]
gildasio has joined #osdev
fwg has joined #osdev
fwg has quit [Quit: .oO( zzZzZzz ...]
fwg has joined #osdev
fwg has quit [Quit: .oO( zzZzZzz ...]
Dyskos has joined #osdev
fwg has joined #osdev
dude12312414 has joined #osdev
Vercas has quit [Quit: buh bye]
fwg has quit [Quit: .oO( zzZzZzz ...]
Vercas has joined #osdev
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
codez has joined #osdev
<sbalmos>
been spending an interesting past few days reading some of the redox 0.7 code. just wish there was better architectural design documentation (I know, same can be said of all hobby OSs)
Piraty has quit [Quit: -]
Piraty has joined #osdev
srjek has joined #osdev
fwg has joined #osdev
mctpyt has quit [Ping timeout: 240 seconds]
Vercas has quit [Remote host closed the connection]
Vercas has joined #osdev
gxt has quit [Remote host closed the connection]
gxt has joined #osdev
fwg has quit [Quit: .oO( zzZzZzz ...]
metabulation has joined #osdev
fwg has joined #osdev
metabulation has quit [Ping timeout: 276 seconds]
Mutabah has quit [Ping timeout: 276 seconds]
Teukka has quit [Read error: Connection reset by peer]
Teukka has joined #osdev
h4zel has quit [Ping timeout: 276 seconds]
Celelibi has joined #osdev
nur has quit [Quit: Leaving]
nur has joined #osdev
h4zel has joined #osdev
Mutabah has joined #osdev
<mrvn>
you need to invest in some documentation driven design :)
SGautam has quit [Quit: Connection closed for inactivity]
Ali_A has joined #osdev
bradd has quit [Ping timeout: 248 seconds]
flx-- has quit [Ping timeout: 272 seconds]
bradd has joined #osdev
FatalNIX has quit [Quit: Lost terminal]
<Griwes>
I'm in a love/hate relationship with the osdev cycle between "things you wrote work so well you make much faster progress than you expected" and "the progress you've made reveals extremely fundamental bugs in the core of the OS"
<Griwes>
For the past few sessions I've been at the former, now I'm at the latter
* Griwes
shakes fists at how sysret leaves the segment registers in a hugely messy state requiring irq handling to adapt
<Griwes>
(it also turns out that I still have some bugs in my avl tree, oopsie)
ptrc has quit [Remote host closed the connection]
xenos1984 has joined #osdev
ptrc has joined #osdev
ptrc has quit [Remote host closed the connection]
ptrc has joined #osdev
dude12312414 has joined #osdev
Likorn has quit [Quit: WeeChat 3.4.1]
genpaku has quit [Ping timeout: 240 seconds]
dude12312414 has quit [Remote host closed the connection]
dude12312414 has joined #osdev
genpaku has joined #osdev
Likorn has joined #osdev
Likorn has quit [Client Quit]
srjek has quit [Ping timeout: 248 seconds]
wootehfoot has joined #osdev
Likorn has joined #osdev
Ali_A has quit [Quit: Connection closed]
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
<geist>
klys: re: ryzen instability. I did the first step: move the machine to a place where i can work on it, unplug the 10gbe (but leave it in). ran memtest for a few hours
<geist>
then booted it and let it run. ran `watch -n1 'dmesg | tail -40'` to see if something showed up on the log just before it crashed
<geist>
nope. lasted about a day
<geist>
so next thing i'll do is start pulling out cards
<geist>
i am suspecting the off brand mega cheap vid card
<geist>
that i had to install because of the 10gbe card being pci-e x4, which used up the x16 slot that the old vid card was in
<clever>
geist: ive started to notice the effects of your c++ support, vc4-elf-c++filt takes up a large chunk of my build times now!
<clever>
it seems to be in the all: target, and the only way to avoid it is to specify just the .elf as a target?
<klange>
i finally got around to implementing steps in slices in kuroko, alongside switching over to slice objects and matching python3 on dropping __getslice__, et al. in favor of passing slice objects to __getitem__.
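The Python 3 behavior being matched there — `__getitem__` receiving a `slice` object, steps included, with no separate `__getslice__` — looks like this in CPython:

```python
# Python 3 passes slice objects (with start/stop/step) straight to
# __getitem__; __getslice__ is gone. A minimal look at what a container sees:
class Recorder:
    def __getitem__(self, key):
        return key

r = Recorder()
s = r[1:10:2]
assert isinstance(s, slice)
assert (s.start, s.stop, s.step) == (1, 10, 2)

# slice.indices() resolves the bounds against a given length, step included
assert slice(None, None, 2).indices(5) == (0, 5, 2)
assert list(range(*slice(1, 10, 2).indices(6))) == [1, 3, 5]
```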
<geist>
clever: really? like how long, seconds?
<clever>
geist: 7 seconds
<geist>
ah
<geist>
well, suppose you can add a switch to turn it off (or on)
<clever>
`make PROJECT=bootcode-fast-ntsc build-bootcode-fast-ntsc/lk.bin` works around it, but now i have to specify the project twice
* geist
nods
<geist>
well like i said it'd be easy to remove it from the all, or make it a separate target that is then optionally included (or not) based on a switch
<geist>
iirc you're using a pretty old bulldozer cpu right?
<clever>
yeah, fx-8350
<clever>
but i'm also not using any c++ code currently, so the c++filt is pointless on my builds
<clever>
oh, what if you just scanned the list of sources, and auto-disabled it?
<geist>
do you use the .lst files or whatnot?
<clever>
plus a flag to force it off anyways
<clever>
i do use the .lst files any time i need to debug a fault
<geist>
i see
<clever>
and .debug.lst sometimes
<geist>
well anyway, you're a smart person. go disable it
<geist>
i'm surprised it's substantially slow, usually that part is a blip compared to the cost of the disassembly in the first place
<geist>
but... dunno
<geist>
i do have very fast machines here so i tend to not see it
<clever>
yeah, i can always just edit engine.mk or build.mk directly
<geist>
does it only show up on VC sources?
<geist>
possible your toolchain was built -O0 or something?
Mutabah has quit [Ping timeout: 256 seconds]
<geist>
also surprised it runs slower with C++ symbols present, vs just the need for it to scan the file in the first place
<geist>
seems that the piping and the scanning would be the slow part, and thus proportional to the size of the input
<clever>
PROJECT=qemu-virt-arm64-test rebuilds in 2 seconds, from touching arch.c
<clever>
so its probably a flaw in the vc4 gcc
<geist>
yah might want to double check it's not compiled with -O0. i've had that problem once before
<geist>
ran for like 6 months on a project at work before discovering that whoever built the toolchain did it with -O0 -g
<zid>
modern -g doesn't actually slow down binaries does it?
<j`ey>
do you even need a specific vc4 c++filt?
<j`ey>
oh, to read the object files you do
<j`ey>
(maybe?)
<geist>
probably not. actually really -C i think on binutils is all you actually need for most of this
<geist>
i think the notion of always piping output of objdump through c++filt as a separate step is just old habit of mine
<zid>
we have -Og now though
<zid>
which does what you 'want' when you think of -g slowing down binaries
<geist>
primarily because i dont think -C always existed and i had some theory that piping allows for a little bit of parallelism
<clever>
i could probably also cheat, and use the host c++filt instead of the vc4 cross compiler c++filt
<clever>
they cant differ that much?
<geist>
j`ey: probably not, i just included that for a complete description
<geist>
probably not at all
<geist>
or just dont include it. add a switch to the build system to turn it off or something and push up a patch
<clever>
yeah
<zid>
what is c++filt, anyway
<geist>
again though, if it's substantially slower than an arm build, either a) your toolchain is not compiled properly, you should look at that, or b) you're building something substantially different
<geist>
like a 2GB image file or something
<j`ey>
zid: demangler
<geist>
zid: basically whatever you pipe through it it looks for c++ mangled names and replaces in line
<zid>
ah
<geist>
so it's nice to take the output of a dissasembly, or symbol file, etc to demangle stuff
<zid>
The upgraded version is called IDA, it looks at the machine code and spits out C
<geist>
the LK build system basically generates a full suite of secondary files for this after linking, and runs all of them through c++filt
<geist>
i was sad when we turned it off for zircon, but the much larger build there was really starting to take a substantial amount of time to disassemble/demangle
<geist>
and basically zero people cared about the files but me, which IMO is frankly an issue
<geist>
but i can't force folks to look at disassembly
<clever>
lol
<zid>
I'd troll C++ more but I'm having trouble concentrating with how hot these noodles are
<geist>
i can only teach them the virtues of following along
<geist>
zid: i'm aware of your trolling. i could sense you as a shark, circling around the conversation, trying to find the right spot to strike
<mrvn>
clever: c++ name mangling isn't standardized (or at least prior to modules it isn't).
<zid>
I'm doing Lamaze breathing
<geist>
mrvn: yeh but for a given version of gcc using a given c++ abi version it probably is at least the same across arches
<geist>
ie, no arch specific parts to it
<mrvn>
I wouldn't bet on it. And for vc4 you don't have gcc output.
<geist>
and it is at least standardized *enough* that it's not been a problem recently. there were a few ABI breaking changes in the past but i think that's mostly gone
<geist>
well, it can't change much or you'd have actual name resolution linking issues
<mrvn>
worst case you are probably left with few not demangled parts.
<zid>
it's doing a string lookup on input tokens to output tokens isn't it
<geist>
or you're not misreading one of them. also try the debug version of it. that usually takes substantially longer
<zid>
I'd expect log n anyway though which will basically be O(1)
<zid>
for less than a few hundred thousand symbols
<mrvn>
zid: my guess would be reading char by char till it finds something that could be a mangled name. Then it demangles and if it works it outputs the demangled string, otherwise the original.
<geist>
alas i gotta go. the meetings are starting. will be occupied for most of the rest of the afternoon
<geist>
MEETINGS ARE THE BEST
<geist>
(been watching the show Severance lately, it's *fantastic*)
<zid>
oh yea could be doing that, I don't know enough about how reverseable the mangling is to know if it can do that
<mrvn>
zid: O(input size). Can't be faster than touching every char once.
<zid>
or if it has to LUT them
<clever>
800044da: ff 9f ea ff bl 800044ae <test::foo(int)>
<clever>
that is in the default lk.elf.debug.lst
<clever>
[nix-shell:~/apps/rpi/lk-overlay]$ time vc4-elf-objdump -dC build-bootcode-fast-ntsc/lk.elf | grep test::
<clever>
800044ae <test::foo(int)>:
<clever>
and its also in the -dC output
<clever>
800044da: ff 9f ea ff bl 800044ae <test::foo(int)>
<mrvn>
what's the mangled name?
<clever>
real 0m0.101s
<geist>
cool, so now time it piped
<clever>
800044da: ff 9f ea ff bl 800044ae <_ZN4test3fooEi>
<clever>
without -C, it turns into this
<mrvn>
$ echo foo bar _ZN4test3fooEi baz | c++filt
<mrvn>
foo bar test::foo(int) baz
<geist>
yah at some point i actually grokked the format. basically _ZN is i think the return part, then each thing after that is i think a length, name, and code to modify it
<clever>
[nix-shell:~/apps/rpi/lk-overlay]$ time vc4-elf-objdump -d build-bootcode-fast-ntsc/lk.elf | vc4-elf-c++filt | grep test | grep foo
<clever>
800044ae <test::foo(int)>:
<clever>
geist: ok, so at least with this, its still fast...
<clever>
800044da: ff 9f ea ff bl 800044ae <test::foo(int)>
<clever>
real 0m0.098s
<zid>
yea the mangle format actually looks fairly simple for gcc at least
<geist>
yah that's why i'm suspecting your initial hypothesis is off
<clever>
i checked top multiple times, and c++filt was at the top of the charts
<mrvn>
you could just strace it to see if it forks c++filt
<clever>
let me shove a time into your makefiles...
<geist>
right, add a echo date or whatnot before and after
Mutabah has joined #osdev
<geist>
also it runs a lot of things through c++filt, it might not be the disassembly
<geist>
there's a symbol table dump, etc
<geist>
maybe one of the other things is really slow
<clever>
its my fragmentation coming back to bite me!
GeDaMo has quit [Remote host closed the connection]
Ali_A has joined #osdev
<geist>
between meetings: i just had a thought
<geist>
if the writing out of the file is very expensive because of zfs and lz4 then the last process in the pipe chain is charged the cost
<geist>
ie, foo | bar > baz
<geist>
bar gets all the kernel time accounted for it since it's left writing to the FD
<geist>
hence why c++filt maybe seems to be expensive
<clever>
i think its not lz4, because turning that off didnt help
<clever>
i think its the severe fragmentation
<clever>
and the cpu cost, to just find free blocks
<geist>
word.
<geist>
`filefrag -v` is a great tool for this too
<geist>
though i dunno if zfs is wired through for this
<clever>
zfs isnt compatible with filefrag
<clever>
you have to use the zdb cmd instead
<clever>
filefrag assumes your fs is backed by a single block device
<clever>
same reason LK isnt able to mount zfs so easily
wootehfoot has quit [Quit: pillow time]
<geist>
well, works fine with btrfs
<geist>
but btrfs goes through one level of translation, so the addresses filefrag returns are logical
immibis has quit [Read error: Connection reset by peer]
<geist>
(i guess)
<clever>
for zfs, every block is a tuple of: the vdev#, the block#, the hash of the block, and more!
<mrvn>
clever: how little ram do you have that writing c++filt output to disk flushes the contents?
<clever>
mrvn: 32gigs
<geist>
there might be some sort of encoding to the block addresses it returns that's non-obvious, but i think btrfs has an intermediate translation where the FS operates in logical address mode and there's a layer that allocates rather large chunks out of the underlying physical devices
<geist>
nice thing is it can move those physical slices around without modifying the higher level FS data structures
<geist>
and/or duplicate (raid1, etc)
<geist>
the large chunks are usually on the order of 1GB or so, so you dont need a very expensive translation table
<clever>
Disk /dev/nvme0n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
<clever>
mrvn: i'm on a 470gig nvme drive
<mrvn>
It's 20€ well spent to double your disk space.
<Ermine>
Did you see modern devices running m68k?
<kazinsal>
this is why I don't use zfs
<geist>
ah 320GB i thought you were using some old spinny disk
<geist>
since that was a standard size for a while
<clever>
geist: 320gig partition on a 470gig nvme
<mrvn>
kazinsal: nothing to do with zfs. try ext4 or btrfs or any other. They all go exponential when reaching 100% full
<geist>
well, that's good. if you had a spinny disk this fragmented, it'd be a shit show on reading
<clever>
with a 64gig swap partition for chrome to burn a hole in the nvme, lol
<geist>
OTOH you would have noticed it much faster
<geist>
mrvn: yes except COW fses will probably fragment the free space faster, on the average
<geist>
but yes, running any fs that low is a bad idea
<mrvn>
geist: depends on the FS design
<clever>
let me double-check things...
<geist>
indeed.
<clever>
yep, there is a 94gig hole between zfs and swap
<clever>
so i could just expand zfs by another 94gig on the spot
<kazinsal>
I can honestly say I've never been (tangentially) involved in a conversation about ZFS that didn't involve a pile of esoteric troubleshooting and/or consulting the source code
<clever>
lets do it!
<geist>
if your root is not ZFS you can switch to a /swap file and set it a little smaller/resize it
<geist>
then reclaim that space too
<geist>
kazinsal: haha
<geist>
clever: back yer shit up first
<geist>
alwayyyyyys do that
<clever>
na!
* geist
shrugs and goes back to meetings
<mrvn>
geist: zfs allocates bigger chunks and uses them for data or metadata so they don't get interleaved and you can defrag too
<clever>
ive done this once before, in the middle of a screen sharing session :P
<geist>
because the bear didn't attack you before doesn't mean it's a good idea to sleep in the bear cave
<mrvn>
Is swap on compressed zfs stable now?
<geist>
yah in general i've moved to /swap files, as have a lot of distros. much nicer to not have to dork with the partition table in a fairly static way
<kazinsal>
introducing the Leopards Eating Peoples Faces File System
<klange>
_I didn't think the leopards would eat _my_ files/faces!_
<geist>
hah on a related note i noticed in one of the recent netbsds it actually mentions LFS
<geist>
like 'LFS got some stability improvements' in netbsd 9 i think
<geist>
like. wow someone uses LFS?
<clever>
and the device node is just magically bigger, and still contains an fs
<mrvn>
geist: linux file hierarchy standard or large file support?
<geist>
mrvn: oh silly. log based file system
<kazinsal>
log-structured file system
<geist>
the *old* one, from BSDs, back in the 80s
<mrvn>
.oO(how is that still unstable?)
<geist>
interesting idea, didn't go anywhere, has serious downsides, but one can argue that lots of the modern stuff is based on the idea
<clever>
gist updated
<clever>
i now have 3 holes, that are 2^28 bytes long
<geist>
or at least it was potentially a source of ideas
<clever>
256mb each
<geist>
though i hear DEC had some sort of log based fs at some point. Spiralog i think?
<mrvn>
clever: have you defraged the fs lately?
<clever>
mrvn: you cant really, zfs is immutable
<clever>
your only option is to move+delete, then copy it back
<mrvn>
clever: zfs has a defrag
<clever>
what is the cmd called?
<mrvn>
zdb something something
Likorn has joined #osdev
<clever>
sounds like an offline operation
<geist>
so take it offline and defrag it
<clever>
oh, yeah, now i remember why i wasnt expanding it the last ~100gig
<clever>
i had intentionally ran a blkdiscard on that 100gig partition, to force the nvme to have more free blocks internally
<clever>
so its wear leveling had more room to flex
<geist>
yah makes sense, but you can accomplish the same thing by just not using up the last of your zfs and making sure it trims things
<geist>
OTOH, given your presumed nature, you'll probably now just run it down to the last bit
<clever>
at the time, zfs didnt support trim
<geist>
side note: i noticed that the `nvme list` command will show you the internal concept of how much the drive thinks it's in use
<geist>
the 'namespace usage' column appears to track with recent trims
<clever>
Node SN Model Namespace Usage Format FW Rev
<clever>
/dev/nvme0n1 BTPY652506Q0512F INTEL SSDPEKKW512G7 1 512.11 GB / 512.11 GB 512 B + 0 B PSF109C
<clever>
pretty useless in my case
<geist>
yes. that means you have *zero* trimming going on
<clever>
but i remember running a blkdiscard on a 100gig partition in the past
<clever>
to create a 100gig hole in the device
<clever>
its possible the firmware doesnt support things?
<geist>
that is interesting, indeed
<mrvn>
clever: I did it a few years ago and it's fully online. It just goes through the zfs data and copies data and metadata around that's fragmenting
<clever>
/dev/nvme0n1 S3EUNB0J506630H Samsung SSD 960 EVO 500GB 1 498.80 GB / 500.11 GB 512 B + 0 B 2B7QCXE7
<clever>
on my laptop, it reports this instead
<geist>
and note you're also running it right to the edge
heat has joined #osdev
<geist>
what fs are you using there?
<clever>
zfs on both desktop and laptop
<geist>
i think i'm starting to see a common pattern here
<geist>
(zfs aint trimming, yo)
<clever>
the laptop is also zfs ontop of luks
<clever>
so i would need to get luks to also trim
<heat>
TRIM is disabled on certain ssd's im pretty sure
<mrvn>
zpool can trim
<geist>
yah but not on those SSDs
<kazinsal>
mmm, nested block device abstractions
<geist>
i have actually i think that exact model
<clever>
kazinsal: and lvm too!
<mrvn>
zpool-trim — Initiate immediate TRIM operations for all free space in a ZFS storage pool
<geist>
lsblk -D should show you if it is supported
<heat>
hmm, maybe not trim. there was a common command that was disabled on a bunch of ssds
<clever>
reports 512 byte block size for the nvme on both machines, but 0 for the lvm nodes that zfs sits ontop of
<heat>
oh wait, yeah
<geist>
theres your problem clever
<heat>
queued trim, that's what it is
<clever>
zfs ontop of lvm ontop of luks ontop of nvme
<geist>
need to figure out how to let LVM punch that through
<geist>
ah it's luks for sure
<mrvn>
or just not use lvm
<geist>
but iirc there's a mechanism to tell luks to let discards punch through
<geist>
though you hypothetically lose a bit of security that way
<kazinsal>
does zfs not do encryption?
<geist>
but it's an opt in, since by default you just fill the drive with garbage
<mrvn>
totally, can't trim a luks or everyone can see where you have unused space.
<geist>
so even your 100GB wasn't doing anything because you did it on top of a luks that wasn't letting you punch it through
<geist>
but like i said there's a flag or whatnot to allow it, if you're willing to accept the punch through
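The usual knobs for that punch-through look like this (a config sketch assuming a stock cryptsetup/systemd setup — device and mapping names here are made up; NixOS wraps the same thing in its own option):

```shell
# One-off, when opening the mapping by hand:
cryptsetup open --allow-discards /dev/nvme0n1p2 cryptroot

# Persistently, via /etc/crypttab (the "discard" option):
#   cryptroot  /dev/nvme0n1p2  none  luks,discard

# Verify discards now reach the mapped device:
lsblk -D /dev/mapper/cryptroot
```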
<clever>
kazinsal: zfs encryption came around after i installed this laptop
<clever>
geist: that 100gig hole was on a non-luks system
<geist>
fine, anyway
<mrvn>
clever: I forgot what exactly it was but zfs encryption has faults in the design.
<clever>
it was a 100gig bare partition, that i directly ran blkdiscard on, and then deleted
<clever>
so that range of the nvme was just not mapped to any blocks
<geist>
okay, anyway, for your laptop i'd personally punch discards through
<kazinsal>
kitchen sink systems tend to end up with design faults
<clever>
mrvn: yeah, i trust luks more than zfs
<clever>
geist: yeah, checking the man pages for how
<mrvn>
zfs encryption has the problem of being glued on after the fact
<bslsk05>
github.com: nixos-configs/system76.nix at master · cleverca22/nixos-configs · GitHub
<clever>
kazinsal: nixos lets you use source to define how the entire machine is configured
<clever>
so i just have to add allowDiscards=true; to line 26 and rebuild
<heat>
kazinsal, the linux way
<heat>
open source isn't broken, you just need to check the source code
<geist>
heh, it's now the linux way huh?
<geist>
sheesh
<heat>
because everyone needs to know how to program
<heat>
and use terminals
<CompanionCube>
zdb is not a defrag
<heat>
GUIs are for noobs
<CompanionCube>
zdb is dumpe2fs
<geist>
i was just mostly thinking how generally polished linux distros have been compared to what existed at the time
<geist>
ie, installing Slackware linux in 1995 was downright ez compared to a BSD
<geist>
and that trend has generally continued
<heat>
right, but bsd is bsd
<heat>
you don't need to check windows's source code, it just works
<heat>
(tm)
<clever>
geist: and once i flipped on allowDiscards and rebooted, i see discard support clean thru lvm to the block dev zfs uses
<clever>
so lvm just passes trim on automatically, and luks was the only problem
<kazinsal>
I think the idea of "infrastructure as code" has started causing people to slide back towards that older era of things not working out of the box
Ali_A has quit [Quit: Connection closed]
<geist>
and the intel ssd probably just doesn't report it right
<clever>
started a `zpool trim tank`, and i can see the usage in `nvme list` ticking down
<kazinsal>
when you're declaratively describing your environment at every level in a manner that is then used to "compile" that to a working system there's so many different aspects that can be huge pain points
<kazinsal>
you wouldn't use terraform to put together a desktop environment
<clever>
kazinsal: i do!
<heat>
well, that's a problem with linux
<clever>
its called nix, not terraform, but same idea applies
<heat>
so much choice that 75% of the combinations end up broken
<kazinsal>
and you're the only one having issues with what should be an extremely solved problem
<clever>
kazinsal: what issues?
<heat>
also instead of a great desktop environment you end up with 5 crap ones because "erm, muh choice"
<kazinsal>
the past hour and a half of janitoring your filesystems
<clever>
kazinsal: thats not because of nixos, thats because ive got data-hoarding problems :P
<clever>
and have been killing the drive with 0% free for months
<clever>
i'm doing the same thing to a gentoo system :P
<clever>
Filesystem Size Used Avail Use% Mounted on
<clever>
my desktop, after expanding the partition to fill out the rest of the drive
<clever>
Data Units Read: 235,638,176 [120 TB]
<clever>
Data Units Written: 685,035,746 [350 TB]
<clever>
and what smartctl reports
<clever>
Percentage Used: 71%
<clever>
ive read elsewhere, that this is a percentage of the lifetime
<mrvn>
clever: all those smart values and life time estimates are pretty much fiction. According to specs my m.2 has an expected lifetime of a few hours under load.
<bauen1>
i have a question about cross compiling to arm-none-eabi: libm (math.h, ...) isn't defined as freestanding, but it only seems to reference __errno, so how bad an idea is it to just link with libm in a freestanding env?
<mrvn>
bauen1: just check the license
<heat>
geist, btw your printf tests are pretty cool
<heat>
really comprehensive
<geist>
yah was thinking of putting those in the unit tests too by sprintfing to a buffer, etc
<mrvn>
Do they check bit correct float, double and long double scanf/printf?
<heat>
no, kernel tests
<heat>
ahh wait these come from lk?
<heat>
i was looking at the fuchsia ones xD
<heat>
they seem pretty decently unit-testy
<bauen1>
mrvn: good thing I don't care about that, so I guess there aren't any other hidden surprises apart from __errno
<heat>
bauen1, which libm?
<mrvn>
bauen1: are you sure it only links __errno? You might get more symbols when something actually uses some functions
<bauen1>
heat: mrvn: libm from newlib I _think_ but yes, I will probably push to get it replaced with a header that just does a bunch of `#define atan2 __builtin_atan2` or something like that
<mrvn>
bauen1: you can just link any lib in freestanding as long as the ABI, hard/soft float, red-zone, ... matches.
<heat>
bauen1, -fbuiltin does that by default
<mrvn>
bauen1: I don't think arm has a lot of builtin trig functions
<heat>
-fbuiltin even lets you optimise a sin + cos to a sincos
<mrvn>
Does aarch64 have trig functions in the fpu?
<bslsk05>
gcc.gnu.org: Other Builtins (Using the GNU Compiler Collection (GCC))
<mrvn>
bauen1: hmm, are they in libgcc then?
<heat>
builtins in gcc may just call the library function
<bauen1>
ah
<heat>
__builtin_sin() is just a way to tell the compiler that you want the compiler-optimised version, if it exists
Ali_A has joined #osdev
<mrvn>
The builtin might just be so gcc can assume a function called "cos" is the cos function and optimize it
<heat>
if you do -fbuiltin, __builtin_sin() is implicit wrt sin()
<mrvn>
e.g. cos(0) == 1
<heat>
^^this is also why every libc and libm needs to be compiled with -fno-builtin: the compiler will realise you're calculating sin() and optimise it into a sin call - boom, the stack blows up
<klange>
it will absolutely not realize you are calculating sin, but for a lot of other stuff sure
<mrvn>
heat: really? it's that smart? I've only had that happen for memcpy/strcpy so far.
<heat>
idk
<heat>
but i've seen a sincos implementation recurse onto itself by accident in #llvm
<heat>
(that's a pretty simple example tho)
<mrvn>
I really would hope that gcc would not optimize a function names memcpy to call memcpy
<heat>
sin probably won't, but that's just an example
<bauen1>
heat: do you have the documentation where it says that e.g. __builtin_cos could just call cos ?
<klange>
mrvn: unfortunately, the optimizer has no idea of the name of the function it's optimizing, it seems
<klange>
bauen1: there is no documentation, but I can tell you very plainly that it absolutely will just do that
<mrvn>
klange: so make it push "builtin=off" when recursing into a builtin function
<heat>
__builtin_<standard C library function>() is pretty redundant if you're compiling normally though
<bauen1>
thanks, i guess i will just continue to (ab)use the cross compiled newlibs libm
<mrvn>
__builtin_abs() makes sense
<mrvn>
fabs even
<Ali_A>
in intel's manual, Vol. 3 section 9.9 (switching modes), it says I need to provide an IDT in order to switch to protected mode from real mode
<Ali_A>
I need to do the following: load an IDT using the LIDT instruction, execute LTR to load the task segment, and execute STI. However, I only loaded a GDT and enabled cr0.PE followed by a far jmp, and it did switch to 32-bit mode - I verified that by compiling 32-bit C code and running it. So what were those 3 steps for? Or did I misunderstand the steps?
<heat>
Ali_A, well, that's a lie. you only need a GDT, and paging structures only if you're enabling paging (bet you're not right now)
<kazinsal>
"to support reliable operation of the processor" is the key phrase there
<kazinsal>
I would not call "any interrupt causes an immediate triple fault due to no IDT" to be reliable operation
<mrvn>
kazinsal: works 100% reliable. Just don't turn on interrupts or fault
<Ali_A>
No, it is okay. I will attempt to enable paging today, but I just wanted to be sure that I read the manual right and was not missing something.
<heat>
tip: don't
<psykose>
simply run zero code, and then it will run perfectly
<kazinsal>
no operation is more reliable than disabling interrupts and NMIs and then halting
<heat>
paging is totally non-trivial
<mrvn>
Ali_A: as soon as you want to do something interesting you will need the IDT. But you can set that up in 32bit code.
<heat>
in fact, it's hard
<kazinsal>
paging is math, and math is hard, let's go shopping
<mrvn>
kazinsal: can't disable NMIs. :)
<heat>
do not rush paging, just take your time in 32-bit mode
<Ali_A>
I have to enable at least 4 level paging to get to 64-bit mode so it is a must for me '=D
<heat>
well, you've got your hands full then
<mrvn>
Ali_A: you can map 2MB pages or even 1GiB pages if your CPU supports that. Much fewer levels.
<mrvn>
Ali_A: Most people just map the first 2GB of memory to 0 and -2GB.
<mrvn>
or even just 1
<Ali_A>
I was expecting it to be something as simple as getting into 32-bit mode (turns out that was not simple at all, I wasted 6 hours to get it to work) + I read in the manual that to switch to 64-bit mode, u have to have at least 4-level paging (not sure what advantage I will get from 4-level paging or 5-level paging but it is just a step required by the
<Ali_A>
processor)
<kazinsal>
mrvn: if your machine has just booted then it's in legacy mode and you're using an XT PIC and can disable external NMI routing on it!
<kazinsal>
now, I don't know what happens if a cosmic bitflip occurs while the processor is in a HALT state in a manner that causes it to resume from HALT state...
<mrvn>
kazinsal: oh, I'm never in that mode, that's pre UEFI
<mrvn>
Ali_A: 5 level page tables are for servers with tons and tons and tons of memory.
<heat>
Ali_A: 4+ level paging is the only paging you have in 64-bit mode
<heat>
the easier 2-level 32-bit paging won't work
<Ali_A>
mrvn I don't really understand what u mentioned (I will need to read more theory about paging, because I just read the chapters from Intel's manual and it didn't say a lot about the structure, I just know I have to load a specific data structure in a specific format and so on, will probably read the AMD manual about paging as well to see if I can
<Ali_A>
understand)
<heat>
yes, it's hard
<mrvn>
Ali_A: in the 2nd level page table there is a bit that says the address it points to is a 3rd level page table or a 1GB physical page. Same for the 3rd level table but with 2MB pages.
<Ali_A>
yeah I read that 5-level paging allows u to address a much larger address space, something like 4 zettabytes or something (I did the calculation, just don't remember the number)
<heat>
you probably should take a quick dip in 32-bit mode
<heat>
you can safely-ish learn paging from there without the confusion of raw assembly
<heat>
a lot of it is trial and error, really
<mrvn>
Ali_A: when you read the paging stuff draw it out on paper. It's really confusing in words but as diagrams it's much easier to learn.
<heat>
paging is one of those concepts that are completely alien to you unless you've done it before
<mrvn>
Ali_A: and keep in mind: it's just a (radix) tree and you lookup and address.
<Ali_A>
heat what do u mean by safely learn it in 32-bit? oh, do u mean like enable level 2 paging before trying to enable level 4? makes sense
<heat>
like play around in 32-bit x86 C
<heat>
get your basic printf going, do whatever, then do paging
<heat>
easier to debug if you've got a printf for instance
<Ali_A>
well, I implemented a hacky printf through VGA just by writing to memory location 0xb8000
<heat>
x86_64 paging is just 2-level paging with extra steps (and levels :P)
fwg has quit [Quit: .oO( zzZzZzz ...]
<heat>
i mean like an actual printf, with %x and everything
<heat>
for instance, you could build a function that dumps your page tables
fwg has joined #osdev
<heat>
of course, you can try to sniff around with qemu's info tlb and info mem and 'x' if you so desire
<Ali_A>
makes sense, thanks! will definitely try this before attempting the paging thing (I am surprised people here called it hard, because here people call many of the hard stuff easy)
<heat>
this is just my take, of course
Likorn has quit [Quit: WeeChat 3.4.1]
<heat>
big tip: *EVERYTHING* in page table land uses physical addresses
<heat>
this is a common pitfall for newbies
<zid>
page tables are easy to do, hard to conceptualize for the first time
<heat>
hard to debug too
<zid>
It's effectively a sparse 9-bit trie
<zid>
with interesting tricks like loops
<zid>
(recursive paging)
<heat>
i think recursive paging is really hard in practice because of tlb shootdowns and whatnot
<zid>
good job nobody needs tlb shootdowns
<heat>
well, not shootdowns, just TLB invalidation
<zid>
howso? if you unmap/restrict a page, use that addr in invlpg gg