<bslsk05>
github.com: zfs/module/zfs/spa_misc.c at master · openzfs/zfs · GitHub
<clever>
ah, its basically just a 1 / (1<<spa_slop_shift) fraction of the pool that's reserved
<clever>
so at 5 (the default), 3.125% is reserved
<clever>
so earlier, when i changed it to 4 and got 0 free, its because i doubled the reservation, to 6.25%
<nikolapdp>
yeah checks out
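A quick sketch of that slop math (the real code in spa_misc.c also clamps the result between minimum and maximum values, which this skips):

```python
def slop_fraction(spa_slop_shift: int) -> float:
    """Fraction of the pool reserved as slop space: 1 / 2^shift."""
    return 1.0 / (1 << spa_slop_shift)

print(slop_fraction(5) * 100)  # default shift of 5 -> 3.125%
print(slop_fraction(4) * 100)  # shift of 4 doubles it -> 6.25%
```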
<mjg>
clever: what's your biggest zfs pool
<clever>
mjg: 3 x 4tb in a raidz1 array
<clever>
so 8tb of capacity, with 4tb of parity
<clever>
but, those numbers are also fuzzy, because of how raidz1 stores shorter records
goliath has quit [Quit: SIGSEGV]
<mjg>
rookie numbers man
<clever>
lets say the block size is 4kb, and i write an 8kb record to disk, raidz1 will split it into 4kb+4kb of data, plus 4kb of parity, and spread it over 3 disks
<nikolapdp>
we're not rich here mjg
<mjg>
not to throw shade!
<clever>
but if i write a 4kb record, raidz1 will basically degrade down to a mirror, and store just 2 copies
<clever>
so the parity overhead varies, depending on the size of the record
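The varying overhead can be sketched with a hypothetical helper (real raidz also pads allocations with skip sectors, which this ignores, so it's a lower bound):

```python
import math

def raidz1_sectors(record_bytes, ndisks=3, sector=4096):
    """Sectors needed for one record on raidz1: the data sectors plus
    one parity sector per stripe row of (ndisks - 1) data sectors."""
    data = math.ceil(record_bytes / sector)
    parity = math.ceil(data / (ndisks - 1))
    return data + parity

print(raidz1_sectors(4096))    # 2: 1 data + 1 parity, a mirror in effect
print(raidz1_sectors(8192))    # 3: 2 data + 1 parity, 50% overhead
print(raidz1_sectors(131072))  # 48: 32 data + 16 parity, still 50%
```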
<mjg>
what's that, your local nas for storing movies 'n shit?
<mjg>
or a backup server
<clever>
mjg: i still need to get the 3 x 16tb sas drives going
<mjg>
(perhaps both?)
<clever>
both and everything, lol
<clever>
its got backups of a laptop i had over a decade ago
<nikolapdp>
same actually lol
<clever>
its got more shows than i could possibly rewatch
<clever>
i think it has a corrupt ext4 volume from when lvm ate things
<clever>
its also got a chunk of my windows steam library, shared over iscsi
<mjg>
lol
<clever>
the windows machine was full, the nas wasnt, samba hates steam, so iscsi it was :P
<nikolapdp>
neat
rpnx_ has quit [Ping timeout: 256 seconds]
<clever>
i also ran talos principle 2 over iscsi like that for a few days, and ouch, the LOD generation is shit
<kof673>
"a most DE-LIGHTful shade" </alchemy land is almost 180 of modern things>
<heat57>
clever: ext4 got "fast commit" which is basically just logging ext4 operations instead of block writes
<clever>
the low LOD models are auto-generated, and look horrible
<clever>
and due to the fragmentation on my nas, it can take minutes to load
<clever>
then i freed up space on my nvme, and it loads so fast i cant even catch the low LOD model, lol
<mjg>
i have to say the old bsd installers i used to consider shite back in the day (2005-ish)
<mjg>
beat the shit out of linux installers in the same timeframe
<mjg>
at least ubuntu and slackware
<mjg>
erm, debian
<mjg>
and slackware
<clever>
ive installed LFS before, so gentoo and nixos were easy mode :P
<mjg>
it's not about being easy or not, it's about being CUMBERSOME or not
<mjg>
fucking xfree86 configuration vibes
<clever>
oh, that reminds me, the first time i switched to wpa_supplicant, because i was away from home and was forced to go wpa, lol
<clever>
and the problem, is looking up how it works, when you dont have wifi :P
<mjg>
clever: there is a funny quote somewhere in freebsd's kernel developer guide
<mjg>
clever: they recommend printing the docs because "it is difficult to read online documentation while single-stepping the kernel"
<clever>
lol
Matt|home has quit [Quit: Leaving]
<clever>
thats why i basically always install a new nixos machine over ssh
<clever>
so i have a working env with copy/paste and all of my usual tools
<clever>
nixos does allow making a custom iso that has all of your usual tools, but it wont have your state (chrome bookmarks and such)
<clever>
and the usual problem of fitting 2 keyboards and mice on one desk :P
<clever>
when you can just throw the pc in a random corner, plug in a cat5, and ssh into it
<clever>
the `gang` flag in that listing, means the block is fragmented
bauen1 has joined #osdev
<clever>
the record on line 30 for example, if i take the 2 sizes listed (0x188000 + 0x2000), then i get 1.5mb, but the dblk on line 4 says it should be 1mb
<kof673>
the only thing i ever had an issue with on freebsd install, is i believe you need to manually glabel and gmirror ahead of time if you want those, don't know if they ever added that (or zfs now i suppose)
<clever>
ah, but because of raidz1, it should be 1.5mb (exactly 50% overhead), but its actually 1.5390625mb
<clever>
an extra 40kb wasted, on that record then
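The 40kb figure checks out arithmetically (the two sizes are the ones quoted from the gist above):

```python
asize = 0x188000 + 0x2000         # allocated sizes from the zdb listing
print(asize / 2**20)              # 1.5390625 MiB actually allocated

lsize = 1 * 2**20                 # dblk says the record is 1 MiB
expected = lsize * 3 // 2         # 3-disk raidz1: expect 50% parity overhead
print((asize - expected) / 1024)  # 40.0 KiB of extra overhead
```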
<clever>
kof673: for zfs, you can upgrade a lone disk into a mirror at any time, and downgrade it back into a lone disk
<clever>
converting between single/mirror, and raid i dont think is possible, and removing a disk i dont think is possible (just replacing it with the same type, and same or larger size)
<kof673>
you can do that, just you have to create the mirror on one disk first :D
rpnx_ has quit [Ping timeout: 260 seconds]
<kof673>
the installer did not do that :D
Arthuria has quit [Ping timeout: 268 seconds]
<clever>
ah, for zfs, you dont need that, it can seamlessly turn a single disk into a mirror at any time
<clever>
all that really does, is change some top level metadata, and clone all of the data over
<kof673>
^^^ yeah so if you create them ahead of time, then you can just point the installer there
<kof673>
or it is all "transparent" the installer is none the wiser
<clever>
have you seen the labels in the vdev stuff?
<kof673>
i have no idea what vdev stuff is like /dev/label? yes there was that, or something similar
<clever>
basically, a vdev is a disk within the pool
<clever>
when doing raid, the entire raid collection is considered a single vdev
<clever>
so if you do raidz1 over sda/sdb/sdc, then that whole set is a single raidz1 vdev
<clever>
the gist i linked, shows the headers at the start of a single partition
<kof673>
oh yeah, i just meant it makes entries like that: > path: '/dev/disk/by-id/wwn-0x50014ee2654606d2-part2'
<kof673>
so fstab/whatever can use those IIRC
<clever>
on my desktop, you can see the headers define a single disk (no redundancy)
<clever>
the path in there, is a performance thing, when mounting the pool, you can look there first
<clever>
but if the disk cant be found there, you can take the guid from desktop line 19, and just search every disk in the box
<clever>
but when you start to get 100+ disks in a box, that search could become costly
<kof673>
what i do know, the geom stuff is all producer/consumer so i believe it is supposed to be entirely transparent to whatever else you do (e.g. what filesystem you create on it), and arbitrary stacking of whatever components
<clever>
that sounds like lvm
<kof673>
so there is also an encryption thing etc.
<clever>
if you look at my nas in the above gist, line 18 says the type is raidz, line 21 says 1 parity disk, and then lines 28-51 define the children, what guid to expect for each member of the raidz1
carbonfiber has quit [Quit: Connection closed for inactivity]
<kof673>
yeah, i just use labels for those :D
<clever>
zfs uses guid's all over the place
<clever>
for my nas, lines 31/39/47 define the guid of each child of the raidz1, line 20 is then the guid of the whole raidz1 vdev, and line 13 is the "top guid", the guid for the root containing all data
<clever>
in this case, the raidz1 and top guid match, so the raidz1 is the root
troseman has quit [Quit: troseman]
<clever>
but, if you instead did `raidz1(sda,sdb,sdc) + raidz1(sdd,sde,sdf)`, the top guid wouldnt match
<clever>
so then you need to search for other things, with the same pool_guid, and assemble the parts
<clever>
the pool_guid is the closest thing to an ext4 uuid
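That assemble-by-guid step could be sketched like this (hypothetical label dicts, shaped loosely after the fields `zdb -l` prints; the guids and the field names here are made up for illustration):

```python
# two raidz1 vdevs belonging to the same pool, as in the example above
labels = [
    {"pool_guid": 111, "top_guid": 222, "guid": 333, "vdev": "raidz1-0"},
    {"pool_guid": 111, "top_guid": 444, "guid": 555, "vdev": "raidz1-1"},
]

def top_vdevs(labels, pool_guid):
    """Group labels by pool_guid; each distinct top_guid is one
    top-level vdev that must be found before the pool can assemble."""
    parts = [l for l in labels if l["pool_guid"] == pool_guid]
    return {l["top_guid"] for l in parts}

print(top_vdevs(labels, 111))  # two top-level vdevs, so both raidz1 sets are needed
```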
carbonfiber has joined #osdev
rpnx_ has joined #osdev
rpnx_ has quit [Ping timeout: 245 seconds]
<kof673>
*or, freebsd maybe i didn't even use the installer, just make your label and mirror, put an MBR at some point, make filesystems, mount, then untar IIRC lol
<kof673>
that is all the base system "sets" were, is tar files
rpnx_ has joined #osdev
netbsduser` has quit [Ping timeout: 256 seconds]
xelxebar has quit [Ping timeout: 240 seconds]
xelxebar has joined #osdev
Arthuria has joined #osdev
xFCFFDFFFFEFFFAF has joined #osdev
Arthuria has quit [Ping timeout: 268 seconds]
Arthuria has joined #osdev
Arthuria has quit [Killed (NickServ (GHOST command used by Guest684531))]
Arthuria has joined #osdev
vaihome- has quit [Remote host closed the connection]
<clever>
kof673: gentoo is the same, the stage3 tarball is a complete working userland, but no kernel
<clever>
kof673: unpack that to a rootfs, chroot in, build/install a kernel, setup the bootloader
FreeFull has quit []
FreeFull has joined #osdev
heat57 has quit [Quit: Client closed]
heat has joined #osdev
xFCFFDFFFFEFFFAF has quit [Ping timeout: 240 seconds]