<bslsk05>
github.com: rust_os/Kernel/Modules/usb_xhci at master · thepowersgang/rust_os · GitHub
alpha2023 has quit [Quit: No Ping reply in 180 seconds.]
alpha2023 has joined #osdev
nyah has quit [Quit: leaving]
<dzwdz>
any advice on managing software ports?
<mjg>
Mutabah: the tomorrow that never comes
ghee has quit [Quit: EOF]
stux has joined #osdev
heat has joined #osdev
<heat>
dzwdz, run
<dzwdz>
too late
<heat>
what are you porting
<dzwdz>
i ported ed
<heat>
oh shit
<heat>
great choice
<heat>
which ed?
<dzwdz>
oed
<dzwdz>
which is based on the openbsd one
<heat>
yeah that should be pretty portable
<dzwdz>
my current setup is terrible though
<dzwdz>
`make clean; ./port ed clean; make -j4; ./port ed install; make -j4` if i want to build an image with ed
<heat>
add it to your makefile?
<dzwdz>
how would i handle the clean target?
the_lanetly_052 has quit [Ping timeout: 268 seconds]
<dzwdz>
i don't want make cleaning my main repo to also clean all the ports in the future
<heat>
why?
[itchyjunk] has joined #osdev
<dzwdz>
i run make clean on my main repo pretty frequently (i didn't set up header dependency tracking, etc)
<dzwdz>
it builds in a few seconds so it isn't a big deal
<dzwdz>
but once i go mad enough to build something bigger like gcc, that'd start being an issue
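(A minimal sketch of the split dzwdz is describing, keeping `make clean` scoped to the main repo and giving the ports their own opt-in target. The `./port` script is taken from the command quoted above; `kernel.bin`, `build/` and the target names are placeholders, not dzwdz's actual tree.)
```make
# Ports to build into the image.
PORTS := ed

.PHONY: all ports clean ports-clean distclean

all: kernel.bin ports

ports:
	set -e; for p in $(PORTS); do ./port $$p install; done

clean:          # only the main repo; ports are untouched
	rm -rf build/ kernel.bin

ports-clean:    # opt-in, so a frequent `make clean` stays cheap
	set -e; for p in $(PORTS); do ./port $$p clean; done

distclean: clean ports-clean
```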
<mjg>
ed? man of culture
<j`ey>
do joe next
<mjg>
now that is uncultured
<j`ey>
what about joe's window manager?
<dzwdz>
also, any recs for easy to port languages?
<sbalmos>
I'd rather turn to starboard
Matt|home has joined #osdev
Fannie_Chmeller has joined #osdev
<Fannie_Chmeller>
Dear #osdev, I am wondering about implementing CoW snapshots into my filesystem. I have a very abstract understanding of how it works in e.g. ZFS, Btrfs, Bcachefs, etc., but I am wondering what the concrete application of it looks like
saltd has joined #osdev
frkzoid has joined #osdev
<Fannie_Chmeller>
Let us compare it to virtual memory. There, to Copy-on-Write 'snapshot', you mark all the "blocks" with something to indicate they need to be copied, and you create a new "superblock" (a VM object) to "root" them. Perhaps you use a reference count to decide whether a 'block' (page) needs to be copied. But it goes without saying, no one will implement a reference count for every block in a filesystem! So clearly I approach this wrong
<Fannie_Chmeller>
How then does a man do it? Is there good documentation specific to how snapshotting works in a filesystem such as ZFS, APFS, Btrfs? I have seen only summaries of the on-disk structures in general, and abstract explanations, and no concrete description - yet it surely must exist
<clever>
Fannie_Chmeller: i believe the biggest trick zfs uses, is sequence numbers that act like timestamps
<clever>
every transaction done to the fs, has a transaction# on it
<clever>
every object you create, has the transaction# it was created, in its metadata
<clever>
objects can never be modified, so if you're changing a block in a file, you create a new version of that block, and a new version of the indirection tree pointing to those blocks
<clever>
for garbage collection, you can check the transaction number of the snapshots, to quickly tell when an object was made, relative to snapshots and the tip
<clever>
if a snapshot was made at T=10, it is now T=20, and the object you just replaced was modified at T=15, then it's garbage
<clever>
but if the object was modified at T=5, then the snapshot is referencing it
<clever>
Fannie_Chmeller: but that entire system assumes a linear history of all objects, and snapshots are just pointers to a certain time in history, which means you can't CoW between zfs datasets
<mrvn>
Fannie_Chmeller: In an FS you don't have anything that makes you decide to copy a block. You always copy the block. So all you are left with is figuring out which blocks are no longer referenced and that's a simple GC problem.
<mrvn>
Fannie_Chmeller: in memory you might keep reference counts or sequence number tricks or something. But again you would always copy the block on write.
<mrvn>
(unless you are something like ext2/3/4 which then breaks raid 1 because disks end up with different data)
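(A rough C sketch of the rule clever describes above; the names are illustrative, not ZFS's real structures. When a CoW write replaces a block, the old copy is immediately reclaimable only if it was born after the most recent snapshot; otherwise some snapshot still references it and it must survive until that snapshot is destroyed.)
```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t txg_t;   /* transaction number, acting like a timestamp */

/* Hypothetical per-block metadata: the transaction that wrote it. */
struct block {
    txg_t birth_txg;
};

/*
 * Decide whether a block just replaced by a CoW write is garbage now.
 * latest_snapshot_txg is the txg of the newest snapshot (0 if none).
 *
 * Using clever's example: snapshot at T=10, now T=20.
 *   - old block born at T=15 -> born after the snapshot, nothing else
 *     can reference it, free it immediately.
 *   - old block born at T=5  -> the snapshot still points at it, so it
 *     stays on disk until that snapshot is destroyed.
 */
static bool old_block_is_garbage(const struct block *old,
                                 txg_t latest_snapshot_txg)
{
    return old->birth_txg > latest_snapshot_txg;
}
```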
<Fannie_Chmeller>
Clever: Thank you, this is helping to transport me from "Total Ignorant" to only "Mostly Ignorant". I confirm my understanding: ZFS being a grand tree and being founded on copy on write, every object can be timestamped (i.e. versioned). Every time an object is modified, she is copied (and parent objects in the tree too, as it is a Merkle tree?). So the snapshot only needs to maintain a pointer to the root of the tree, and by holding that refer
<Fannie_Chmeller>
*by holding that reference, he preserves her from the garbage collection
<clever>
there are ~3 layers, from memory
<clever>
first is the file layer, where you have L0 blocks containing actual file data
<clever>
then L1 blocks that are just a big array of L0 block pointers
<clever>
repeat up the tree, until you have a single root
<clever>
each inode (but zfs calls it something else) then has a pointer to that
<clever>
i'm guessing the inodes are in a big list? and then repeat the indirection tree again?
<clever>
so now you have a root object for each dataset, containing the latest version of the inode tree? and the root dir inode#?
sikkiladho has quit [Quit: Connection closed for inactivity]
<clever>
and then do that all over again, to store the latest version of every dataset
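(Roughly, the layering clever describes could be sketched like this in C; the field names are purely illustrative, not the real ZFS blkptr_t/dnode layout. Leaf L0 blocks hold data, each higher level is just an array of references to the level below, and an inode-like object points at the top of its own tree.)
```c
#include <stdint.h>

#define PTRS_PER_INDIRECT 128   /* illustrative fanout, not ZFS's real one */

/* On-disk address of a block plus the transaction that wrote it. */
struct blkref {
    uint64_t offset;      /* where the block lives on disk */
    uint64_t birth_txg;   /* transaction number it was written in */
};

/* L0 blocks: raw file data.  L1 and above: arrays of refs to the level below. */
struct indirect_block {
    struct blkref child[PTRS_PER_INDIRECT];
};

/* Inode-like object ("dnode" in ZFS terms): root of one file's indirection
 * tree, plus how many levels deep that tree goes. */
struct file_node {
    struct blkref root;
    uint8_t       levels;   /* 1 = root points straight at L0 data */
};

/* Per-dataset root: the latest version of the tree of file_nodes plus the
 * inode number of the root directory, as clever guesses above. */
struct dataset_root {
    struct blkref file_node_tree;
    uint64_t      root_dir_inum;
};
```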
<mrvn>
clever: inodes are most certainly in a tree
<clever>
for ext2/3/4, inodes are just in a big flat on-disk array, and are modified in-place
<mrvn>
clever: yes, in ext2/3/4. But that isn't COW.
<clever>
yeah, was just giving another example, where it isn't tree based
<clever>
the uberblock array is at the start&end of the disk, and is just a flat on-disk array of every version of the entire pool
<frkzoid>
... many other system designs have come and gone, and some of those systems have had neat ideas that were nevertheless not enough to achieve commercial success" https://www.youtube.com/watch?v=7RNbIEJvjUA
<bslsk05>
'#rC3 - What have we lost?' by media.ccc.de (01:01:34)
<mrvn>
You basically can't do COW without it being a tree. No way to update data without rewriting too many blocks.
<clever>
and the uberblocks are padded up to block size
<clever>
so a failed block write doesn't trash neighboring uberblocks
<clever>
section 4.2, page 31, explains the MOS, which the uberblock points to
<clever>
and now i get a bit lost, not sure exactly where the inodes are held
<mrvn>
frkzoid: that talk is horrible
<Fannie_Chmeller>
ZFS is marvelous, but as we say in my country, it is like speaking Persian to an Arab. The terminology is divergent and it renders quick skimming impossible without a sound understanding.
justDeez is now known as justache
<clever>
Fannie_Chmeller: i mostly just hang out in #zfsonlinux and ask questions at odd hours, and i also use zfs on all of my systems
<frkzoid>
there's also #openzfs
<ckie>
this is an XY but how does the assembler know if I want a (x64) near/far return? they're two different opcodes and yet both "ret"?
<ckie>
it's gIANT and i typed q the first time and it was angry
<ckie>
:P
<zid>
ah I know it's either g or q depending on tool
<zid>
but which is which fuck knows
<ckie>
the lack of a .S spec is many things
<ckie>
(at least for x86, I guess)
<zid>
your limit is fucky in your cs btw
<zid>
has this selector been used before, or is this the first time it's being loaded?
<ckie>
there's already been a gdt before
<zid>
right but
<zid>
has this selector been loaded before
<ckie>
as in the offset or the addr+offset? used, first-time respectively
terminalpusher has joined #osdev
<zid>
selector 8 in this gdt
<zid>
if it has been loaded and used successfully before then I'm stuck, if it hasn't then it's just malformed (and looks it anyway because that limit value is weird)
<ckie>
ah, no, it's new
<zid>
the limit value being weird doesn't *make* it malformed though
<zid>
limit checking is not performed in long mode except on the gdt itself
<zid>
oops broke my qemu, depclean removed capstone
<ckie>
wiki says «As well, [paging] is strictly enforced in Long Mode, as the base and limit values are ignored.»
<zid>
Do you happen to have the exact value in selector 8
<zid>
available
<zid>
(meanwhile I am checking if qemu is supposed to output 0 for GDT= line)
<zid>
nope, GDT limit field
<ckie>
the descriptor is 0000FFFF,00209800 if that's what you mean
<zid>
your GDT is 0 bytes long
<zid>
might wanna fix that
* ckie
processes
<gog>
wait, it doesn't do limit checking for fs and gs also right, but base does matter for them?
gelatram has quit [Ping timeout: 252 seconds]
<zid>
GDT= ffffffff80015080 00000037 is what my qemu shows
<zid>
yours shows GDT= ... 00000000
<gog>
or can those only be set with fs gs base msrs?
<zid>
and my gdt is about 0x38 bytes long, so presumably yours is 0 bytes long
<zid>
or rather, 1
<ckie>
so my lgdt is guilty?
<zid>
no, your gdt structure
<zid>
lgdt loads a base + limit struct from memory
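(To make that base + limit struct concrete, a minimal C sketch of what lgdt consumes in long mode; the table contents and names here are illustrative, not ckie's code. The limit is sizeof(gdt) - 1, which is why a `GDT= ... 00000000` line in QEMU means a 1-byte table.)
```c
#include <stdint.h>

/* The 10-byte operand lgdt reads in long mode: 16-bit limit then
 * 64-bit base.  Must not be padded. */
struct gdtr {
    uint16_t limit;   /* size of the GDT in bytes, minus one */
    uint64_t base;    /* linear address of the first descriptor */
} __attribute__((packed));

/* Hypothetical 3-entry table: null, 64-bit code, data. */
static uint64_t gdt[] = {
    0x0000000000000000ULL,   /* null descriptor */
    0x00209A0000000000ULL,   /* 64-bit code: L=1, present, exec/read */
    0x0000920000000000ULL,   /* data: present, read/write */
};

static void load_gdt(void)
{
    struct gdtr gdtr = {
        .limit = sizeof(gdt) - 1,   /* 3*8 - 1 = 0x17, never 0 */
        .base  = (uint64_t)gdt,
    };
    __asm__ volatile("lgdt %0" : : "m"(gdtr));
}
```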
remexre has quit [Remote host closed the connection]
PapaFrog has quit [Read error: Connection reset by peer]
remexre has joined #osdev
PapaFrog has joined #osdev
saltd has joined #osdev
<watabo>
textinfo
watabo has quit [Quit: leaving]
<heat>
helo
remexre has quit [Remote host closed the connection]
saltd has quit [Remote host closed the connection]
remexre has joined #osdev
FreeFull has quit [Ping timeout: 268 seconds]
FreeFull has joined #osdev
matt__ has joined #osdev
frkzoid has quit [Ping timeout: 255 seconds]
smach has quit [Ping timeout: 260 seconds]
saltd has joined #osdev
SpikeHeron has quit [Quit: WeeChat 3.6]
SpikeHeron has joined #osdev
* saltd
being developed by British scientists. The idea is to produce electricity by catching flies and digesting them in special fuel cells that will break
<heat>
is a memcpy bound to be slow when decompressing an initrd onto a tmpfs?
<heat>
so, i've got a decompression stream
<heat>
I read the tar's header, parse it, create the file, blah blah; then I need to copy the data onto the file
<heat>
I could do it like the decompression stream does (point the decompression output buffer to an internal buffer and memcpy from that), but that seems slow
<heat>
I could preallocate tmpfs pages and then decompress a bit at a time onto the pages, but maybe the start-stop nature of this will also make it slow
<zid>
can you not sys_splice it so it doesn't memcpy
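(A sketch of heat's second option; the stream and tmpfs helpers here are stand-ins, not any real kernel's API. The idea is to hand the decompressor the mapped tmpfs page directly as its output buffer, so the only copy is the one decompression does anyway; whether the start/stop per page actually hurts would need measuring.)
```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in interfaces for the sketch -- not a real kernel's API. */
struct decomp_stream;
struct tmpfs_file;

/* Decompress up to `len` output bytes directly into `dst`; returns bytes
 * produced, 0 at end of stream, negative on error. */
long decomp_read(struct decomp_stream *s, void *dst, size_t len);

/* Return (allocating on demand) the kernel mapping of page `pgoff`. */
void *tmpfs_get_page(struct tmpfs_file *f, size_t pgoff);

#define PAGE_SIZE 4096u

/* Stream `size` bytes of the current tar member straight into the file's
 * pages: no bounce buffer, no extra memcpy. */
static int extract_file_data(struct decomp_stream *s,
                             struct tmpfs_file *f, size_t size)
{
    size_t off = 0;

    while (off < size) {
        size_t in_page = off % PAGE_SIZE;
        void *page = tmpfs_get_page(f, off / PAGE_SIZE);
        size_t want = PAGE_SIZE - in_page;   /* stay within this page */

        if (want > size - off)
            want = size - off;

        long got = decomp_read(s, (char *)page + in_page, want);
        if (got <= 0)
            return -1;   /* truncated or corrupt stream */
        off += (size_t)got;
    }
    return 0;
}
```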