MiningMarsh has quit [Remote host closed the connection]
MiningMarsh has joined #osdev
theyneversleep has joined #osdev
karenthedorf has quit [Remote host closed the connection]
karenthedorf has joined #osdev
edr has joined #osdev
antranigv_ is now known as antranigv
<Ermine>
Is there a way to fix damaged git repositories?
<Ermine>
.. re-clone is easier
<karenthedorf>
Define damaged. Define fix. :D
<karenthedorf>
git fsck and git reflog are your friends, but yes, often a reclone is easier.
<zid>
something something zfs rollback
rustyy has joined #osdev
xx0a_q has joined #osdev
junon has joined #osdev
<Ermine>
loose file is corrupt, yadda yadda
<junon>
Having a hell of a time switching out ttbr0_el1. Using Limine, it sets up a direct map that I then switch out. I've made sure TCR_EL1 matches the same format of the page tables and everything. I've verified the page tables are correct. I'm doing an isb, tlbi vmalle1, dsb nsh, ic iallu, dc isw,xzr, then another isb after I switch ttbr0_el1. I've made sure the access flags and everything match
<junon>
others that are known good (they translate fine). However when I run an `at s1e1r` on the newly mapped address, I get an access flag fault at level 3. I can't figure out why the pages before were fine, and what's different about my setup that makes the translation system unhappy. Any tips on what I should do to debug this?
<junon>
Accesses through the page tables set up by limine work fine, is what I mean. So the TCR_EL1 config (e.g. the bit that would disable TTBR0 translation is clear, etc.) is fine.
<junon>
It's only after I switch out the page tables for my own that things break.
MrCryo has joined #osdev
<junon>
Happy to dump out the page tables / output of the `at` instruction decode, along with decoded tcr if someone would like to look at them. I'm stumped at this point.
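An editorial aside on junon's symptom: an Access flag fault (as opposed to a translation fault) at level 3 means the walk reached a valid leaf descriptor, so the most likely culprit is that the new tables were built without the AF bit (bit 10) set in each descriptor, and TCR_EL1 hardware AF management (HA, part of FEAT_HAFDBS) isn't enabled. A minimal sketch of the switch sequence, with made-up names, assuming software-managed access flags:

```c
#include <stdint.h>

/* Access Flag: must be set in every valid descriptor unless hardware
 * AF management (TCR_EL1.HA / FEAT_HAFDBS) is enabled; otherwise the
 * first translation through the entry raises an Access flag fault. */
#define PTE_AF (1UL << 10)

static inline void switch_ttbr0(uint64_t new_ttbr0)
{
    __asm__ volatile(
        "msr ttbr0_el1, %0\n"
        "isb\n"            /* make the new base register visible     */
        "tlbi vmalle1\n"   /* drop stale stage-1 EL0/EL1 translations */
        "dsb nsh\n"        /* wait for the invalidation to complete   */
        "isb\n"            /* resynchronize the instruction stream    */
        :: "r"(new_ttbr0) : "memory");
}
```

If the `at s1e1r` decode shows PAR_EL1 reporting an AF fault, checking that every leaf descriptor ORs in PTE_AF is the quick test.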
karenthedorf has quit [Read error: Connection reset by peer]
bradd has quit [Ping timeout: 260 seconds]
guideX has joined #osdev
xx0a_q has quit [Quit: WeeChat 4.3.5]
hwpplayer1 has joined #osdev
Gooberpatrol66 has quit [Quit: Konversation terminated!]
<bslsk05>
horsle.glitch.me: Horsle- the horse-based word game
<GeDaMo>
:P
heat_ has joined #osdev
heat_ has quit [Client Quit]
Arthuria has joined #osdev
theyneversleep has quit [Remote host closed the connection]
xx0a_q has joined #osdev
junon has quit [Quit: thanks again!]
hwpplayer1 has joined #osdev
hwpplayer1 has quit [Read error: Connection reset by peer]
eddof13 has joined #osdev
X-Scale has quit [Quit: Client closed]
Arthuria has quit [Ping timeout: 276 seconds]
Turn_Left has quit [Remote host closed the connection]
Turn_Left has joined #osdev
josuedhg has joined #osdev
hwpplayer1 has joined #osdev
memset has quit [Remote host closed the connection]
memset has joined #osdev
goliath has joined #osdev
heat_ has joined #osdev
heat has quit [Read error: Connection reset by peer]
<vin>
> Is there a way to break down transparent huge pages to base pages? Will calling MADV_NOHUGEPAGE trigger the breakdown? -- continuing my question from yesterday. Assume a 2 MiB huge page at address 0xABC and you call madvise with start at (0xABC + 1 MiB) and a length of 1 MiB; this should not work, right? -- to clarify, here we are trying to punch a hole in the last half of a 2 MiB huge page.
<vin>
Afaik MADV_DONTNEED works only on page-aligned requests and on entire pages (base/huge).
hwpplayer1 has quit [Remote host closed the connection]
<heat_>
i already told you MADV_NOHUGEPAGE will not break down mapped thps
<heat_>
a trivial idea would be to mprotect the middle of a hugepage with a different prot, then mprotect it back - it might work
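For what it's worth, on recent Linux kernels an MADV_DONTNEED over a page-aligned sub-range of a THP does force the huge PMD mapping to be split so the range can be zapped, though whether the underlying compound page is broken up immediately is version-dependent. A small experiment one could run, assuming a 2 MiB THP size (treat it as an experiment, not a guarantee):

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define HUGE (2UL * 1024 * 1024)

int main(void)
{
    /* Over-allocate so we can pick a 2 MiB-aligned address inside. */
    char *raw = mmap(NULL, 2 * HUGE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (raw == MAP_FAILED) { perror("mmap"); return 1; }
    char *p = (char *)(((unsigned long)raw + HUGE - 1) & ~(HUGE - 1));

    madvise(p, HUGE, MADV_HUGEPAGE);  /* ask for a THP */
    memset(p, 0xaa, HUGE);            /* fault it in */

    /* Zap the last half; the kernel has to split the PMD mapping to
     * honor this, which is the "breakdown" being discussed. */
    if (madvise(p + HUGE / 2, HUGE / 2, MADV_DONTNEED))
        perror("madvise");
    return 0;
}
```

Checking /proc/self/smaps for AnonHugePages before and after the second madvise shows whether the split actually happened on a given kernel.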
heat_ is now known as heat
xenos1984 has quit [Ping timeout: 272 seconds]
xenos1984 has joined #osdev
qubasa has quit [Remote host closed the connection]
MrCryo has quit [Remote host closed the connection]
hwpplayer1 has joined #osdev
memset has quit [Remote host closed the connection]
memset has joined #osdev
hwpplayer1 has quit [Quit: brb]
gog has joined #osdev
<adder>
anybody got tests for their allocator running as a part of kernel init, as opposed to being tested from userspace by mocking stuff?
josuedhg has quit [Ping timeout: 256 seconds]
<heat>
my allocator test is "does it run"
<heat>
which i dont necessarily recommend but is far easier than unit testing little things
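A common compromise between heat's "does it run" and full userspace unit tests is a tiny smoke test invoked once from kernel init, right after the allocator comes up. A minimal sketch, assuming a hypothetical kmalloc/kfree/panic API (not any particular kernel's):

```c
#include <stddef.h>
#include <stdint.h>

void *kmalloc(size_t);      /* assumed kernel-provided API */
void kfree(void *);
void panic(const char *);

void kmalloc_selftest(void)
{
    for (size_t sz = 8; sz <= 4096; sz <<= 1) {
        uint8_t *p = kmalloc(sz);
        if (!p)
            panic("kmalloc returned NULL");
        /* Scribble over the whole allocation so overlapping or
         * undersized blocks blow up here, not much later. */
        for (size_t i = 0; i < sz; i++)
            p[i] = (uint8_t)i;
        for (size_t i = 0; i < sz; i++)
            if (p[i] != (uint8_t)i)
                panic("kmalloc block corrupted");
        kfree(p);
    }
}
```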
Turn_Left has quit [Ping timeout: 248 seconds]
Turn_Left has joined #osdev
gorgonical has joined #osdev
<mjg>
so a real technical note about pipe support, most notably with regard to writes
<mjg>
there is a rather nasty problem there
<mjg>
and nobody has it truly fixed per se, it is merely worked around afaics
<mjg>
here it goes: both read and write can stall indefinitely on a page fault, but in that case what to do about locking the pipe around the transfer?
<mjg>
notably other mechanisms (say poll, epoll/kqueue etc.) may want to take a peek to find out the state
<mjg>
but they can't sensibly do it if the lock owner is indefinitely off cpu
<mjg>
linux loses on accuracy by avoiding the same lock (and not providing a fully reliable replacement)
<mjg>
while everyone else has a lock dropped around the transfer
<heat>
why is that a problem?
<mjg>
and a magic flag set
<mjg>
heat: you can't block in poll indefinitely
<heat>
sure you can?
<mjg>
what
<mjg>
suppose i called with a bunch of fds and a timeout
<heat>
poll takes locks all the time
<mjg>
yes, but they don't stall indefinitely mofo
<mjg>
even in the bsd land it's stuff with a forward progress guarantee
<heat>
getting around that seems exquisitely terrible
<mjg>
it is
<heat>
i just hold the pipe mutex all around mon
<mjg>
that's bad
<heat>
might be
<mjg>
as i tried to explain above, it's not pleasant
<heat>
i dont really want pipe buf refcounts
<mjg>
nobody is doing pipe buf refcounts that i know of
<mjg>
nor do i think it would make things any easier
<heat>
how would you read without the lock without taking a ref in some way?
<mjg>
flag
<mjg>
which is a de facto hand-rolled lock
<heat>
flag for what?
<mjg>
which i was going to complain about before you rolled in
<mjg>
and solaris has the same thing expressed in a different way
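The workaround mjg is describing can be sketched: the pipe lock is dropped around the copy, and a flag (the de facto hand-rolled lock) marks the transfer in progress, so poll and friends can still take the mutex and peek at state even if the copy stalls forever on a page fault. All names below are illustrative stand-ins, not any real kernel's:

```c
#include <stddef.h>

/* Illustrative primitives standing in for a kernel's own. */
struct mutex; struct condvar;
void mutex_lock(struct mutex *); void mutex_unlock(struct mutex *);
void cv_wait(struct condvar *, struct mutex *);
void cv_broadcast(struct condvar *);

struct pipebuf {
    struct mutex *lock;
    struct condvar *cv;
    int busy;                  /* the hand-rolled lock */
};

int copy_into_pipe(struct pipebuf *, const void *, size_t); /* may fault */

int pipe_write(struct pipebuf *p, const void *ubuf, size_t len)
{
    mutex_lock(p->lock);
    while (p->busy)            /* serialize against other transfers */
        cv_wait(p->cv, p->lock);
    p->busy = 1;
    mutex_unlock(p->lock);

    /* No lock held: this can sleep indefinitely on a page fault
     * without blocking poll/kqueue, which only need p->lock. */
    int error = copy_into_pipe(p, ubuf, len);

    mutex_lock(p->lock);
    p->busy = 0;
    cv_broadcast(p->cv);
    mutex_unlock(p->lock);
    return error;
}
```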
<mjg>
bottom line tho i found that this causes perf trouble
<heat>
grok.dragonflybsd is ded
<heat>
maybe dflybsd went out of business :v
<mjg>
in a microbench where one thread writes shit and another reads, this results in avoidable off cpu trips
<mjg>
flooring perf
<mjg>
on the other hand in a real workload i found this accounts for about 10% off cpu time
<mjg>
namely when building shit at -j 104 bmake will block here *a lot*
<mjg>
so this is a real problem
<mjg>
with no satisfying solution that i see right now
<mjg>
(it would be much less of a problem if fucking make was not pounding on the pipe, but that's another story)
<heat>
i prefer the linux solution over shitlock
<heat>
even if not perfect
<mjg>
it is a hack but tolerable, except not trivially applicable to this fucker
<mjg>
in bsd i mean
<mjg>
because of how kqueue works
<mjg>
otherwise i would just bite the bullet
<mjg>
i shall also note that almost 90% of the off cpu time above is accounted for by a lol problem in the vm layer, which after getting fixed should make this pipe bullshit skyrocket
<heat>
what lol problem?
xx0a_q has quit [Quit: WeeChat 4.3.5]
<mjg>
locking a page is also using a hand-rolled mechanism which goes to sleep for any contention
<mjg>
so faults on the loader et al are just hanging out off cpu a lot
<heat>
le classique
<mjg>
things are funnier because once that happens the entire machinery unrolls the state and tries again
<mjg>
as in literally as if it just got there
<mjg>
everything looked up again 'n shit
<heat>
i think with the speculative page fault bs you mostly get around the page lock contention
<mjg>
this is just plain bad
<mjg>
where, in linux?
<heat>
yes
<mjg>
i don't see how this can happen
<heat>
why? it opportunistically maps everything around
<mjg>
at the end of the day you gonna have to synchronize inserting the page in some manner
<mjg>
well maybe they have something to backpedal, i have not looked into what they are doing
netbsduser has joined #osdev
<heat>
i tried to cause some contention on an unrealistically small file and couldn't do it
<heat>
might be different on huge systems but then ofc files aren't 4 pages large
<mjg>
were you faulting on something which was meant to be pulled up from a backing object?
<heat>
yes
<mjg>
to borrow from bsd verbiage
<heat>
basically speculative page faults grab the pte lock, then opportunistically grab pages and their locks (with try_lock)
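A cartoon of the opportunistic scheme heat describes: take the PTE lock, try_lock everything else, and bail to the ordinary fully locked fault path on any contention. Names are invented for illustration:

```c
#include <stdbool.h>

/* Illustrative stand-ins for a kernel's own primitives. */
struct spinlock; struct page; struct vm_area;
void spin_lock(struct spinlock *); void spin_unlock(struct spinlock *);
bool page_try_lock(struct page *); void page_unlock(struct page *);
struct spinlock *pte_lock(struct vm_area *, unsigned long);
struct page *lookup_cached_page(struct vm_area *, unsigned long);
void install_pte(struct vm_area *, unsigned long, struct page *);

/* Returns false on any contention; the caller then retries the
 * slow path under the full set of locks. */
bool fault_speculative(struct vm_area *vma, unsigned long addr)
{
    struct spinlock *ptl = pte_lock(vma, addr);
    spin_lock(ptl);
    struct page *page = lookup_cached_page(vma, addr);
    if (!page || !page_try_lock(page)) {
        spin_unlock(ptl);
        return false;
    }
    install_pte(vma, addr, page);
    page_unlock(page);
    spin_unlock(ptl);
    return true;
}
```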
<mjg>
i can't mess with it right now, when i was looking at page_fault* benchen vs a profile i found tons of contention
<mjg>
but on semaphores
<heat>
oh, mmap_sem?
<mjg>
that or the inode thing
<mjg>
i don't remember
<mjg>
but it did not scale particularly well at merely 32
<mjg>
it is plausible things improved since i last looked
<mjg>
it got off cpu time 'n shit
<mjg>
because of read vs write contention on rw shitter
npc has joined #osdev
<heat>
i think some/most page fault paths dont need the mmap_sem now
<mjg>
there are some corner cases where it falls off the fast path
<mjg>
and plausibly some are related to the level of encountered parallelism
<mjg>
as in it shits the bed if you pound on it too much
<mjg>
i think faulting on unrelated vmas scales fine
<heat>
i should try and get a vma_lock scheme working but i have other priorities atm
<heat>
but, hey, maple trees are dope
<mjg>
how is indoe reclamation going
<mjg>
i don't know if i can deploy onyx yet
<heat>
it'z not yet, i'm finishing off the vm work before restarting fs work
<mjg>
ok so things already suck at 24
<mjg>
well not suck but there is some breakage
<mjg>
i'm seeing 15% idle while running ./page_fault1_threads -t 24
<mjg>
namely instead of a flag i can store the lock owner
<mjg>
and apart from that add a lock which is only taken to serialize waiters
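That refinement is small enough to sketch: record the owner instead of a bare flag (so it is at least debuggable), CAS on the fast path, and take an auxiliary lock only to serialize waiters. All types here are illustrative stand-ins:

```c
#include <stdatomic.h>
#include <stddef.h>

struct thread;
struct mutex; struct condvar;
void mutex_lock(struct mutex *); void mutex_unlock(struct mutex *);
void cv_wait(struct condvar *, struct mutex *);
void cv_broadcast(struct condvar *);

struct handoff {
    _Atomic(struct thread *) owner; /* NULL when free */
    struct mutex *wait_lock;        /* touched only by contended acquirers */
    struct condvar *wait_cv;
};

void handoff_acquire(struct handoff *h, struct thread *self)
{
    struct thread *expected = NULL;
    /* Fast path: one CAS, auxiliary lock never touched. */
    if (atomic_compare_exchange_strong(&h->owner, &expected, self))
        return;
    /* Slow path: waiters serialize behind wait_lock. */
    mutex_lock(h->wait_lock);
    for (;;) {
        expected = NULL;
        if (atomic_compare_exchange_strong(&h->owner, &expected, self))
            break;
        cv_wait(h->wait_cv, h->wait_lock);
    }
    mutex_unlock(h->wait_lock);
}

void handoff_release(struct handoff *h)
{
    atomic_store(&h->owner, NULL);
    mutex_lock(h->wait_lock);  /* wake anyone parked on the cv */
    cv_broadcast(h->wait_cv);
    mutex_unlock(h->wait_lock);
}
```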
<heat>
a hack??? hacking is illegal sir
<mjg>
my bad
hwpplayer1 has joined #osdev
Turn_Left has quit [Remote host closed the connection]
Turn_Left has joined #osdev
xx0a_q has joined #osdev
Turn_Left has quit [Remote host closed the connection]
Turn_Left has joined #osdev
eddof13 has joined #osdev
Turn_Left has quit [Ping timeout: 244 seconds]
eddof13 has quit [Client Quit]
Turn_Left has joined #osdev
Left_Turn has joined #osdev
Turn_Left has quit [Ping timeout: 260 seconds]
netbsduser has quit [Ping timeout: 272 seconds]
Matt|home has joined #osdev
<Matt|home>
hi. im not sure if this is off-topic (prolly, if it is just ignore me) - im curious about something. i've been having to do a lot of OS installations lately using my shitty thumb drive, and im assuming each time i create the installer it doesn't literally zero out the drive every single time
<Matt|home>
so if it doesn't do that.. how does the drive differentiate between free memory and used? e.g. say some block of data is left over from a previous write, and that data is like a movie file or something. how does the thumb drive know to just ignore and safely overwrite that memory region
<Matt|home>
i assume there's some kinda table it keeps track of stuff somewhere
netbsduser has joined #osdev
GeDaMo has quit [Quit: 0wt 0f v0w3ls.]
<sortie>
Matt|home: Hi Matt, it's called filesystems.
<Matt|home>
oh the filesystem is responsible for that on drives? luls, go figure
<sortie>
The filesystem is a data structure on your hard disk and other devices whose job is to organize the data. For every directory, it has a listing of what files are inside it, and where their metadata is located. For every file, there is a list of where its data is stored.
<sortie>
There is also a table of what blocks on the disk are unused/used, so it can allocate new blocks to files when they grow in size, and make sure they aren't used for other files.
<Matt|home>
do most ppl write their own filesystems for os dev projects, or do they not bother/just use an existing one? im on the fence rn, just looking 4 opinions
tjf has quit [Quit: l8r]
<sortie>
When you reformat a filesystem, it usually only zeroes those tables. The actual data blocks on the disk are not zeroed, but nothing references them anymore, so the data isn't accessible.
tjf has joined #osdev
<sortie>
When those blocks are reassigned to a new file later, the blocks are zeroed on the first use, and then replaced with the new file contents. That's why people may be able to recover deleted data
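To make sortie's "table of used/unused blocks" concrete, here is a toy version of the idea (ext2 keeps one such bitmap per block group); illustrative only:

```c
#include <stdint.h>
#include <string.h>

#define NBLOCKS 8192
static uint8_t block_bitmap[NBLOCKS / 8]; /* one bit per disk block */

/* Find a free block, mark it used, return its number (-1 if full). */
long balloc(void)
{
    for (long b = 0; b < NBLOCKS; b++) {
        if (!(block_bitmap[b / 8] & (1 << (b % 8)))) {
            block_bitmap[b / 8] |= 1 << (b % 8);
            return b;
        }
    }
    return -1;
}

/* "Reformatting" in sortie's sense: wipe the table, not the data.
 * Every block becomes allocatable, but old contents stay on disk. */
void wipe_tables(void)
{
    memset(block_bitmap, 0, sizeof(block_bitmap));
}
```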
<zid>
I remember anally doing FULL FORMATS on single digit gigabyte drives and it taking forever <3
<Matt|home>
yeah .. i have PTSD flashbacks from the early days when my hardware didn't support this newfangled thing called "usb", and i was stuck with my goddamn floppy drive.. in 2003...
torresjrjr has joined #osdev
josuedhg has joined #osdev
<Matt|home>
nightmare ;_;
<sortie>
Matt|home: People do many different things. It is common for people to start out with an existing filesystem as their first filesystem. I recommend ext2 because it actually is easy and powerful, but not too simple: because it has all of the concepts done properly, it forces you to do good design.
<Matt|home>
mkay. yeh seems like there's a few with wide support, fat32, ntfs
<Matt|home>
im prooooooooollly lookin at fat32 or ext2, idk which yet
<Matt|home>
ty <3
<sortie>
Matt|home: ext2.
<Matt|home>
"website blocked due to trojan" i'll find the pdf elsewhere ;p
<sortie>
Just wget it or whatever
<Matt|home>
any particular reason over fat32 ? easier?
X-Scale has joined #osdev
<sortie>
FAT has the big problem that it's too simple, which actually makes it hard to implement. It is very inefficiently implemented and has weird choices. It doesn't have an inode concept, where the metadata for a file is stored in its own location; rather, the metadata is stored inside the directory entry (which is really, really weird)
<Matt|home>
copy that
<Matt|home>
and im guessing NTFS is overkill ?
<heat>
ntfs is fine
<sortie>
The FAT timestamps are also whack, with two-second precision (weird as fuck). The root directory on FAT12/FAT16 is also super weird. You also cannot ignore FAT12/FAT16 because the FAT type depends 100% on the size of the filesystem.
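For reference, this is roughly the 32-byte FAT directory entry sortie is complaining about, written from memory (so verify against the spec): all the metadata lives in the entry itself, and seconds are stored divided by two, hence the two-second precision.

```c
#include <stdint.h>

struct __attribute__((packed)) fat_dirent {
    uint8_t  name[11];    /* 8.3 name, space padded */
    uint8_t  attr;
    uint8_t  ntres;
    uint8_t  ctime_tenth;
    uint16_t ctime;       /* bits 15-11 hour, 10-5 minute, 4-0 seconds/2 */
    uint16_t cdate;
    uint16_t adate;
    uint16_t cluster_hi;  /* high half of first cluster (FAT32 only) */
    uint16_t mtime;       /* same packed format as ctime */
    uint16_t mdate;
    uint16_t cluster_lo;
    uint32_t size;
};

static unsigned fat_time_seconds(uint16_t t) { return (t & 0x1f) * 2; }
```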
<Matt|home>
hi heat <3
<heat>
hi
<heat>
lol two-second precision fuck me
<sortie>
Matt|home: Honestly I don't know too much about NTFS but I imagine it's complicated to do correctly. ext2 is in a very sweet spot where it has all the right standard Unix semantics but is dead simple because it doesn't have any advanced features.
<heat>
ext2 is cat poop
<Matt|home>
roger that <3 ty sortie
<heat>
ntfs is ok
<heat>
ext4 is the amish filesystem
<heat>
xfs if you use RHEL
<sortie>
heat: You gotta remember the context is the first filesystem for a newcomer to implement themselves.
<Matt|home>
before all y'all start arguin, keep in mind that i, the person who instigated the beginning of all this, had no opinion on the topic whatsoever ~
<sortie>
Not the choice for selecting a filesystem.
<heat>
ok
<heat>
yes for the first filesystem, ext2
<Matt|home>
look, at the end of the day it doesn't really matter
<heat>
frankly i'd go for ext3 but it's somewhat poorly documented
<Matt|home>
when i write my own it'll blow the others out of the water
<sortie>
ext2 teaches you a lot and it does things right. It doesn't do them super well in a modern context but it teaches you the right things.
<Matt|home>
it'll be the best commercial fs on the market, just need a little bit of time
<heat>
the journaling thing is eye opening and goated
<sortie>
heat: Well you basically implement ext3/ext4 by implementing ext2 and then implementing each of the extensions and improvements in order
<heat>
yes but a filesystem without journaling is, frankly, fucking bad
<heat>
or soft updates or whatever they're calling it
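Since the thread keeps circling journaling: the core idea is small enough to sketch. A write-ahead journal writes the updated blocks plus a commit record to a log area first, and only writes them to their real homes afterwards; after a crash, anything in the log with a commit record gets replayed. Everything below is a hypothetical stand-in, not any real filesystem's layout:

```c
#include <stddef.h>
#include <stdint.h>

/* Assumed primitive: write len bytes starting at disk block lba.
 * Returns 0 on success. */
int disk_write(uint64_t lba, const void *buf, size_t len);

struct txn_header {
    uint32_t magic, nblocks;
    uint64_t targets[16];   /* home block numbers for each logged block */
};

int journal_commit(uint64_t journal_lba, const struct txn_header *h,
                   const void *blocks, size_t blksz)
{
    /* 1. Log the header and the new block contents. */
    if (disk_write(journal_lba, h, sizeof(*h)) ||
        disk_write(journal_lba + 1, blocks, h->nblocks * blksz))
        return -1;
    /* 2. Commit record: once this is durable, the txn must replay. */
    uint32_t commit = 0xC0111117;
    if (disk_write(journal_lba + 1 + h->nblocks, &commit, sizeof(commit)))
        return -1;
    /* 3. Only now is it safe to write blocks to their real homes. */
    for (uint32_t i = 0; i < h->nblocks; i++)
        if (disk_write(h->targets[i],
                       (const char *)blocks + i * blksz, blksz))
            return -1;
    return 0;
}
```

The ordering (log, commit, then in-place) is the whole trick; a real journal also needs barriers/flushes between steps so the disk can't reorder them.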
<sortie>
Matt|home: I recommend reading that book I linked "Practical File System Design with the Be File System". It's easy to find online and it's full of good experience.
<Matt|home>
sigh, speaking of.. time for my................ 14th.. yep, 14th, installation of debian.. and hopefully my last install for a while
<heat>
>debian
<heat>
see i may have found your problem
<sortie>
Debian is fine. I use it.
<heat>
i love how you're my polar opposite
<heat>
also weren't you on mint before?
<sortie>
heat: You do you. Just remember this is osdev. People are beginners and need good journeys where they get to their goal in a reasonable fashion. It's fine for you to believe in the technical excellence and you're usually not wrong, but pay attention to what people actually need before selling them on your strong opinions that may not apply well to them.
<Matt|home>
hm. what the helllllllllllllll........ sigh. 4th time today i have to do another write. great
<Matt|home>
heat - hahah, i have an e-waste laptop and im just using it as a shitty fileserver/linux computer for quick coding bs
<sortie>
I was on mint years ago yeah but replaced that with debian a few years ago
<heat>
yes but what i am telling you is that journaling (or any other kind of way to guarantee data consistency) is essential if you're studying storage in general
<Matt|home>
okay, this is legit a problem
<Matt|home>
i have a laptop who's architecture i do _not_ know, and there's no model number anywhere
<Matt|home>
sheet
<Matt|home>
maybe in the bios..
<sortie>
heat: Indeed a topic people should learn. Like most topics, it's not strictly needed, but it helps, you know? The book does have a whole chapter on it
<zid>
everybody should write gameboy emulators
<heat>
ext2 is a solid-ish filesystem but still in most ways a derivation of the original UFS. like 1970s technology at its best
<Matt|home>
intel core m3 6Y30 at .9 ghz
<heat>
we don't look to the original v6 unix in how to write an OS, and we probably shouldn't directly look at ext2
<heat>
considering just ext3 is like... 24 years old or so
<heat>
ext4 itself may have hit the 15 yo mark
<sortie>
Indeed, that's why I recommend ext2. It has all the right Unix semantics to be minimal and it's simple and easy. It's better to learn than FAT.
<Matt|home>
hm.. im 90% sure this is i386 architecture, so i legit dfk why it's not recognizing the installer
<heat>
that's x86_64
<Matt|home>
.. are you serious
<heat>
yes
<Matt|home>
i downloaded three different ISOs and they're all the wrong architecture.. rofl. thanks man
<sortie>
x86_64 is surprisingly old
* Matt|home
is sad now
<Matt|home>
so is that shitty laptop :p
<sortie>
You still gotta be really old to not be x86_64
<heat>
6th gen is not that old
<sortie>
My 2004 machine was x86_64
<heat>
my laptop is 8th gen
Turn_Left has joined #osdev
<Matt|home>
uh, is x86_64 the same as amd64 ?
<heat>
yes
<sortie>
Yes
<Matt|home>
that's weird. why didn't the amd install work
<Matt|home>
... weeeeeeird
<sortie>
What are you booting off?
<Matt|home>
usb drive, creating installer in windows via rufus (as per suggestion on the official debian website)
<Matt|home>
so just to be safe i'll redownload the ISO, and reinstall it on the drive
<Matt|home>
i doubt it but maybe a packet got corrupted or smth, idk
<netbsduser>
ext2 is not unreasonable to look at
<netbsduser>
you could journal it without excessive effort (just look at ext3)
<heat>
correct, but if you want a good example off the bat, just look at ext3
<heat>
ext4 is okay but starts to get more fucky with the btrees
Left_Turn has quit [Ping timeout: 244 seconds]
<heat>
and the actual good features, like inline data and encryption, yadda yadda
<netbsduser>
yes, i suspect getting into btrees is probably essential for modern FSes, but it's a huge step up
<Matt|home>
to be safe i'll use a classic MBR
<heat>
probably, but for sane btrees themselves you want journalling to begin with
<heat>
or COW
<sortie>
Matt|home: Remember to run sync before you unplug it.
<sortie>
Check the sha256sum too
<Matt|home>
this is gonna take forever but im checking for bad blocks too
<Matt|home>
just to make sure, it is an old drive anyway
<sortie>
Did you run sync before you unplugged it? If not, that is likely the problem.
<Matt|home>
no i unplugged it while i was in bios
<Matt|home>
nbd
<sortie>
When you imaged the drive
<Matt|home>
oh yeah i properly ejected it
<Matt|home>
like i said it's an old drive, at least 10 years old, idk how much wear and tear it has but i assume at least a little. we'll see
<netbsduser>
the extent map in ext4, how does it do that? a btree?
<heat>
yea
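For the curious, ext4's on-disk extent tree node is just a small header followed by either index entries (interior nodes) or extents (leaves). From memory, so verify against the kernel sources:

```c
#include <stdint.h>

struct ext4_extent_header {
    uint16_t eh_magic;      /* 0xF30A */
    uint16_t eh_entries;    /* entries in use */
    uint16_t eh_max;        /* capacity of this node */
    uint16_t eh_depth;      /* 0 = leaf node */
    uint32_t eh_generation;
};

struct ext4_extent {        /* leaf entry: a logical -> physical run */
    uint32_t ee_block;      /* first logical block of the extent */
    uint16_t ee_len;        /* block count (top bit: unwritten extent) */
    uint16_t ee_start_hi;   /* high 16 bits of physical block */
    uint32_t ee_start_lo;   /* low 32 bits of physical block */
};
```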
<sortie>
I'm guessing there's a #debian channel where people can give you the exact help you need :)
<netbsduser>
not surprising
<Matt|home>
nah it's prolly just the installer, i've done it a trillion times already. i gots this, ty <3
<netbsduser>
zfs definitely opted for blocks
<Matt|home>
legit question, are you still using netbsd netbsduser? :D
<netbsduser>
but they are variably sized iirc
<netbsduser>
Matt|home: i'm called netbsduser not dg/uxuser
<netbsduser>
it's a current OS
<Matt|home>
wasn't zfs supposed to be like, the best filesystem ever or something
<Matt|home>
u are my hero :D
* Matt|home
snuggles netbsduser
<netbsduser>
it goes by the motto "the last word in file systems"
<Matt|home>
i legit wanted to slap freebsd on my laptop.. was the first OS i ever played around with after windows
<Matt|home>
but debian was quicker cuz i already had the iso's so, it lost out :p
<netbsduser>
i don't understand zfs much yet, i would like to
<Matt|home>
dw, once im done with mine you won't have to
<netbsduser>
jeff bonwick (who famously invented the slab allocator, which is now used by everyone) had a big hand in its design and he apparently leant on his experience in dealing with (main) memory
<dostoyevsky2>
netbsduser: I heard zfs is intermeshed so much with solaris that they ship half a solaris kernel with zfs... at least according to the OpenBSD guys
Arthuria has joined #osdev
<heat>
very wrong
<heat>
openbsd guys are experts in being wrong actually
<heat>
basically they emulate the solaris interfaces, but that's about it
<mjg>
wtf heat
<heat>
what mjg
<mjg>
insert your usual knee jerk denial when i claim breakage
<heat>
what
<mjg>
23:17 < heat> openbsd guys are experts in being wrong actually
<mjg>
hmm how about this one
<mjg>
"maybe you can admit people are doing their best and it's not all bad"
<heat>
haha great quote
<heat>
i'll tell you this though: being confidently wrong does piss me off to no end
<dostoyevsky2>
Well, the Solaris Porting Layer is 10K LoC, so I can understand the OpenBSD guys in saying: We're not going to port zfs to OpenBSD because the SPL makes it impossible to do an audit of the code
<Matt|home>
luls.. 30% after an hour.. oh well.. maybe we'll invent usb a trillion and be done with it already..
<heat>
they don't want to port zfs to openbsd because they think UFS is peak technology
X-Scale has quit [Ping timeout: 256 seconds]
<mjg>
they don't think that
<Matt|home>
...
<mjg>
zfs is legitimately incompatible with the obsd approach
<dostoyevsky2>
Matt|home: What FS did you find that has the most support on OSes so far?
<mjg>
but that's a comment on openbsd, not zfs
<Matt|home>
silly question.. actually i think i just realized it's a dumb one, i was thinking shells
<Matt|home>
but i was wondering if you could have as part of the installer a user's choice for what FS they want
<heat>
xfs!
<Matt|home>
and theoretically make it as easy as switching between shells :D
<Matt|home>
e.g. just run "gotontfs" boom ur on ntfs
<netbsduser>
xfs is unfortunately only the antepenultimate word in file systems
<Matt|home>
dostoyevsky2 - eh FAT32 seems to be the most widely supported , so we'll see if it works on my shitty laptop :p im still doing a block check on it tho
<dostoyevsky2>
Matt|home: couldn't you just emulate any FS within an nfsd in userland on any OS these days?
<Matt|home>
i legit have no idea
* Matt|home
packs a bowl and passes it around~
<heat>
dude believe it or not using nfsd as your personal FUSE isn't the greatest idea ever
<mjg>
i use nfs root hosted by my mips router
<mjg>
fuck the haters
Brnocrist has quit [Ping timeout: 260 seconds]
Brnocrist has joined #osdev
<kof673>
i like how everyone based their decision on licensing </sarcasm> :D
<Matt|home>
i didn't kof673 ?!
<Matt|home>
i want that on the record
<Matt|home>
i said _my_ fs choice would be my own bestest design which would blow every other one out of the water
<kof673>
</sarcasm> means sarcasm precedes that :D
<Matt|home>
literally on the internet markets or whatever it's called where you make money
<Matt|home>
my filesystem will be _universally_ adopted and fully supported across the spectrum, on everything from VHS players to weird chinese knockoff north korean "andrude" tablets with those weird plugs
<Matt|home>
just watch -_-
* Matt|home
goes to pass out
hwpplayer1 has quit [Read error: Connection reset by peer]
<sham1>
Matt|home: hopefully you don't pass out with your away nick
netbsduser has quit [Ping timeout: 265 seconds]
npc has quit [Remote host closed the connection]
hwpplayer1 has joined #osdev
netbsduser has joined #osdev
gcoakes has joined #osdev
goliath has quit [Quit: SIGSEGV]
gcoakes has quit [Remote host closed the connection]
gcoakes has joined #osdev
hwpplayer1 has quit [Remote host closed the connection]
netbsduser has quit [Ping timeout: 252 seconds]
gcoakes has quit [Ping timeout: 260 seconds]
Turn_Left has quit [Read error: Connection reset by peer]
xx0a_q has quit [Quit: WeeChat 4.3.5]
linearcannon has joined #osdev
linear_cannon has quit [Ping timeout: 260 seconds]
<Matt|home>
rofl.. my shit's so old it's literally 20x faster to transmit over wifi than by writing to my shitty USB drive :p maaaan
<kof673>
it is hopefully rare nowadays, but years ago ebay drives (like 18650 batteries) sometimes were sold with fake capacity... it would loop around and just start overwriting. IIRC rufus or one of those windows usb write programs mentions that
<kof673>
just throwing that out there...
<kof673>
it would never show up, until you hit whatever the actual capacity is
<Matt|home>
yeh i remember thoze
<kof673>
on linux i know you can get wildly different speeds e.g. with dd depending on block size, but rufus and such should handle all that
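The check kof673 alludes to (what tools like f3 do) is simple enough to sketch: stamp every block with its own index, read everything back, and see where the stamps stop matching; on a looping fake-capacity device the tail blocks come back holding early blocks' data. Destructive, obviously, and a sketch only, assuming POSIX I/O:

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BLK 4096

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s /dev/sdX\n", argv[0]); return 1; }
    int fd = open(argv[1], O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    uint8_t buf[BLK];
    uint64_t nblk = lseek(fd, 0, SEEK_END) / BLK;

    for (uint64_t i = 0; i < nblk; i++) {   /* stamp every block */
        memset(buf, 0, BLK);
        memcpy(buf, &i, sizeof(i));
        pwrite(fd, buf, BLK, (off_t)(i * BLK));
    }
    fsync(fd);
    /* NB: real tools use O_DIRECT or a remount here so reads come
     * from the device rather than the page cache. */
    for (uint64_t i = 0; i < nblk; i++) {   /* verify the stamps */
        uint64_t got;
        pread(fd, buf, BLK, (off_t)(i * BLK));
        memcpy(&got, buf, sizeof(got));
        if (got != i) {
            printf("block %llu holds %llu: capacity wraps here\n",
                   (unsigned long long)i, (unsigned long long)got);
            return 1;
        }
    }
    puts("capacity looks real");
    return 0;
}
```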