klange changed the topic of #osdev to: Operating System Development || Don't ask to ask---just ask! || For 3+ LoC, use a pastebin (for example https://gist.github.com/) || Stats + Old logs: http://osdev-logs.qzx.com New Logs: https://libera.irclog.whitequark.org/osdev || Visit https://wiki.osdev.org and https://forum.osdev.org || Books: https://wiki.osdev.org/Books
Celelibi has quit [Ping timeout: 244 seconds]
gog has quit [Quit: byee]
Celelibi has joined #osdev
kwilczynski has quit []
isaacwoods has quit [Quit: WeeChat 3.2]
iorem has joined #osdev
bas1l has joined #osdev
basil has quit [Remote host closed the connection]
jstoker has quit [Quit: *disappears in a cloud of bits*]
bas1l is now known as basil
jstoker has joined #osdev
V has quit [Remote host closed the connection]
ElementW has quit [Remote host closed the connection]
NieDzejkob has quit [Ping timeout: 244 seconds]
NieDzejkob has joined #osdev
V has joined #osdev
ElectronApps has joined #osdev
Sos has quit [Quit: Leaving]
NieDzejkob has quit [Ping timeout: 252 seconds]
NieDzejkob has joined #osdev
piotr_ has joined #osdev
sts-q has quit [Ping timeout: 265 seconds]
sts-q has joined #osdev
vdamewood has joined #osdev
vinleod has joined #osdev
vdamewood is now known as Guest4642
Guest4642 has quit [Killed (zirconium.libera.chat (Nickname regained by services))]
vinleod is now known as vdamewood
mctpyt has quit [Ping timeout: 252 seconds]
silverwhitefish has joined #osdev
smeso has quit [Quit: smeso]
smeso has joined #osdev
Izem has joined #osdev
ZetItUp has joined #osdev
<ZetItUp> damn thunderstorms in sweden right now
<ZetItUp> 3 days in a row
Izem has quit [Quit: Izem]
iorem has quit [Quit: Connection closed]
Izem has joined #osdev
Izem has left #osdev [Good Bye]
<geist2> wow
srjek_ has quit [Ping timeout: 244 seconds]
mahmutov has joined #osdev
mahmutov has quit [Ping timeout: 265 seconds]
Burgundy has joined #osdev
Izem has joined #osdev
<Izem> have you heard of any memory management scheme that tags any memory the os hands out?
<moon-child> what kind of tagging?
<Izem> nothing specific, just a marker of some sort
<moon-child> how do you want to mark it? According to its permissions (rwx)? According to which process requested it? Its size?
<Izem> oh sorry, process that requested it
<moon-child> hmm. With 47 bits of userspace address space, you could use 15 for a pid, leaving 32 bits of memory. That's not very much (and doesn't support very many processes)
<Izem> oh :/
<Izem> thanks, I think I can do some more research now
<moon-child> why do you want to do that?
<Izem> I was trying to think about a general interface for a garbage collector
<moon-child> oh. In that case you could use a small number of bits (4-5, maybe) as a hash of the pid, and use that as a fastpath, and fall back for potentially-nonlocal pointers
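(A minimal sketch of the tagging scheme moon-child describes: steal the top bits of the userspace address for an owner tag. Bit widths and helper names here are illustrative, not from any real kernel.)

```c
#include <stdint.h>

/* 47 usable address bits; reserve the top TAG_BITS for an owner tag
   (a full PID, or a 4-5 bit hash of one for the fastpath variant). */
#define ADDR_BITS 47
#define TAG_BITS  15
#define ADDR_MASK ((UINT64_C(1) << (ADDR_BITS - TAG_BITS)) - 1)

static inline uint64_t tag_ptr(uint64_t addr, uint16_t owner)
{
    return ((uint64_t)owner << (ADDR_BITS - TAG_BITS)) | (addr & ADDR_MASK);
}

static inline uint16_t ptr_owner(uint64_t tagged)
{
    return (uint16_t)(tagged >> (ADDR_BITS - TAG_BITS));
}

static inline uint64_t ptr_addr(uint64_t tagged)
{
    return tagged & ADDR_MASK;
}
```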
<meisaka> another idea would be to allocate a large table where each slot corresponds to a physical page, make each slot as big as needed
<meisaka> then store some reference in the slots, either the PID or some index to one
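(And a sketch of meisaka's table idea: one slot per physical page holding the owning PID, so pointers themselves stay untagged. Sizes and names are invented for illustration.)

```c
#include <stdint.h>

#define PAGE_SHIFT     12
#define MAX_PHYS_PAGES (1u << 20)           /* 4 GiB of RAM in 4 KiB pages */

static uint16_t page_owner[MAX_PHYS_PAGES]; /* one slot per physical page */

static inline void page_set_owner(uint64_t phys, uint16_t pid)
{
    page_owner[phys >> PAGE_SHIFT] = pid;
}

static inline uint16_t page_get_owner(uint64_t phys)
{
    return page_owner[phys >> PAGE_SHIFT];
}
```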
Izem has quit [Quit: Izem]
piotr_ has quit [Ping timeout: 264 seconds]
<vancz> are there any books that cover "modern" os developments?
<vancz> i feel like the question probably is "not even wrong" because cutting edge os research has always been happening I imagine
<vancz> we just have a bit more hardware now
ephemer0l has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
<kazinsal> You're probably more likely to find modern papers than books
<moon-child> cutting edge os research has not been happening for 2-3 decades
<kazinsal> Most traditional systems research courses these days are either Tanenbaum's Minix stuff or some sort of unix reimplementation course
<kazinsal> So you need to look at either masters theses or papers written by researchers from corporations
<moon-child> I mean--there's been some stuff. Look at usenix papers
<moon-child> but nothing really interesting; if you don't read anything, you won't miss much
<kazinsal> The last two interesting modern systems research implementations I can think of are Singularity/Midori (never realized as a commercial product) and Fuchsia/Zircon (currently being deployed)
<kazinsal> Singularity had the interesting idea of resilience through being completely managed code
<kazinsal> Part of its software stack ended up going into the Azure native language services stack at one point
<moon-child> yeah, I mean there's not _nothing_ happening
<moon-child> io_uring is another fairly recent development
<moon-child> but if you ignore everything that's happened since 2000--you're not missing much
elastic_dog has quit [Ping timeout: 250 seconds]
elastic_dog has joined #osdev
doug16k has quit [Remote host closed the connection]
sortie has joined #osdev
doug16k has joined #osdev
sortie has quit [Ping timeout: 265 seconds]
sortie has joined #osdev
<Belxjander> moon-child: each Application gets a 32bit address space to itself and then anything beyond that is "shared" within a 48bit pointer address space?
<moon-child> huh?
<Belxjander> moon-child: I'm personally looking at "tag-extending" address space handling in a virtual machine implementation...
<moon-child> you could do that, sure. Some applications want to handle >4g at once, you don't want to make that painful
<moon-child> 48 bits is what the CPU gives you (usually). Generally you give the kernel half of that, so userspace only gets 47 bits total. Relying excessively on page table mappings makes the TLB sad
<Belxjander> moon-child: well I am allowing for up to 2GB of direct addressing as "private" memory... and anything beyond that is "0xC0...00" flagged (0x80000000 bit flagging) as "4MB page" extended with an extended "BaseAddress" table based on a 2:6:24 flag:entry:offset arrangement
<Belxjander> but I haven't worked out all the specifics yet
<Belxjander> where the "entry" is an extended memory management table
<klange> followup to last night's vfs adventure: I figured out what 2012-me was thinking with this design and... well, I don't like it, but I fixed a refcounting bug and I'm resigned to leave the rest of the implementation as-is until a future VFS revisit...
<mjg> :)
<mjg> old me: biggest asshat
<klange> "what 2012-me was thinking" seems to have been "every 'open' operation produces a free-able 'fs_node_t' structure owned by the caller" but then also-2012-me went and screwed up how that works with mountpoints
<klange> also 2013+-me kinda ignored the cost of embedding method tables into everything and _should_ have used vtable pointers
zoey has joined #osdev
<klange> so a mountpoint is an fs_node_t with a -1 refcount, an fs_node_t can have multiple owners and is refcounted so that close() only happens once... but in my forgetfulness and age I forgot about the ownership thing and was kinda assuming you got the actual mountpoint instance?
<klange> So for a while there if you opened anything that was below a mountpoint, you'd end up with a leaked fs_node_t for the mountpoint because it never got its own refcount...
<klange> anyway it seems the system built on top of this was at least sane enough that fixing that didn't break anything
<klange> and to fix the original issue that led to this rabbit hole, the ramdisk objects now get a back-reference to their 'master' instance so ioctl can modify ->length on that and now FINALLY, /dev/ram0 reports 0 bytes after tmpfs migration and I can sleep again
<klange> How this probably should work is... get rid of the copies, refcount everything, being in the mount tree is a counted reference in itself, and nodes should be reused on opens because that's what an inode cache is for???
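(A rough sketch of the shape klange lands on: shared vtable pointers instead of embedded method tables, and a refcount in which being in the mount tree counts as a reference. The struct and names are hypothetical, not ToaruOS's actual fs_node_t.)

```c
struct vfs_ops;                      /* one shared ops table per fs type */

struct vfs_node {
    const struct vfs_ops *ops;       /* vtable pointer, not a per-node copy */
    int refcount;                    /* the mount tree holds one reference  */
    /* ... inode number, length, fs-private data ... */
};

static struct vfs_node *vfs_ref(struct vfs_node *n)
{
    n->refcount++;                   /* every open returns a counted ref */
    return n;
}

static void vfs_unref(struct vfs_node *n)
{
    if (--n->refcount == 0) {
        /* close and free exactly once, mountpoints included */
    }
}
```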
zoey has quit [Ping timeout: 252 seconds]
GeDaMo has joined #osdev
<vancz> kazinsal: yeah i keep tripping over singularity these days
<vancz> i suppose i should find some stuff out about fuchsia
<vancz> zircon_
<vancz> ?
<klange> moon-child: the most amazing thing about io_uring to me is that it's such an obvious idea in retrospect
<moon-child> a lot of great ideas are like that
<bslsk05> ​en.wikipedia.org: Burroughs MCP - Wikipedia
<moon-child> we had vdsos already. io_uring was just a natural extension of that
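(For reference, the io_uring submit/complete pattern under discussion, via liburing; this is the standard API, error handling omitted.)

```c
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    struct io_uring ring;
    char buf[4096];

    io_uring_queue_init(8, &ring, 0);            /* kernel-shared SQ/CQ rings */

    int fd = open("/etc/hostname", O_RDONLY);
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
    io_uring_submit(&ring);                      /* one syscall, batched I/O */

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);
    printf("read %d bytes\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    return 0;
}
```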
piotr_ has joined #osdev
<doug16k> if you run tune2fs -l on your root partition, is it configured with unlimited mount count, so it never ever runs fsck?
<doug16k> I just realized my machine hasn't fsck'd my install since 2016
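(The check doug16k is describing, for anyone following along; substitute your own root device. These are standard e2fsprogs flags.)

```sh
# -1 / "never" here means fsck is never forced by mount count
tune2fs -l /dev/nvme0n1p2 | grep -Ei 'mount count|last checked'
# re-enable periodic checks: every 30 mounts or every 3 months
tune2fs -c 30 -i 3m /dev/nvme0n1p2
```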
<moon-child> I've never fscked
<moon-child> zfs :3
<doug16k> I prefer filesystems that don't corrupt stuff
<moon-child> ?
<moon-child> zfs is, like, the _one_ fs I would trust to not corrupt my stuff
<klange> I have, like, one requirement for my filesystems and thankfully they've all met it so far, but there were some close calls.
<doug16k> moon-child, it does too much
<doug16k> it takes your data and scrubs it through your memory and cpu way more than a normal fs, giving it plenty of opportunities to mess it up
<moon-child> In principle, sure. In practice...the results speak for themselves
<moon-child> fat does barely anything; would you trust it?
<doug16k> a fairer comparison would be with something with a journal
<klange> I should revive my ext2 driver... it's been neglected for years and has yet to be ported forward to misaka...
<doug16k> the simplest thing that has a journal and good directory lookups (B+) is good enough
<doug16k> ntfs is fine
<doug16k> it's quite simple
<doug16k> range based, B+ tree directories, journal
<doug16k> perfect
<moon-child> good enough for what? If what you want is 'probably won't corrupt everything given a sudden power out' then sure
<moon-child> if you want snapshotting (small backups), protection against hardware failure (mirroring), not so much
<moon-child> and you need explicit fsck with a journal, not with cow (which was what led us here :P)
<doug16k> being cow makes errors impossible and ensures infallibility. neat
kingoffrance has quit [Ping timeout: 264 seconds]
<doug16k> isn't it more like "I can't fsck because you can't check zfs"?
<moon-child> eh, no. Btrfs is a cow fs and it's horrid :)
<moon-child> doug16k: no, you straight up don't need to fsck
<moon-child> if you have a powerout before committing a write, you'll still point to the old file
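(A sketch of why that holds: a CoW filesystem writes new copies first and flips a single root pointer last, so a crash leaves either the old tree or the new one, never a half-written mix. Every primitive below is a hypothetical stand-in, not any real filesystem's API.)

```c
#include <stddef.h>

typedef unsigned long block_t;
struct fs; struct file;

/* Assumed primitives over an allocator and the disk: */
block_t alloc_block(struct fs *);
void    write_block(struct fs *, block_t, const void *, size_t);
void    write_metadata(struct fs *, block_t, struct file *, block_t);
void    flush(struct fs *);
void    set_root_pointer(struct fs *, block_t);

void cow_update(struct fs *fs, struct file *f, const void *data, size_t len)
{
    block_t new_data = alloc_block(fs);
    write_block(fs, new_data, data, len);        /* old blocks untouched   */

    block_t new_meta = alloc_block(fs);
    write_metadata(fs, new_meta, f, new_data);   /* points at the new data */

    flush(fs);                                   /* make it all durable...  */
    set_root_pointer(fs, new_meta);              /* ...then one atomic flip */
}
```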
<doug16k> imagine being so cocky that you make a filesystem implementation, with no check?
<doug16k> that's next level
<moon-child> ?
<moon-child> what's there to check?
<moon-child> I mean, there is a resilver. Recheck hashes
<moon-child> scrub, not resilver
Sos has joined #osdev
<doug16k> every database needs a "pack/reindex", since 1950
<moon-child> that's a different operation
<doug16k> no it isn't
<doug16k> a filesystem is a database of metadata
<moon-child> pack/reindex's primary job isn't to check for errors, though, is it?
<doug16k> the B+ trees could end up pathologically half full
<doug16k> you could make the directories take half the space in the worst case
<doug16k> the leaves
<doug16k> it does use B+ trees right?
<doug16k> when it deletes, does it do the full operation? or does it just cheat and not really keep every leaf half full?
<doug16k> at least half full*
<moon-child> not sure on that count. But I don't see what that has to do with journaling/cow
<doug16k> it means you need to pack/reindex
<doug16k> because you could end up with most of the leaves in the B+ tree half full
<doug16k> or worse if it uses a shortcut/cheat delete algorithm for speed
<moon-child> but you don't need to pack/reindex with cow any more than you do with journaling
<moon-child> so what's your point?
<doug16k> fsck should be doing that compaction
<doug16k> does it do it online all the time?
<doug16k> it runs fsck forever while operating in a way?
<bslsk05> ​docs.oracle.com: Repairing a Corrupted File or Directory - Managing ZFS File Systems in Oracle® Solaris 11.2
<doug16k> use zsomething tool?
<klange> y'all fightin' over this stuff and i'm just happy none of my filesystems have murdered anyone
<moon-child> haven't murdered anyone YET! :)
<doug16k> fighting? na
<doug16k> moon-child is right. zfs does have a reputation for reliability
<doug16k> my inner pessimist sees all the data going into some overclocker's PRNG/DDR4 hardware
<moon-child> doug16k: afaik it tries to avoid fragmentation in the first place. Not super knowledgeable there. It does have issues with fragmentation though
<doug16k> which is "stable" at 4100MHz clock
<moon-child> like if you get above 80-90% usage writes start to get unusably slow
<meisaka> I think the most concerning thing about zfs is the amount of ram it eats through to run
<moon-child> doug16k: that's what ecc ram is for :)
flx has quit [Ping timeout: 244 seconds]
mctpyt has joined #osdev
piotr_ has quit [Remote host closed the connection]
<doug16k> I have ECC and use ext4
dormito has quit [Ping timeout: 258 seconds]
piotr_ has joined #osdev
<doug16k> and apparently haven't checked its validity since 2016
<doug16k> I have 64GB of it too. I should be using zfs, right?
<doug16k> my distro calls it "experimental". experimental is to me as a cross is to a vampire
<doug16k> I barely trust the production one
<moon-child> yeah zfs is a pain on linux
<moon-child> not in kernel, doesn't fstab properly
<moon-child> nice on freebsd, though
piotr_ has quit [Ping timeout: 268 seconds]
piotr_ has joined #osdev
<doug16k> Filesystem created: Sun Jun 25 23:52:13 2017 Last checked: Sat Dec 31 19:56:00 2016 <-- sounds legit
<meisaka> o.o
<doug16k> an example of why we have a thing like fsck - buggy code
<moon-child> 🤔
<doug16k> I love the TB written total though. that is pretty cool that it tracks that
<doug16k> only at 60TB out of 800TB rated write endurance on my nvme
<moon-child> I think SMART will tell you that too
<doug16k> yeah it says 1% wearout
<doug16k> ah it finally ticked over to 2%
<doug16k> it was 1 for ages
<doug16k> sudo nvme smart-log /dev/nvme0
<mjg> does it really change in a linear manner though?
<mjg> i would not be surprised if "TB written" was including block reallocation due to wear out
<doug16k> the 60TB figure is from ext4
<mjg> and in particular the last several % of supposed write endurance being eaten in a fraction of the time
<doug16k> available spare is 100%
<doug16k> "data units" written 55,082,205
<doug16k> probably 4KB each?
<doug16k> that'd be low though
<doug16k> superblock might exaggerate writes - does it include sparse space?
<doug16k> I create a lot of sparse space in my OS build, making the disk image
<doug16k> if they were 4KB then it would be 225GB, way low
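(The NVMe spec actually defines one "data unit" as 1000 512-byte blocks, i.e. 512,000 bytes, which lands between the two guesses above:)

```c
#include <stdio.h>

int main(void)
{
    unsigned long long units = 55082205ULL;  /* "data units written" above */
    /* NVMe spec: 1 data unit = 1000 * 512-byte blocks = 512,000 bytes */
    printf("per spec : %.1f TB\n", units * 512000.0 / 1e12); /* ~28.2 TB  */
    printf("if 4 KiB : %.1f GB\n", units * 4096.0 / 1e9);    /* ~225.6 GB */
    return 0;
}
```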
<doug16k> good luck getting a straight answer out of a storage product manufacturer though
<doug16k> the numbers fitting in some asinine struct were more important than being usable, when they designed SMART
<doug16k> they hand out a spec where you can make up all the numbers as you see fit. you can tell storage manufacturers were involved in that stupidity
piotr_ has quit []
<bslsk05> ​gist.github.com: gist:ca0bbc7f31b9404c9345eb9db0a52d5f · GitHub
<doug16k> seems new from those numbers
<doug16k> the only one slightly below flawless is spinup time
<doug16k> is it that good, or lying?
<doug16k> SMART Extended Comprehensive Error Log Version: 1 (6 sectors): No Errors Logged
<doug16k> HD lottery?
dormito has joined #osdev
<doug16k> the biggest surprise to me is how much total writing there is on all my filesystems. I figured filesystems were read-mostly. They aren't at all, it's about half reads, half writes, maybe even more writes than reads
<doug16k> my spinning drive has 14.33 billion sectors written, 14.32 billion sectors read
<GeDaMo> Maybe it's logging every read? :P
<doug16k> even bigger imbalance on my nvme, 48.8M "units" read, 55.0M units written
<doug16k> but, way more read commands. apparently writes are way bigger than reads: 1.4 billion read commands, 864M write commands
KidBeta has joined #osdev
<doug16k> GeDaMo, that would be a cool trick while reading 200MB/s linear :D
<warlock> good morning
<doug16k> hi
<warlock> doug16k: what ya into today?
<doug16k> just realized today that I have never fsck'd my root partition
<doug16k> since 2017
<warlock> oh
<warlock> still good?
<doug16k> yeah it's fine
<doug16k> seemingly anyway
<warlock> what is it on, ssd?
<doug16k> yeah, nvme
<warlock> figures
<doug16k> I find it odd how little emphasis developers put on their machine
<doug16k> people in most industries spend a fortune on tools
<warlock> yeah, I saw a meme about it the other day
<doug16k> imagine a mechanic spending 3k and he's set with a kick ass setup? lol
<doug16k> 3k for one special thing
<doug16k> some dumbass oil change equipment probably costs as much as a 32-processor machine that can build LTO clang
<warlock> it was showing a setup of a developer who makes games and everything, vs a tech/gamer
<warlock> obviously, they had a way better setup than the developer
<klange> okay, bit of clean up and I've got my old ext2 driver building as a little tool on Linux, should be able to poke around with it and get it reading files, can try to figure out what's wrong with writing as well :)
<klange> Wouldn't that be a nice feature for ToaruOS 2.0 to ship with... a persistent filesystem that actually works...
<doug16k> klange, make it so
<sortie> klange, hell yeah
<sortie> I tested my ext2 driver as a fuse filesystem on Linux
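(Sortie's approach needs surprisingly little scaffolding; a minimal libfuse 3 skeleton, with the hooks left as stubs to wire into a hobby-OS ext2 driver:)

```c
#define FUSE_USE_VERSION 31
#include <fuse3/fuse.h>
#include <sys/stat.h>
#include <string.h>
#include <errno.h>

static int my_getattr(const char *path, struct stat *st,
                      struct fuse_file_info *fi)
{
    (void)fi;
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode  = S_IFDIR | 0755;
        st->st_nlink = 2;
        return 0;
    }
    return -ENOENT;  /* TODO: look the path up in the ext2 driver */
}

static const struct fuse_operations my_ops = {
    .getattr = my_getattr,
    /* .readdir, .open, .read: delegate to the driver under test */
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &my_ops, NULL);
}
```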
<doug16k> I designed my fs interfaces with that in mind but never got around to it
<Belxjander> doug16k: on my own setup the imbalance is mainly reads over writes here... and it is a system I actively write code and test on
<doug16k> Belxjander, what's your ratio (approximately)?
<doug16k> way more reads than writes?
amanita_ has joined #osdev
<doug16k> I wonder why I write so much then
amanita has quit [Ping timeout: 258 seconds]
ghwerig has joined #osdev
<warlock> doug16k: maybe because you got a bigger block size?
<warlock> really I have no idea how you figure it out
ghwerig_ has quit [Ping timeout: 258 seconds]
ghwerig has quit [Ping timeout: 268 seconds]
dutch has quit [Quit: WeeChat 3.2]
dutch has joined #osdev
flx has joined #osdev
sol-86 has joined #osdev
sol-86 has left #osdev [#osdev]
<Belxjander> are there any known good libraries other than the python "chardet" module for identifying various text formats and encodings?
kingoffrance has joined #osdev
ahalaney has joined #osdev
<doug16k> lol, I randomly got latest gnuchess source. LTO build went berserk because it was full of one-definition-rule violations. fixed all that and threw -fsanitize=address at it, stack buffer overflows almost right away
gog has joined #osdev
MiningMarsh has quit [Ping timeout: 268 seconds]
MiningMarsh has joined #osdev
Sos has quit [Ping timeout: 265 seconds]
Ar0n has quit [Ping timeout: 258 seconds]
rwb has quit [Ping timeout: 258 seconds]
rwb has joined #osdev
KidBeta has quit [Ping timeout: 258 seconds]
andydude has joined #osdev
flx has quit [Ping timeout: 268 seconds]
ElectronApps has quit [Read error: Connection reset by peer]
flx has joined #osdev
LostFrog has quit [Ping timeout: 244 seconds]
PapaFrog has joined #osdev
dennis95 has joined #osdev
Arsen is now known as ArsenArsen
ArsenArsen is now known as Arsen
Arsen is now known as ArsenArsen
dutch has quit [Quit: WeeChat 3.2]
isaacwoods has joined #osdev
dutch has joined #osdev
<geist2> huh does it just use a ridiculous amount of stack
<geist2> or still got build problems?
<geist2> or the sanitize thing
<sortie> geist2: Ghost Harder
* sortie . o O (Best movie title for the geist sequel)
tacco has joined #osdev
pretty_dumm_guy has joined #osdev
YuutaW has joined #osdev
YuutaW has quit [Quit: WeeChat 3.1]
YuutaW has joined #osdev
mahmutov has joined #osdev
amanita has joined #osdev
amanita_ has quit [Ping timeout: 252 seconds]
kwilczynski has joined #osdev
brenns10 has quit [Quit: Ping timeout (120 seconds)]
brenns10 has joined #osdev
vdamewood has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
ArsenArsen is now known as Arsen
zoey has joined #osdev
maurer has joined #osdev
Ar0n has joined #osdev
ausserz has quit [Ping timeout: 258 seconds]
pretty_dumm_guy has quit [Quit: WeeChat 3.3-dev]
thinkpol has quit [Remote host closed the connection]
thinkpol has joined #osdev
tenshi has quit [Quit: WeeChat 3.2]
thinkpol has quit [Remote host closed the connection]
thinkpol has joined #osdev
<doug16k> found an underrun, it indexes into an array with movestr[strlen(mv)-1] and mv is an empty string
<doug16k> and movestr is a char array local variable
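(The pattern in question, with the obvious guard; mv and movestr are named as in the report, and the fix shown is illustrative rather than what gnuchess actually committed.)

```c
#include <string.h>

static char last_char(const char *mv, const char movestr[])
{
    /* buggy form: movestr[strlen(mv) - 1] -- with mv == "", the size_t
       subtraction wraps and the access lands before the array */
    size_t len = strlen(mv);
    return len ? movestr[len - 1] : '\0';   /* guard the empty string */
}
```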
<j`ey> whats it?
<doug16k> old gnuchess bug I found
<j`ey> I mean, in what program
<j`ey> o
<doug16k> it's also the first time I have seen one-definition-rule violations cause the program to crash all the time
<doug16k> there are several struct entry_t in the same namespace in different files
<doug16k> that differ
<j`ey> :|
<gog> whoops
<gog> also love how descriptive the tag is
<gog> entry_t
<gog> entry of what???
GeDaMo has quit [Quit: Leaving.]
dormito has quit [Ping timeout: 240 seconds]
andydude has quit [Quit: andydude]
nvmd has quit [Ping timeout: 268 seconds]
nvmd has joined #osdev
dormito has joined #osdev
srjek_ has joined #osdev
dennis95 has quit [Quit: Leaving]
heat has joined #osdev
<doug16k> gog, exactly
<doug16k> I changed them to book_entry_t book_file_entry_t etc
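(The failure mode in miniature: two translation units each defining a struct under the same tag with different layouts. It looks like valid C, but once the type crosses TU boundaries it is undefined behavior, an ODR violation in C++ terms, and LTO is entitled to miscompile it. Fields here are invented.)

```c
/* book.c -- one idea of entry_t ... */
struct entry_t { unsigned long long hash; unsigned short weight; };
struct entry_t *lookup(unsigned long long hash);

/* file.c -- ...and a conflicting one under the same name */
struct entry_t { char name[64]; long offset; };
struct entry_t *lookup(unsigned long long hash); /* same symbol, different
                                                    layout behind it */
```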
<heat> why are you trying to build gnu chess tho
gog has quit [Ping timeout: 268 seconds]
YuutaW has quit [Quit: WeeChat 3.1]
YuutaW has joined #osdev
<doug16k> partly curious if it has improved, partly curious how insanely fast my cpu can look ahead in depth, partly because it would be trivial to get working in an OS project, partly nostalgia because many years ago I made a win32 GUI port of it
<heat> ah
<heat> i've ported stockfish to my OS
<heat> literally effortless
<doug16k> is it pure stdin/out too?
<heat> yes
<heat> it's uci too
<doug16k> stockfish pisses me off a bit
<doug16k> "tested on two cpu machines" wut?
<heat> stockfish is crazy strong compared to gnu chess
<heat> i checked that out the other day
<doug16k> yes I know
<heat> 2800 vs 3500
<doug16k> it is 2nd strongest thing there is
<heat> it's the strongest
<doug16k> no, alpha zero mops the floor with stockfish
<heat> nah
<heat> it used to
<heat> but stockfish has NNUE too
<doug16k> they turned it back around?
<heat> yes
<doug16k> neat
<heat> the big rivalry these days is stockfish vs leela
<doug16k> I mean leela
<heat> where leela is more or less alpha zero
<doug16k> that is alpha zero
<doug16k> I thought
<bslsk05> ​en.chessbase.com: Leela Chess Zero: AlphaZero for the PC | ChessBase
<bslsk05> ​en.wikipedia.org: Top Chess Engine Championship - Wikipedia
Arthuria has joined #osdev
doug16k has quit [Quit: Leaving]
Arthuria has quit [Read error: Connection reset by peer]
Arthuria has joined #osdev
doug16k has joined #osdev
<doug16k> ok, then even touching gnuchess was pointless and stupid waste of time
<doug16k> so is osdev
doug16k has quit [Client Quit]
<hgoel[m]> hmm, for initial driver loading, would it be better to call the initialization code on a single thread and let the drivers spin up their own 'main' threads, or to spin up a new thread per driver right away? practically, spinning up a new thread per driver feels like the better way, but then I'm not sure how the OS would know when to proceed with further initialization
<heat> hgoel[m]: monolithic?
<hgoel[m]> I guess technically the stuff after should be able to handle drivers/devices coming up after
<hgoel[m]> yeah, modular but essentially monolithic
<heat> how I do it is that I essentially call the init code on the current thread
<heat> but yeah you can theoretically have a pool of threads and do it asynchronous like that
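(A sketch of that asynchronous variant, including the part hgoel asked about: how boot knows when to proceed. One thread per driver plus a counted rendezvous; all names are illustrative.)

```c
#include <pthread.h>

struct driver { const char *name; void (*init)(void); };

static pthread_mutex_t lk = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
static int pending;

static void *probe_one(void *arg)
{
    struct driver *d = arg;
    d->init();                         /* may spin up its own threads */
    pthread_mutex_lock(&lk);
    if (--pending == 0)
        pthread_cond_signal(&cv);
    pthread_mutex_unlock(&lk);
    return NULL;
}

void probe_all(struct driver *drv, int n)
{
    pending = n;
    for (int i = 0; i < n; i++) {
        pthread_t t;
        pthread_create(&t, NULL, probe_one, &drv[i]);
        pthread_detach(t);
    }
    pthread_mutex_lock(&lk);
    while (pending)
        pthread_cond_wait(&cv, &lk);   /* boot resumes once all probes end */
    pthread_mutex_unlock(&lk);
}
```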
<hgoel[m]> yeah, I guess for now I'll go back to synchronous and decide when I actually can work on stuff past drivers
Arthuria has quit [Read error: Connection reset by peer]
<heat> linux is actually trying out asynchronous probing for a few releases now
Arthuria has joined #osdev
<hgoel[m]> yeah, I imagine it's a pretty decent reduction in startup time
Arthuria has quit [Read error: Connection reset by peer]
<heat> it's opt-in I think
Arthuria has joined #osdev
<hgoel[m]> I see
<heat> i see a potential issue where with multiple devices of a specific type they get different names each boot or something
<heat> which is bad
<hgoel[m]> ah yes, that's a good point, I hadn't considered that
<hgoel[m]> yeah, that's enough to convince me towards single threaded init
<heat> OTOH, big speedup
<heat> hardware is slow, cpu is fast
<hgoel[m]> maybe a nice in-between would be to check for drivers which would have multiple instances and put the init for all of those on a single thread
<heat> every driver can have multiple instances (or should implicitly support that)
<heat> in all reality maybe a better solution is to establish stable mappings
<hgoel[m]> yeah, what I mean is keeping multiple devices which use the same driver from interfering on name choice
tacco has quit []
<hgoel[m]> oh well, seems like it's entirely down to preference, both seem to have pros and cons
<hgoel[m]> at least it seems like my code is robust enough for spinning up a thread per driver to still result in everything booting right for now, I'm still too scared to turn on SMP though lol, planning to put the extra cores on a simpler scheduler separate from the main one
<heat> the naming issue stuff is still a problem for synchronous too
<heat> imagine you upgrade your kernel, and then sda is USB and sdb is sata?
sortie has quit [Quit: Leaving]
<heat> purely because they switched positions in the .driver.init section or whatever
<hgoel[m]> ah I hadn't thought about that either
ahalaney has quit [Quit: Leaving]
<hgoel[m]> by naming issues I was mainly imagining things like two instances of the same NIC needing to consistently register in the same order
X-Scale has quit [Ping timeout: 268 seconds]
Burgundy has quit [Ping timeout: 268 seconds]
mahmutov has quit [Ping timeout: 252 seconds]
aquijoule_ has joined #osdev
richbridger has quit [Ping timeout: 265 seconds]
<immibis> on gentoo my network interfaces seem to be named by PCI bus ID
<immibis> which did change once when I added a new card and apparently triggered some PCI switch to change mode in the BIOS
Arthuria has quit [Ping timeout: 265 seconds]
<hgoel[m]> Hmm doesn't seem like there would be much of a way of practically handling that kind of situation
<hgoel[m]> Maybe drivers could use knowledge of the device to generate a hash and use that in subsequent boots to ensure that existing mappings are maintained
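(A sketch of that closing idea: derive the name from stable device identity, e.g. the PCI bus/device/function from immibis's example or a NIC's MAC, rather than from probe order. The hash and naming scheme here are invented.)

```c
#include <stdint.h>
#include <stdio.h>

/* FNV-1a over whatever identity the bus provides; the same device in the
   same slot hashes identically on every boot, independent of probe order. */
static uint32_t fnv1a(const void *p, size_t n)
{
    const unsigned char *b = p;
    uint32_t h = 2166136261u;
    while (n--) { h ^= *b++; h *= 16777619u; }
    return h;
}

struct pci_addr { uint8_t bus, dev, fn; };

void stable_nic_name(const struct pci_addr *a, char out[32])
{
    snprintf(out, 32, "eth-%08x", fnv1a(a, sizeof(*a)));
}
```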