klange changed the topic of #osdev to: Operating System Development || Don't ask to ask---just ask! || For 3+ LoC, use a pastebin (for example https://gist.github.com/) || Stats + Old logs: http://osdev-logs.qzx.com New Logs: https://libera.irclog.whitequark.org/osdev || Visit https://wiki.osdev.org and https://forum.osdev.org || Books: https://wiki.osdev.org/Books
gog has quit [Quit: byee]
oldgalileo has quit [Ping timeout: 264 seconds]
oldgalileo has joined #osdev
MrCryo has joined #osdev
MrCryo has quit [Remote host closed the connection]
MrCryo has joined #osdev
Terlisimo has quit [Quit: Connection reset by beer]
josuedhg has quit [Quit: Client closed]
Terlisimo has joined #osdev
oldgalileo has quit [Ping timeout: 272 seconds]
mctpyt has quit [Remote host closed the connection]
mctpyt has joined #osdev
MrCryo has quit [Remote host closed the connection]
netbsduser has quit [Ping timeout: 268 seconds]
heat has quit [Quit: Client closed]
Arthuria has joined #osdev
dza7 has joined #osdev
dza has quit [Ping timeout: 255 seconds]
dza7 is now known as dza
edr has quit [Quit: Leaving]
dasabhi has joined #osdev
<dasabhi> hello have any of you set up a serial connection to a dev machine for remote dev of your os
<dasabhi> as in a main dev machine connected to a console server and you ssh into the console server
<dasabhi> this way you can see all the boot logs as you start the machine
<dasabhi> all the linux kernel dmesg lines
<dasabhi> or whatever it's called in BSD
<Mutabah> Physically - not for a long time... but serial output is how I do most of my debugging in emulators
<dasabhi> yeah i know right now everything is on qemu
<dasabhi> but eventually will have to get to real metal
<Mutabah> One thing to beware of when doing physical serial is that it has a speed limit, so you need to block on the FIFO before writing
<zid> I had an emulator once that didn't implement the ready bit on the fifo
<zid> That was a pain in the arse to make builds for
<zid> had to have a separate target for hw vs emulator
<zid> and remember which one to use
<zid> (And rather than being always ready, it was always busy, ofc)
<zid> I guess I could have counted down from a hundred million or something and set a flag to enter a fallback mode, but I had limited hw access for testing
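A minimal sketch of the "block on the FIFO" approach for a 16550-style UART, with the bounded-countdown fallback zid describes: poll the Line Status Register's transmit-empty bit before each write, and give up on polling if the bit never appears. The COM1 base address and the spin limit are assumptions for illustration, not values from the discussion, and the port is assumed to be already initialised.

    #include <stdint.h>

    /* Assumed COM1 base; bit 5 of the Line Status Register (base + 5) is
     * "transmit holding register empty" on a 16550-style UART. */
    #define COM1_BASE 0x3f8
    #define LSR       (COM1_BASE + 5)
    #define LSR_THRE  (1 << 5)

    static int serial_broken; /* set once if the ready bit never shows up */

    static inline uint8_t inb(uint16_t port)
    {
        uint8_t v;
        __asm__ __volatile__("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    static inline void outb(uint16_t port, uint8_t v)
    {
        __asm__ __volatile__("outb %0, %1" : : "a"(v), "Nd"(port));
    }

    /* Wait for the FIFO to drain before writing; fall back to "fire and
     * forget" if the emulator never sets the ready bit. */
    void serial_putc(char c)
    {
        if (!serial_broken) {
            unsigned long spins = 100000000UL; /* arbitrary timeout */
            while (!(inb(LSR) & LSR_THRE)) {
                if (--spins == 0) {
                    serial_broken = 1;
                    break;
                }
            }
        }
        outb(COM1_BASE, (uint8_t)c);
    }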
Arthuria has quit [Ping timeout: 252 seconds]
<geist> other than that, on the host side there are lots of cheap usb serial adaptors and on the device side it gets interesting
<geist> if it's anything but a PC, especially arm or riscv dev boards, it'll almost certainly have some pins you can just wire to, and then you have serial
<geist> on a PC it's much more complicated, and/or not possible, based on the particular device
<geist> laptops in particular probably have no way to get serial out of them unless they're pretty old and have a DE9 on them
<geist> modern PCs still sometimes come with a serial header, but most of the time that's not brought out to an external connector
<geist> flip side is PCs are guaranteed to have a display built in, so you can draw to that
dasabhi has quit [Quit: Lost terminal]
MrCryo has joined #osdev
xenos1984 has quit [Read error: Connection reset by peer]
rustyy has quit [Quit: leaving]
MrCryo has quit [Remote host closed the connection]
xenos1984 has joined #osdev
TkTech has quit [Ping timeout: 264 seconds]
goliath has joined #osdev
node1 has joined #osdev
node1 has quit [Client Quit]
node1 has joined #osdev
npc has joined #osdev
Left_Turn has joined #osdev
macewentoo has quit [Remote host closed the connection]
macewentoo has joined #osdev
gbowne1 has quit [Remote host closed the connection]
Left_Turn has quit [Read error: Connection reset by peer]
Left_Turn has joined #osdev
oldgalileo has joined #osdev
Gooberpatrol_66 has quit [Quit: Konversation terminated!]
zxrom has quit [Quit: Leaving]
node1 has quit [Quit: Client closed]
oldgalileo has quit [Remote host closed the connection]
oldgalileo has joined #osdev
GeDaMo has joined #osdev
rustyy has joined #osdev
oldgalileo has quit [Ping timeout: 260 seconds]
netbsduser has joined #osdev
heat has joined #osdev
oldgalileo has joined #osdev
macewentoo has quit [Remote host closed the connection]
bliminse has quit [Quit: leaving]
navi has joined #osdev
* Ermine burps
<nikolapdp> rude
Jackneill has joined #osdev
oldgalileo has quit [Ping timeout: 268 seconds]
dalme has joined #osdev
oldgalileo has joined #osdev
zxrom has joined #osdev
npc has quit [Remote host closed the connection]
ZombieChicken has joined #osdev
bauen1 has quit [Ping timeout: 272 seconds]
mctpyt has quit [Remote host closed the connection]
mctpyt has joined #osdev
<Ermine> yes
* mjg burps
<mjg> nature calls
<Ermine> mt7921u please quit fucking over me
<Ermine> it got upset by rsyncing my music library
<Ermine> time to learn classic field theory?
MrCryo has joined #osdev
<nikolapdp> go ahead
<mjg> almost got my color scheme working in neovim
<Ermine> Anyway, there's a big bunch of fixes coming in 6.10
<mjg> linux?
<mjg> fuck that kernel
<Ermine> wifi fixes*
<mjg> be a proper linux user and make sure to cherry pick hw based on the os you are running
<Ermine> > fuck that kernel --- yes, embrace onyx
<mjg> as opposed to expecting hardware to work
<mjg> anyhow i may need to learn some neovim-specific lua to plug gaps 8(
<kof673> :colorscheme not_pessimal
ZombieChicken has quit [Quit: WeeChat 4.2.1]
<Ermine> With wifi there's another factor: hopefully mediatek haven't fucked up its firmwares
Arthuria has joined #osdev
<FreeFull> Well, besides Windows, is there any OS with better hardware support than Linux?
<FreeFull> ( And Windows hardware support is only good for recent hardware, not that great for older hardware any more )
<Ermine> I have literally ancient hardware and windows works fine on it
<Ermine> Also, ChromeOS and MacOS
<Ermine> They all have 'Just Works' factor much higher than linux
<Ermine> and Android
Gooberpatrol66 has joined #osdev
bauen1 has joined #osdev
<FreeFull> ChromeOS *is* Linux
npc has joined #osdev
Arthuria has quit [Ping timeout: 246 seconds]
<FreeFull> MacOS is specifically designed to work with Apple hardware (and the Apple hardware was designed to run MacOS)
<FreeFull> By "old hardware" I mean "Latest available driver was for Windows XP"
scaleww has joined #osdev
<nikolar> lol macos doesn't count it supports like 5 hardware configurations, especially on arm
FreeFull has quit [Ping timeout: 246 seconds]
FreeFull has joined #osdev
josuedhg has joined #osdev
<heat> onyx
<Ermine> macos just works on configurations it specifically supports
<Ermine> 'Chromeos is linux' is like 'android is linux'
<heat> chromeos is far more linux than android
<heat> even though kernel-wise they're both very linux
<Ermine> upstart upstart upstart
<nikolar> upstart upstart upstart
<Ermine> nonetheless, chromeos 'just works' unlike other linux distros
<heat> i would personally feel comfortable calling android linux
<heat> even if it's not GNU/Linux or whatever the weirdos care about
<nikolar> eh, it's very different in every way other than the kernel when you compare it to the rest of the linux world
<heat> and?
<Ermine> is caring about workitude weird?
<heat> linux kernel, linux utils, posix shell, posix utils
<nikolar> heat: and that's why people don't call it a linux distro?
<heat> i'm not calling it a distro
<heat> because it's far more than a distro
<heat> but it is linux
<nikolapdp> linux is the kernel
<nikolapdp> if we're being pedantic
josuedhg has quit [Ping timeout: 250 seconds]
<nikolapdp> so no android isn't linux, it's built on linux
<Ermine> well anyway
<heat> is RHEL linux?
<nikolapdp> no, it's a linu distro
<nikolapdp> *linux
<heat> what
<Ermine> as L in its name suggests, yes
<heat> it's barely even linux, the red hat kernels are almost unrecognizable from upstream
<nikolapdp> heat: like define "linux"
<heat> linux kernel
<heat> well, OS built on top of the linux kernel
<nikolapdp> so a distro can't be linux because it's a distro, not the kernel
<heat> which all of these are
josuedhg has joined #osdev
<heat> a distro is not really an OS
<Ermine> RHEO when
<nikolapdp> heat: wat
<nikolapdp> distro is an os
<heat> no
<nikolapdp> what do you mean no
<nikolapdp> what is it then
<heat> what operating system do you use? fedora or linux?
<heat> distros just package shit and hand it to you
<nikolapdp> yes that's what an os is
<Ermine> Anyway, if linux networking pisses me off enough, I'll go and make wifi on onyx
<nikolapdp> Ermine kek
<heat> no, that's not what an OS is
<nikolapdp> and what is an os
<nikolapdp> is it just the kernel?
<heat> windows is full of its own code and utilities, it's not repackaging 3rd party software
<nikolapdp> and?
<heat> android is full of its own code and utilities, it's not repackaging 3rd party software
<Ermine> If you're a nerd: OS is a program that manages, abstracts and multiplexes computer hardware resources
<heat> same for macOS and freebsd and ...
<heat> a distro is just a distribution of software
<heat> linux was originally "The OS"
<Ermine> If you're a normie: OS is a thing you use to run your programs
<nikolapdp> there's basically no "first party software" for linux
<Ermine> "install and run"
<heat> red hat is the closest you have to first party software with them doing most of the funding for software work on linux
<heat> but it's not quite there
<nikolapdp> so no first party software, as i said
<Ermine> depends on pedanticness level
<kof673> ^^^ /me holds up ouroboros round and round it goes, where it lands nobody knows
<Ermine> yeah guys you're bikeshedding basically
<heat> whaaaaaaaaaaaat
<heat> i thought i was solving world hunger here
<nikolapdp> i mean i told heat why no one is calling android "linux" and he went on a rant so
<Ermine> see, you've started this thing
<heat> no one is calling android linux because they're a bunch of stuck up idiots that can't cope with it not using the rest of the "GNU/Linux" crapware
<nikolapdp> no
<heat> i guarantee you if it used the GNU coreutils and glibc and all that shit, it would be a linux
<Ermine> c'mon
<heat> but google went NIH, as they usually do
<nikolapdp> so alpine isn't linux?
<Ermine> google can afford that
<nikolapdp> since they use busybox and musl
<heat> alpine has plenty of gnu software
<Ermine> yandex has high NIH factor for its infra too
<heat> very good russian libc yes
<Ermine> huh? they don't produce their os/distro/whatever you call it
<Ermine> I mean, big IT companies have NIH stuff and they can afford it
<mjg> ttps://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git/commit/?h=vfs.all&id=244ebddd34a0ab7b1ef865811864136873f4b67c
<bslsk05> ​git.kernel.org: fs: don't block i_writecount during exec - kernel/git/vfs/vfs.git - VFS tree
<nikolapdp> isn't google doing some weird semi-rolling release debian thing for their servers
<mjg> now this is interesting
<mjg> blocking writes for text mappings was considered a given in unix
<Ermine> interestingly enough, bslsk05 managed to read this borked link
<nikolapdp> why's that surprising
<Ermine> ttps://
<nikolapdp> ah didn't even not
<nikolapdp> notice
<mjg> huh
<mjg> how did that happen
<nikolapdp> i guess it might be stripping the protocol
<mjg> ye
<mjg> i'm mostly curious about the copy without the h
<nikolapdp> kek
<mjg> i just double clicked
<Ermine> mjg: so linux is a little bit more optimal?
heat has quit [Quit: Client closed]
<mjg> ?
<Ermine> less locking
<mjg> well there are fewer atomics per exec, but they are inconsequential given actual perf breakage there
<mjg> of which there is plenty
MrCryo has quit [Remote host closed the connection]
heat has joined #osdev
<heat> mjg good riddance
<heat> denywrite was a mistake
<mjg> :O
<heat> literally useless
<mjg> i wonder if the freebsd vm can cope as is
<mjg> should one do similar whacking there
<heat> nikolapdp i don't know but that's a pretty standard idea
<heat> debian + your own handrolled kernel
<heat> then whatever patched software you need overrides the debian package servers
<Ermine> debian gnu/onyx
<mjg> does onyx deny writes/
<heat> no
<mjg> of course not
<heat> literally useless
<heat> just spoke out against it
<mjg> now you gonna claim instead of being a lazy PoS you had seen ahead
<heat> unless your shit package manager opens the file and writes
<heat> brother
<heat> MAP_DENYWRITE has been a noop for like half my lifetime
<heat> i had to see behind not ahead
<mjg> :X
<Ermine> is it complaint behaviour
<Ermine> compliant*
<heat> oh my bad
<mjg> i would argue it does help *sometimes*
<heat> the noop MAP_DENYWRITE is older than me by 6 years
<mjg> for example joe f. random tries to replace libc with cp
<mjg> they get a nice error
<mjg> does not help if they are determined to fuck up, but i'm assuming no malice
<Ermine> ah ok
<heat> mjg: that's if cp truncates + writes and doesn't just replace the file outright
<mjg> cp does not rename over it or shit
xenos1984 has quit [Read error: Connection reset by peer]
<mjg> it is part of the expected behavior
<mjg> say you have few precious MB left and are overwriting a multi G file
<mjg> cp is going to sort it out in place => no problem
<mjg> while preserving funky attributes
<heat> you could just unlink
<mjg> and then what
<heat> like fucking pacman does
<mjg> you got time window where the file is not there
<heat> yes
<heat> i don't think cp is required to be atomic in any way
geros has joined #osdev
<mjg> you also lose funky attributes on the file
<mjg> also what about other hardlinks to the sucker
<mjg> now it's a new file!
<mjg> it's literally incompatible, legally observable behavior
oldgalileo has quit [Ping timeout: 246 seconds]
geros has quit [Client Quit]
<heat> if you say so
<heat> i'm not aware of any of the legalese around cp
<mjg> i am saying a user can have 2 hardlinks to one inode, which is nothing outlandish
<mjg> and that current cp behavior will modify the content without disrupting the above
<heat> but are you required to preserve hardlinks?
<mjg> one does not need to be a standard lawyer
zxrom has quit [Quit: Leaving]
<heat> the gnu man page doesn't mention any of that and I CBA to open posix
<mjg> i am not saying cp was always required to do it by some standard, i am saying that its current behavior *does* preserve it
<heat> nevertheless cp does O_TRUNC
<mjg> ya which wont work with denied writes
<heat> have i told you that on linux truncation fucks with MAP_PRIVATE mappings?
<mjg> ?:D
FreeFull has quit [Remote host closed the connection]
<heat> truncate on a MAP_PRIVATE'd file also discards CoW'd pages
<Ermine> cp does ftruncate instead of open(O_TRUNC)???
<heat> which is permissible by the standard
<Ermine> > i don't think cp is required to be atomic in any way --- iirc it can't under unix semantics
xenos1984 has joined #osdev
<Ermine> at least cp -r
<heat> -r is impossible to be atomic
<heat> anything else sure is
<heat> hmm, actually not possible without mandatory file locks
<heat> at least on the src file, but totes possible on dest
<Ermine> or without fs transactions, but that's not unix
<heat> unix has transactions
<heat> it's called rename
<Ermine> that's "transaction"
<Ermine> s6-rc uses that actually, and this is slow
<heat> it depends
<heat> it should not be that slow
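A minimal sketch of the rename()-as-transaction pattern heat refers to: write the new contents to a temporary file, fsync, then rename over the target, so readers see either the old file or the new one, never a half-written mix. The path names are made up for illustration, and, as mjg notes above, this does not preserve the original inode, its attributes, or other hardlinks.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Atomically replace "path" with new contents via a temp file plus
     * rename(); rename over an existing target is atomic on POSIX
     * filesystems. */
    int replace_file(const char *path, const char *tmppath,
                     const void *buf, size_t len)
    {
        int fd = open(tmppath, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;

        if (write(fd, buf, len) != (ssize_t)len || fsync(fd) < 0) {
            close(fd);
            unlink(tmppath);
            return -1;
        }
        close(fd);

        return rename(tmppath, path);
    }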
<Ermine> ntfs in windows has actual transactions
<Ermine> however microsoft deprecated them
<heat> ext also has transactions but its journalling
<Ermine> it's not api
<heat> yeah you'd need to expose a bunch of shit in the VFS APIs
<netbsduser> the question is whether you want transactional fs ops or not
<Ermine> The question is whether you want unix
<Ermine> but that's the question for the next arc
<heat> no i want onyx
oldgalileo has joined #osdev
<bslsk05> ​lore.kernel.org: [RFC] ML infrastructure in Linux kernel - Viacheslav Dubeyko
FreeFull has joined #osdev
<Ermine> habits are hard. I tried to highlight stuff with ctrl-v in vim
<Ermine> oops, in vscode
<heat> lol
bauen1 has quit [Ping timeout: 268 seconds]
josuedhg has quit [Quit: Client closed]
josuedhg has joined #osdev
m5 has joined #osdev
<bslsk05> ​twitter: <mycoliza> what, and i cannot stress this enough, the fuck https://pbs.twimg.com/media/GPUUFSdasAEpHWp.jpg
<bslsk05> ​[aiBIOS leverages an LLM to integrate AI capabilities into Insyde Software’s flagship firmware solution, InsydeH2O® UEFI BIOS. It provides the ability to interpret the PC user’s request, analyze their specific hardware, and parse through the LLM’s extensive knowledge base of BIOS and computer terminology to make the appropriate changes to the BIOS Setup. This breakthrough technology helps address a major hurdle for PC users that require or desi
<heat> what the fuck
gog has joined #osdev
<bslsk05> ​'Open the Pod bay doors, please, HAL.' by batmanmmv (00:03:06)
<kof673> i will stop linking that, just need a little man floating in space emoji
<GeDaMo> 👨‍🚀
m5 has quit [Ping timeout: 250 seconds]
bauen1 has joined #osdev
<Mondenkind> aaaaaaaaaaaaaaaaaaaaaaaaaaaa
<nikolapdp> i don't often agree with heat but
<nikolapdp> what the fuck
scaleww has quit [Quit: Leaving]
<Mondenkind> i always agree with heat
<Mondenkind> i would do anything for heat
netbsduser has quit [Ping timeout: 240 seconds]
<Mondenkind> https://linux.die.net/man/3/j0 c try to have sensible function names challenge (impossible)
<bslsk05> ​linux.die.net: Just a moment...
netbsduser has joined #osdev
<gog> lmao
<gog> but terseness
<mjg> :d
<mjg> i was going to make a joke that this is where the 'e' went
<mjg> but it's not there
<gog> i know where the e went
* Ermine gives gog a piece of cheese
* gog is fascinated
<mjg> i do too
<mjg> opEnbsd
<gog> good for her
<heat> Mondenkind <3
<mjg> SEE ALSO y0(3)
bauen1 has quit [Ping timeout: 268 seconds]
<heat> have you seen the awful random functions
<mjg> is that gangsta shit in the man page
<mjg> where is dawg(4)
<mjg> erm, dawg(3)
<heat> rand, random, lrand48, drand48, etc
<Mondenkind> there is cat(1)..
gildasio has quit [Ping timeout: 260 seconds]
gildasio has joined #osdev
zhiayang has quit [Quit: oof.]
zhiayang has joined #osdev
<mjg> i ordered a funny fragrance 3 days ago
<mjg> still has not arrived, the delivery company claims their systems got fucked and there is going to be a delay
<heat> will it make you funnier?
<mjg> chances are decent it waits in HEAT
<nikolapdp> don't they always
<mjg> that is it is going to arrive fucked up
<Ermine> I have cat without (1)
<mjg> nikolapdp: no
<nikolapdp> your fragrance is waiting in heat
<nikolapdp> is he smuggling it
<mjg> excluding fedex i always had next day delivery
<mjg> yes in a suitcase
<mjg> if you know what that is
<Ermine> what a coincidence
<heat> yes suitcase lets say that yes
<nikolapdp> kek
<mjg> it's an actual term
<mjg> :]
<Ermine> there's a local delivery company which systems got fucked up as well
<mjg> neighbour? how you doin
<mjg> probably some webdev forgot a where clause in a delete query
<mjg> and there are no backups because you don't need them in the cloud
<heat> russian counterfeit perfume mjg? really?
<mjg> it's basically vodka
<mjg> what's wrong with it
<Ermine> that company got hacked and its databases got encrypted, including backups
<mjg> actual true story?
<heat> OH vodka perfume sounds wonderful
<mjg> cause i can easily believe you
<Ermine> that's what I've heard from news
<mjg> lmao
<heat> classic ransomware attack
<heat> i mean encrypting backups isn't normal unless this was either targeted or their backups were fucking on-site
<mjg> there was a big polish company which kept backups on the same physical media as production data
<mjg> then one day it all got fucked
<mjg> how did that happen remains a mystery
<Ermine> fk
<Ermine> I didn't actually think about bringing backup disks offline
<mjg> what's the name of the company?
<mjg> my fuckers are packeta.com
<heat> completely unrelated but this somehow reminds me of that thing where some gitlab guy accidentally deleted the wrong database file
<Ermine> SDEK
<mjg> heat: i know the story, 7/10 kek
<heat> Ermine: not just offline but imagine if you have e.g a fire
<heat> better have them off-site
<heat> ideally replicated
<mjg> well you reminded me of a great social engineering story
<mjg> if one can call it that
<mjg> at my first gig i was a sysadmin at a small firm, it was renting office space in a big office complex
<mjg> one night the alarm tripped off, i got a call so went to check it out
<mjg> some fucker did not close the window and it was super windy
<mjg> some shit was flying around
<Ermine> heat: good point
<mjg> i tell the dudez alarm is off now and i'm bailing, they say great
<mjg> few minutes later one of the guards runs up to me on the street asking if my name is xyz
<mjg> cause that's the authorized person
<mjg> :d
<Ermine> So the backup scheme is like 1) Fetch disk from off-site location, 2) upload new backup onto it, 3) disconnect and bring it back to its location?
<heat> use S3!!
<heat> i genuinely don't know what the sensible way of doing this is
<heat> you probably want the off-site stuff to be connected but immutable (and e.g backups get deleted every 30 days or whatever)
<nikolapdp> at work, we use s3 for one large database
<nikolapdp> that's basically how it works
<nikolapdp> i think we keep a month or two of backups
<nikolapdp> i think it's a full backup weekly and then incremental backups every day
<mjg> backups are for the week
<mjg> here is a funny as fuck story
<mjg> dude was dilligently doing backups every week for years
<heat> aight today is mjg story time i guess
<mjg> of some fucked website
<mjg> he never needed to restore and never tested if it works
<mjg> ... and it did not
<mjg> which he found out the hard way after shit hit the fan
<mjg> the site disappeared from the interwebz
<mjg> it was a forum for some fuckers
<mjg> made the news 'n shit
<nikolapdp> lol funny thing, we were supposed to test this backup scheme but i went on vacation and when i got back they were like nah do this other thing
<mjg> dude
<mjg> 1. backup
<mjg> 2. failover
<mjg> is the thing which either gets tested continuously or never
<mjg> i see you are in the latter camp
<mjg> keep your cv fresh
<nikolapdp> oh it is
<Ermine> Oh yeah, I had a mistake in my script, so my backups were empty
<nikolapdp> lol mine definitely aren't empty at least
<Ermine> But i've noticed it before shit hit the fan
<heat> what are you compressing them with
<Ermine> tar
<heat> ... that's not a compression format
<Ermine> xz
<mjg> last funny, i promise
<heat> oh ok LGTM reviewed-by: heat
<heat> but
<heat> use zstd use zstd use zstd
<mjg> a company doing their backups was running low on space
<Ermine> I've retired already
<mjg> so they devised a procedure: unlink old, create new
<heat> retired already?? dang this young
<mjg> aaaand one of the machines died during the process
<mjg> :d
<nikolapdp> my backups are (full or incremental) snapshots of a zfs filesystem with zstd compression
<heat> i wish i was retired
<nikolapdp> it's pretty good
<heat> you almost had me with zstd compression but zfs ruined it sorry
<Ermine> oh no
<nikolapdp> like i care lol
<heat> when i become president of the world i'll ban zfs
m5zs7k has quit [Ping timeout: 240 seconds]
<nikolapdp> btw it's zfs snapshots because it's a write heavy db and you can't have a lot of downtime so snapshots it is
<Ermine> anyway, rn databases are small, so there shouldn't be any notable difference between xz and zstd
<nikolapdp> heat luckily that will never happen
<Ermine> you sure?
<heat> xz compresses slightly better at the cost of being a lot slower
<nikolapdp> yeah it's a drastic difference in speed
m5zs7k has joined #osdev
<Ermine> we'll get onyx on every voting machine in the world
<Ermine> and then make them output 100% votes for heat
<heat> can't wait for the class-action law suit
<nikolapdp> lol
<heat> and subsequent settlement with fox news
<heat> anyway filesystems are bad im on windows big up ntfs
<mjg> refs
<Ermine> use refs
<nikolapdp> refs
<heat> i don't have ReFS that's a windows server only feature i think
<heat> > Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, Windows Server 2012
<heat> yeah
<nikolapdp> i think you might be able to use it with windows pro
<nikolapdp> ah never mind
<Ermine> windows 11 insider preview apparently has support for refs
<heat> some pro versions support refs
<heat> apparently
<nikolapdp> huh interesting
<heat> refs is to windows what btrfs is to fedora
<heat> totes great btree fs that they want to push as a default but can't because it's perennially broken
<Ermine> btw i forced fedora to use ext4
<nikolapdp> good
<mjg> what is fedora on
<mjg> xfs/
<Ermine> they use btrfs by default
<heat> i think xfs is the default for rhel
<heat> oh they fully switched? that's awful to know
<Ermine> opensuse uses btrfs by default too
<nikolapdp> at this rate bcachefs is going to be stable and more complete than btrfs lol
<nikolapdp> *sooner
<Ermine> well idr about fedora
<heat> oh christ
<mjg> lol
<heat> btrfs still fucking dies if you fill up your disk too much
<heat> prod ready am i right
<Ermine> that's fedora for you
<nikolapdp> and non-mirror raid features also die
<Ermine> newest, shiniest: yes
<nikolapdp> and apparently it's really fragile
<Ermine> prod ready: lol who cares
<nikolapdp> since it managed to die the one time i've used it
<heat> tbf you're not really supposed to use fedora as a server OS
<nikolapdp> true
<heat> RHEL is still on xfs as far as I know
<Ermine> i don't
<Ermine> well I've once chatted with the guy who uses fedora on server and upgrades it by reinstalling
<nikolapdp> uh yuck
<heat> fwiw arch is not prod ready either
<heat> neither is debian (yuck yuck yuck yuck)
<Ermine> rolling release is not prod ready, that's for sure
<nikolapdp> heat i've got to say that i've had fewer issues running arch/artix than ubuntu lol
<heat> alpine is the least prod ready shit ever i would never run an alpine container willingly
<nikolapdp> ignoring the self inflicted issues
<Ermine> :(
<heat> it's not about alpine as the distro
<Ermine> at ext-job I've deployed two alpine containers
<heat> just musl as the libc
<heat> i would not underpin my whole business with fucking musl
<heat> knowing how the sausage is made
<Ermine> those containers work tho
<Ermine> but the business logic runs on glibc, since it uses proprietary libs
<heat> i would guess musl kinda mostly works until it doesnt, or until perf goes oopsie because the malloc sucks, etc etc
<heat> then you're stuck LD_PRELOAD ing random libraries and mallocs and maybe even patching your alpine instance with your own musl patches
<Ermine> mon
<heat> OH
<Ermine> 'musl sucks' and 'musl sucks in highload environment' is two different things
<Ermine> are*
<Ermine> we had that discussion, didn't we
<heat> yeah but it sucks in both
<heat> re: the DNS mega discussion that happened over like 10 years of back and forth
<Ermine> well, I made dns work at that job
<heat> it wasn't fixed earlier because of the mega stubbornness on musl's side
gorgonical has joined #osdev
<heat> they'll go to great lengths to avoid doing the obvious
<Ermine> And my personal servers work fine too
<nikolapdp> yeah same
<Ermine> Not denying that musl dns situation is awkward
<Ermine> but there are success stories too
gildasio has quit [Remote host closed the connection]
gbowne1 has joined #osdev
<heat> i'm not saying musl or alpine are broken
<heat> just that i know of too many awkward situations with musl to ever think it's a great choice for prod
<Ermine> > alpine is the least prod ready shit ever
<gorgonical> I have a complaint to make about musl if that's what we're doing
<heat> Ermine cuz of alpine
<heat> uh
<heat> cuz of musl
<heat> sorry if i didn't make that clear enough
gildasio has joined #osdev
<nikolapdp> gorgonical go ahead
<Ermine> Freudian slip?
<nikolapdp> why heat is ranting
<nikolapdp> *while
<heat> i actually run alpine
stilicho has joined #osdev
<heat> Ermine: though i have to admit having busybox is annoying
<heat> i'm a systemd + coreutils enjoyer
<gorgonical> because musl has much smaller thread stack sizes (and they are asymmetric to the parent process), the library has to discover how much stack space it has
<Ermine> coreutils are packaged btw
<heat> the algorithm they use for finding out the stack's size is hilarious btw
<gorgonical> and so the way they do this is by repeatedly mmapping page chunks in the stack space, checking for eperm or einval. When mmap returns efault they assume they have reached the end of the stack size
<gorgonical> Fucking ridiculous
<heat> yep
<heat> wasn't it mincore? or maybe mremap
<gorgonical> It's mremap
<nikolapdp> is there no syscall or something to ask the kernel for the stack size lol
<gorgonical> apparently not!
<heat> what stack?
<Ermine> Well, anyway
<Ermine> I'm going to continue fucking with tg bot api
<heat> i actually have a problem in my rust port where it tries to find out the stack size, but i assume that bit is broken on my end so it like unmaps the stack or something weird like that
<heat> Ermine: USE ONYX
<Ermine> btw
<Ermine> I should try feeding to my cloud provider
<Ermine> (and, ideally, find a cheaper one)
<heat> you should probably wait til i bother pushing my grub port
<heat> assuming you'll need to install it, these cloud providers always have such bespoke media formats
<Ermine> qcow2 works in that case
<Ermine> so I upload a pre-installed image
<Ermine> otoh the provider wants cloud-init
<heat> yeah i don't quite know what cloud-init is
<heat> i know ubuntu server has it
<Ermine> many server distros have it
<nikolapdp> even arch has it i think
<Ermine> it fetches various parameters from the provider
<nikolapdp> it's quite common
<Ermine> like networking settings, user, sudo config, ssh keys
<nikolapdp> how is it communicating to the vm i wonder
<heat> oh okay you need to add some python script to cloud-init it seems
<heat> networking?
<nikolapdp> don't know, maybe
<Ermine> ip address, dns
<Ermine> btw are you porting rust?
<gorgonical> heat: the algorithm is even worse, because the stack size isn't even cached
<heat> i have a wip port of some older rust version
<nikolapdp> the better question is why
<heat> in my hard drive
<bslsk05> ​github.com: musl/src/thread/pthread_getattr_np.c at master · bminor/musl · GitHub
<heat> gorgonical: caching is slightly incorrect because of stack growing
<gorgonical> then why check at all? t->stack is checked
<heat> t->stack is for pthreads i imagine
<heat> which do not grow
<gorgonical> hmm yes
<gorgonical> suppose so
<heat> now, the trivial solution in this case is to get proc/self/maps and parse out the [stack] entry
<gorgonical> but fwiw this is a pthread function
<heat> Ermine: i also started porting go but gave up
<heat> and i have tons of xorg ports that i never finished up and pushed
<heat> tl;dr i'm a mess
<gorgonical> I remember now. The pthread threads have static stack size, but the parent process has a growing stack, that they have to check. Eugh, such a nauseating strategy though
<heat> you could try adding some maps parsing and (lol) convincing upstream that this is a valid and easy solution
<gorgonical> So, to be fair to them, they could instead blindly map the stack each time, but they do the bare minimum and cache 8bytes of data to prevent a shitload of syscalls
<Ermine> I wanted to give rust port a try, but I haven't found any porting guides, besides wiki.osdev.org one
<heat> gorgonical: for maps parsing you're basically looking at finding some line similar to "7ffd08179000-7ffd0819a000 rw-p 00000000 00:00 0 [stack]" and doing the math
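A minimal sketch of the /proc/self/maps parsing heat suggests: scan for the line tagged [stack] and pull out the start and end addresses. This is illustrative only, not musl's actual code (which uses the mremap probing described above); the function name is made up.

    #include <stdio.h>
    #include <string.h>

    /* Find the main thread's stack range by scanning /proc/self/maps for
     * the line ending in "[stack]", e.g.:
     *   7ffd08179000-7ffd0819a000 rw-p 00000000 00:00 0   [stack]
     * Returns 0 on success, -1 on failure. */
    int get_stack_range(unsigned long *start, unsigned long *end)
    {
        FILE *f = fopen("/proc/self/maps", "r");
        char line[256];
        int ret = -1;

        if (!f)
            return -1;

        while (fgets(line, sizeof(line), f)) {
            if (strstr(line, "[stack]") &&
                sscanf(line, "%lx-%lx", start, end) == 2) {
                ret = 0;
                break;
            }
        }

        fclose(f);
        return ret;
    }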
<gorgonical> heat: just feels like interrogating a procfs seems like the wrong solution. If we can do that, shouldn't there just be a syscall that exports that information to us in the first place?
<heat> no
<heat> procfs is a legitimate api
<gorgonical> it's ill-defined, though
<heat> why?
<gorgonical> is it agreed upon that it's always /proc/self/? I'm genuinely unaware
<nikolapdp> procfs might not be mounted, maybe that's it.
<gorgonical> nikolapdp: that's fair, but I don't think it's too bad to stat(/proc) first
<heat> yes, /proc/self is guaranteed to be a magic symlink to yourself
<gorgonical> is that a linuxism or a general unix property?
<heat> and /proc is defined by the FHS
<heat> linuxism, this is all linuxisms
<nikolapdp> is musl used for embedded since those usually don't mount procfs from what i gather
<heat> although fwiw freebsd has its own emulation of linux procfs called linprocfs
<Ermine> iirc my router uses uclibc
<heat> nikolapdp yes but musl is not fully correct without procfs in any way
<heat> e.g pthread_setname_np just doesn't work
<Ermine> also some bb utilities rely on procfs
<heat> Ermine: i have to mention that porting rust is reeeeeeeeeeally fucking annoying
<heat> it wants its own build of LLVM, it has a bunch of submodules that aren't really submodules but git repos and need patches so you need to override those, etc etc
<heat> the build system is fully custom and completely backwardly awkward
<Ermine> heck
<heat> i basically learned how to cross compile rust by following some obscure fuchsia CI python script
<Ermine> So they basically use their own version of llvm?
<heat> yes
<Ermine> damn
<nikolapdp> rust sucks? who knew
<Ermine> nikolapdp: please don't start
<bslsk05> ​github.com: GitHub - heatd/llvm-project-rust at rustc/13.0-2021-09-30
<gorgonical> I tried to install a rust crate for a program and couldn't do it because of the yanking ability of rust crates
<nikolapdp> Ermine: kek sorry
<gorgonical> So crates out in the wild are just permanently broken now
<nikolapdp> yanking ability?
<gorgonical> You can de-register crates of specific versions from the index
<nikolapdp> ah right
<gorgonical> And this not only makes the rust utility angry, it forces/breaks existing crates that depend on that crate
<nikolapdp> i had issues installing some crate so i just got sources and dumped them into a dir and it worked
<nikolapdp> mostly
<gorgonical> I tried that, but the maintainer massively reorganized the repos and I actually couldn't find the sources
<gorgonical> I didn't look that hard, though
<nikolapdp> guess rust is heading to another leftpad debacle
stilicho has quit [Quit: Client closed]
<gorgonical> In principle yanking crates forces downstream to update, but in practice it just means only very active crates stay valid
<nikolapdp> yeah
<gorgonical> There's no concept of "this worked five years ago, so we can continue to use that version"
<Ermine> Confirmed, my router uses uclibc
<heat> my routre has uclibc and musl
<nikolapdp> heat is that french
<nikolapdp> routre
<heat> british
<nikolapdp> same thing
<Ermine> and bb 1.17.4
<heat> centre routre
<Ermine> and kernal 3.14.14
<Ermine> 3.10.14*
<nikolapdp> KERNAL
<heat> old kernals 😍
<heat> should've used onyx
<nikolapdp> surprised it's not 2.x
<Ermine> it's mips
<nikolapdp> neat
<heat> mips onyx porten
<mjg> 3.10 old?
<mjg> :(
* mjg is old
<Ermine> fucking ancient
<Ermine> mjg: you're baby in comparison
<mjg> 8(
<mjg> i left rh when 3.10 was the shit in rel7
<mjg> rhel7
<mjg> time flies innit
<mjg> s/innit/whatever appropriate/
<Ermine> rhel patches the kernel
<Ermine> unlike router vendor
<heat> vendor stable kernels are totes secure and stable innit mjg
<mjg> they sure are
<Ermine> i'm pretty sure rhel kernels are pretty stable
<heat> they're not
<nikolapdp> and secure kek
<Ermine> well ok
<Ermine> anyway
<bslsk05> ​lwn.net: White paper: Vendor Kernels, Bugs and Stability [LWN.net]
<nikolapdp> that was fast
<Ermine> there are some mips eval boards in the wild
<nikolapdp> oh nice
<Ermine> Also I have an old router, but I need to solder uart pins to it
oldgalileo has quit [Ping timeout: 252 seconds]
<Ermine> dlink dir-300. Ancient router actually
<heat> time to port onyx to mips
<Ermine> And, as far as routers are concerned, it's also time to make wifi stack on onyx
<Ermine> which brings us back to classical field theory
<gorgonical> "a monad is a monoid in the category of endofunctors"?
<Ermine> this is category theory
<heat> Ermine: what's the license for the linux wifi stack?
<Ermine> idk
<Ermine> it's cringey imo
<mjg> CRINGEv2
<heat> gpl sad
<heat> maybe the freebsd or *whispers* openbsd stacks are portable and good enough
<mjg> freebsd is porting linux drivers
<mjg> :d
<heat> openbsd is known for its STELLAR wifi support innit theo
<mjg> they apparently do have STELLAR wifi for select chips
<heat> mjg so you have the stack but not the drivers?
<mjg> which is wya more than one can say about certain other system
<mjg> and i'm not even dunking on onyx here
<heat> i'm aware
<Ermine> GPL-2.0-only
<Ermine> picked from net/wireless/lib80211_crypt_ccmp.c
<Ermine> freebsd had the tool to convert ndis drivers to freebsd ones
<heat> wifi is one of those things i have very little interest in working on
<mjg> i tried that ndis thing with a bunch of cards, did not work once
<heat> that and audio
<Ermine> I have other priorities too btw
<nikolar> Lol really
<heat> WHAT
<Ermine> ?
<heat> dedicate your life to me, now.
<Ermine> i meant priorities in onyx
<Ermine> eg virtio
<heat> or that yeah
<nikolar> Yeah that's probably the smarter use of time
GeDaMo has quit [Quit: 0wt 0f v0w3ls.]
<mjg> :d
<heat> mjg i assume you're working on wifi then
<mjg> for onyx
<mjg> yes
<heat> send patchen when you're finished
<mjg> i'm screwing around with completion in onyx
<Ermine> welcome to Onyx Foundation
<mjg> good: recognizes types in a struct
<mjg> bad: suggests completions it claims are wrong
<mjg> example: struct crap { void *lol };
<nikolar> What completion
<Ermine> do you use correct clangd?
<mjg> then you type printf("%s", crap->
<mjg> it suggests lol
<mjg> and then complains about it
<Mondenkind> oh yeah my experience when i tried some clangd stuff like that a few years ago was similar--totally broken
<Mondenkind> glad to know nothing's changed?
<mjg> it's not *totally* broken
<Ermine> clangd worked for me in vs code
<mjg> it beats doing everything by hand
<Ermine> just specified the path to clangd from onyx llvm port
<nikolar> clangd works fine for me
<mjg> you can consider it autocomplete a little smarter than merely typing shit by hand
<nikolar> You probably need compile_commands.json
<heat> clangd works totally fine for me
<Ermine> I'd say it works beautifully in vs code
<heat> actually, i'm lying: it has some troubles with thread safety annotations that don't show up in clang itself
<Ermine> with all type labels everywwhere
<nikolar> Huh maybe vim is better than neovim, mjg
<mjg> how is that a vim thing
<mjg> or neovim
<nikolar> I'm kidding
<nikolar> But you probably need compile_commands.json
stilicho has joined #osdev
<nikolar> For it to work properly
<mjg> i do have it
<heat> are you actually testing it out with onyx?
<mjg> this is how it even knows what to suggest
<mjg> i'm fucking around in the linux kernel
<heat> oh then it should Just Work
<mjg> it mostly does, modulo warts like the above
<nikolar> Also don't leave it running for too long pretty sure it's got memory leaks lol
<nikolar> clangd
<mjg> here is a non-printf example of the problem
<mjg> void lolek(int dupa) { printk("%d", dupa); }
<Ermine> lolek i bolek!
<mjg> now i have a struct inode, fuckton of fields, very little in way of "int" or compatible
<mjg> lolek(inode->
<mjg> and it lists *everything* as opposed to matching types
<heat> oh
stilicho has quit [Client Quit]
<heat> well that's tricky
<heat> you can't really do strict type matching for instance
<mjg> i don't think it is, it knows to complain about it once selected
<heat> and e.g
<heat> lolek(inode->i_pages != NULL) is a valid expression
<nikolar> Is it showing everything that's automatically castable to int
<nikolar> Or everything
<mjg> literally everything, pointers 'n shit
<heat> it should show everything, i don't think it's a problem
<heat> i would find it problematic if it didn't
<nikolar> Yeah now that I think about it's that's how it works for me too
<mjg> there can be a dedicated opt for showing everything if you really want it
<heat> i thought you were getting like unrelated fields or whatever
<heat> something really nonsensical
<mjg> anyhow, this is still a massive win over not having an lsp in that the idents come from the struct
<mjg> so overall an improvement, but not as good as it could be
<mjg> ye unrelated shit happens without lsp
<mjg> there were regrettable vim plugins to that extent :X
<Ermine> TIL rwlock
<mjg> ?
<mjg> you are scaring me man
<Ermine> ?
bauen1 has joined #osdev
<heat> did you learn about read write locks in general or just my read write lock
<Ermine> I mean, rwlock interface
<Ermine> since go has sync.RWMutex
<heat> which interface?
<mjg> rwmutex is a really bad name
<heat> agreed
<mjg> maybe it's a joke and readers also exclude each other
<heat> mjg have i told you that i had a course in uni where they wanted us to write a rwlock using two mutexes
<heat> i asked if we could just use pthread_rwlock_t and yeah i used it ggez
<mjg> two?
<mjg> lol
<Ermine> I guess, the write lock excludes readers, but readers don't exclude each other
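A minimal usage sketch of the semantics Ermine describes, using pthread_rwlock_t (the primitive heat was allowed to use instead of building one from two mutexes): readers share the lock with each other, while a writer excludes everyone. The shared counter is just an illustrative example.

    #include <pthread.h>

    static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
    static long counter;

    /* Readers can run concurrently with each other... */
    long read_counter(void)
    {
        long v;
        pthread_rwlock_rdlock(&lock);
        v = counter;
        pthread_rwlock_unlock(&lock);
        return v;
    }

    /* ...but a writer excludes both readers and other writers. */
    void bump_counter(void)
    {
        pthread_rwlock_wrlock(&lock);
        counter++;
        pthread_rwlock_unlock(&lock);
    }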
<nikolar> We may have had to do the same
<mjg> a lock-protected rwlock is the webdev variant
<nikolar> Just in pseudo code
<nikolar> So no cheating
<mjg> rwlock implelmented using a rwlock is an idea i'm gonna have to use one day
* mjg takes notes
<nikolar> Lol
<dostoyevsky2> mjg: I think implementing syscalls using syscalls is also right up there
<nikolar> Genius
<Ermine> In uni my concurrency course was: here's a book which describes mutexes and condvars, now make your matrix inversion algorithm concurrent
<heat> mjg yes they don't hear about atomics
<mjg> fearless
<nikolar> We csp and c-linda lol
<nikolar> *we did
<Mondenkind> condvars😱😱❌❌🤮😿
<heat> maybe it was a mutex and condvar
<heat> either way i felt it was really stupid
<heat> particularly as its just PESSIMAL
<Ermine> and dudes from computer department also learned barriers
<mjg> no heat but have you demonstrated it's a problem
<mjg> oh you did not
<mjg> that's what i thought
<heat> i can easily do that
<mjg> it clearly is not at their laptop
<heat> mon you'll see it with like 4 threads banging a rwlock
<mjg> therefore fine
<mjg> they have a 2 core laptop
<heat> this is not your kind of optimization where you micro-optimize some slight issue and make a random kernel path .5% faster
<mjg> funny you say that heat. i could say literally the same thing about not using a 'where' clause in a sql query
<mjg> which you were defending if table is small enough
<heat> if the table is small enough and the query is complex enough that i don't know how to do it by heart, yes
<mjg> you are not going to suddenly claim some stuff is crap and should not be implemented
<mjg> well there you go mate, what if they only have 2 threads to this rwlock
<heat> i dont claim to be an SQL database expert
<mjg> no problem!!
<mjg> with that mindset
<heat> but i do know how to do concurrent programming for the most part
<nikolar> What was that about SQL where
<nikolar> I'm curious
<mjg> the other day i was saying some ideas are inherently shite, like select * and filtering in an app
<mjg> heat took an issue with that
<nikolar> Lol what
<mjg> claiming small table == no problem
<nikolar> That's silly
<nikolar> Lol
<heat> it is factually true
<mjg> with this mindset, small thread count == shite lock no problem
<heat> and im sorry im not a sql wizard
<mjg> but he fails to see the PARALLEL
<heat> i guarantee you you're not either
<nikolar> Yeah let's completely forget about the added network latency due to the larger transfer too
<mjg> can you concede some ideas are just bad and should not be implemented
<mjg> not naming any specific ideas yet
<heat> sure
<nikolar> I can
<mjg> oh nice
<CompanionCube> today on 'uefi crimes': 'aiBIOS leverages an LLM to integrate AI capabilities into Insyde Software’s flagship firmware solution, InsydeH2O® UEFI BIOS.'
<mjg> and you think select * + filtering in the app does not qualify because...?
<mjg> btw this is the stock standard example of shite dev
<heat> i'm not saying filtering in-app is good and ideal, i'm just saying it's acceptable for some cases
<nikolar> CompanionCube: you're late, someone's already shared it :)
<heat> particularly ones where the WHERE is not some trivial crap
<mjg> so in your opinion it does not fall into
<mjg> 22:26 < mjg> can you concede some ideas are just bad and should not be implemented
<mjg> this category
<heat> should != must
<heat> i can implement the idea and concede it sucks
<mjg> now you are trying to lawyer your way out of a shithole
<heat> no mjg
<heat> people have deadlines and things to do
<mjg> look mate, i don't think this convo is going anywhere but down the drain
<nikolar> Kek
<heat> you consistently miss that aspect about life
<mjg> ye i'm sure time was saved not writing 'where' but a for loop in the app
<nikolar> Lol
<heat> okay mjg now calculate the average of thingies for a given thingy's max and min thongies, directly in sql
<heat> your table size is 10
<heat> have that done in 10 minutes okay?
<nikolar> I don't even get what you're asking
<mjg> where is that quote book
<mjg> about making up an uber straw man
<nikolar> Lol
<heat> i can't give you a concrete example without an actual problem domain and db modelling
<heat> and i'm not making one up for the sake of some irc argument with mr micro-opt
<nikolar> You very rarely have 10 row tables I'll just say that
<heat> that's fair
<dostoyevsky2> nikolar: if the sql where is slow you typically can add an index
<nikolar> dostoyevsky2: true but that wasn't even the question lol
<dostoyevsky2> nikolar: what was it?
<nikolar> Check mjg's earlier messages
node1 has joined #osdev
<heat> the question is if there's any case where doing in-app filtering of a select * should not get you shot
<dostoyevsky2> heat: like when you do a ReadDir() on my sqlfs and then filter the results? Might just be "find" doing its thing
<dostoyevsky2> Ok, that's probably not the same "app"
<heat> that could be an example. like doing readdir and doing the find in-app, or implementing the find using some super complex query
<Ermine> I need to setup CI
<dostoyevsky2> Ermine: I've never seen an OS with a test suite...
<heat> i need to study a bunch of maths, wanna switch?
<Ermine> dostoyevsky2: take a look at onyx!
<heat> onyx is not the golden example of testing but i do try from time to time
<heat> ultimately it's really hard to do proper "unit" testing of many kernel concepts
<Ermine> I mean, I want CI on my server
<heat> if you self host a gitlab instance you might get that ezpz
<Ermine> So it amounts to either installing some code forge or doing some NIH thingie from git hooks
<heat> yeah
<dostoyevsky2> yeah, with onyx those C programs testing APIs seem reasonable, still it's more of a smoke test, as a lot of things can go wrong in the kernel until you notice anything in those userland C test cases... Not sure if one could have unit tests inside the kernel
<Ermine> gitlab won't fit in 1Gb ram
<Ermine> I'd say gitea, since it follows github actions
<heat> dostoyevsky2: i've tried over time to move some stuff to the kernel, but it's hard because there's a whole lot of global state and mocking isn't trivial
<heat> booting itself is a somewhat decent smoke test. a lot of stuff can just be tested with smoke tests, i.e unit testing memory reclamation isn't easy
<Ermine> booting on real hw jejeje
<dostoyevsky2> Ermine: how much ram does gitea require?
<heat> the sanest way for a kernel (at least one traditionally written) is probably regression/unit tests (for edge cases or API correctness) and smoke tests for stressing or basic features or whatnot
<heat> Ermine: i tried :(
<dostoyevsky2> heat: One could try to translate part of the kernel to lean4 and then proof things mathematically instead of writing test cases
<dostoyevsky2> prove even
<heat> the kernel is too big
<heat> my kernel is hobbyist and has 200KLOC
<heat> one can imagine linux or freebsd or any other BSD being literally impossible to prove
<Ermine> formal verification is not really practical
<heat> i've heard proving some 5-10KLOC formally takes some months to pull off
<Ermine> also sel4 proof makes some interesting assumptions
<dostoyevsky2> heat: one could focus on certain subsystems that are small.. e.g. device drivers or the like, small steps... https://news.ycombinator.com/item?id=40363744
<bslsk05> ​news.ycombinator.com: Translation of Rust's core and alloc crates to Coq for formal verification | Hacker News
<Ermine> dostoyevsky2: re gitea: not a lot
<heat> dostoyevsky2: maybe but it's not the device drivers that are really problematic correctness wise (at least not the small ones)
<heat> think about the page cache which interacts directly and indirectly with tens of thousands of lines of code
<dalme> all this considering you can proof it. Your code maybe correct but you may not be able to prove it
<dostoyevsky2> heat: lean4 has this tactic called "sorry" it just tells the system that the proof was successful... so you can start with a coarse structure for a proof and then add new things over time, e.g. when you notice crashes and the like
<gorgonical> This is unrelated but we've had a lot of rain recently and I just picked a pound of ripe mulberries off my tree. They're very ripe
<dostoyevsky2> dalme: Your proof could also misrepresent what the code is doing
gildasio has quit [Remote host closed the connection]
<dostoyevsky2> I am not sure if they have proofs for all optimizations in llvm but I think at least for new optimizations they use alive2 et al: https://alive2.llvm.org/ce/
<bslsk05> ​alive2.llvm.org: Compiler Explorer
gildasio has joined #osdev
oldgalileo has joined #osdev
<dostoyevsky2> but I wonder if this breakthrough was achieved with the help of lean4: https://mathstodon.xyz/@tao/112557248794707738
<bslsk05> ​<tao> There has been a remarkable breakthrough towards the Riemann hypothesis (though still very far from fully resolving this conjecture) by Guth and Maynard making the first substantial improvement to a classical 1940 bound of Ingham regarding the zeroes of the Riemann zeta function (and more generally, controlling the large values of various Dirichlet series): https://arxiv.org/abs/2405.20552 ␤ Let 𝑁(σ,𝑇) denote the number of ze[…]
Jackneill has quit [Ping timeout: 246 seconds]
oldgalileo has quit [Remote host closed the connection]
oldgalileo has joined #osdev
zxrom has joined #osdev
node1 has quit [Quit: Client closed]
oldgalileo has quit [Remote host closed the connection]
oldgalileo has joined #osdev
<nikolar> you'd think that something like that would be mentioned
<kof673> their nickname is tao :D in the beginning was void lol
<nikolapdp> i thought that was his last name
<kof673> i'm not on pdp but i generally don't try links if i don't think lynx will handle them so...you may be correct, i did not see
<nikolapdp> yeah terence tao is a pretty big name in mathematics
<dostoyevsky2> terry tao is like the terry tao of mathematics
dalme has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
PapaFrog has quit [Quit: ZNC 1.8.2+deb3.1 - https://znc.in]
PapaFrog has joined #osdev
<netbsduser> Ermine: i feel strongly about this re the limited teaching of issues of concurrency
<nikolapdp> what do you mean
<netbsduser> i have no idea how it is that the freebsd, solaris, linux people mastered the art of convoluted synchronisation strategies, which are far above the level you typically learn in a bachelor's degree (for me, little more than the producer-consumer problem and such)
<netbsduser> and to my sight it looks like this is a huge element of these kernels contributing a large part of their complexity
<netbsduser> yet the real kernel devs seem to magically know all about it!
josuedhg has quit [Quit: Client closed]
<netbsduser> either a) i am much stupider than they, because i am nowhere near as confident in implementing brave and complex systems of locking, or b) they all learnt about it painfully on their own time
<netbsduser> and seldom is it described in books about these kernels either, which i find really surprising, because to me this really does seem to be such a big and vital thing as to be deserving of a thorough treatment - but it never is
<nikolapdp> right yeah got you
<nikolapdp> same here
<nikolapdp> crappy subject that taught only the very basics
<heat> netbsduser: it'd help if you could define convoluted synchronization strategies
<heat> but the gist of it is generally that they're not confident either, hence lockdep and a bunch of stress testing and bombing the kernel with lots of weird stuff
<heat> and then later KCSAN
<nikolapdp> i think he's referring to stuff like rcu
<nikolapdp> reffering?
<netbsduser> heat: look to the virtual memory subsystem of linux for example
<heat> referring
<heat> rcu has been formally proven
<netbsduser> or of freebsd, i thihk they have per-page locks and all that fun stuff now
<Ermine> netbsduser: otoh concurrency wasn't the point of that course
<heat> netbsduser: okay, linux vm locking is relatively simple (pre-vma-locks)
<netbsduser> or at illumos' /dev/poll/pollcache/the extra junk they added for epoll compat
<heat> mmap_lock is a rwlock, write locked for writes to the vma tree, read locked for everything else
<heat> pages have their own locks whose rules are really a mess and undocumented, but the people in-the-know Just Know it
<netbsduser> there you have it
<heat> but generally page locks are held when bringing pages up to date and starting writeback on them
<heat> oh, and when truncating/removing a page from an address_space
<heat> now you're going to ask me why this is so mysterious, and i'll simply reply by saying they're adhoc locking "rules" that stem from 30 years of continuous development and optimization
<heat> starting from a crappy hobby kernel into a mega-scalable death machine
<nikolapdp> lol death machine
<Ermine> (still crappy)
<heat> yes only onyx is not crappy
<Ermine> absolutely
<heat> now the real answer to "why does no one teach us this" is that generally no one needs all these techniques
<heat> kernels are horrible concurrency problems with lots of shared state
<netbsduser> it's a kernel thing and maybe a database thing
<nikolapdp> oh yeah databases too
<netbsduser> a lot of other applications are more naturally concurrent
<Ermine> I don't think it's possible to teach someone
<Ermine> Experience is the way
<heat> yeah i assume databases are also awful, although i've never written one (maybe i should, in the future)
<heat> Ermine: lots of these are just patterns
<netbsduser> i don't complain too much about the lack of teaching though i think it ought to be part of operating systems curriculums
<heat> like spin_lock(&table->lock); /* grab ref to an element */ spin_unlock(&table->lock);
<heat> is a super common simple pattern
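A minimal sketch of that pattern with made-up table and object types, using a pthread spinlock as a userspace stand-in for a kernel spinlock: the lock is held only around the lookup and the reference bump, and the caller works on the object afterwards without the lock held.

    #include <pthread.h>
    #include <stddef.h>

    struct object {
        int key;
        int refcount;            /* protected by table->lock in this sketch */
        struct object *next;
    };

    struct table {
        pthread_spinlock_t lock;
        struct object *head;
    };

    /* Hold the table lock only long enough to find the element and take a
     * reference; the caller then uses the object without the lock held and
     * drops the reference when done. */
    struct object *table_lookup_ref(struct table *t, int key)
    {
        struct object *obj;

        pthread_spin_lock(&t->lock);
        for (obj = t->head; obj; obj = obj->next) {
            if (obj->key == key) {
                obj->refcount++;     /* "grab ref to an element" */
                break;
            }
        }
        pthread_spin_unlock(&t->lock);
        return obj;
    }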
<nikolapdp> heat you must've heard the horror stories about working on the oracle db
<heat> yeah i've heard
<netbsduser> what bothers me above all is the fact that it looks sometimes like everyone coming up with these wonderfully complex schemes seems to do it with gusto and confidence that i lack
<netbsduser> how this big and complicated topic is so often left unspoken, and certainly never makes it into OS books
<nikolapdp> eh i doubt that there's that much confidence most of the time
<heat> maybe you're just missing the confidence (and ego) of a kernel hacker
<nikolapdp> unless it's some research paper and it's proven correct
<netbsduser> heat: the interaction with lifetime is a really interesting one i think
<heat> e.g the per-vma locking shit was silently broken and broke a bunch of software IN PROD
<Ermine> openbsd devs don't grok those patterns
<dostoyevsky2> netbsduser: aren't teachers usually focussed on teaching a theory and for locking a lot also depends on the implementation, and kernel hackers usually learn a lot how locking is implemented in a cpu
<heat> right the openbsd kernel is a lot more classically written, with big locks everywhere
<nikolapdp> dostoyevsky2: yeah no, they didn't teach us any "theory"
<heat> e.g what you'd see in a server program with little scale or something
<Ermine> thus i think there's something else beside those patterns
<nikolapdp> nothing beyond the bare basics at least
<netbsduser> much of the problems of locking really are the problems of locking mixed with lifetimes. it's why i like RCU and will definitely make use of my implementation any day now
<dostoyevsky2> I heard they keep adding more fine-grained locks to the OpenBSD kernel but it's a slow process
<nikolapdp> yeah i imagine it would be
<netbsduser> dostoyevsky2: well i think there is also something to be said about the widening gulf between industry and academia
<heat> netbsduser: rcu truly fixes some lifetime/locking issues (locking ordering, etc) but you need to carefully check if you can tolerate stale data, etc
<nikolapdp> that gulf has been valles mariners for decades
<nikolapdp> how doable is rcu in userspace
<nikolapdp> like how efficient could you make it
<heat> epoch-based is a bitch, at least with current tooling
<dostoyevsky2> netbsduser: I feel computer science courses would teach locking typically e.g. in a language like Java with synchronized... maybe even touching on how it is implemented but not sure if they e.g. would talk about biased locking optimizations et al, there is only so much time you have to cover a lot of 101 topics in computer science
<heat> sorry, not epoch, quiescente-based
<heat> quiescence
<heat> epoch-based is easily implementable and there's a nice sample implementation of C++26 RCU in facebook's folly
<heat> liburcu does QSBR (quiescence based reclamation) but it has some nasty hacks that i've heard don't work super well
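A very rough sketch of the quiescent-state idea itself, not liburcu's actual API: each reader thread periodically announces a quiescent state (a point where it holds no references to RCU-protected data), and reclamation waits until every registered thread has advanced past a snapshot. All names are made up; a real implementation also has to handle threads that go idle, which is part of what makes the userspace version awkward.

    #include <stdatomic.h>

    #define MAX_THREADS 16

    /* Per-thread counter, bumped whenever that thread passes a quiescent
     * state.  Threads register by taking a slot and bumping nthreads. */
    static _Atomic unsigned long qs_counter[MAX_THREADS];
    static int nthreads;

    void quiescent_state(int tid)
    {
        atomic_fetch_add_explicit(&qs_counter[tid], 1, memory_order_release);
    }

    /* Writer side: after unlinking an object from the shared structure,
     * wait for every thread to pass a quiescent state before freeing it. */
    void wait_for_readers(void)
    {
        unsigned long snap[MAX_THREADS];
        int i;

        for (i = 0; i < nthreads; i++)
            snap[i] = atomic_load_explicit(&qs_counter[i], memory_order_acquire);

        for (i = 0; i < nthreads; i++)
            while (atomic_load_explicit(&qs_counter[i],
                                        memory_order_acquire) == snap[i])
                ; /* spin; a real implementation would yield or sleep */
    }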
<netbsduser> dostoyevsky2: yes, and i do think it is a mistake and would be disastrous for computer science degrees to turn into some sort of coders' bootcamp
<heat> currently there's just no easy low overhead way of doing preemption disabling or what have you
<netbsduser> what is still mysterious to me is why books on popular OSes devote vanishingly little time to concerns of locking. it is not talked about
<nikolapdp> yeah that's what i thought
<heat> netbsduser: it's just *very* hard to talk about
<dostoyevsky2> netbsduser: Yeah, it's a trap... you don't want people to become specialists in some hardware that may be soon arcane... you want to teach the theory so they learn stuff that may last a lot longer... But I guess Linux kernel knowledge can probably also last you a long time, it's just a very commonly used piece of software
<netbsduser> it is sometimes spoken of, emphasis spoken, i was forwarded a video of some microsofter talking about how windows 7 annihilated the great dispatcher lock
<heat> i can talk about linux: linux's rules are usually super-adhoc and a huge mess in core subsystems
<heat> linux's locking primitives are usually super optimized and hard to grok for mere mortals
<heat> paul mck's book is really good and probably the best you have on this
<nikolapdp> what's the book
<heat> but, again, lots of adhoc locking patterns. i.e the page/folio lock is super adhoc, then you have the writeback flag on folios/pages which was a weird separation of the page lock wrt writeback
<heat> (admitted to by page cached guys)
<netbsduser> this writeback flag, what's it all about?
<bslsk05> ​mirrors.edge.kernel.org: Is Parallel Programming Hard, And, If So, What Can You Do About It?
heat has quit [Quit: Client closed]
<dostoyevsky2> netbsduser: On GPUs they don't have locks they just madate you to use certain memory access patterns...
heat has joined #osdev
<netbsduser> i have distinguished page use from page reference count so a page can be deleted (detached from vm object or address space it belongs to) but not be freed because the refcnt is not 0 yet; that's what writeback does
heat28 has joined #osdev
<nikolapdp> thanks heat
<heat28> netbsduser: basically they locked the page through writeback Back In The Day. This was okay but had huge latency problems when e.g someone wanted to write to it again (normal cached writes lock the pages)
<heat28> solution: break up the lock into the normal lock, and a writeback flag. if you need to sync with writeback, lock the page and wait for writeback to clear
<heat28> (writeback can only be set when the page is locked, i.e the page is locked briefly on writeback, but not through the whole IO process)
<heat28> this is, again, super adhoc and a way to trivially solve the latency issues
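A minimal sketch of the split heat28 describes, with hypothetical names (not Linux's actual helpers) and a mutex/condvar standing in for the page lock and wait queue: writeback only holds the page lock long enough to set the flag, and anything that must synchronise with in-flight IO locks the page and then waits for the flag to clear. Initialisation of the fields is omitted.

    #include <pthread.h>
    #include <stdbool.h>

    /* A toy page with the two pieces of state discussed above: the page
     * lock and a separate writeback flag, plus a condvar to wait on it. */
    struct page {
        pthread_mutex_t lock;
        pthread_cond_t  wb_done;
        bool            writeback;
    };

    /* Start writeback: hold the page lock only long enough to set the flag,
     * so writers are not blocked for the whole duration of the IO. */
    void start_writeback(struct page *p)
    {
        pthread_mutex_lock(&p->lock);
        p->writeback = true;
        pthread_mutex_unlock(&p->lock);
        /* ... submit the IO; the completion path calls end_writeback() */
    }

    /* IO completion: clear the flag and wake anyone waiting on it. */
    void end_writeback(struct page *p)
    {
        pthread_mutex_lock(&p->lock);
        p->writeback = false;
        pthread_cond_broadcast(&p->wb_done);
        pthread_mutex_unlock(&p->lock);
    }

    /* Truncation (or anything that must not race with in-flight IO) takes
     * the page lock and then waits for pending writeback to finish. */
    void truncate_page(struct page *p)
    {
        pthread_mutex_lock(&p->lock);
        while (p->writeback)
            pthread_cond_wait(&p->wb_done, &p->lock);
        /* ... safe to detach the page from its mapping here ... */
        pthread_mutex_unlock(&p->lock);
    }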
<nikolapdp> heat28 what did you do to heat
<heat28> killed him
<netbsduser> heat28: that's so interesting, i never really saw myself a need for locking around writeback, but i assume there is a reason why linux does
<heat28> you can't have the page going away while doing IO
<netbsduser> then this is where my refcnt comes in instead
heat has quit [Ping timeout: 250 seconds]
oldgalileo has quit [Remote host closed the connection]
m5zs7k has quit [Ping timeout: 268 seconds]
<heat28> you need to be mega careful when e.g doing truncation for instance
oldgalileo has joined #osdev
<heat28> (e.g and for instance are redundant, gj heat28)
<netbsduser> i synchronise this with an rwlock in the vnode (or rather that's one use that rwlock will have when i add truncation)
m5zs7k has joined #osdev
<netbsduser> as i see truncation as being acceptable to be heavy i prescribe write-locking of the vnode's paging rwlock during this
<heat28> yeah but for instance reads don't grab the inode's i_rwsem on linux
<heat28> neither do writes in general
<heat28> ok here's some trivia: page faulting. does it grab the mmap_lock write or read?
<netbsduser> read i would guess, that's what i'd do
<heat28> yep
<heat28> then there are a bunch of tiny details like: mapping a page takes the page lock (to stop races against truncation or reclaim). how do you sidestep lock contention?
<heat28> easy: speculative page faulting under a page table lock + carefully try locking pages under rcu
<heat28> with a heuristically defined upper limit
josuedhg has joined #osdev
<netbsduser> quite charming
zxrom has quit [Quit: Leaving]
<netbsduser> i like the page structs because they stay being what they are, their members mutate but it's always a page struct
<netbsduser> notwithstanding memory hotplug
<netbsduser> and that makes life so much easier
LostFrog has joined #osdev
PapaFrog has quit [Ping timeout: 268 seconds]
<heat28> i hate page structs because they're horrendously size constrained
npc has quit [Remote host closed the connection]
<heat28> unless you're solaris and want to have some 200 or so byte vm_page_t's
gbowne1 has quit [Remote host closed the connection]
gbowne1 has joined #osdev
xenos1984 has quit [Ping timeout: 268 seconds]
Left_Turn has quit [Read error: Connection reset by peer]
netbsduser has quit [Ping timeout: 240 seconds]
oldgalileo has quit [Ping timeout: 264 seconds]