<heat>
i have just realised i need syscall restarting
<heat>
X abuses signals
<geist>
yeah one of those ugh things you gotta do eventually
<geist>
some of it depends a lot on if you've heard
<heat>
i'm not liking this honestly
<heat>
i'm slowly getting lots of technical debt (for halfassed code) for X11
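For context, syscall restarting is the dance where a blocking syscall interrupted by a signal either fails with EINTR or gets transparently re-executed (SA_RESTART). A minimal Python sketch of the userspace side of that contract — `retrying_read` is a made-up helper name for illustration:

```python
import os

def retrying_read(fd, count):
    """Retry read() until it completes or fails with a real error.

    Without SA_RESTART, a signal arriving mid-syscall makes the call
    fail with EINTR; either userspace retries like this, or the kernel
    restarts the call transparently for SA_RESTART handlers.
    """
    while True:
        try:
            return os.read(fd, count)
        except InterruptedError:   # EINTR: interrupted by a signal
            continue               # restart the call ourselves

# Kernel-side, SA_RESTART-style restarting roughly amounts to: if the
# handler was installed with SA_RESTART, rewind the user PC so the
# syscall instruction re-executes instead of returning -EINTR.
```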
<geist>
well at the end of the day you can blame pbx
<geist>
kazinsal: question. you might have experience with SFP+?
<geist>
or really anyone here that does
<geist>
i'm asking to ask, i guess. question is: SFP+ passive direct attach cables. it's really hard to find any technical info, since i just find marketing data
<geist>
question is what's going on there? it's much cheaper than SFP+ to 10gbaset so i assume it's doing something cheaper, like basically just blatting the SFP+ pins on a short run of copper and thus only works over very short hauls?
Lumia has joined #osdev
<geist>
all i find are pages where they're being sold, but no real technical info
<geist>
guess that's right, from wikipedia: "SFP+ also introduces direct attach for connecting two SFP+ ports without dedicated transceivers. Direct attach cables (DAC) exist in passive (up to 7 m), active (up to 15 m), and active optical (AOC, up to 100 m) variants."
<geist>
so then i guess the obvious question is would a random NIC with SFP+ also work with direct attach. ie, aside from using SFP+ DAC to chain switches that are next to each other together, is it also feasible to connect a server with a SFP+ port to a switch, provided it's nearby
<heat>
this is just not how i roll :/
<heat>
which is why i still look at a terminal after 7 years of this
<klange>
someone tried to build kuroko for Android arm64 and ran into issues that we tracked down to MTE, that was fun
zaquest has joined #osdev
Lumia has quit [Ping timeout: 260 seconds]
SpikeHeron has quit [Quit: WeeChat 3.7]
smach has quit []
zaquest has quit [*.net *.split]
eroux has quit [*.net *.split]
zaquest has joined #osdev
<kazinsal>
geist: yeah, 10GBASE-CR is basically just a balanced twinax cable blasting data down the line, a single ~10.3125 gigabaud lane with 64b/66b encoding
<kazinsal>
there's very little going on so it's only good for about 7 metres
<kazinsal>
any NIC that supports SFP+ will support a DAC, since the SFP/SFP+/etc standard is the electrical communication between the MAC and the PHY on the transceiver
<kazinsal>
it's when you start playing with SFPs where the *BASE name ends in a W that things get incompatible because they're SONET/SDH-over-Ethernet
dutch has joined #osdev
<geist>
kazinsal: cool
eroux has joined #osdev
<geist>
so in a room of lots of switches would you primarily end up using DAC to chain them together and fibre if it's out of the room?
<kazinsal>
yeah, DACs are the usual go-to for runs within the same rack
<kazinsal>
between switches, servers, storage arrays, etc
<kazinsal>
starting to see more 10GBASE-T on stuff though that's for sure
<geist>
yeah starting to see some cheaper 10gbe switches that just have a pile of SFP+s with some limitation to the number of 10gbase-t SFP+ modules they can support
<geist>
since they are fairly power hungry
<kazinsal>
yep. the controllers and sockets are a bit more expensive too
DarkL0rd has quit [Remote host closed the connection]
<kazinsal>
a lot of the machines I've been working with lately have 2x10GBASE-T onboard and 40G QSFP+ on PCIe mezzanine cards
<zid>
but does it have ferrite cores hanging off it
<geist>
maybe less so for 10GB-t, but stuff higher than that seems like *SFP would make sense
heat has quit [Ping timeout: 255 seconds]
<kazinsal>
yeah above 10G you get this weird multi-generational split
<kazinsal>
because everyone decided that after 10G was going to be 100G but nobody could decide what stepping stones of tech to use to reach it
gog has quit [Ping timeout: 252 seconds]
<kazinsal>
so at first you had 40G implemented as 4x10G lanes, then 25G on a new cable and 100G implemented as 4x25G lanes
Matt|home has joined #osdev
dutch has quit [Quit: WeeChat 3.6]
SpikeHeron has joined #osdev
qubasa has quit [Ping timeout: 268 seconds]
[itchyjunk] has quit [Read error: Connection reset by peer]
Lumia has joined #osdev
* kazinsal
looks at his to-do list and realizes "write a new 64-bit kernel" has been on there for at least nine months, completely untouched
<kazinsal>
hmm. shit.
kindofwonderful has joined #osdev
<kindofwonderful>
Good morning
Lumia has quit [Ping timeout: 255 seconds]
Burgundy has quit [Ping timeout: 252 seconds]
kindofwonderful has quit [Ping timeout: 272 seconds]
dude12312414 has joined #osdev
dude12312414 has quit [Remote host closed the connection]
smach has joined #osdev
smach has quit [Client Quit]
<remexre>
are modern file systems typically "smart" about wear-leveling? I've sketched out a design for one, but the journal (and a couple other blocks) end up being pretty heavily written to
<Mutabah>
Unless they're designed for flash/ssd, no
smach has joined #osdev
<klange>
At the same time filesystem designs were being optimized for SSDs, SSDs were being optimized for filesystems that didn't cater to them.
<geist>
yeah you should generally assume that higher end ssds (ie, not SD cards, etc) can and do internally wear level such that it doesnt really matter where you write
<kazinsal>
and SSD wear has been shown for years to be somewhat overblown -- several years ago I believe it was a german tech website that finally managed to kill an SSD that had a life rating of 2 PB written with something like 9 PB written
<clever>
kazinsal: that makes me feel a bit better about my laptop
<clever>
Percentage Used: 131%
<clever>
it claims to have used 131% of its expected lifespan
<kazinsal>
yeah write endurance is definitely intentionally underestimated on decently high end SSDs
<clever>
Data Units Written: 272,466,492 [139 TB]
<clever>
i think its just bytes written, divided by some constant
<clever>
Model Number: Samsung SSD 960 EVO 500GB
<zid>
that's a lot of TB
<zid>
I am at.. 48TB fuggin windows wow
<clever>
~278 times the capacity
<kazinsal>
I'm at 92 TB on one of my SSDs and 34 on the other
<clever>
one thing you kind of have to deal with when writing a wear leveling algo
<zid>
*double checks with the samsung tool*
<clever>
lets say i fill 50% of the disk with data, and never change it again
<clever>
then i continually write and trim the other 50% of the disk
<kazinsal>
not sure what they're rated for -- as of a few months ago samsung magician has spontaneously decided that my 860 EVO is a fake
<clever>
half the disk has data on it that isnt changing, and so isnt getting wear
<clever>
and half the disk is getting all of the wear
<clever>
now what do you do?
<clever>
start moving data that isnt changed, to reclaim those less worn sectors?
<kazinsal>
stealthily move blocks around 🥷
<clever>
kazinsal: and what happens to your write/sec rates when that happens?
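The scenario clever lays out is what SSD firmware calls static wear leveling: cold data occasionally has to be migrated onto worn blocks so the low-wear flash re-enters the hot pool, at the cost of exactly the background writes he's asking about. A toy sketch — class name and threshold are invented:

```python
class WearLeveler:
    """Toy static wear-leveling policy: migrate cold data when the
    erase-count gap between most- and least-worn blocks grows too big."""

    def __init__(self, nblocks, threshold=10):
        self.erase_count = [0] * nblocks
        self.threshold = threshold

    def erase(self, block):
        self.erase_count[block] += 1

    def needs_migration(self):
        # True in clever's scenario: half the disk static, half hot.
        return max(self.erase_count) - min(self.erase_count) > self.threshold

    def pick_migration_target(self):
        # Move data *off* the least-worn block and into the most-worn
        # one, freeing low-wear flash for future hot writes.
        counts = self.erase_count
        return counts.index(min(counts)), counts.index(max(counts))
```

The migration itself is extra write amplification, which is why write/sec rates dip when the firmware decides to do it.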
<zid>
dang I only get 150TBW
<zid>
in endurance
<zid>
so I am a third of the way through my SSD's lifespan in... 85k hours
<clever>
zid: some sata SSD's in this study failed in horrible ways, like not even showing up on the sata bus, or half the disk just suddenly returning read errors
<clever>
likely metadata corruption
<zid>
yea exactly
<zid>
readonly is the good case
<zid>
it needs to die while it's doing "normal" things is my assumption
<zid>
if it's doing internal reallocations when it dies the firmware crashes instead :P
<clever>
page 279 has a summary of every death, table 5
<clever>
zid: but only an idiot would not journal the internal metadata!
<clever>
oh right, but that study was power loss mid-write
<clever>
not failure due to wear
<zid>
I lost power during a write
<zid>
ntfs + samsung ssd = mega pain
<zid>
half my ACLs were fucked
<zid>
sfc /scannow and various tools fixed all the ones for /windows, but Users/zid took a lot of manual fixing, all sorts of random failures for a while
<clever>
3 of the devices in that study, had bit corruption, implying that its not validating the write or metadata updates first and the flash didnt flash properly due to lack of power
<zid>
because of like zid/AppData/LocalLow/MSI/
<zid>
meaning no installers will run
<clever>
3 drives had shorn writes, where only half the sector updated, feels like a clear sign of updating the metadata first, and the data writing died mid-sector
<clever>
both are also a sign of no ecc, so its returning whatever garbage it found in flash, and cant return read errors
<zid>
I wish we had a decent flash technology tbh
<clever>
8 ssds had non-serializable writes, which wreak havoc with any journaled fs
<zid>
hdds are ghetto in theory but work great in practice
<zid>
ssds are great in theory but ghetto in practice
<clever>
imagine if you write to the journal, sync, then data, and sync, but after a power failure, the journal writes are undone!
<clever>
and now you have a partial write to your critical metadata, and no journal to undo/redo with
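One common defence against the torn-journal half of this is checksumming each journal record, so replay ignores anything that wasn't fully written. A hedged sketch using CRC-32 (the record layout is invented); note it does nothing for the non-serializable case clever describes, where the device persisted the data but dropped the journal record:

```python
import zlib

def make_record(offset, payload: bytes) -> bytes:
    """Toy write-ahead journal record: CRC-32 over (offset, payload)."""
    body = offset.to_bytes(8, "little") + payload
    return zlib.crc32(body).to_bytes(4, "little") + body

def replay(record: bytes):
    """Return (offset, payload) if the record is intact, else None.

    A record whose CRC doesn't match is treated as never written,
    which guards against torn/shorn journal writes.
    """
    crc, body = record[:4], record[4:]
    if zlib.crc32(body).to_bytes(4, "little") != crc:
        return None  # torn or unwritten record: ignore it on replay
    return int.from_bytes(body[:8], "little"), body[8:]
```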
<zid>
yea welcome to my life after that breaker tripped
wand has quit [Remote host closed the connection]
<zid>
I couldn't be bothered to reinstall windows, so I had to spend 2 days fixing it
<clever>
they also mentioned a scary failure mode of mechanical drives, that luckily was not observed on any ssd
<clever>
"flying writes"
<zid>
A flying write on a mech drive is when the platter detatches mid-write, and flies through the air
<clever>
where the head control motor loses control due to lack of power, but it keeps on writing out data, to random regions!!
<clever>
and depending on chance, the sector can be written fully intact and be fully readable, at the wrong LBA
<clever>
what did it land on?
<geist>
huh TIL that at some point in the past you could format some enterprise scsi drives with 520 byte sectors (512 + 8)
<geist>
hardware raids would write an 8 byte checksum/crc per sector
<zid>
CD > ssd
<geist>
so that when they got a raid parity mismatch they knew which drive was probably the culprit
<zid>
CDs use 2352 byte sectors per 2048 logical
wand has joined #osdev
GeDaMo has joined #osdev
<clever>
geist: thats about the only way a mirror can know what the right answer is
<clever>
either that, or have a separate region dedicated just to checksums, and deal with trying to keep that in sync
<geist>
right, that's the idea. the general gist of the youtube vid was most modern RAID doesn't check parity on read back, and generally rely on the storage device to report a failure so it knows what side of the parity to consider bad
<clever>
zfs solves it, by having the checksum beside the block# pointer, and having clear space allocated for it in the metadata
<geist>
but in the old days, and/or much more serious hardware raid actually reads the parity back and continually checks it
<geist>
and yes, the super gist is the real future is having the FS do all of this instead of relying on raid
<geist>
but i thought the 520 byte sector thing was interesting
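A sketch of how that 520-byte formatting disambiguates a mirror mismatch. The 8-byte tag layout here is invented (plain CRC-32 plus padding; real arrays used vendor-specific protection fields), but the principle is as geist describes: the checksum tells you which side of the mirror to trust.

```python
import zlib

SECTOR = 512
# 520-byte formatting: 512 bytes of data + 8 bytes of integrity tag.

def write_sector(data: bytes) -> bytes:
    """Format a 520-byte sector: data, CRC-32, 4 bytes of padding."""
    assert len(data) == SECTOR
    return data + zlib.crc32(data).to_bytes(4, "little") + b"\x00" * 4

def pick_good_copy(a: bytes, b: bytes):
    """Given two differing 520-byte mirror copies, return the intact one."""
    def ok(s):
        return zlib.crc32(s[:SECTOR]).to_bytes(4, "little") == s[SECTOR:SECTOR + 4]
    if ok(a):
        return a
    if ok(b):
        return b
    return None  # both fail their checksum: genuine double failure
```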
<clever>
i think you also see similar things on a lot of floppy formats
<clever>
but its often hidden by the floppy controller
<clever>
and i assume hdd/ssd's can also do the same internally, to be able to generate read errors
<geist>
right, in general hard drives or whatnot already use larger sizes, as do cdroms, etc. it's just *usually* hidden by the device because it's used for low level error recovery, and as a header for the sector in the first place
<geist>
that was the deal with the 2300 byte cdroms. you could 'get to' the raw format
<zid>
2352
<geist>
yeah
<zid>
it has timestamps, ecc data, etc
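For reference, the 2352-byte mode 1 sector zid mentions breaks down like this (figures from ECMA-130) — this is where the "timestamps" (the minute/second/frame header) and the ECC budget live:

```python
# Mode 1 CD sector byte budget: 2352 raw bytes per 2048 user-data bytes.
MODE1 = {
    "sync": 12,
    "header": 4,        # minute/second/frame address + mode byte
    "user_data": 2048,
    "edc": 4,           # error detection code
    "reserved": 8,
    "ecc": 276,         # P- and Q-parity error correction
}
assert sum(MODE1.values()) == 2352
```

Mode 2, mentioned later in the discussion, reclaims the EDC/ECC bytes as data, which is exactly why it was unreliable.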
<clever>
that reminds me, the 1541 floppy drive for the c64/vic20, the cpu was so slow at detecting the end-of-sector footer, that there was a mandatory gap between sectors
<zid>
some of that wouldn't be needed on a solid state drive, but the ecc data would be nice
<clever>
so when it passes the end of 1 sector, it switches to writing, and that delay causes a gap
<geist>
oh nand flash has a lot of ecc as well
<clever>
one copy-protection scheme, just ignored those gap rules, and put too many sectors into a track
<geist>
since it's a highly unreliable format. but that's also stuff the nand controller deals with
<clever>
it reads perfectly fine, but the floppy controller cant react fast enough to pack the sectors in that densely
<zid>
Mine has some crcs but god knows how good they are
<zid>
and how often it checks them
<clever>
and the floppy controller doesnt have enough ram to write an entire track at once
<clever>
the hack the pirates came up with, a ram expansion for the floppy drive!
<geist>
i remember there was awhole push there by some silly people to format their cdroms with mode 2 and eschew a bunch of the ecc in order to fit more data in
<geist>
made it highly unreliable
<Ermine>
Siily people are silly.
<clever>
that reminds me of all of the companies putting a logo in the middle of a QR code
<clever>
they are eating into the ecc bits when they do that
LostFrog has joined #osdev
PapaFrog has quit [Ping timeout: 248 seconds]
netbsduser has joined #osdev
<kazinsal>
speaking of ECC, apparently the RTX 4090 drivers have a little advanced settings flag you can turn on to cut 10% of the memory bandwidth down in exchange for ECC on the GDDR6X
<kazinsal>
wonder if they can backport it to previous 6/6X GPUs
<kazinsal>
it's one of those features you'd expect to only be on Quadro and DCGPU cards and not consumer (albeit extremely high end enthusiast) equipment
<Ermine>
And when is it worthy to disable ecc?
<kazinsal>
on a GPU? well, if you're buying a $2000 consumer-tier graphics card, chances are you care about MORE SHADER PERFORMANCE NOW and not about cosmic bitflips making a pixel 2% less mauve
<zid>
yea gpu ram is sort of.. ran on the ragged edge, because it doesn't matter
<Ermine>
So, on consumer cards it's beneficial to have ecc off by default.
_xor has joined #osdev
<zid>
the shader programs etc will be in the equivalent of icache
<zid>
most cards are perfectly 'usable' with entirely stuck bits on the dram, you just get snow
<clever>
personally, i would want textures to lack ecc, but control lists to have ecc
<clever>
but doing both at once, could be tricky....
<clever>
if a control list bit-flips, the gpu crashes and drivers have to re-init everything
<clever>
if a texture bit-flips, 2% less mauve, who cares?
<pitust>
are bitflips really that common
<clever>
vertex data, it will be a bit more jarring, but not a problem
<zid>
clever: it clocks everything dynamically bear in mind
<clever>
pitust: my desktop is having its ram fail, and its very noticable, random things segfault and SIGILL's
<zid>
so the dram will likely be running super slow while you're uploading all the shaders and shit
<zid>
but then running fast during render
<clever>
zid: but reads on dram are destructive, so the shader could corrupt while rendering at a high clock
<kazinsal>
there's a reason most open source GPU drivers still suck
<kazinsal>
and that reason is that modern GPUs are straight pants on head in complexity
<clever>
kazinsal: when i reboot my dual-boot machine between windows and linux, i can see the windows wallpaper on my linux desktop...
<kazinsal>
you have to spend a good number of CPU cycles just making sure they don't spontaneously combust
<clever>
thats a serious un-initialized ram bug
<clever>
its corrupted to heck and back, but its still clearly visible
<kazinsal>
and then you have to spend more CPU cycles to fix up lazily written shaders
<kazinsal>
before vulkan and DX12 became pretty much the standard for game development, DX10/11 games were generally so badly written that they'd gain significant performance increases after the first post-launch driver was released because nvidia and AMD would go through and flag a bunch of shaders as ones to be overridden by properly optimized ones
<kazinsal>
or ones that just weren't fuckin broken
<kazinsal>
with vk and dx12 you actually have to put more effort into making your shaders run properly
<kazinsal>
which resulted in a) more attention to detail in shader programming and b) better tools to make turning designer brainwaves into functional shader code a reality
<kazinsal>
eg. unreal's material graphs and unity's shader graphs
<zid>
yea I often get old nonsense in windows too
<kazinsal>
also the ability to dynamically recompile DX8/9/10/11 shaders into vulkan ones was an insane leap forward
wand has quit [Ping timeout: 258 seconds]
<zid>
I forget what reliably triggers it now, tabbing in and out of fullscreen things
<clever>
something ive kinda wanted to look into, can i compile a shader, and then compute how many clock cycles it would take to run, not counting texture lookup stalls?
<clever>
for some gpu's, that should be a trivial computation
<kazinsal>
that same tech got applied to other architectures so now you can run Switch games at 4K 120 FPS on a PC
<clever>
and then based on the clock and core count, i can compute the pixels/sec fill rate
<kazinsal>
I think nvidia's cuda sdk has some static analysis tools
<kazinsal>
but I haven't really played with it because I'm not really a graphics programmer
<clever>
and then assuming no transparencies, i can divide by res to get fps
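clever's back-of-envelope fill-rate estimate is straightforward arithmetic once you have a static cycle count; all numbers in the sketch below are invented for illustration:

```python
def max_fps(cycles_per_pixel, cores, clock_hz, width, height):
    """Upper-bound fps: ignores texture stalls, overdraw, transparency."""
    pixels_per_sec = cores * clock_hz / cycles_per_pixel
    return pixels_per_sec / (width * height)

# e.g. 16 shader cores at 500 MHz, a shader statically counted at
# 100 cycles/pixel, rendering 1920x1080 -> roughly 38.6 fps ceiling.
```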
wand has joined #osdev
awita has joined #osdev
alpha2023 has quit [Read error: Connection reset by peer]
alpha2023 has joined #osdev
MiningMarsh has quit [Ping timeout: 248 seconds]
smach has quit [Read error: Connection reset by peer]
epony has quit [Quit: QUIT]
nyah has joined #osdev
awita has quit [Ping timeout: 255 seconds]
awita has joined #osdev
<zid>
geist: I thought of another 'new thing make old CPU slow' category, people removing quadratic algs
<zid>
which in the Good Old Days was fine because the amount of cpus in a system was '1' or '2' and maybe '4' on a top class server, or whatever
<zid>
but now it might be 2048 so you have to build a hashtable
<zid>
so now the '1 cpu' case builds a hashtable instead of a for loop
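zid's point sketched as code: below some size the quadratic version wins because it has no setup cost, so the pragmatic fix is a size cutoff — the threshold value here is invented:

```python
SMALL = 8  # crossover threshold, made up for illustration

def dedup(items):
    """Deduplicate, preserving order: quadratic for tiny inputs,
    hash-based for large ones."""
    if len(items) <= SMALL:
        out = []                      # O(n^2), but no setup cost and
        for x in items:               # cache-friendly for the old
            if x not in out:          # 1-or-2-CPU case
                out.append(x)
        return out
    seen = set()                      # O(n), but pays hashing overhead
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```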
MiningMarsh has joined #osdev
gog has joined #osdev
awita has quit [Ping timeout: 260 seconds]
wand has quit [Ping timeout: 258 seconds]
wand has joined #osdev
<gog>
hi
<zid>
hi
<geist>
zid: hmm, yeah that's true
<GeDaMo>
The computer is now a network
<gog>
you have computers in your computer
awita has joined #osdev
Burgundy has joined #osdev
Burgundy has quit [Ping timeout: 246 seconds]
poyking16 has joined #osdev
<Ermine>
True
_xor has quit [Quit: brb]
_xor has joined #osdev
puck has quit [Excess Flood]
puck has joined #osdev
<mrvn>
clever: X just creates the framebuffer and starts displaying and only then it writes to it.
<mrvn>
clever: a leftover background implies your BIOS doesn't train your DRAM, at least on reboot.
gildasio has quit [Ping timeout: 258 seconds]
gildasio has joined #osdev
terminalpusher has joined #osdev
IRChatter9 has joined #osdev
IRChatter has quit [Ping timeout: 252 seconds]
IRChatter9 is now known as IRChatter
Bonstra has quit [Quit: Pouf c'est tout !]
Burgundy has joined #osdev
Burgundy has quit [Remote host closed the connection]
Vercas6 has joined #osdev
Burgundy has joined #osdev
dude12312414 has joined #osdev
gildasio has quit [Ping timeout: 258 seconds]
gildasio has joined #osdev
awita has quit [Read error: Connection reset by peer]
heat has joined #osdev
<heat>
your bios doesn't train your dram normally AIUI
<heat>
it trains it once and caches it
<heat>
only on hw changes does it re-train them i think
awita has joined #osdev
<heat>
also that sounds like the culprit is vram, not ram
pretty_dumm_guy has joined #osdev
Vercas6 has quit [Quit: Ping timeout (120 seconds)]
Vercas6 has joined #osdev
Goodbye_Vincent has quit [Quit: Ping timeout (120 seconds)]
sortie has quit [Quit: Leaving]
knusbaum has joined #osdev
sortie has joined #osdev
Goodbye_Vincent has joined #osdev
xenos1984 has quit [Ping timeout: 246 seconds]
<gog>
hi
<heat>
hai
xenos1984 has joined #osdev
bauen1 has quit [Ping timeout: 252 seconds]
<sham1>
Shark to you too /s
<sham1>
How are you all
<heat>
i'm a solid ok
<heat>
a 7/10
<gog>
haj
<heat>
not great, not bad
<heat>
how are u
<gog>
i got my hairs cut
<heat>
nice
<heat>
u bald or what
<GeDaMo>
I've been cutting my own hair since Covid, I just run the clippers over it
<gog>
no
<heat>
if i were to cut my own hair i'd just hair-trimmer-it
<gog>
i have luxurious long locks of golden hair
<heat>
probably not shave
<heat>
you could have luxurious long locks of b a l d
<gog>
no thanks
<heat>
cringe
<heat>
if anyone is oppressed it's white fat bald english men
<heat>
proper oppression innit
<kazinsal>
some days I miss the glorious long hair I had during and after university
<kazinsal>
maintaining it was a pain and it didn't look *great* but dang
<geist>
i dont think i have a weird accent in all seriousness anyway. i grew up in TX, but i think most folks dont hear my texas accent anymore
<zid>
plus a lot of americans have pin/pen and cot/caught mergers now
<zid>
which makes having them describe vowels using text a.. challenge
<geist>
as in i haven't heard anyone i've met mention it anymore. been outside of texas, on the west coast where there's very little 'interesting' accents for longer than i lived in texas
<mrvn>
geist: you never have a weird accent, everybody else does.
<zid>
geist: Which of those two mergers do you have?
IRChatter4 has joined #osdev
<geist>
well, what i mean is west coast accent is pretty uninteresting, really
<mrvn>
zid: potato potato
<geist>
its basically midwestern plain
<geist>
(except obviously some interesting dialects in los angeles and whatnot)
<zid>
midwestern is actually of the weirder american accents
<zid>
it's just normal to you
<mrvn>
geist: do you say howdy?
<sham1>
Ah yes, the schwa. It was rather difficult to adopt. Like no, you can't just reduce all your vowels
<geist>
it's considered pretty 'normal' american, kinda the center point
<heat>
howdy is only pronounceable in an australian accent
<sham1>
But an Aussie wouldn't say howdy, but g'day
<zid>
geist: which of the pin/pen and cot/caught mergers do you have?
<bslsk05>
'20 British Accents in 1 Video' by Eat Sleep Dream English (00:21:54)
<heat>
this dude is pretty good too
<zid>
heat: What's your english?
<zid>
geist won't say
<heat>
foreigner accent
<geist>
zid: what are you going on about?
<heat>
i can switch into US mode or UK mode
<heat>
whatever you want bruv
IRChatter4 has quit [Client Quit]
<zid>
geist: which of the pin/pen and cot/caught mergers do you have?
<geist>
what the hell is that?
<geist>
i dont know what your question is
<zid>
are those words different
<zid>
or the same
<sham1>
Do they sound the same if you say them
IRChatter0 has joined #osdev
<geist>
sound the same
<zid>
both mergers? wow
<zid>
yea your accent would sound super strange to me
<geist>
yes you're british
IRChatter has quit [Ping timeout: 252 seconds]
IRChatter0 is now known as IRChatter
<zid>
I spend like 2000 hours a month listening to americans speak
<geist>
of course my accent is weird. it's american
<heat>
what's your accent zid
<heat>
where are you from
<zid>
heat: estuary
<zid>
it's all glottal stops
<zid>
(technically anglia, but all the major features are the same)
<sham1>
Eh, Klingon has more
<heat>
is that cockney?
<zid>
no
<heat>
wikipedia says it is
<sham1>
There's not enough rhyming
<geist>
well i guess pin/pen would be different but generally aren't
<geist>
cot caught yeah would be exactly the same
<zid>
pin/pen merger always sticks out to me when I hear it
<zid>
aliensrock has it badly
<zid>
(youtuber who plays puzzle games)
<geist>
it's more like pin/pen would sound different if i were to very precisely prounounce it, but in general i'd just smear it out
<sham1>
Bloons
eck has joined #osdev
<zid>
and bloons
<sham1>
It's actually weird, I think I've actually adapted the cot caught merger into my accent. Well, maybe the latter has a slightly longer vowel sound, but yeah. It's odd
<mrvn>
heat: I don't think that's driver dependent.
<heat>
amdgpu?
<clever>
[root@amd-nixos:~]# lsmod | grep amd
<clever>
amdgpu 6209536 158
<heat>
mrvn, sure is
<heat>
your driver(s) do the memory management
<clever>
mrvn: i would expect any sane driver to not use a buffer until it has been initialized
<heat>
are you using xf86-video-amdgpu?
<heat>
or whatever is the nixos package
<clever>
lib/xorg/modules/drivers/amdgpu_drv.so is mapped into the X process
<mrvn>
heat: and it's working as malloc, not as calloc
<heat>
clever, try to remove that package and install -modesetting
<mrvn>
clever: ^
<heat>
i think that's a bug on the X driver
<mrvn>
heat: definetly. seen it on many systems
<clever>
heat: on nixos, thats as simple as services.xserver.videoDrivers = [ "modesetting" ];
<clever>
it accepts a list, so multiple drivers can be installed at once
<heat>
nixos is the zfs of distros
<heat>
fancy and annoying :P
<clever>
and all traces of non-enabled drivers vanish immediately
<clever>
so you can swap drivers trivially, and not have to worry about if you removed every part of it
<heat>
i use arch
<heat>
we have packages
<heat>
i install packages, i remove packages
<heat>
it goes brrrrrrrrrrrrrr
<clever>
but some packages may have post-install scripts that mess with config files, and dont undo it
<Ermine>
heat: it sometimes chokes when it's from aur.
<heat>
Ermine, wdym, has always worked here
<Ermine>
I remembered it when lib32-gstreamer* maintainer disappeared for a while and gstreamer versions went out of sync and broke videos in wine
<clever>
Ermine: but with nixos, you can rollback any upgrade, and keep wine on an older version while having other programs use newer versions
<Ermine>
And it wasn't easy to bump version because they switched from autoconf to meson
<heat>
that's pretty easy
<heat>
praise meson
<clever>
and like gentoo, the 32bit and 64bit follow the same recipe, so there is no need for a dedicated 32bit maintainer
<heat>
i've been thinking about redoing my OS's build system in meson
<Ermine>
Nah. having python for system-level stuff is meh.
<heat>
hm?
<Ermine>
I spent some time figuring out how to force it to build 32 bit stuff
<Ermine>
clever: my (former) seminarist on programming uses nixos
<clever>
Ermine: nixos uses namespaces (like docker containers) so the build cant even find a 64bit compiler
<heat>
what does python have to do with meson?
<clever>
it has no choice but to do a 32bit build
<clever>
uname even lies, and claims it has always been a 32bit system
<Ermine>
heat: meson is written in python
<heat>
ok, and what's the problem?
<heat>
it's still fast
<sham1>
Doesn't mean that system-level stuff in python isn't meh. It's not even the speed, but the python itself and needing that to exist in a system
<Ermine>
heat: the problem is not performance, but bootstrapping: I have to build python to build your system
<sham1>
Hell, python is irritating even for established systems
<Ermine>
Thankfully we have muon, but afaik it's still wip.
<heat>
i don't think that's a problem
<heat>
you'd need at the very least a C compiler, C++ compiler, linker, libc, compiler runtimes, libc++ and a bunch of unix utilities
<heat>
and a shell ofc
Gooberpatrol66 has quit [Quit: Leaving]
<zid>
gentoo spends half its dev time making sure the 8 slotted versions of python it needs don't break I am sure :p
<Ermine>
That's a lot of packages to have, and I'm not keen of adding another one.
<heat>
you realize linux builds its build system when you're building it, right?
<heat>
which also needs flex for instance
<heat>
python is not that big of a dependency
<heat>
at least, not bothersome
<heat>
i'm more worried about a build system's expressibility and utility when thinking about this stuff
<zid>
build systems shouldn't have those
<heat>
meson seems good for unixy stuff, but does it work well for a whole operating system?
<heat>
makefiles are bad and slow, cmake is bad, gn is verbose af (because it's *too versatile*), ninja files are supposed to be output
<zid>
more features = more mess
<heat>
bazel seems like a shitty idea as well
<heat>
and bazel needs java!
<Ermine>
not that I approve using flex
<heat>
imagine needing to build java, much worse!
<Ermine>
O.O
<heat>
and here's a funny tip: if you want to build the *whole OS*, you need all of these tools already
<j`ey>
heat: what does linux use flex for? compile time generation of something?
<heat>
i have pure makefile packages, packages that use python as a build script, packages that use shell scripts, a shitton of depressing autoconf, cmake, and meson (which has by far been the best experience porting)
<heat>
j`ey, kbuild
<j`ey>
oh
<heat>
oh yeah I forgot
<heat>
openssl uses perl
<heat>
i technically have a local v8 build which needs the whole of depot_tools
[itchyjunk] has joined #osdev
<Ermine>
openssl is clearly not an example of good software
<heat>
yeah but the point is that building a whole OS requires everything
<heat>
as in almost literally everything
<heat>
building a smaller base OS may not require so, but...
<heat>
shoutout to klange for making his packages use straight up makefiles
<heat>
building is hard :(
<klange>
I try to make all of my stuff reasonably compile just by throwing all the sources at the compiler, the makefiles are just convenience tools
<zid>
more features = more mess
xenos1984 has quit [Read error: Connection reset by peer]
<klange>
eg. you can build kuroko with `cc -o kuroko src/*.c src/vendor/*.c` or bim with `cc -o bim bim.c -lkuroko`
<heat>
kuroko is a bit more finegrained, with the modules and stuffs
<heat>
trimonthly reminder that you should maybe pull my meson patch
<bslsk05>
lwn.net: A first look at Rust in the 6.1 kernel [LWN.net]
xenos1984 has joined #osdev
bauen1 has joined #osdev
<j`ey>
maybe the m1 gpu driver will be the first ""real"" driver
<heat>
friendly reminder that lwn is fucking freat
<heat>
great**
<heat>
there's an nvme driver already
<j`ey>
well im not sure that one is going to actually be merged
<heat>
yeah, it's not replacing the current one
<heat>
maybe as a sample though
<j`ey>
maybe the m1 gpu driver will be the first 'new'/rust-only driver, there :P
<heat>
probably
<heat>
it's unclear to me what the implications of rust in the kernel are
<heat>
particularly right now where it requires a very particular build setup
<j`ey>
i dont think it's that hard to do though
<j`ey>
to setup I mean
<heat>
you need two very specific versions (rust and bindgen), and the standard library source
<heat>
it's significantly harder than make oldconfig bzImage modules_install
<j`ey>
sure
<j`ey>
but it should get easier
<heat>
the only toolchain variance in builds until now was LLVM=1
<heat>
and right now you need rustc because there's no other "working" compiler
<heat>
and rust doesn't have a spec, and rust evolves super quickly, etc
<j`ey>
i mean, gcc was the only option for years too
<j`ey>
and the rustc_gcc backend (gcc backend to rustc) can mostly compile the rust-for-linux stuff
<sham1>
I think that the problem with building isn't as much that building is difficult; to me it's more that people try to bloat up building stuff with things like project abstractions and such
<heat>
yeah and it still is the de-facto default
<j`ey>
rust evolves quickly, but still is backwards compatible
<heat>
but people apparently use those experimental features a lot
<j`ey>
not really, not anymore
<j`ey>
it was like that for a while, all the crates used nightly, but most of them dont anymore
<heat>
if you write kernel rust code, your code is only "reliably" compilable atm for llvm architectures
<sham1>
As far as I am personally concerned, I still feel that the make model, as also shared by ninja, is the best. Basically building is just a graph traversal problem along a DAG where each node has some number n pre-requisites and changing those cascades through the DAG. Everything else IMO is just adding more bells and whistles
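sham1's model in miniature: staleness propagates along the DAG, and a post-order walk yields a valid build order. Integer timestamps stand in for mtimes:

```python
def outdated(target, deps, mtime):
    """make-style staleness: rebuild if any prerequisite is newer."""
    return any(mtime[d] > mtime.get(target, -1) for d in deps.get(target, []))

def build_order(deps):
    """Post-order DFS over the dependency DAG: every node is emitted
    after its prerequisites, which is the order make/ninja build in."""
    order, seen = [], set()

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for d in deps.get(node, []):
            visit(d)
        order.append(node)

    for node in deps:
        visit(node)
    return order
```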
<j`ey>
yeah, but as I said, rustc_gcc_backend (forgot the actual name) works
<j`ey>
*ish, will work better soon im sure
<heat>
yeah but maturity, etc are all not there
<heat>
and gccrs is still very young too
<j`ey>
yeah gccrs is a lot further out
<heat>
and rust does not have a fucking spec
<j`ey>
rustc :P
<sham1>
That's not a spec
<sham1>
That's an implementation
<j`ey>
it;s both
<j`ey>
but sure
<j`ey>
I mean, the ":P" was there because I was joking :)
<sham1>
I'm annoyed by that train of thought
<sham1>
Because when people say that seriously
<heat>
j`ey, what the hell does linux need std for?
<sham1>
You'd think that would be the first thing yeeted by Linus
<heat>
OH
<heat>
core and alloc
<heat>
it's not std
<j`ey>
yeah it's not std
<heat>
these should be reimplemented by the kernel but whatever
<j`ey>
alloc sure, but core nah
<heat>
yeah core looks too low level
<heat>
which makes me ask why this isn't part of the compiler's runtime libs then?
<j`ey>
I mean it comes from the same repo as rustc
Gooberpatrol66 has joined #osdev
poyking16 has quit [Quit: WeeChat 3.5]
gog has quit [Ping timeout: 272 seconds]
archenoth has quit [Quit: Leaving]
vdamewood has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
AmyMalik is now known as Reinhilde
Vercas6 has quit [Remote host closed the connection]