klange changed the topic of #osdev to: Operating System Development || Don't ask to ask---just ask! || For 3+ LoC, use a pastebin (for example https://gist.github.com/) || Stats + Old logs: http://osdev-logs.qzx.com New Logs: https://libera.irclog.whitequark.org/osdev || Visit https://wiki.osdev.org and https://forum.osdev.org || Books: https://wiki.osdev.org/Books
<geist> maybe the idea is you can still reorder writes around reads, or vice versa, based on io priority
<geist> once you toss them into a single queue (without using multiple nvme priority queues, etc) you're at the mercy of the drive
<geist> OTOH SSD is so fast...
<geist> flip side is queuing a bunch of io on the ssd lets it leverage some of its own parallelism, so i guess that's not really a good idea
<heat> yeah linux on SSDs still opts for an IO scheduler
<heat> but AFAIK based on ionice and processes, not the disk head
Left_Turn has quit [Read error: Connection reset by peer]
<zid> yea makes sense you'd want one on the 'fairness' end regardless
<heat> but yeah you totes want to harness the multiple queues
<zid> just splat as much to the ssd as you can
<zid> and let it sort it all out
<heat> yep
<zid> but if you have EVEN MORE THAN THAT, then fairness/cluster/etc it as best you can and hope it helps a little
<zid> if not, whatever
<gog> hi
<heat> unfortunately the SATA controllers are pretty crappy on the io queue end
<heat> NVME is great though
<heat> you can (try to) allocate a queue per cpu and use MSI for MAXIMUM PERFORMANCE
<heat> IDE takes the cake for "most crappy hardware" where multiple block devices need to share 1 or 2 queues
<heat> with a staggering queue depth of... 1
relipse has quit [Quit: Connection closed for inactivity]
<heat> funnily enough windows 11 on a mechanical drive got really large outstanding IO queue depths
<heat> so i guess that's why it really sucks
<geist> what i dont see linux do necessarily is set up multiple priority io queues on nvme. but that may be because maybe most consumer level stuff, which is all i have experience with, only supports one level of priority?
<heat> wdym priority io queues?
<geist> i thought you can, if the nvme device permits, create multiple queues, each with their own io priority, for the device itself
<geist> consumer level stuff may only allow one level of priority
<geist> or i guess more to the point, you can already create multiple queues, so you can assign one per cpu, etc, but you can also assign different io priorities to them
gbowne1 has quit [Remote host closed the connection]
<geist> so hypothetically you could set up say a lo/med/high queue per cpu, so you get N * 3 queues
<heat> i don't remember, but i might've mentally skipped that part
gbowne1 has joined #osdev
<heat> this is the traditional point where i complain i can't find my spec pdfs in the middle of my home dir
<mjg> dir tree bad idea
<heat> ideally yeah but my Downloads/ has 178 pdfs
<heat> so it's not much of a tree anymore
<mjg> CALIBRE
<mjg> yer welcom
<heat> looks interesting
<heat> but if i can't use directories i'm not sure i'm going to use a separate software
<heat> also: yeah io submission queues can have priorities heh cool
<mjg> search by metadata mofer
heat has quit [Remote host closed the connection]
heat has joined #osdev
Matt|home has quit [Ping timeout: 245 seconds]
<geist> huh watching with top when GCC goes to doing the parallel phase of the LTO linking thing it actually seems to spit out a little makefile into /tmp and then run it
<geist> has the effect of having all the parallel versions of this using the single make token logic, so it actually doesn't exceed the overall parallelism of the root makefile
<geist> kinda neat
<moon-child> oh fun
<moon-child> mjg:
goliath has quit [Quit: SIGSEGV]
<mjg> i'm too old to take the bait
<moon-child> just re lld stuff
<heat> geist: ew
<heat> does it invoke make?
<geist> yep, the lto-wrapper
<geist> well first i was eww, but what's nice about it is make automatically does the 'find the root make job giver' thing
<heat> lto-wrapper IS make, or does it implement enough make?
<geist> so it ends up not using more jobs than the overall build system
<heat> or does lto-wrapper invoke make?
<geist> invoke
<heat> yuck
<heat> they could've just implemented the same logic a bit
<geist> but like i said it's pretty clever
<geist> oh i suppose
<heat> not that there's anything purely wrong with it, but i'd prefer a toolchain that doesn't depend on GNU make for it to work
<geist> what i dunno is if it just always does it or if it tries to detect that it's under a make invocation
<heat> >Use -flto=auto to use GNU make’s job server, if available, or otherwise fall back to autodetection of the number of CPU threads present in your system.
<geist> but then i guess the real ick is that they spawn multiple cc1s instead of just implementing a multithreaded LTO pass inside the thing, like clang does
<zid> -flto= is auto by default
<heat> yeah
<geist> yah so i guess there's some standard way to detect that you're under a make server. probably an environment variable
<Mutabah> Yep
<geist> in that regard it sort of makes sense. it only uses gnu make if it already knows it's under gnu make
<geist> in that, what would be the point of reimplementing make's logic if you know you can just use make
<zid> MAKEFLAGS has --jobserver stuff in it
<zid> idk if that's what you're supposed to use
<Mutabah> That's what you're supposed to use
<Mutabah> (I implemented jobserver support in a compiler recently)
<geist> i assume it's some sort of ticket thing where you read a token out of a pipe or something?
xenos1984 has quit [Read error: Connection reset by peer]
<Mutabah> Yep
<Mutabah> On windows it's a shared/named semaphore
<Mutabah> on posix, it's reading/writing tokens from a FIFO
<geist> do you have to write back to it when done? or does it somehow track your life?
<Mutabah> every byte you read must be written back to release the token
<Mutabah> One tick, I'll find the docs
<geist> oh i can look it up, just figured i'd ask since you're here :)
<bslsk05> ​www.gnu.org: POSIX Jobserver (GNU make)
<geist> ah thanks
<geist> i wonder how it tracks an abnormal termination?
xenos1984 has joined #osdev
<Mutabah> Iirc, `make` complains if not all jobs are released
<Mutabah> But, since every running task implicitly has a token (by virtue of being running), progress will always be made
<mjg> that job server thing is a massive piece of shit
<geist> oh dont hold back mjg
<mjg> it was probably a good idea when first smp came out
<Mutabah> What's so bad about it?
<mjg> it's one pipe
<kazinsal> works on my machine
<geist> i'm gonna guess it's some sort of PESSIMAL thing
<geist> and whats wrong with one pipe? it's unfair?
<mjg> it's a *massive* scalability bottleneck even on linux
<mjg> if you go to 64+ cores
<geist> oh geez man. it's almost certainly not the bottleneck when spawning thousands of processes
<Mutabah> if obtaining jobserver tokens is a scalability issue, then you're Doing It Wrong
<mjg> it very much is
<geist> exactly
<mjg> then go build a linux kernel with -j 96 or similar, on an adequately sized box
<geist> i have a 256 core one yes
<geist> how can i tell it's the bottleneck?
<mjg> pipe poll will be very high on the profile
<mjg> perf record/perf report
<heat> perf record with the correct options
* geist throws it all in the trash
<mjg> right, see flamegraphen
<geist> it's PESSIMAL ergo doesn't work
<heat> perf record -F999
<Mutabah> The bottleneck there wouldn't be the jobserver, it'd be other IO - between gcc/cc1/... and to disk
<mjg> it's literally the job server
<Mutabah> citation needed
<mjg> or rather, the fact that this is a single heavily shared pipe
<kazinsal> yeah at the scale where the job server pipe might start to be a concern you have much more tangible bottlenecks
<zid> good news, speeding up that token shit would slow it down
<zid> cus it'd add overhead that wasn't there before
<mjg> interestingly linux already has a better solution: eventfd
<geist> i mean yeah i guess. thing is things that work pretty well in a cross platform way are pretty nice
<mjg> i don't have a big linux box handy at the moment, but i can sort it out tomorrow
<mjg> geist: eventfd by now is cross platform though
<mjg> :]
<Mutabah> hmm... with `make -j 96` - only one task should be pulling from that pipe - the make orchestrator thread
<mjg> no
<mjg> you get *excessive* polling
<mjg> from all over
<Mutabah> To the jobserver pipe?
<geist> eventfd is implemented in all posix systems?
<mjg> geist: not all, but even netbsd and freebsd have it. if an eventfd-like interface was standardized, like it could have been 15+ years ago, they all would have
<mjg> Mutabah: yes
<mjg> Mutabah: gnu make has it in a non-blocking mode, they poll and read in a loop
<mjg> *a lot*
<Mutabah> That'd be a single thread doing that, right?
<mjg> no
<Mutabah> Although... if linux uses recursive make...
<mjg> it's every make process
<mjg> and you get quite a few with high j and a build tree which parallelizes
<Mutabah> but... does that mean that it's a bottleneck?
<geist> yeah it's when there's more than one make
<Mutabah> It might be a hot-spot for those tasks, but are they being blocked from other useful work?
<mjg> Mutabah: i had flamegraphs from a few months back showing that it is
<geist> it's explicitly for recursive make i assume, since within a make process it can do its own thing
<mjg> also you may remember there is this thing named ninja
<mjg> which dodges the entire thing
<heat> NINJA
<heat> wait what does ninja do?
<mjg> ninja has my stamp of approval
<geist> NINJA
<heat> wow
<geist> yah we went there
<mjg> heat: you generate all commands upfront, so you have one proc running all jobs
<heat> mjg's stamp of approval has a pretty high bar
<Mutabah> Technically, a well-made makefile set does the same
<mjg> i'm not aware of any
<Mutabah> although it is harder to write
<geist> right, having a single-process make is basically the same thing. ninja sidesteps the issue by not really letting you do the recursive thing
<geist> so thus not needing a job server
<kazinsal> ninja: because the other build systems out there weren't getting the author promoted
<heat> i think ninja still provides a job server?
<Mutabah> Still probably implements the jobserver, so it can coordinate with parent/child build systems
<geist> yah
<geist> kazinsal: bingo.
<Mutabah> I implemented a jobserver for my compiler project exactly for that reason
<geist> ninja is a solid T6 promotion
<heat> wasn't ninja a personal project?
<geist> maybe T7!
<heat> or is it also a G project?
<geist> oh heck if i know. i was only following kazinsal's lead. i know google does work on it a bunch
<kazinsal> I don't work for big G but I've seen personal projects get pivoted into "you now have a team working on this, here's some money"
<geist> we (fuchsia) actually added a pretty cool feature recently that shows you the top N jobs sorted by time
<geist> so it keeps a moving waterfall of the slowest jobs while compiling, really useful for very parallel builds
<geist> since we generally build with -j1000 or so for our builds
<kazinsal> -jUINT_MAX
<kazinsal> build machine immediately falls over
<mjg> geist: ninja can export a log which can be viewed in chrome
<mjg> you get a gantt chart of targets
<geist> oh sure we use that all the time to debug after the fact
<geist> yep
<mjg> i used that to find a lot of idiocy
<geist> this is handy to see what is currently running, basically
<geist> yeah dont get me wrong i love ninja, it's just not a complete build system. it's the engine for other build systems
<mjg> agreed, it's a small tool
<mjg> also bonus points for not spawning /bin/sh as a prefix for every command
<geist> yeah though to be fair if you understand make you can avoid it for that too
<geist> since make only does /bin/sh for some of its lines, depending on what is in them
<mjg> i thought that behavior is required by posix?
<heat> you can skip it if you're sneaky
<geist> simple commands, ones that it can directly invoke it just does
<heat> GNU make does sh's job for most commands if you don't touch $(SHELL)
<mjg> i think gnu make hacked echo in that manner
navi has quit [Quit: WeeChat 4.0.4]
<mjg> but what about the rest
<heat> like
<geist> basically if the command is essentially a simple command line without any fancy bits it just directly fork/execs
<heat> if you do SHELL:=/bin/myshell you'll see it prefixed everywhere
<geist> and yes 'built in' shell stuff like echo it will run in make process
<mjg> aight gents, it was nice talking to you
<mjg> i need to sign off for the day
<mjg> have a nice $timezone_appropriate
<geist> i did a fair amount of tracing of the LK build system to try to avoid /bin/sh as much as possible, was educational
<geist> especially since i was doing the thing that made it worse: using ; and combining multiple lines to try to reduce the number of /bin/shes in some cases
<geist> but the ; itself causes make to /bin/sh
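The fast path geist and heat are describing can be shown with a hypothetical Makefile: GNU make execs "simple" recipe lines directly, but any shell special character (`;`, `|`, `>`, `&&`, shell variable expansion, ...) makes it spawn `/bin/sh -c` for that line instead. The target names here are made up for illustration.

```make
# fast path: no shell metacharacters, so make fork/execs the command directly
fast:
	touch fast.stamp

# slow path: the ';' is a shell construct, so make runs
# /bin/sh -c 'touch a; touch b' for this line
slow:
	touch a; touch b
```

This is also why setting `SHELL` to something non-default disables the optimization: make can only skip the shell when it is sure `/bin/sh` semantics are in play.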
<heat> if /bin/sh is bash it'll be extra slow
<heat> for good measure
heat_ has joined #osdev
heat has quit [Ping timeout: 250 seconds]
<heat_> ok i checked real quick and it seems ninja is pretty google, even though it had mostly a single guy behind it
heat_ is now known as heat
<heat> so google is directly responsible for blaze, bazel, ninja, gn, gyp, kati and soong
<heat> and i'm probably missing something
<mcrod> hi
<moon-child> hi mcrod
<epony> wsup cookies ;-)
<kof123> "and i'm probably missing something" corporations are fictional, but let us not go there :D
<kof123> the whole point of a corporation is there is no direct responsibility
<epony> do you really want a burning meteor of fireflame right now? ;-)
<kof123> no, just was a funny statement :D
<epony> word of the year (denoting a 25 year procession): enshittification (Cory the late story)
<epony> it's called global surveillance and espionage
<epony> calling it just profiteering and middle-man-central-planned-digital-economy.. is too light of a punch
<epony> weak punchers are.. typically part of the system
<epony> that's why you have to also be concerned about the unions and the gentle opposition that the GNU is putting out, it's *W*E*A*K*
gog has quit [Ping timeout: 268 seconds]
<epony> the big shitty cacodemon: fair distribution of wealth, goods and produce in the economy since time immemoria of the imperial monarcho feodal colonial capitalism of exploitation misery and war-death-pollution-genetic-extinction
<epony> weak weak feeble and featherlight punches of caress and sexual stimulation, those "fables" and "allegories" are
<epony> need a light for your gas stove? ;-) I got it.
<epony> fictional are the participatory rights and democratic values of nothingburgers of the middle-class, it is disappearing
<epony> your "pseudo-social" system is gone, and so is literacy and social amenities, you have the glass towers and the broken roads, and an army that can't go anywhere, what can possibly go fuckingly wrong homeland?
<epony> enjoy.. while it lasts, 'cause Mexico wants its lands back and the 20 million missing native Amrican northern Indians and the 25 million Aztec and Inka are now changed beings and revenge spirits that bite hard in the mem-brains
<epony> corporations, yeah you wish that was your problem
<epony> remember what made the Mexican bay "appear" and cause dinosaur extinction.. you're the reptilian strike area now
<epony> MUAHAHAHAHA
<epony> ---
<epony> just joking ;-) chill
<epony> oh, and by the way, Chomsky is a weak (and senile from his youth) critic too, can't really make an impression either
heat has quit [Ping timeout: 245 seconds]
vdamewood has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<zid> oh heat died
<epony> and then start thinking, USA is greatly indebted to China, and debt leads to economic and hot destructive wars
<zid> I wanted to tell him a fun story about someone at nvidia trying to avoid PESSIMAL
<zid> They have a list of 4 things, they need to up this list to a list of 64 things
<zid> Someone submitted a patch to add an entire red black tree.
<zid> til mjg works at nvidia
<zid> (The patch was rejected)
<epony> nVidia has a representative on the C++ committee chair board
<epony> the $you's can be less worried about their "language" representation, and more worried about their entire economic and business model..
<geist> zid: PESSIMAL
<epony> because their AI accelerator machines are.. insufficient, weak, expensive, energy wasteful, pegged on TSMC production and overall wasteful and inefficient
<epony> when compared to the million cores of the nascent and already in deployment wafer-scale integration
<epony> nVidia better make a silicon fabrication facility.. 20 years ago to survive
<epony> it's shaking up pretty badly and losing up on serious compute and HPC markets for protein folding and AI and simulations
<epony> don't know what is your obsession with nVidia, but it is a consumer market serial production of expensive and inefficient low density GPU compute machinery, that faces competition from other TSMC customers for chip batches and losing out to Apple precedence of orders and Samsung semiconductor production independently
<epony> so might have to turn ot the Chinese fabrication plants sooner, before the Taiwan return to China completes
<epony> the entire "market valuation" is hollow
<zid> Remember, the goal is to make the constant factor as long as possible, so that your O(1) is bigger than O(sqrt(n)) for all n your program might encounter
<epony> the goal is to retake the HPC market anew with new cluster deployments with much higher energy and compute density with an several orders of magnitude performance and efficiency difference
dude12312414 has joined #osdev
dude12312414 has quit [Remote host closed the connection]
<moon-child> 64 i would maybe binary search
<moon-child> but still almost certainly a flat constant size buffer
<moon-child> unless there's like a bunch of them
<zid> nah it's for iommu regions I think
<zid> they bumped the limit, but no actual memory map will be that complicated in practice
<zid> it's almost always 4
<moon-child> OH
<moon-child> lol
<kof123> epony: "historic analysis" there is no history, it just goes back to astrology lol
<kof123> you end up with a purple heron capstone lol that is the "USA" claim to authority
<kof123> and exactly the same as "china" on that lol
<kof123> epony: For Engels (1934, 1954), the unity and struggle of opposites The unity of opposites, which Lenin described as the most important i don't know why you link me to "Marxist" stuff, they just merge, as always :D that was how egypt rolled :D same old :D
Ellenor has joined #osdev
<snappy> Hi, does anyone know a way to cross compile a gcc x86 toolchain for m1 mac? Just wondering typically how people do it, I think I need gcc, binutils, and gdb. I want to play around with xv6 os with qemu.
<zid> I'd use crossdev from gentoo, but that will probably not apply
<zid> the gcc people have a guide for building a cross compiler by hand somewhere
<geist> that being said, mac may be hard to generally build a cross tool chain for
<geist> since it's not ELF based, may have magic apple stuff, etc
<geist> and apple has been using clang for a long long time
<zid> mm yea that's a good point
<zid> is there a 'mingw' equivalent?
<epony> kof123, maybe you did not understand any of that, but.. globalised imperialism is manifested in international financialised cartels of pre-war exploitation that leads to war
<epony> that is what your mention of corporations means
<Mutabah> snappy: If it's a free-standing toolchain (no need for standard library), then the normal procedure of building a gcc cross compiler should work?
<epony> and it shows up as "organised" takedown operations of the balance of power in a country towards which then weapons would be transported and ships returned with oil and gas and grain and minerals of high value
<zid> aka crossdev/crosstool-ng/gcc's guide/etc
<epony> so, your "Arab springs" example are the result of corporate funding to destroy your local economies and result in serious north African and middle Eastern series of hot and dramatically destructive conflicts
<Mutabah> ... ok, what is going on with epony's discussion?
<epony> that is, what these "corporations" facilitate and take part in, including big tech and.. international US banks and the USA military
<zid> you listen to him?
<Mutabah> I usually ignore him, but that's during quiet times
<zid> I'm going to have to check the log now
<Mutabah> Does he bother people?
<kazinsal> I've had him on ignore for years now
<zid> ah, just normal schizo stuff
<epony> Mutabah, we exchanged a quick set of ideas ;-) don't worry, that's just facts that you should never know and accept as being told by the corporations and foreign governments to which you are a colonial and exploited extension (dominion / client state)
epony was kicked from #osdev by Mutabah [Please keep it on-topic, move your rants elsewhere]
<Mutabah> I should have done that months ago TBH, been too distracted to keep a close-enough eye
epony has joined #osdev
<epony> relax, it was concluded before you mentioned it.. no need to be rude
<zid> There's a fair few hanging around, they're attracted to tech channels
<epony> zid, check yourself ;-)
<epony> before you breakyouself
TkTech has quit [Quit: Ping timeout (120 seconds)]
TkTech has joined #osdev
<kof123> if you want to discuss philosophy mention another channel epony or msg :D
masoudd__ has joined #osdev
dza has quit [Quit: ]
dza has joined #osdev
dza has quit [Quit: ]
Brnocrist has quit [Ping timeout: 252 seconds]
Brnocrist has joined #osdev
dza has joined #osdev
dza has quit [Quit: ]
gbowne1 has quit [Quit: Leaving]
[_] has quit [Read error: Connection reset by peer]
<snappy> Mutabah: yeah no libc, what's the standard method for compiling a toolchain? I've tried to use crosstool-ng in the past but always have troubles
<snappy> having said that, looks like homebrew has i686-elf-gcc
dza has joined #osdev
<Ellenor> grug
<moon-child> gog
<zid> glig
<Mutabah> snappy: The "building a cross compiler" wiki page has basic instructions
<zid> all I ever remember is that you don't build in the source dir
<Mutabah> Yeah
<zid> thank god for crossdev
<kazinsal> everyone should go through the "fuck, okay, what am I missing now" rigmarole for building a cross toolchain a few times
<Mutabah> One tick, getting an example from my (linux) machine
<kazinsal> just to appreciate how it "fits" together
<kazinsal> then, and only then, can they use a "just build me a damn toolchain" script ;)
<Mutabah> The above is the basic pattern I've used for various architectures and binutils/gcc versions
<kazinsal> I just use geist's toolchain scripts these days, they do the job for anything I intend to build for
<kazinsal> (that is, when I actually sit down and do some osdev stuff. which, recently, has been "not much")
GeDaMo has joined #osdev
gog has joined #osdev
Left_Turn has joined #osdev
roper has joined #osdev
<ddevault> DWARF is a fucking nightmare
<Mutabah> needlessly, or just the mess of dealing with real programs?
<ddevault> I'm not sure.
<ddevault> honestly I don't need everything it offers right now and just getting at what little I do want (line numbers) is horribly complicated
* moon-child STABS ddevault
xvmt has quit [Remote host closed the connection]
xvmt has joined #osdev
goliath has joined #osdev
sbalmos has quit [Remote host closed the connection]
sbalmos has joined #osdev
Vercas9 has quit [Quit: Ping timeout (120 seconds)]
navi has joined #osdev
<epony> kof123, we'll discuss of course, but for now the important points were presented and they are quite seriously real, and unavoidable, as you're living in them, and not just some "internal pondering" philosophy ;-) so.. catch the drift of the wind anywhere it fits, but briefly and do your research after that, i.e. it already happened, time to begin understanding it
<gog> hihi
heat has joined #osdev
<heat> STABS
<sham1> DWARF
<gog> HEAT
<sham1> That's no debug info format I've heard of
<gog> high-efficiency attribution table
<heat> GOG
<heat> GOOD OLD GAMES
<gog> they don't sell only old games tho
<heat> maybe its a GOOD OL games
<heat> southern-like
<sham1> Of course they don't. It's CD Projekt Red's own storefront thing
<heat> DOG.com
<heat> DANG OL GAMES
<sham1> GOD.com
<sham1> Games Of Decades
<bslsk05> ​kernal.org: ArmageddonSoon.com
<heat> ARMAGEDDON SOON
<gog> wow this is some very 90's web design here
<gog> 90's bible thumper design
<sham1> And like any good bible thumpers, they clearly haven't actually read the good book
<sham1> The thing explicitly says that you can't know when the apocalypse happens
<heat> i talked this through with gog yesterday
<heat> reading is for nerds
<heat> do not read
<gog> yes
<sham1> So… how are we commsing?
<gog> do not read literature i mean
<sham1> Ah
<heat> no
<heat> do not read anything
<gog> reading is fine as long as it's not for enrichment
<heat> unlearn reading
<gog> i mean yes
bauen1 has quit [Ping timeout: 240 seconds]
<heat> people should be write-only
sprock has quit [Read error: Connection reset by peer]
sprock has joined #osdev
<epony> 4.6: "Planet of the Users" http://www.openbsd.org/lyrics.html#46
<bslsk05> ​www.openbsd.org: OpenBSD: Release Songs
<epony> "shit only"
<gog> shit-only interface
dude12312414 has joined #osdev
<heat> cummed interface
<heat> farded
<bslsk05> ​'Dracula Flow: The Official Saga (1-4)' by PLUMMCORP RECORDS (00:17:34)
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
[itchyjunk] has joined #osdev
PublicWiFi has quit [Quit: WeeChat 4.0.3]
Turn_Left has joined #osdev
Left_Turn has quit [Ping timeout: 268 seconds]
PublicWiFi has joined #osdev
PapaFrog has quit [Ping timeout: 256 seconds]
vdamewood has joined #osdev
gog has quit [Quit: Konversation terminated!]
skipwich has quit [Quit: DISCONNECT]
<epony> you mean holy C roman gameboy interface
skipwich has joined #osdev
<epony> nah, that Apple
<epony> muh SWIFT is in uhr LLVM
<epony> "owndeaded platformfece"
* epony runs
* geist yawns
<zid> okay geist is bored, channel's over guys
<zid> pack it up
<geist> more like i woke up early because didn't sleep very well
<geist> so it's one of those kinda yawns
<geist> like i'm awake, but yawn
<zid> I just woke up myself
<zid> but it's at 6pm, because I am a degenerate
<sbalmos> geist: you're not supposed to do that until you're back to working
<geist> yeah i know!
<geist> am dissapoint
<geist> i'll crash here in a few hours for sure
PublicWiFi has quit [Ping timeout: 245 seconds]
<sbalmos> every time I sit down to try and start actually writing code nowadays, I start going comatose :/
<geist> yeah that happens sometimes to
<geist> too even
masoudd_ has joined #osdev
masoudd__ has quit [Ping timeout: 256 seconds]
gog has joined #osdev
heat_ has joined #osdev
heat has quit [Read error: Connection reset by peer]
PapaFrog has joined #osdev
skipwich has quit [Quit: DISCONNECT]
<heat_> geist!
<heat_> how r u
<geist> still awake!
<gog> hi
<heat_> grog
<gog> heat you have a tail
<gog> are you a cat
<heat_> i meow-may be
<heat_> geist, in fuchsia, do you know what MRU means?
<heat_> is it "Multi-generation LRU"?
<geist> hmm, or could just be most-recently-used
<gog> most recently used makes more sense to keep track of imo
<heat_> hmm yeah it seems like fuchsia just uses different terminology
LostFrog has joined #osdev
gog has quit [Quit: byee]
PapaFrog has quit [Ping timeout: 252 seconds]
<heat_> linux uses "active" and "inactive", fuchsia seems to use lru and mru?
<heat_> // Currently define a single queue, the MRU, as active. This needs to be increased to at least 2
<heat_> // in the future when we want to track ratios of active and inactive sets.
<heat_> seems like it yeah
<heat_> i was hoping yall had a nice writeup about your page reclamation somewhere, but i can't find it
<geist> yeah probably internal, i can ask the author when i get back to work if they can get it into docs
<geist> most stuff starts as a google doc internally that gets socialized around
<geist> sometimes that gets flattened into an RFC which is public, but that's generally for systemwide things
<geist> like adding some new api, etc
<heat_> it'd be great if they can find the time to post it as a doc :)
<heat_> mm people don't like writing documentation :P
<bl4ckb0ne> how screwed am I when the page fault handler faults
<heat_> erm, what kind of fault?
<bl4ckb0ne> idk, working on this
<heat_> oh, if its accidental the answer varies from "badly screwed" to "eventually you'll be 'fine'"
<bl4ckb0ne> its not accidental, im trying to plug the memory map I got from uefi into the kernel
<bl4ckb0ne> im trying to figure out what cr2 is reporting
<zid> six
<bl4ckb0ne> almost, 464
<zid> try not loading 464 into a segment selector
<heat_> WHY IS CR2 A CONTROL REGISTER IF IT DOESNT CONTROL ANYTHING
<heat_> WAKE UP SHEEPLE
<bl4ckb0ne> qemu hangs again gfdi
<zid> -no-kvm
<bl4ckb0ne> even with -d int,cpu,mmu
<bl4ckb0ne> ah right
<geist> heat_: well, it's more like writing the docs and then writing it again is annoying
<geist> but yeah
* heat_ nods
<bl4ckb0ne> > CR2=00000000000001d0
<bl4ckb0ne> yeah same as with kvm enabled
<bl4ckb0ne> but at least i get some output
<heat_> could also just make the doc public, i know the chrome people do that a bunch
<zid> what does cr2 contain during a fault?
<zid> fault addr?
<bl4ckb0ne> that value
<zid> no
<heat_> zid, yes
<zid> I only ever use the -d int output to look at faults
<geist> bl4ckb0ne: another thing that might help in addition to -no-kvm is -singlestep
<geist> it causes it to translate one instruction at a time. good for tracing, at the expense of speed
<zid> turning off tcg chaining was enough for me
<zid> but it took a while to find that option
<geist> heat_: yeah for some reason we dont do that, it's against policy
<geist> i think it's because we made a hard policy of getting docs into the tree, which is great, IMO *however* it makes it an additional step
<geist> so i think it has its pros + cons
<bl4ckb0ne> it steps on its own?
<geist> yah it still runs the same way, you'll just see in a trace that it does it one source instruction at a time
<geist> instead of a block of instructions
<heat_> oh, here's a funny unrelated anecdote: i once had a problem with ghost page faults on bad, unmapped addresses, when the system was stressed. it turns out that on the page fault handler i was doing "void pfhandler() { irq_enable(); /* ... */ unsigned long cr2 = cpu_get_cr2(); }"
<heat_> so the cr2 was getting accidentally clobbered after the irq enabling
<geist> ah yep, a fairly common mistake
<bl4ckb0ne> so thats not the addr that messes up everything
<heat_> cuz on all other trap handlers, that's not a problem, so my generic trap handler was restoring IRQs
<geist> pretty much the same on most architectures, though sparcv9 atleast has a stack of control registers to let IRQs/exceptions nest up to a point
<geist> which is kinda neat. haven't seen another arch do that
<geist> (was reading the sparcv9 docs recently)
<heat_> the sun engineering ethos is unchallenged
<geist> they kinda need it because the register window spill and TLB miss exceptions can more or less fire at any time
<geist> so this helps you have exceptions interrupt other exceptions up to a point (i think 4 deep)
<bl4ckb0ne> hm weird
<bl4ckb0ne> cr2 is 0, then a value a single time, then the value i pasted
<zid> why not
<zid> look at the -d int bit
<bl4ckb0ne> im doing that
<zid> and make sure all the regs make sense
<zid> cr2 is a pretty shitty reg
<geist> also you should be able to look at the flags pushed on the stack to see what the PF flags are
<geist> ie, instruction or read/write, etc
<heat_> cr2 is great
<bl4ckb0ne> got a bunch of "Servicing hardware INT=0x20" before that change
<bl4ckb0ne> after its just "check exception old: 0xffffffff new 0xe"
vdamewood has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<geist> yep. so now you need to figure out why PF is being thrown
<geist> and then work backwards from that
<geist> it's possible the problem is something like int 20 is nesting, or the stack isn't being cleaned up, so you're blowing the stack or something
<geist> but figure out the why and then work backwards
<zid> gist me the log :P
<geist> nah let em figure it out
<heat_> a nice, easy tip for debugging is to just dump the stack
<heat_> like x/100a $rsp in gdb really helps
<bl4ckb0ne> relevant bit from the log Servicing hardware INT=0x20
<bl4ckb0ne> RAX=000000000065f1aa RBX=0000000000002710 RCX=000000000000b008 RDX=000000000000b008
<bl4ckb0ne> R12=000000003e5e90d4 R13=000000000000c040 R14=0000000000000000 R15=000000003e5ec040
<bl4ckb0ne> R8 =000000003fe95bb0 R9 =000000000000b008 R10=0000000000000000 R11=0000000000000000
<bl4ckb0ne> 128: v=20 e=0000 i=0 cpl=0 IP=0038:000000003e7c3b53 pc=000000003e7c3b53 SP=0030:000000003fe95b00 env->regs[R_EAX]=000000000065f1aa
<bl4ckb0ne> RSI=000000000065f73c RDI=0000000000000000 RBP=000000003fe95b50 RSP=000000003fe95b00
<bl4ckb0ne> RIP=000000003e7c3b53 RFL=00000293 [--S-A-C] CPL=0 II=0 A20=1 SMM=0 HLT=0
<bl4ckb0ne> CS =0038 0000000000000000 ffffffff 00af9a00 DPL=0 CS64 [-R-]
<bl4ckb0ne> ES =0030 0000000000000000 ffffffff 00cf9300 DPL=0 DS [-WA]
<heat_> linux does that when tracing panic handlers
<bl4ckb0ne> SS =0030 0000000000000000 ffffffff 00cf9300 DPL=0 DS [-WA]
<bl4ckb0ne> DS =0030 0000000000000000 ffffffff 00cf9300 DPL=0 DS [-WA]
<bl4ckb0ne> FS =0030 0000000000000000 ffffffff 00cf9300 DPL=0 DS [-WA]
<heat_> stop
<bl4ckb0ne> GS =0030 0000000000000000 ffffffff 00cf9300 DPL=0 DS [-WA]
<heat_> stop
<zid> This is not a gist.
<bl4ckb0ne> LDT=0000 0000000000000000 0000ffff 00008200 DPL=0 LDT
<bl4ckb0ne> TR =0000 0000000000000000 0000ffff 00008b00 DPL=0 TSS64-busy
<bl4ckb0ne> GDT= 000000003f5dc000 00000047
<zid> This is a 'flood the channel'
<bl4ckb0ne> IDT= 000000003f06b018 00000fff
<sortie> /msg chanserv op #osdev
<bl4ckb0ne> CR0=80010033 CR2=0000000000000000 CR3=000000003f801000 CR4=00000668
<heat_> stop
<heat_> stop
<bl4ckb0ne> DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
<bl4ckb0ne> DR6=00000000ffff0ff0 DR7=0000000000000400
<zid> you can't stop it
<sortie> bl4ckb0ne: (core dumped)
<zid> his client has DON'T GET ME BANNED mode built in
<geist> dang, sortie was quick to the trigger there
<zid> clients should be banned from the network if they support that feature imo
<sortie> geist., got delayed a sec due to a space before /msg chanserv whoops
<zid> it only exists to be abused
<heat_> what feature?
<bl4ckb0ne> CCS=0000000000000008 CCD=ffffffffc0023dd0 CCO=SUBQ
<zid> anti-flood
<bl4ckb0ne> EFER=0000000000000d00
<geist> haha still coming in
<sortie> bl4ckb0ne, yeah you'll be quiet for a little while longer
<heat_> i think i have that feature too
<zid> back in my day, you'd paste a message and after 4 messages you'd get disconnected, because it took less than 50ms
<geist> one of the things i love about irccloud is it has an auto pastebin feature
<sortie> bl4ckb0ne, feel free to leave and rejoin or whatever, or I'll unmute you in a second :)
<geist> if you try to paste >=3 lines it says do you want to pastebin this?
<zid> rather than the client spacing them 1200ms apart so that you don't
bl4ckb0ne has left #osdev [#osdev]
bl4ckb0ne has joined #osdev
<bl4ckb0ne> very sorry about that
<geist> no worries
<zid> My favourite is when I do 'copy image' from google image results
<sortie> bl4ckb0ne, no worries, use a pastebin next time, now where were we? :D
<bslsk05> ​paste.sr.ht: 9824125 — paste.sr.ht
<bl4ckb0ne> i was using a pastebin
<bl4ckb0ne> i just messed up my copy paste
<zid> and I get a kilobyte of base64 instead
<heat_> OH I KNOW WHAT'S GOING WRONG
<sortie> b geist
<sortie> bt
<geist> branch to geist?
<sortie> geist, gimme a stack trace
<bl4ckb0ne> the "Exit boot services" is right at the end of the bootloader and into kmain
<zid> smh got your tss below 1MB
<heat_> you're dereferencing a pointer from EFI without it being mapped
<zid> imagine thinking memory below 1MB really exists
<heat_> no need to thank me
<bl4ckb0ne> thanks
<zid> these are incredibly silly addresses in general
<bl4ckb0ne> heat_: how did you deduce that
<geist> yeah something at 000000003df3ef98
<heat_> 0x3df3ef98 is mega EFI-like
<heat_> particularly since they allocate top-down
<geist> yeah
<zid> all your mapped memory before you start a user program should start with a bunch of f
<zid> unless you tried to access device memory and forgot to map it
<zid> (but that should then start with an f, after you do)
<geist> if you dissassemble your kernel you can see what it was trying to do at RIP ffffffffc0036849 which is where the first PF happens
<bl4ckb0ne> i saw that yeah
<geist> the second PF is probably nonsense, since your system has already blown up
<heat_> anyway you should check what's wrong with your PF handler that it can't panic properly
<heat_> addr2line is nice, llvm-symbolizer is even nicer
<zid> RSP and RIP are nicely kernel spacey, but it appears to be heavily interacting with user addresses.. which seems very wrong
<bl4ckb0ne> i think I got it
<bl4ckb0ne> im loading a file in the bootloader
<bl4ckb0ne> ended up dereferencing an efi_physical_addr
<heat_> you can't call boot services after exitbootservices btw
<bl4ckb0ne> yeah those are loaded before
gog has joined #osdev
<zid> I don't have any sick logs of my kernel taking irqs in kernel space, now that it plays zelda on a loop in userspace :(
<bl4ckb0ne> emulator?
<zid> 0x2b -> ethernet frame
<bl4ckb0ne> neat
<zid> but it's in userspace so all the regs are low values now
<heat_> while "haha boros zelda" zid is stealing your personal info and phoning home
<zid> data EXFILTRATION
friedy has joined #osdev
<gog> that's fine he already found those pictures of me anyway
<zid> gog is actually the receptionist in the power puff girls
<zid> only exists as socks down
<GeDaMo> Sara Bellum
<zid> yes, bellum is it
<zid> gog is just pink and blue stripey socks all the way up, then she goes out of frame
<gog> yes
<bl4ckb0ne> j
<bl4ckb0ne> thats not my vim buffer
<heat_> j
<zid> my vim buffer is 400kB of 8===D
<gog> :%s/==D//g
<zid> I only get to keep 299.998kB of shaft, and a ball!?
<zid> 399*
<zid> jewish regex
<gog> l'chiam
<gog> in my case it was :%s/8//g
<zid> sexy sexy eunuchs
<gog> i told the doctor i was obsessed with unix and he misunderstood
<zid> Gave you the solaris snip
<gog> crab delivery
<gog> or is this a weapon
<zid> whatever it was carrying just rapidly carcinized
<gog> dang
<heat_> TIL the S in SPARC stands for scalable
<zid> I thought it stood for sparc
<zid> sparc parc arc rc c.
<heat_> only GNU can cheese acronyms like that
<zid> cpu from parc, with lightning, remote control, in C
masoudd__ has joined #osdev
masoudd_ has quit [Ping timeout: 252 seconds]
GeDaMo has quit [Quit: That's it, you people have stood in my way long enough! I'm going to clown college!]
masoudd__ has quit [Ping timeout: 268 seconds]
Nixkernal has quit [Ping timeout: 260 seconds]
Nixkernal has joined #osdev
<ddevault> getting there https://files.catbox.moe/0jdvsf.png
<chibill> I am thinking of trying to write an OS for a RISC-V processor. Any advice other than following the barebones on the wiki? (I have learned not to trust QEMU's device trees... Since they even have hardware that doesn't exist on the emulator listed.)
<heat_> what?
<heat_> you need to trust QEMU's device tree
<heat_> it's reliable
skipwich has joined #osdev
<ddevault> yeah you can definitely trust qemu's device tree
<chibill> Not for RISC-V it's not (specifically the virt machine type): it lists all for possible VirtIO devices no matter how many are actually set up to exist; if you try to touch one that doesn't actually exist you memory fault.
<snappy> chibill: coincidentally i'm going through the xv6 in riscv book, might be worth checking out for the riscv parts
<chibill> four*
<chibill> snappy, I actually learned this working on xv6 in riscv. Was trying to add multiple harddrive support.
<heat_> you need to probe the virtio-mmio devices
<heat_> The driver MUST ignore a device with MagicValue which is not 0x74726976, although it MAY report an error.
<heat_> The driver MUST ignore a device with Version which is not 0x2, although it MAY report an error.
<heat_> The driver MUST ignore a device with DeviceID 0x0, but MUST NOT report any error.
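A probe following the three MUST clauses quoted above might look like the sketch below. The struct is an assumed view of the first three virtio-mmio registers (MagicValue at 0x00, Version at 0x04, DeviceID at 0x08, per the virtio spec); on the RISC-V virt machine you would walk every virtio_mmio node the DT lists and run this on each.

```c
#include <stdint.h>

#define VIRTIO_MMIO_MAGIC 0x74726976u   /* "virt", little-endian */

/* Assumed layout of the first virtio-mmio registers. */
struct virtio_mmio_regs {
    uint32_t magic;      /* offset 0x00: MagicValue */
    uint32_t version;    /* offset 0x04: Version */
    uint32_t device_id;  /* offset 0x08: DeviceID */
};

/* Returns 1 if a real device is behind this slot and a driver should
 * bind, 0 if the slot must be ignored per the spec's MUST clauses. */
static int virtio_mmio_probe(const volatile struct virtio_mmio_regs *r)
{
    if (r->magic != VIRTIO_MMIO_MAGIC)
        return 0;   /* MUST ignore: not a virtio-mmio device */
    if (r->version != 2)
        return 0;   /* MUST ignore: legacy/unknown version */
    if (r->device_id == 0)
        return 0;   /* empty slot: MUST NOT even report an error */
    return 1;
}
```

The DeviceID == 0 case is exactly the "stubbed" slot chibill hit: the register window exists and reads fine, there's just no device behind it.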
<epony> probing time
<heat_> if you're page faulting its your bug, not qemu's
<chibill> If you try to probe the virtio-mmio devices it lists you get memory faults for any that were not defined in your launch command line.
<chibill> Because if you only add 1 in the command line it still lists all four that the virt machine for RISC-V supports having in the device tree.
<heat_> what do you mean with "memory fault"
<heat_> because i'm pretty sure the virt machine boots linux, and it not booting linux with bogus mmio ranges it can't handle is... very unlikely
<epony> and the ultra in sparc stands for 64bit
<heat_> but if you've got a repro i'm all ears
<epony> you learn so fast
<chibill> I am trying to remember since I apparently got fed up (this was a few months ago) with my attempts and deleted my code. Maybe I was doing something wrong but I don't think I was since it worked properly for the number of virtio I defined in the
<chibill> Command line but as soon as it went past that it would fault
roper has quit [Quit: leaving]
<chibill> So if I am understanding what you were saying correctly, device trees can list hardware that doesn't exist and you have to just check if it does by probing it? (At least in the case of VirtIO devices)
<heat_> in the case of virtio yeah
<heat_> because they explicitly mention it in the spec
<zid> Are you allowed to, say, list a serial port in the device tree that might not actually be soldered to the board on that revision, and expect them to probe and potentially handle a bus fault to find out? Or is there some supreme overlord saying that's illegal
<chibill> Which spec? The VirtIO spec only mentions device trees in that it recommends using them to get the MMIO address for devices that exist. The device tree spec makes no mention of VirtIO at all.
<heat_> zid, that's probably illegal
<zid> who shoots me?
<heat_> ACPI kinda supports that using the _STA method
<zid> device tree confederacy?
<heat_> no one shoots you, but they'll shoot the vendor
<zid> what
<heat_> the arm people are very aggressive
<zid> I was the vendor in that example
<heat_> well, then you're getting shot
<zid> by WHO
<heat_> the arm people
<zid> this is like pulling teeth
<heat_> all of em
<zid> self-policing?
<heat_> yeah
<heat_> there's a status property that can be used to selectively disable devices
<heat_> but that obviously needs device tree patching
<zid> yea the goal there would be that you didn't have to change anything
<zid> just list crap as optional, probe for
<zid> so you could ship all the revisions on the same fw
<heat_> >Refer to the device binding for details on what disabled means for a given device.
<heat_> ok so if its your hardware you totally can
<chibill> I still expected QEMU to follow the device tree spec when it handles VirtIO devices and not list devices that don't exist, since I still haven't found where in either the device tree or VirtIO spec you HAVE to probe for them.
<heat_> like your serial port is compatible = "zid,serial-port"
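For completeness, a hedged sketch of what that could look like in a board DT, reusing zid's hypothetical compatible string -- the node name, unit address, and reg values are all made up:

```dts
serial1: serial@10001000 {
        compatible = "zid,serial-port";
        reg = <0x10001000 0x100>;
        /* standard DT property: OSes skip nodes whose status
         * is not "okay", no probing or fault handling needed */
        status = "disabled";    /* not soldered on this revision */
};
```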
<heat_> chibill, dude i fucking quoted the spec
<chibill> VirtIO just says you should check your device tree for the addresses if you have one.
<zid> The driver MUST ignore a device with MagicValue which is not 0x74726976, although it MAY report an error.
<zid> I assume this is.. highly relevant
<zid> And if you paste it into google, you get the.. virtio 1.1 spec pdf
<chibill> Yeah, except following the device tree spec I should never run across that, since it should only present me with devices that exist. And the VirtIO spec says that, I would assume, in case you have a busted device tree. (I literally have both open)
<zid> except you do
<zid> because the spec says you can
<zid> virtio is allowed to have stubbed devices
<zid> it makes sense to me, it's.. for virtual devices
<zid> that may not be *software* enabled
<zid> think of like.. vmware, I might add 8 extra serial ports, or add 0 to the virtual machine, but I'm still booting the same arm soc under virtualization either way
<zid> so the DT can't possibly be updated with *exactly* how many serial ports there are
<zid> so we add a *virtual* serial port hub, and ask it how many serial ports there are, and there might be 0
<heat_> for a more hardware-like analogy: think of the virtio-mmio devices of something like a bus, where you need to probe if something's behind it before trying to use it
<zid> yea my hw analogy was 'hub'
<zid> the dt has the hubs, you probe the hubs for the connected devices
<heat_> this isn't usual for the device tree (or ACPI), but virtual machines and virtual devices sometimes do it, because, erm, they're virtual and doing dt manipulation is annoying
<zid> yea recompiling the DT because I added an extra serial port would be blehh
<chibill> Surprised QEMU doesn't just adjust the DT when it builds it at run time. Would make it a lot easier to understand what exists.
<heat_> and for virtio there's a great assumption that both parties are acting in good faith to make device detection dumb simple and to make the operations fast
<chibill> Since for the risc-v virt device it builds the device tree at run time.
<zid> could submit a patch, but it isn't very useful
<zid> because your probing code will need to exist either way
<heat_> like certain passages in the virtio spec are literally "your driver must place a write barrier here for correct device operation"
<zid> so it's just extra code in qemu to make one random list that will be pruned by your driver regardless, slightly prettier
<chibill> True, but following the DeviceTree Spec would be a great thing to do.
<heat_> BUT ITS FOLLOWING IT
<zid> It is following it
<chibill> Its following the VirtIO spec.
<zid> the virtio devices are *hubs*
<heat_> what
<chibill> Your device tree shouldn't list things that don't exist.
<zid> I have 18 usb hubs on my machine, representing.. 4 devices total, after they're probed.
<heat_> it exists
<zid> The 18 hubs still exist
<zid> They just don't have shit plugged into them
<heat_> like, they totally exist, you can do mmio to them, you don't get faults, you don't get all-1s
<gog> i'm a hub
<heat_> they just don't have a device behind them, and that's explicitly defined by the virtio spec
<zid> gog: you're 3/4 of a hub, the rest is out of frame
<gog> yes
<zid> I am the dexter's lab episode with the inner beard
<heat_> im pickle rick
<zid> That stands to reason
<heat_> aggressively genz and deeply unfunny
bauen1 has joined #osdev
<kof123> see, it is just hieroglyphs in the end lol
gbowne1 has joined #osdev
<bslsk05> ​'Episode 1 - Mongo DB Is Web Scale' by gar1t (00:05:36)
dude12312414 has joined #osdev
<zid> heat_: you said it not me
<gog> heat
dude12312414 has quit [Remote host closed the connection]
<heat_> gog
<zid> I have scotch roll + german salami + cheese + coleslaw, surprisingly good sandwich
<geist> i'm watching this and immediately am thinking of mjg saying that piping to /dev/null is pessimal at 64+ cpus
<heat_> MySQL doesn't scale to 300 CPUs
<zid> It does
<zid> one mysql per cpu
<zid> I like how over the top the american accent is on the word 'scale' on this AI voice
<zid> WEB SKELL
goliath has quit [Quit: SIGSEGV]
epony has quit [Remote host closed the connection]