<gamozo>
0 C++ experience here, can you really template a conditional like that? I feel like the compiler might not be smart enough to constprop through the conditional
<heat>
yup
<heat>
the problem remains even if I switch the conditional for StaticBitmap or DynamicBitmap
<gamozo>
yeah, must be static-map specific I guess
<gamozo>
haha, I only tried dynamic
<Griwes>
heat, this->
<heat>
huh
<heat>
why?
<Griwes>
you need to do member accesses into dependent base classes with this->
sonny has quit [Ping timeout: 252 seconds]
<Griwes>
it's because the language doesn't know the base class until it's instantiated
<Griwes>
but it needs to have already done the first phase of lookup at that point
<heat>
only for templated stuff like this?
<Griwes>
only for classes with dependent base classes, specifically
<Griwes>
otherwise you already know the members of the base, so you can just do lookup into them
<gamozo>
Wow, yeah, that's wild. Using explicit `this->` works
<gamozo>
(I wish C++ enforced explicit `this->` tbh, I can't stand not knowing where a variable comes from)
<gamozo>
I think that's one of the reasons I'm okay with templating/generics/traits in Rust but not in C++
<heat>
wdym?
<heat>
explicit this-> works fine
<heat>
but it's just noise
<Griwes>
yeah it's super noisy most of the time
<gamozo>
Yeah, I just wish it was enforced. It tells you where to look for the variable definition and I think makes code significantly more readable
<Griwes>
painfully noisy even
<heat>
you could enforce it with a custom clang-tidy check i guess
<gamozo>
Yeah. It doesn't really bother me about my own code, I just find it very confusing when reading other code, like Chromium. The project is so massive and when you see a variable you have no idea where it's declared
<heat>
probably in the current class
<gamozo>
Could be up 2000 LoC in the function start, 5 classes deep, etc. It's one of my gripes with C++, I think it's fine for code you're working on (because you're familiar with it), but really annoying with foreign code
<heat>
but yes I get what you mean
<gamozo>
yeah
<Griwes>
I run editors where I get semantic highlighting of everything, and that solves the problem much more elegantly
<heat>
it's a problem with all OOP languages
<Griwes>
this-> and self. littered all over the place just eats up so much space
<gamozo>
I do security research so 99% of my time is spent auditing other peoples code. So I get to see a _lot_ of styles
<Griwes>
I'd probably not mind having something like a prefix dot to access a member of this/self
* Griwes
makes a mental note for when he goes back to his language again
<gamozo>
Yeah. That actually sounds nifty
<Griwes>
(I already have the syntax used in places, I just never thought about using it more generally)
<gamozo>
It's more for readability, not writing. Writing I feel like C/C++ let you do _whatever_ style you want lol
<Griwes>
yeah the problem is that this->/self. actually *hurts* readability IME, though your mileage may vary
<heat>
rust solves this
<heat>
write rust
<heat>
rust best language
<gamozo>
A lot of it is probably just my lack of experience in C++ where I don't really expect implict data to be accessed
<heat>
rust cured the blind and died for our sins
<gamozo>
The "path to the value" is very unclear
<gamozo>
Ahaha
<gamozo>
I think Rust is too verbose for a lot of people, personally I love that aspect of it
<heat>
a good editor with a language server is pretty useful for C++
<gamozo>
Yeah, I think that makes it pretty usable. Unfortunately, a lot of the code I have doesn't build or integrate into language servers ahaha.
<Griwes>
what i really want is to win a lottery so I can quit work so I can make my language a thing so I can actually write what i *want* to write
<heat>
when you don't remember how the code/classes/structures look
<Griwes>
but, alas
<gamozo>
Griwes: that sounds awesome
<gamozo>
Arguably OSdev is my path to that, but with OSes
<heat>
gamozo, how doesn't it build with language servers?
<heat>
is it not standard GCC/clang code?
<gamozo>
heat: :shrug:
<gamozo>
Oh yeah, lots of Windows stuff
<heat>
i don't know how intellisense works but that's also a thing
<gamozo>
Lots of things that are cross-compiled or built in hacky environments
<Griwes>
in the ideal world, I'd be writing the OS in my lang :V
<gamozo>
vim + ctags are still, unfortunately, the best solution for most of what I do
<gamozo>
At least, with my workflow. It's very possible I just have a bad workflow
<gamozo>
Griwes: that sounds transcendent
<heat>
in your silicon
<gamozo>
sillycon
<heat>
true nirvana
<Griwes>
if I had infinite time, it'd be on my own ISA, yeah
<heat>
i o w n t h e c o m p u t e r
<Griwes>
but even in the dream world I know there's limits and that I have to choose my battles :P
<gamozo>
Intel still has ME running somewhere on the chip
<gamozo>
Wait you can choose your battles instead of just fighting all of them and losing them all?!
<heat>
maybe y o u h a v e b e c o m e M E
<heat>
you can fight one and waste all your life doing it
<heat>
turns out operating systems are complicated haha
<Griwes>
gamozo, yeah :| that's how you can tell I'm approaching 30
<gamozo>
I'm in that same boat, 28
<gamozo>
Same hard decisions
<gamozo>
I have decided osdev is in scope, I am allowed to write my own code for my own OS instead of Linux. That's a recent decision I had to make
<heat>
what would be the other option
<gamozo>
Probably writing hefty amounts of drivers for Linux
<heat>
that's also fun
<heat>
different, but fun
<heat>
you can also
<heat>
do
<heat>
both
<gamozo>
I'm not happy with really any OS-es virtual memory management, I really think page-tables are underused for unique data structures
* gamozo
sweats
<heat>
whats your idea
<gamozo>
Oh, I just do a lot of things with NUMA and separated page tables based on your numa node (turning the computer into separate computers effectively)
<gamozo>
It's the model I've been doing for my past few OSes and it's the only way I scale well with hardware
<gamozo>
TLB shootdowns make virtual memory management soooo slow, which then makes anything that requires a lot of mapping/unmapping non-viable.
<heat>
you can delay tlb shootdowns
<heat>
or avoid them altogether
<gamozo>
I'm only comfortable with not doing TLB shootdowns unless the data is exclusively owned
<heat>
huh?
<gamozo>
If not doing a TLB shootdown _could_ be invalid, then I'm not comfortable not doing it. Like if I don't know if the data is being shared between cores
<heat>
ah
<heat>
well, it depends
<heat>
if its kernel memory, you need to shoot it down if it has been accessed (A bit in the page tables, or whatever soft bit you have)
<gamozo>
Personally, I've been doing every-core-gets-its-own-page-table and I really like it
<heat>
in user memory, you can take a look at the A bit and at the active cpu set (some cpu set where you track where the process has been)
<heat>
well, not has been but is
<gamozo>
Makes sense. I just really hate the round-trip-time between cores (I wish Intel gave us direct access to the cache coherency bus for IPC or something)
<gamozo>
I only really do compute though, so I really really care about even small costs
<gamozo>
I do a lot of differential stuff with page tables and stuff so I'm constantly modifying them or traversing them. I also do a lot of cross-core aliasing of memory to their node-local copies (thus at runtime I don't have to check between addresses)
<Griwes>
my main problem with tlb shootdowns is that I am really trying to keep the core parts of the kernel (as opposed to threads that stay in the kernel mode) not interruptible, which means that I will probably need to make all in-kernel lock poll loops keep doing a check for "is someone requesting that I do a tlb shootdown" so that I don't trivially deadlock when two threads of the same process attempt to unmap something
<gamozo>
I've been recently doing my OSes with a Rust-like memory model, your virtual memory is either A. shared and immutable, or B. exclusive and mutable. I really really like it and it really helps me get all the perf out of my chips
<Griwes>
yes I am fully aware that I *will* have bugs with this, can't wait to debug those deadlocks
<gamozo>
Griwes: Ugh, yeah. I used gross NMIs and a slight amount of state
<Griwes>
yeah I considered NMIs for a second but eventually recoiled in terror
<gamozo>
As any healthy person should
<Griwes>
the rust model at VAS level means that you can never have lock free programs and they are just inherently cool (when they work)
<Griwes>
:P
<gamozo>
Oh yeah, I effectively only do lock-free now and I love it
<Griwes>
gamozo, oh I don't know about the "healthy" part
<gamozo>
When I got my Phi and I saw that `lock inc` takes 30,000 cycles, I realized very quickly that even just shared memory was really slow
<gamozo>
(in theory shared memory is good for IPC as it's the fastest IPC, but that you can kinda do "rarely")
<gamozo>
It is crazy to me how expensive cache coherency is. The fact that my OS effectively puts all cache lines in their optimal states really actually seems to matter
<Griwes>
my main gripe with shared memory is that I cannot really pass kernel resources through it, so if I want zero copy for buffers and whatnot, I cannot not touch the kernel
<Griwes>
oh yeah cache coherency is... quite the thing
<heat>
what kernel resources?
<gamozo>
Yeah, I think if I made a GPOS (and somehow could dictate the user-land experience). Pretty much everything would be in shared-memory
<Griwes>
I mean in case of trying to pass buffers around, VMOs
<gamozo>
I think CPUs should have a small range set (some MSRs) that allow a kernel to give a user-land application access to raw virtual memory and paging
<Griwes>
my object model is that a kernel object identifier (a "token") is a per-process thing, it's only valid from within that process, so I can't just pass it, it has to be translated by the kernel for consumption by the other side of IPC
<heat>
you could map them
<Griwes>
and to map them
<Griwes>
someone has to create it
<Griwes>
and then pass a token elsewhere ;>
<heat>
the kernel would implicitly map the VMO and maybe queue the handle in some socket
<heat>
that could save you the trip
<Griwes>
also the VMM is outside of the kernel :P
<heat>
shizzle
<Griwes>
:'D
<heat>
how does that work?
<heat>
does every privileged op go to the kernel?
<heat>
invlpg, cr3 loading, etc
<Griwes>
I should s/is/will be/
<Griwes>
I don't mean all VM management
<Griwes>
vasmgr is probably a better term
<Griwes>
the kernel still does all the mapping and invalidation and whatnot
<gamozo>
Alright the real question. Do you use GOTs or do you map all shared objects at the same address in all processes :D
<Griwes>
but it doesn't manage the address space itself
<Griwes>
eventually having ASLR is a must
<gamozo>
Honestly, it's a bit naughty, but I like ASLR just for debugging because it really stresses that you're correctly moving everything around and not relying on fixed addresses
<gamozo>
even if not for the mitigation, I think it keeps me honest
<heat>
what, ASLR is horrific for debugging
<heat>
1) notice it always crashes/misbehaves on address X
<heat>
2) reboot the OS
<Griwes>
it's great for stress testing
<heat>
3) different address
<heat>
fuck
<Griwes>
awful for actually debugging
<heat>
what are you stress testing with aslr?
<Griwes>
well, at least your elf loader
<gamozo>
Ahaha, yeah I know what you mean for horrible for debugging. Mainly mean it keeps my code better. There's so many situations where code relies on some weird undefined behavior and the same address makes it ok
<heat>
it's not your elf loader that does ASLR
<Griwes>
I didn't say it is
<Griwes>
but if you run it always with the same address bases, you may miss a bug that happens with offsets being slightly different
<heat>
also, the worst of all the ASLR issues: "haha your toolchain is default PIE and now every program's base is constantly changing. crashing? oh no!"
* Griwes
's toolchain is forcibly PIE
<heat>
and yes my toolchain is default PIE and default SSP strong 😎
mrvn has quit [Ping timeout: 240 seconds]
<heat>
forcibly PIE :(
<Griwes>
I'll get to stack protectors eventually
<gamozo>
I don't do shared objects so it's all easy for me :D
<gamozo>
as long as my bootloader can put my kernel at a random address and do fixups I'm all good!
<gamozo>
I normally have an "ASR" mode for my system where I can boot it up and all allocations are completely randomly placed in VA space
<gamozo>
But it makes mapping fairly slow as you have such a weird page table layout
<gamozo>
Good for testing that everything can be moved around and nothing is accidentally relying on a fixed address or something
<heat>
i think fuchsia does something close to that
<gamozo>
It's honestly not too bad, but a lot of kernels use linked lists for virtual memory regions (for some reason???), so it often isn't super great
<gamozo>
I made patches for FreeBSD at one point to add it and the perf hit was massive as I think mmap() was O(N) WRT virtual memory regions in a process
<gamozo>
so after a bunch of regions, you start getting extremely expensive allocations
<heat>
linked lists?????????????
<gamozo>
it's a solvable problem if you just use a page-table model for memory maps (or just a graph-based structure) in your kernel
<gamozo>
Although graphs are then bad for low N
<heat>
the standard is a red black tree or an avl tree
<heat>
any kind of self-balancing binary tree
<gamozo>
for maybe core structures, but _something_ in the loop had it
<gamozo>
someone at some layer of the stack tacked on an O(N) structure
<gamozo>
(or something enumerated the tables or something, when it maybe didn't have to)
<Griwes>
that's... so bad
<gamozo>
I mean, Linux and Windows both have _very_ bad mmap scaling with regions
<gamozo>
is this not like, a thing that people complain about? I definitely get mad about it
<Griwes>
every day I find so many new and exciting reasons for why no software should ever work
<gamozo>
VMMs scale terribly on Linux and Windows with cores as well as with regions
<gamozo>
they work fine with # bytes, but it's # regions where it's bad
<gamozo>
Getting my Xeon Phi was the best decision of my life
<gamozo>
It is the true test of scalability of stuff, and holy shit I can't even use Linux on it
<Clockface>
is DOS/16 bit x86 still used in new embedded systems?
<gamozo>
You get these 1.3 GHz clocked Atoms, which are maybe equiv to 200-500 MHz modern Xeons, and a boatload of them. If your code doesn't scale well it's painfully apparent. I _love_ it, but I'm also a perf masochist
<heat>
Clockface, I don't think so
<gamozo>
Clockface: Jeez. I'm sure in some legacy places (eg. new products running old company code that "already works"). But I think most things are UEFI
<Griwes>
yeah I've heard phi perf horror stories from HPC people
<heat>
embedded x86 is pretty niche, and DOS would defeat the point of having a new system
<gamozo>
Embedded seems to love UEFI
<gamozo>
Honestly, the Phi is fucking fantastic
<heat>
really? UEFI?
<Clockface>
are there any single-tasking operating systems that have absolutely no protection, like DOS was?
<gamozo>
But you have to write code very carefully for it. It definitely requires an extremely specialized dev
<Clockface>
or is UEFI enough to be a DOS replacement
<gamozo>
heat: Idk, I feel like most people have been doing UEFI at this point. Maybe it's my sampling of reality.
<gamozo>
The ecosystem around it is great open-source
<heat>
Clockface, UEFI is enough to be more than a DOS replacement
<heat>
gamozo, yes, but I wouldn't do embedded on it
<gamozo>
Like, it's way easier to get a UEFI build of some open-source bios working before you get some massive BIOS vendor to get you a custom BIOS for your chip
<gamozo>
oh yeah, I wouldn't do it on it, but it's what I see
<heat>
if I wanted to do embedded, I would use something like lk
<heat>
lightweight and has interrupts, threads
<gamozo>
Yeah, I think that's why all the bootloaders fork lk
<heat>
all you ever need really
<gamozo>
The problem is, most companies making embedded devices don't have devs who are really that broad with their skillsets
<heat>
the two major limitations of UEFI are that it has no threads and no interrupts
<heat>
sure but UEFI is pretty niche
<gamozo>
Yeah, but even as just a boot environment it's so much better than a BIOS
<heat>
if I had a dev that knew UEFI I would let them work on bootloader/firmware stuff
<heat>
not "hey look, we have this embedded app, make it run on UEFI kthxbye"
<gamozo>
really? Idk, I think UEFI is just the only thing that really is standardized for boot across arches, and so I feel like it's everywhere now
<heat>
sure, it's "everywhere"
<heat>
but how many people have written kernel or firmware code?
<heat>
also in a "standard C app" sense, lk is way closer to it than UEFI
<gamozo>
Tbh, honestly I think more with UEFI
<gamozo>
Like, pretty much all the stuff in the phone world are UEFI
sonny has joined #osdev
* heat
laughs in GUIDs
<heat>
gamozo, huh, really?
<heat>
I thought it was all device tree?
<gamozo>
and switches, and routers, and pretty much all of my devices that have "new" embedded code written
<gamozo>
I'm thinking x86 devices, which are a bit weirder lol
<heat>
well, sure
<gamozo>
but like, all the modern hardware I have is starting to get UEFI-only
<Clockface>
UEFI has a premade C library, which cool
<heat>
those need to be UEFI or some custom device tree thing
<gamozo>
which means that there's a massive market of people doing UEFI dev for all these devices
<heat>
Clockface, yes and no
<heat>
edk2-libc is a thing, but it's not part of UEFI, just edk2
<gamozo>
Don't get me wrong, not saying that UEFI is really that great or anything. I genuinely do think it has lowered the difficulty in writing code at that level, and I think it's led to a lot of it being written/produced
<heat>
UEFI does give you a lot of interfaces but none of those are standard C library stuff
<heat>
have you actually looked at what they're doing or are you just guessing?
<Clockface>
it isnt stdc but being compliant with that is only important if you want to run normal programs in UEFI
<heat>
because I don't see the point in writing custom UEFI code
<Clockface>
which i dont think anyone wants to do
<heat>
wrong
<heat>
edk2-libc has a python port!
<gamozo>
Nah, this is what a lot of things are doing. I'm fairly familiar with a handful of stuff at this level (like my networking equipment, bunch of modern embedded stuff)
<gamozo>
I lift a lot of chips off boards
<heat>
well, yuck
<heat>
UEFI isn't that suited for stuff like that
<gamozo>
The main thing is just the cost of ram/compute to do UEFI + maybe a barebones Linux is getting so low (eg. chips that can handle a workload that large)
<gamozo>
Like my friend's MMO mouse, I think he was saying, ran Linux
<heat>
well yes, but barebones linux doesn't require UEFI code
<gamozo>
after he was trying to mod it and looked into the firmware
<Clockface>
why does a mouse need linux
<gamozo>
oh, of course
<gamozo>
but the thing is, it's just like, the "easy" ecosystem now
<gamozo>
pretty much anyone can get UEFI (open) + Linux (open) + a relatively standard arm-chip
<gamozo>
and now you can hire a normal user-land dev to do your whole stack (at the cost of a slightly more expensive chip)
<Clockface>
i know a guy who still makes all his products with 8 bit Motorola 6809s or something, he has basic user interfaces with keypads and little displays too
<heat>
UEFI isn't that open
<gamozo>
It really isn't, I agree
<heat>
you can't actually "just run arm" on it
<gamozo>
Look, I'm just saying this as an observer of weird stuff. I personally think we are extremely wasteful with the way we write code
<heat>
well, you can, but there are not a lot of options
<gamozo>
I honestly kinda say this more in a state of shock
<gamozo>
Like the Freedom Phone I ordered to reverse, turns out the entire flash is unlocked
<heat>
i would love to look at what you're talking about, that looks absolutely disgusting
<gamozo>
so it effectively offers 0 security
<Clockface>
well, maybe what he does doesnt justify more than that, but he does seem to push things further than people do with the little 8 bit chips normally, which i admire
<heat>
the freedom phone haha
<gamozo>
It's fantastic
<gamozo>
I thought I got scammed
<heat>
you did
<gamozo>
I opened another bank account to wire them the money cause I did not trust buying it
<gamozo>
I mean
<gamozo>
I got a gem of a phone
<gamozo>
I will protect this with my life
<heat>
it's priceless
<gamozo>
I can flash any part of the phone
<gamozo>
it's a piece of history
<gamozo>
I can flash literally stage 1 bootloader
<gamozo>
I have a factory dev phone with a common mediatek chip
<gamozo>
I'm very happy
<gamozo>
and mediatek has a baked in true-rom for recovery that I can talk to
<gamozo>
So every _byte_ of flash is up for grabs, for me to run at any level of the boot process
<gamozo>
To me, this is super fun, I can reverse (and patch and play around with) all stages of the phone
<gamozo>
I like getting code running in very unique places :D It's like golf
sonny has quit [Ping timeout: 252 seconds]
<gamozo>
That being said, my embedded experience is probably on the higher-end of the cost spectrum (more modern things, networking stuff, etc). But like the whole Microsoft SONiC stack is huge on switches now
<gamozo>
which is UEFI + ONIe + SONiC
<gamozo>
I _feel_ like UEFI has made embedded dev more reachable to people who maybe don't have access to systems-level devs
<heat>
that's horrific, why would anyone need UEFI on that
<heat>
i bet they just chainload a kernel
<gamozo>
Because you can hire linux devs for cheaper than your firmware devs *cough*
<gamozo>
*cough*
<gamozo>
I hate it
<gamozo>
I really do
<gamozo>
But it's true
troseman has quit [Ping timeout: 240 seconds]
<gamozo>
SONiC is actually just Ubuntu
<gamozo>
yeah, my switch, runs UBUNTU
<gamozo>
debs and all
<heat>
well, depends on the switch I guess
<Clockface>
the guy i know says he cant figure out C so he only uses assembly, that man is a living legend, i am in wonder of his computer habits
<heat>
if it runs ubuntu, that's fine
<Clockface>
he runs a mac, windows XP, and a dos machine for some of his programs
<Clockface>
*he had the dos machine for a really long time
<heat>
gamozo, doing stuff in userspace is actually a pretty good idea
<gamozo>
(tbh, I think RAM and flash have gotten cheap enough that the cost of the dev is more expensive)
<gamozo>
For sure
<heat>
i probably wouldn't pick ubuntu but whatever floats their boat
<heat>
i'd probably pick alpine
<heat>
or debian?
<gamozo>
haha yeah, it's a weird pick. It could be debian but the version numbers felt more Ubuntu
<bslsk05>
sonic-net/SONiC - Landing page for Software for Open Networking in the Cloud (SONiC) - https://sonic-net.github.io/SONiC/ (804 forks/1435 stargazers)
<gamozo>
personally, I think it's an extremely sloppy mess
<gamozo>
BUT
<gamozo>
People love it!
<gamozo>
I absolutely hate configuring on it
<gamozo>
It regularly breaks, a lot of commands do _not_ work
<gamozo>
Like literally python backtraces (not errors, just broken scripts)
<heat>
it looks fine
<heat>
maybe they assembled it wrong but it looks totally fine
<heat>
they even use zebra
<heat>
*why is it running everything in a docker**
<gamozo>
Ahahaha
<gamozo>
"it's easy"
<heat>
well, I mean
<heat>
this is totally not UEFI code
<gamozo>
So
<heat>
literally just GRUB 2.0 that boots linux
<heat>
you could probably skip a step and use the efi stub
<gamozo>
Being "ONIE" is actually like, a big deal right now in the switch world
<gamozo>
Lots of vendors will advertise they're ONIE-based
<gamozo>
I don't know if ONIE _requires_ or just supports UEFI, but I've only seen it with UEFI
<gamozo>
It's honestly a new ecosystem to me
<heat>
well if this is the "embedded uses UEFI everything is bad" you were talking about, we're fine
<heat>
i was actually thinking they ran routing on top of UEFI
<heat>
defo not the greatest idea
<gamozo>
OHHH
<gamozo>
yeah
<gamozo>
sorry didn't mean it as uefi application
<gamozo>
my b
<gamozo>
that's how I read the initial question
<heat>
they were talking about using UEFI as DOS
<gamozo>
Yeah, I guess DOS usually just isn't a thing on UEFI systems was the idea in my head
<heat>
UEFI works as a DOS replacement
<gamozo>
Idk the last time I saw DOS tbh, in a new device
<Clockface>
would it still save money to use tiny 8/16 bit devices if they are making a huge amount of the product even if it needs more development
<heat>
no
<gamozo>
The hard part might be justifying the workforce that can work on that stuff
<heat>
a 32-bit arm controller is like 2 euro
<GreaseMonkey>
you'd have to consider, say, one of the 3 cent padauk microcontrollers if you're really trying to save money
sonny has joined #osdev
<gamozo>
Now you have to hire a firmware dev at your Furby factory :D
<Clockface>
yes but what if you never update the product
<Clockface>
if its done and it works, you dont have to pay anyone
<gamozo>
Then you can't advertise it as cloud and offer a subscription to your microwave ads!
<Clockface>
good point
<gamozo>
If it doesn't run Linux, it doesn't run Chrome, and if it doesn't run Chrome it doesn't run Electron, and if it doesn't run electron, you're not selling ads
<Clockface>
it needs to be part of web 3.0.1 prerelease-beta
<gamozo>
or writing your UI in javascript
<gamozo>
Shit even the Windows UI is in javascript at this point *grumble grumble*
<Clockface>
this is why i dont want to go into programming
<gamozo>
Like aren't the new fridges and stuff serving touch screens + web UX?
<gamozo>
Loooool
<gamozo>
I just love complaining, but I am very disappointed that after a lot of Moore's law, my computer is less responsive than it was in 2005
<klange>
The unfortunate fate of webOS...
<geist>
sick burn
<Clockface>
some technologies are smarter than the societies that use them
<Clockface>
i guess modern semiconductors are one of those
<Clockface>
lol
<heat>
love the doomer vibes tonight
<klange>
Coulda been a contendah in the mobile space, RIP Palm. Now powering TVs and refrigerators at LG...
<Clockface>
imagine how cheap stuff could be if they didnt bloat up
<gamozo>
I gotta get my doomer under control, my bad
<geist>
yah and bummer that most of what i wrote for webos LG didn't take up
<geist>
except maybe novacom
<geist>
but they didn't continue to use bootie the bootloader
<geist>
RIP
<gamozo>
Bootie the Bootloader? Is that the name?
<geist>
that is the name
<gamozo>
Aha, that's great.
<geist>
was a darn nifty bootloader if i do say so myself
<gamozo>
Was there a mascot/logo?
pretty_dumm_guy has quit [Quit: WeeChat 3.5]
<geist>
alas no
<geist>
though we had a logo for Trenchcoat, the flashing tool
<gamozo>
That's great. I've been commissioning art recently when I come up with projects, it's just a fun process
darkstarx has quit [Read error: Connection reset by peer]
<gamozo>
I see LK everywhere at least :D
<gamozo>
At least a lot of bootloaders I open to reverse
wereii has quit [Ping timeout: 240 seconds]
<mxshift>
As much as UEFI is horrible, booting modern x86 from the reset vector is a minefield of footguns
<heat>
UEFI also boots from the reset vector
<heat>
and it's not horrible
<heat>
unless you dream about calling bios interrupts that you read about on a crappy .txt
<gamozo>
my friend still has nosted this to this day
<gamozo>
ahahaha
<gamozo>
hosted*
<gamozo>
I love it so much
<gamozo>
Anyone doing RISC-V stuff? I've only done userspace stuff
sonny has quit [Remote host closed the connection]
<heat>
yes i've done that
sonny has joined #osdev
<mxshift>
I mean UEFI does a lot of work for you that is difficult to replicate.
<mxshift>
Work is going straight into Rust at the x86 reset vector and then doing all the init ourselves.
<gamozo>
hnggg
wereii has joined #osdev
<Clockface>
were you referring to Rust lang or were you using a metaphor
<mxshift>
Rust lang
<Clockface>
whats rust better at than C
<Clockface>
i never looked at it much
<gamozo>
Personally, it's a much stricter language and requires you to explain more clearly what your intentions are to the compiler. As a reward from this, the compiler can give you better optimizations, safety guarantees, etc
<heat>
everything
<gamozo>
The more you communicate with the compiler, the more the compiler can reason about your code
<heat>
except compile time
<gamozo>
And that's big for many reasons
<heat>
compile time sucks
<Clockface>
i found C too strict for my abominations so i program in assembly as much as i can
<gamozo>
I've only ever had compile time issues on third party things and I don't really know why. My OS builds in like 2 seconds, including the bootloader
<heat>
yikes
<gamozo>
But some deps take ~60+ seconds, I don't know how?
<heat>
might depend on your code
<gamozo>
I think a lot is procmacros
<heat>
at least in C++ you can have relatively fast compiles and then the late stage C++ that is LLVM
<gamozo>
And probably templating, which I honestly use fairly heavily, so maybe not
<heat>
also, LTO
<gamozo>
LTO is hot hot hot
<gamozo>
So good for code size
<Clockface>
my macro language has been doing surprisingly ok, i have versions of it for x86-64 linux and 16 bit BIOS, the 16 bit DOS binaries have a boot sector but can also be loaded by DOS
<Clockface>
macro languages arent so bad actually
<Clockface>
it went ok
<Clockface>
probably best as a way to boost productivity when writing assembly rather than as a full language
<Clockface>
and the one i made is pretty bad, but i like where its going
No_File has joined #osdev
<heat>
if you get an old C compiler it probably feels like using assembly macros
<Clockface>
the thing i absolutely despise about what i have created are the conditional loops
<klange>
I ported my bytecode VM language to run bare-metal protected mode in a state that can easily jump back to BIOS, and also to EFI. I was thinking of writing an overly complicated scriptable bootloader with it. Like GRUB!
<mxshift>
Hubris builds are mostly spent building the various PAC crates which are register accessors generated from XML files
<mxshift>
Stuff that generates a lot of Rust code in a single crate really adds up in build times
<Clockface>
it doesnt evaluate if the condition is true until it reaches CONTINUE, which i have found plenty of ways around, but its awful
<heat>
klange, do it
<Clockface>
and once i change that ill like it more
<Clockface>
IF works fine, but while is almost unusable imo
<Clockface>
it compiles to position independent code
<Clockface>
it was fun
<Clockface>
eventually it will be viable and have a proper compiler
<Clockface>
later
<Clockface>
when frogs rain from the heavens
<klange>
Hm, where did I put that protected mode one... ugh is it in a branch of ToaruOS itself...
<sonny>
mxshift: do you know how rust userspace drivers work?
<klange>
The EFI one is on Github, works great as an "EFI app", especially if you have an EFI shell.
<sonny>
rather, eli5 if possible
<sonny>
s/rust/hubris
<klange>
Even has most of the REPL functionality, though its syntax highlighting color options are limited as it uses the normal text output APIs and despite definitely always running in a graphics mode, those don't offer rich color options like a dedicated terminal emulator would...
Mutabah has quit [Remote host closed the connection]
<mrvn>
kingoffrance: but the hydrogen makes my voice sound all funny
Mutabah has joined #osdev
* moon-child
lights a match
No_File has joined #osdev
nyah has joined #osdev
GeDaMo has joined #osdev
bauen1 has quit [Ping timeout: 240 seconds]
bauen1 has joined #osdev
bliminse_ has joined #osdev
bliminse has quit [Ping timeout: 260 seconds]
sikkiladho has joined #osdev
the_lanetly_052 has joined #osdev
<sikkiladho>
how do you keep the secondary cpus idle until the primary cpu requests them? send them in a loop with a condition on a volatile int, and the primary cpu sets the int when it needs them?
<zid>
halt
<zid>
send them an interrupt to wake them up
<clever>
wfi or wfe opcode is what i would do
<zid>
I'd do 'hlt'
<zid>
but you're probably talking about some bizarre cpu nobody uses
<clever>
zid: is hlt valid on arm?
<zid>
he never said arm
<zid>
you just have it on the brain
<klange>
it's a bit lower than the brain, and most people have two of them, but
<zid>
those are called testicles
<clever>
zid: sikkiladho has been working on an arm hypervisor for months now
<zid>
still doesn't mean you can insinuate
eroux has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<sikkiladho>
I'm using arm64, I think clever knows about my project more than me maybe. XD
<sikkiladho>
clever: i have secondary cpus in control of the hypervisor, I send them in a loop. linux makes the smc, hyp traps it. How would I pass the secondary cpus to kernel's requested address?
<clever>
using an IPI
<clever>
you need to look into how SMP is done on arm, and how to pass messages between cores
<sikkiladho>
that's great. Thank you. Will look into it.
<clever>
then the hypervisor on core0 can send a message to the hypervisor on core1
<clever>
and core1 can then execute the linux entry-point, as directed by the PSCI
eroux has joined #osdev
GeDaMo has quit [Remote host closed the connection]
GeDaMo has joined #osdev
GeDaMo has quit [Client Quit]
GeDaMo has joined #osdev
gog has joined #osdev
bauen1 has quit [Ping timeout: 244 seconds]
bauen1 has joined #osdev
eroux has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
bauen1 has quit [Ping timeout: 276 seconds]
Burgundy has joined #osdev
bauen1 has joined #osdev
gildasio has joined #osdev
bliminse_ is now known as bliminse
elastic_dog has quit [Ping timeout: 240 seconds]
No_File has quit [Quit: Client closed]
eroux has joined #osdev
eroux has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
dude12312414 has joined #osdev
dude12312414 has quit [Remote host closed the connection]
genpaku has quit [Ping timeout: 276 seconds]
genpaku has joined #osdev
No_File has joined #osdev
sikkiladho has quit [Quit: Connection closed for inactivity]
<heat>
if_bridge got a 5x improvement from using epoch vs mutexes
<heat>
this is big
<heat>
mutex -> 3.9 million packets per second; rw locks -> 8 million packets per second; epoch -> 18.6 million packets per second
sonny has joined #osdev
heat has quit [Ping timeout: 240 seconds]
heat has joined #osdev
<mrvn>
kingoffrance: doesn't "dont cross the streams" apply to NAT?
<mrvn>
epochs sound fragile. "frees must Benjojo deferred until after a grace period has elapsed". So if the system is under heavy load and tasks don't run fast enough the free happens and they crash?
<mrvn>
s/Benjojo/be/ stupid tab :)
<heat>
that's how RCU works too
<heat>
basically you defer frees until a certain point
<heat>
i haven't read the epoch stuff very well but the biggest difference is probably the "when"
No_File has joined #osdev
<mrvn>
heat: it looks like it's for kernel threads and the kernel should never be overloaded like that. .oO(Until one day it is)
<kingoffrance>
i dont know networking really, but i wager for joe average it was a financial decision
<kingoffrance>
i.e. such questions did not enter thought process at all
<kingoffrance>
and i imagine for isps as well
<kingoffrance>
so ill rest my "you get what you pay for"
<kingoffrance>
that implies a tipping point perhaps :)
<mrvn>
In networking it's usually: Deal with this within X ms or you've got an error.
<geist>
yah that's the part i've never really grokked with rcus and whatnot, i get that things are delayed but presumably there's a failsafe that makes sure that in the worst case they still are never freed early
<geist>
but then in the past i had been told to explicitly not look at RCUs so i really dont know how they work
<No_File>
"In networking it's usually: Deal with this within X ms or you've got an error." Not so in the RS232 interface.
<mrvn>
I think in an RCU like wikipedia describes each element would have a reference count. So basically you still lock every item you want to read but the lock can never block.
<geist>
yah that's what i assume is at the bottom of it, a lazy ref count
<gamozo>
Mornin!
<jimbzy>
I am really starting to dig the 6502.
<mrvn>
The trick is that you implement COW and update links lock-free. So no write locks.
<mrvn>
"3. Wait for a grace period to elapse, so that all previous readers (which might still have pointers to the data structure removed in the prior step) will have completed their RCU read-side critical sections." or not ref counting. That's the part I never got with RCU.
<kingoffrance>
jimbzy, i noticed 65c816 IIRC had some kind of "back compat" 6502 mode, i wonder if games used that to "port"
<jimbzy>
Yeah, it starts up in 65C02 mode
<jimbzy>
Wait I have that backwards I think.
<kingoffrance>
that might be what the c is for :)
<kingoffrance>
8 or 16 bit
<jimbzy>
Nah the C is for CMOS
<jimbzy>
Nah it starts up in emulation mode from what I can tell.
diamondbond has quit [Quit: Leaving]
sonny has quit [Remote host closed the connection]
<heat>
they don't have reference counts in RCU
<heat>
a read operation is, as I understand it, a simple atomic read
<mrvn>
heat: so how do you know when you have waited long enough to free?
<heat>
but before reading, it disables preemption (this stops RCU from freeing anything)
<heat>
a write operation is a simple atomic swap with some extra sauce on it
<mrvn>
heat: disabling preemption only means you don't get stopped, not that there is no concurrent access
<heat>
ok?
<jimbzy>
kingoffrance, I thought about going with the '816, but there was some weirdness in the addressing that I don't fully understand yet.
<heat>
never said it did
<mrvn>
all of the RCU operations are clear except the "how long to wait before free"
<heat>
that's what makes RCU fast
<heat>
or linux's RCU, that is
<heat>
they have like 3 different variants now
<heat>
all slightly different
Vercas has quit [Remote host closed the connection]
lainon has joined #osdev
sikkiladho has joined #osdev
sonny has joined #osdev
psykose has quit [Remote host closed the connection]
psykose has joined #osdev
sonny has quit [Remote host closed the connection]
Vercas has joined #osdev
gog has quit [Ping timeout: 276 seconds]
sonny has joined #osdev
haliucinas has quit [Ping timeout: 260 seconds]
mahmutov has joined #osdev
<gamozo>
Alright. Who here did IA-64 dev
<gamozo>
I feel like I never quite understood all the hate it got. Mainly just compat with legacy x86?
<sonny>
isn't that itanium?
<gamozo>
yeah!
<sonny>
I thought that was the new arch
<sonny>
intel64 is the x86 compatible one
<mrvn>
ia64 != amd64
<sonny>
from some books I read, it seems that Itanium is hard to write compilers for
<gamozo>
That's actually what I was thinking
<sonny>
s/some books/a book/
<gamozo>
It looks like a _great_ architecture as you get really good control over hardware, but that offloads some of the difficulty to the compiler
<sonny>
plus other economic factors
<gamozo>
but the 2003 ecosystem for compilers was terrible. I'm legit curious if it'd be viable with modern LLVM/extensible compilers
<mrvn>
it never was quite stable
<sonny>
iirc, windows did a itanium port
<sonny>
did linux ever get one?
<mrvn>
And the architecture isn't really upgradable. All the dependencies are hardcoded.
<sonny>
gamozo: they had gcc
<sonny>
doesn't sound that bad
<No_File>
whats so special about itanium?
<gamozo>
sonny: Yeah, but _2003_ gcc. Honestly kinda all compilers I feel were pretty terrible until maybe like 2008-2010?
<gamozo>
Largely just due to perf constraints of optimization passes/memory limits on data structures
<sonny>
why? lol
<sonny>
people have been writing fast compilers for years
<gamozo>
Compilers just scale super well with memory and compute, and they've gotten probably 20-30x faster since then
<sonny>
eh
<sonny>
I don't know how you are making these assumptions
<gamozo>
A mix of optimization pass dev and reading codegen from old code
<gamozo>
You could tell the difference between a for loop and a while loop at the assembly-generated code level until the early 2000s
<gamozo>
Compilers were _very_ literal
<sonny>
I think Herb Sutter's "free lunch is over" is from like '03
<sonny>
there was lots of proprietary compilers back then I bet
<sonny>
instead of defacto best open source stuff
<gamozo>
I know VS98 codegen was pretty bad. I think Intel had their compiler out by then?
<gamozo>
GCC I think was still an up-and-comer
<sonny>
I dunno
<sonny>
one of my profs still thinks GCC sucks lmao
<gamozo>
Tbh, I think it's an absolute cluster as a codebase, but it has more consistent codegen than LLVM IMO
<gamozo>
LLVM will do some really stupid things here and there. GCC often doesn't have the same weird pitfalls
<sonny>
I don't recall who he said used to make good compilers
* sonny
shrugs
<gamozo>
Honestly, I don't think compilers really started to get good until LLVM started bringing some more academic ideas into compilers
<sonny>
from the material I've read, that just seems silly
<gamozo>
I think a lot of the proprietary ones were pretty legacy, outdated, and really just sold because people depended on them rather than being competitive [citation needed]
<sonny>
optimizing compilers have been around
<gamozo>
Hmm, what do you mean. I'm also relatively young, so I don't have the greatest first-hand experience other than reading a lot of assembly from old code/games/projects
<gamozo>
Of course
<sonny>
I'm 24, I don't know what happened back then
<gamozo>
So. From my experience, until maybe ~2005 (VC2005), and GCC around that era, most things were taken super literally from the C -> asm. They were "optimizing compilers" but with such small boundaries on what they could reason with
<gamozo>
I wonder if godbolt has old compilers...
<sonny>
well, I can't argue with your experience, I have not looked into gcc or llvm much
<gamozo>
4.1.2 oo
<mrvn>
gcc's core design is still always "read memory; op; write memory;" repeat.
<gamozo>
Yeah. Memory is _really_ hard to optimize around as it's such a big blob. I feel like only recently have we gotten some good through-memory optimizations
<gamozo>
I wonder if I can find some old benchmarks ahaha, would be a good trip down memory lane
<sonny>
also considering the (complaints) discussions around ub in gcc, seems to be from that era as well, but the best way would be to test if it's on godbolt
<gamozo>
Yeah. I wish godbolt went older than 4.1.2, I think that is actually right about when things started to get pretty good, gonna play around with it a bit!
<zid>
It has sorta just scaled with how good desktops are
Vercas has quit [Quit: Ping timeout (120 seconds)]
Vercas has joined #osdev
<zid>
An average project you might compile on a desktop uses an average desktop amount of RAM to compile, and takes about as much time as you'd be willing to wait for it to do so :P
<gamozo>
That's kinda what I've thought. Lots of O(n^2) things in compiler theory that kinda require you to make approximate shortcuts (eg. stop optimizing past a barrier) that now we can go deeper
<gamozo>
There was a lot of loop stuff really starting to happen around this era of mid-to-late 2010s
<zid>
I think it just comes down to it being willing to do all its regular optimizations AFTER unrolling the loop
<zid>
even if the loop seems at first glance to otherwise be too big to unroll
<zid>
It's "searching" for optimizations, now
<gamozo>
Yeah. I think the big thing that started to happen at this time was symbolic expressions being used to reduce complexity of code such that it can optimize it better
<zid>
9/10 that optimization will be pointless and unrolling it to see if it can re-roll it tighter won't work
<gamozo>
Yup
<zid>
but that's what -O3 is for now we have 5GHz desktop cpus imo :P
<gamozo>
I think a lot of modern compilation techniques require locally unrolling and simplifying. Eg. if you can collapse an inside loop, now you can unroll the outside loop, since you've simplified out a loop
<zid>
I'd love to find some of the tunables in gcc and make a build with them set higher for fun
<gamozo>
I know that's been going on forever, but these thresholds just keep going up and up (largely due to compute power, but some great algo design, and just better compiler code!)
<gamozo>
I do that a lot with Rust (thus LLVM). I have some code which builds crazy nicely with custom unrolling limits
<gamozo>
Intel's compiler had a `#pragma unroll` where you could help inform the compiler on an individual function/loop level
<sonny>
I thought clockrate peaked 10 years ago
<sonny>
isn't it more common to have a 'burst'
<gamozo>
CPUs have still gotten way faster per-core, and we're getting better at using cores during building (which honestly is still pretty rare)
<sonny>
2Ghz seems common
<gamozo>
Modern CPUs have no problem doing 4 x86 instructions/cycle
<zid>
clockrate did peak 10 years ago for the most part, 2GHz is not common.
<zid>
2GHz is reserved for cheap netbooks
<gamozo>
even if clock rates are about the same, many cores have gotten 4-15x faster single-threaded due to way faster caches, latencies, better paging structures, etc
<sonny>
2.5Ghz?
<zid>
All modern M chips turbo over 4
<gamozo>
I'd say 3.2-4.4 GHz probably average desktop now
<sonny>
I am talking about the base
<zid>
base clock is a lie
<sonny>
o.O
<gamozo>
Base clock kinda doesn't matter anymore
<sonny>
lol
<zid>
The turbo goes in increments of 100MHz
<zid>
That's the bclk, base clock
<gamozo>
Like, it does, but kinda doesn't. It's really all TDP based
<sonny>
ok, time to learn about clock rate
<zid>
It will pick a multiplier, which is what denotes your model, like mine is 12-45x and 12 is the 'base'
<gamozo>
And then you have different clocks based on AVX, AVX-512, it's great
<zid>
and 45 is the max turbo
<zid>
and whatever thermals it can not cook itself under, it will clock to that
<sonny>
different clockrates for the extensions?
<gamozo>
yep!
<zid>
avx-512 just used so much power the chips throttled
<sonny>
how does that work??
bauen1 has quit [Ping timeout: 260 seconds]
<gamozo>
Let me get you the good source for seeing it
<zid>
If your 300W cpu ran using 300W 24/7 you'd be pissed
<sonny>
yeah
<zid>
they generally clock down a bunch and shut off parts of the die etc quite aggressively
<sonny>
oh damn
<gamozo>
You get different clock rates based on the number of cores active as well, dynamically
<zid>
then there's a bunch of settings for how many microseconds it has to wait for the power delivery from the mobo to ramp back up to the 200 amps it will need at 300W etc
<gamozo>
Go to like, page 19
<sonny>
this is an amd64 thing or is it general?
<gamozo>
There are different tables for AVX-512, AVX, and non-AVX
<gamozo>
and for different numbers of active cores
<zid>
which is why avx-512 was annoying, it used so much extra power your chip would just do.. nothing the first time it saw an avx-512 instruction for a few million cycles
bauen1 has joined #osdev
<zid>
because it was waiting for the voltage regulators to stabilise
<zid>
cpus using 1V is nice until they want to use 300W and you need to deliver 300A to them lol
<sonny>
iirc I saw something recently about gcc and the like being better at auto vectorization
<gamozo>
mrvn: Ooh that's a cool example.
bauen1 has quit [Ping timeout: 240 seconds]
<mrvn>
note how the older gcc increments %edx while modern gcc decrements %edx
<mrvn>
Modern gcc knows the loop is run 10 times.
<gamozo>
I love this stuff so much
<gamozo>
I shouldn't because it's very distracting
<sonny>
what are you working on?
<mrvn>
gcc -O3 code looks horrible.
<gamozo>
I've had so much fun doing compiler optimization stuff. It's actually something I think most people should get a chance to dabble with
<gamozo>
I'
<gamozo>
I've had bad experiences with -O3
<gamozo>
Huh, with -O3 4.1.2 and 12.1 have the same codegen
<gamozo>
that's wild
<mrvn>
I would rather have less compiler optimization. Give me code like I wrote it while being smart about it.
<sonny>
that just means your code is ub ;)
<gamozo>
Nah, -O3 sometimes hurts performance by unrolling too much which can lead to it preventing itself from doing more optimizations
<gamozo>
Or just, really bad icache usage
<mrvn>
gamozo: almost always
<sonny>
ah
<gamozo>
Idk how much of that is still accurate, as time goes on I would imagine it matters less and less, but I know historically a lot of people would advise against -O3
<gamozo>
That being said, I just write Rust then my compiler can really do some fancy optos! <3 strict aliasing
<mrvn>
gamozo: change the code to "ii < 100" and look at the clang output.
<gamozo>
Oooh that's beautiful
<gamozo>
Keeps it nice and tight, does minimal operations
<mrvn>
-O3 used to be unstable, now it's usually just slower.
<gamozo>
I don't really see how to make that code faster on the CPU honestly
<gamozo>
There's a chance unrolling to 6 would be better though? Not sure
<mrvn>
with ii < 100 clang unrolls and then rerolls the loop or something.
<mrvn>
gamozo: 100 / 6?
<gamozo>
For absolute maximum perf I think doing loops by 6 and then flushing the remainder would be fastest
<mrvn>
6 times would require a jump into the middle of the shifts at the start.
<gamozo>
just because I think the 6th is free, idk, it's like a super micro optimization
<mrvn>
why 6?
<gamozo>
My theory is that it should be able to execute 2 of those shls per cycle, and the branch forces a sync to a cycle boundary. So this is doing 2.5 cycles, and then doing nothing for 0.5, then branching
<gamozo>
_I think_ the fact that they hit memory might make that not matter
<gamozo>
for register based math that would be true in this case
<gamozo>
Would it actually be measurable? Probably not
<mrvn>
the register part it eliminated completely because 4 << 100 == 0
<mrvn>
gamozo: modern CPU have cycle counters. you can measure very accurately.
Vercas has quit [Remote host closed the connection]
<gamozo>
I'm familiar, it's just that at this level such a minute amount of system noise (even cache coherency traffic) can overpower measuring something this minuscule
<gamozo>
Hmm, I have an example of this benchmark somewhere I think
<gamozo>
it's kinda neat
Vercas has joined #osdev
heat_ has joined #osdev
heat has quit [Read error: Connection reset by peer]
<mrvn>
gamozo: I love the paper that shows that what shell variables you have in your ENV makes more of a change in speed than -O2 vs -O3.
<bslsk05>
'"Performance Matters" by Emery Berger' by Strange Loop Conference (00:42:15)
<gamozo>
That is a masterpiece and one of the best talks I've ever seen
<gamozo>
Legit it should be required watching. It's so accurate to optimization and the worldview my brain has and what has worked for me
<gamozo>
it's one of the very few talks that really seems to hit the nail on the head
<gamozo>
I do a lot of research in my RTOS for x86 for this exact reason, to eliminate noise
<gamozo>
The points in that talk are arguably why I do osdev lol
bauen1 has joined #osdev
<sonny>
what gives the R the realtime in RTOS?
<gamozo>
I think at the highest level, the biggest difference is whether or not pre-emption is in it. Eg. does your task ever get interrupted
<gamozo>
Knowing that if you don't yield to the kernel, your process has full control of CPU resources for as long as you want, seconds, minutes, whatever
<sonny>
I see
<sonny>
polling IO ftw
<gamozo>
Yeah, I do IO polling in my OS and I like it a lot
<mrvn>
gamozo: pre-emption != pre-emption. Basically every kernel does pre-emption.
<mrvn>
gamozo: the thing that makes a difference is priotization
<sonny>
I wonder if for a personal computer it's better to have it distributed so low latency stuff gets polling IO but like sound and stuff is done elsewhere
<gamozo>
mrvn: Of course!
<mrvn>
sonny: sound is very susceptible to jitter. You really don't want delays there
<mrvn>
video is even worse
bauen1 has quit [Ping timeout: 240 seconds]
<sonny>
yeah, I don't mean polling IO for the audio
<mrvn>
You want 2 things: race to sleep, and fast wakeup when it's time
<sonny>
in the video, "we'll just upgrade next year" lol
bauen1 has joined #osdev
<geist>
re: ia64 it was neat, and i remember at the time everyone was drinking the koolaid saying it was the future
<geist>
many other architectures died as a result of companies switching to it
<zid>
ia64 is HP's future, not yours :p
<gamozo>
ahahah
<geist>
alas the first few implementations had performance issues with regular code
<geist>
and it never caught on
<zid>
it actually... ran regular code eventually?
<geist>
oh it ran regular code, just didn't really kick ass like it was supposed to
<geist>
first few implementations
<gamozo>
Shame. It was really power hungry too right?
<zid>
maybe this is just hindsight, but how the fuck did they think it'd kick ass at regular code
<geist>
intel fully expected for x86 to go away and all desktops/laptops/etc would also switch to ia64
<geist>
lots of koolaid
<geist>
and also to be fair the first few impls just had design issues
<geist>
huge cache latency, etc
<sonny>
so, everyone just stayed on x86?
<geist>
also had pretty terrible code density but there was some assumptions as to how processor tech would be developed
<geist>
basically. AMD came out with x86-64 shortly afterwards (2002/2003) and then eventually that caught on
<gamozo>
Yeah, that was honestly wild. Intel was pre-occupied and AMD made the 64-bit standard for Intel's arch
<gamozo>
honestly pretty wild
<geist>
but really x86-64 wasn't an instant success, took quite a few years for it to really take off
<geist>
mostly intel had to make their version, and really windows took a few years to get ported, and that made it legitimate
sikkiladho has quit [Quit: Connection closed for inactivity]
<bslsk05>
'Intel vs. AMD - What happens when you remove the fan?' by JM Lainez (00:02:18)
<gamozo>
I remember this Intel vs AMD video from wayyyy back when
<gamozo>
ahahah
<sonny>
oh I forgot amd64 happened first
<gamozo>
the chiptunes are a banger
<sonny>
damn CPUs get hot
<geist>
yah so the problem with *that* (the amd things overheating) was at the time i believe intel had the patent on thermal based shutdown
sonny has quit [Remote host closed the connection]
<geist>
and wouldn't license it to anyone else. so it was well known that AMD cpus would just cook themselves
<geist>
the patent eventually expired, and/or they licensed it
<geist>
i think other vendors had the same problem (VIA, etc)
<mrvn>
And they made these nice specs that the bios/os set the fan speed dynamically depending on temp and the hardware is not allowed to override it when the cpu gets too hot.
<Griwes>
another example of how patents actively stifle innovation
<mrvn>
if the CPU doesn't cook itself you are violating the specs
<geist>
i remember at some point in the late 2000s my cpu fan fell off while running
<geist>
like the clip broke and the fan pivoted off the cpu and fell to the side
<geist>
system shut down within 10 seconds or so. it was a K10 i believe, so I was really worried it had killed itself
<geist>
but by then i guess AMD had their thermal runaway thing implemented
<mrvn>
Here is a nice challenge for you all: Write a program that when run with "env -i SPEED=LOW ./prog" it is slower than "env -i SPEED=FAST ./prog" and doesn't getenv("SPEED")
bauen1 has joined #osdev
<moon-child>
mrvn: stack alignment!
<mrvn>
moon-child: obviously. now exploit it
<moon-child>
meh i don't feel like it
<heat_>
geist, how does thermal based shutdown get patented lol
<heat_>
my car is on fire, i guess it must keep running
heat_ is now known as heat
<mrvn>
heat: patents for cars don't apply for cpus
<heat>
same principle
<heat>
it's a basic safety measure
<heat>
do we patent safety?
<mrvn>
what has that got to do with anything? We are talking patents.
<heat>
do we patent safety?
<heat>
how the fuck is this patenteable?
<mrvn>
You can fight the patent claiming it's "basic". That's a valid strategy. Do you have a million bucks?
<heat>
it shouldn't be a patent in the first place yeah?
<mrvn>
they pay, they get a patent. That's basically how it works if a patent lawyer writes it.
<mrvn>
If it's not patentable then you didn't obfuscate it enough.
<gamozo>
Ahhh, patents
<kingoffrance>
heat: you are assuming principles are involved rather than pure formalism "the judge will do this"
<kingoffrance>
which is 180 of "law"
<kingoffrance>
no legitimate court proclaims themself "god" it is people operating under bogus pretenses
<kingoffrance>
i.e. short-term thinking, to long-term destruction
* kingoffrance
crawls back to shadows
* kingoffrance
whispers premature optimization root of all evil
sonny has joined #osdev
<geist>
yep. what mrvn said
<geist>
doesn't mean it makes sense, but the idea is to narrowly scope the patent to actually get it, but broadly scope it such that it's applicable to the largest amount of stuff
<geist>
and that's what they pay the lawyers for
<mrvn>
and you pay the big bucks to have the patent look like it has very narrow scope but can actually defend a large scope if desired.
<mrvn>
anyone used coz?
<mrvn>
coz-profiler
<moon-child>
afaik it's unmaintained
<moon-child>
oh hmm, last commit a couple of months ago
GeDaMo has quit [Quit: There is as yet insufficient data for a meaningful answer.]
<moon-child>
oh no I was thinking of stabilizer
<mrvn>
you kind of want both
<moon-child>
I want a pony too
<mrvn>
there is a whole OS full with them
* moon-child
smacks klange around a bit with a large trout
<bslsk05>
developer.arm.com: Documentation – Arm Developer
<geist>
a few other guides there you can find under the 'guide' tag. kinda helpful maybe
<gamozo>
nifty
rustyy has quit [Quit: leaving]
vdamewood has quit [Remote host closed the connection]
sonny has left #osdev [#osdev]
vdamewood has joined #osdev
troseman has joined #osdev
heat has quit [Remote host closed the connection]
heat has joined #osdev
elderK has joined #osdev
heat has quit [Read error: Connection reset by peer]
heat_ has joined #osdev
ethrl has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<heat_>
someone on r/osdev got their os to run on apple devices
heat_ is now known as heat
<zid>
woo lightning
<zid>
not had a storm in a year
ethrl has joined #osdev
<klange>
With an unlocked bootloader, the basics of a recent iPad are bog standard ARM.
ethrl has quit [Client Quit]
<heat>
oh really?
<klange>
And the boot environment hands you a framebuffer, so showing something on screen and having a working timer are par for the course on those.
<heat>
device tree and everything?
<klange>
Not directly from the device, but they're using checkra1n's pongo as a bootstrap.
<heat>
oh
<heat>
that's substantially less impressive
<heat>
still cool I guess
<klange>
It's cute, though. It's like how I bought on a Surface :)
<klange>
Sure, it technically works, but then you have a useless tablet with a clock ticking away.
<klange>
s/bought/boot/
<klange>
bleh, gotta wake up, I have to run a meeting in a half hour...
<klange>
I could probably boot ToaruOS with their setup... my old iPad Mini 2 is probably a viable setup, and it's ancient and disused enough I don't care about wrecking it.
<heat>
btw something horrific I saw: I don't think they're using the device tree
<klange>
In fact, it's probably easier to get a useless 'hello world' boot on an iPad than it is on an M1 right now, just because of how esoteric the M1 boot environment remains, and how geared marcan's tooling is towards booting Linux?
<heat>
they invented their own device tree
<heat>
like, custom format in json
<klange>
I like their dedication, though. They've got a target device profile in mind, and they're pushing forward with it.
<heat>
huh i was expecting for M1 to use UEFI
<klange>
Nothing all that special about opuntia, but gosh darn it reminds me of the early days of ToaruOS. Or even the current days of ToaruOS.
<heat>
klange, where's all the python? :P
<klange>
I said early!
<klange>
ToaruOS was Python in the middle of its life.
<heat>
you liked python so much you wrote your own
<heat>
that's true dedication
<klange>
Even 1.0 was virtually all C, it wasn't until ~1.2 that the whole Python DE and suite of fun applications existed.
<klange>
Which is part of why the NIH project worked at all - I just reverted to the original C apps.
<klange>
And then improved them with the functionality I had prototyped in the Python versions.
<heat>
now you bring them back and run them on kuroko
<klange>
I really should. Kuroko is super viable, it's got over a year of dedicated use in my editor, and it's doing nicely in benchmarks.
<klange>
Threads are still garbage - Kuroko suffers from all the things CPython's GIL was meant to address, combined with the fact that my pthread lock implementations on Toaru are garbage.
<klange>
garbage garbage garbage
<klange>
But none of those original Python apps were threaded - I didn't even build Python with thread support.
<heat>
btw you should fix bim for serial ttys
<klange>
Which is also why I haven't ported Python again to 2.0.
<klange>
what's wrong with bim on serial ttys
<klange>
are you setting appropriate quirks for your terminal? it's all manual config, none of that terminfo stuff
<heat>
it didn't work right because they have 0 cols and 0 rows
<klange>
oh you need to tell stty about the size of your terminal
<moon-child>
what are the threading issues?
<heat>
we went through this when I ported bim
<klange>
I have a tool for that if your terminal supports cursor reporting
<klange>
ToaruOS actually runs this on serial consoles on startup
<heat>
you should like just set a failsafe size for the tty
<bslsk05>
github.com: toaruos/ttysize.c at master · klange/toaruos · GitHub
<heat>
(or maybe they find it out with terminal sequences, I can't tell. I think some apps like irssi correctly run)
<klange>
as a convenience it will also just take width/height arguments if your terminal _doesn't_ support the cursor reporting callbacks
<klange>
so you can just `ttysize 80 24` or whatever
<klange>
(it's also useful since a serial console attached to a terminal emulator in a window has no way of reporting a change in size, so i just mash `ttysize` a bunch when reorganizing windows)
<klange>
my rpi has been sitting here running just fine for _nearly two months_
<klange>
Admittedly, it hasn't been doing anything besides running an idle serial console, but I'm kinda shocked - and the clock isn't even off.
_xor has quit [Quit: bbiab]
CryptoDavid has joined #osdev
<heat>
now make it do something
<heat>
httpd or ircd
<klange>
It doesn't have a NIC driver for the genet device.
<klange>
Plus the whole 'ToaruOS still lacks listening TCP sockets because I have no idea where my priorities are and there is way too much on my plate'.
<heat>
oh
<klange>
At least this network stack isn't fundamentally broken in a way that prevents implementing them.
<klange>
toaru32's was. I wrote a whole new stack for Misaka.
<heat>
i added listening tcp socket support a few weeks back and tried to have sshd
<heat>
totally forgot I didn't have ptys
alethkit has joined #osdev
<heat>
i should like finish some of my pending work
<heat>
i didn't actually fully get riscv support - didn't add IRQ support, also no thread spawning and signals
<klange>
I think XHCI/USB is my top priority, that's what I was working on before my vacation.
<heat>
and i have a handful of git stashes
<klange>
And it's what this RPi was testing when it was last booted two months ago, which is why it has a serial console and no compositor.
eryjus has quit [Quit: eryjus]
<klange>
It initializes the controller and can query ports, and I believe I successfully sent commands and received responses.
<heat>
i don't grok USB yet
<heat>
like, how tf do USB addresses work, how do you enumerate them, etc
<heat>
funnily enough I also have a "pending" ehci driver in the tree lol
<klange>
You ask the controller and it does the thing, apparently.
<klange>
idk I only got that far
<klange>
The devices I need to talk to on the rpi are behind a hub, so next step is talking to that.