<bslsk05>
'Is Poland's tap water really protected by clams?' by Tom Scott (00:04:31)
<mrvn>
mjg_: what serious functional language sucks perf wise?
[itchyjunk] has quit [Ping timeout: 276 seconds]
[itchyjunk] has joined #osdev
Burgundy has left #osdev [#osdev]
<mjg_>
mrvn: according to the guy, the common suckage point everywhere is mutability, e.g. in qsort
vdamewood has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
<Ellenor>
mutable data should be locked, even if you aren't on a multiprocessor
<Mondenkind>
I mean yeah mutability sucks for optimisation
<Mondenkind>
the problem is that computers are still worse than humans at optimisation in many cases. So we end up in this weird limbo-land with junk like c
<heat>
in many cases? how many cases?
<heat>
the compiler is infinitely better at optimizing than I am
<Ellenor>
have you ever carried on a conversation across two or more different platforms
<heat>
no
gog has joined #osdev
zoey has quit [Remote host closed the connection]
heat_ has joined #osdev
heat has quit [Read error: Connection reset by peer]
heat_ is now known as heat
zhiayang has quit [Quit: oof.]
zhiayang has joined #osdev
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
<mjg_>
Ellenor: except in fp langs data normally is not mutable
<mjg_>
hence things like qsort end up doing a metric fuckton of copies
<Ellenor>
functional pure?
<heat>
bbbbut it's cleaaaaaaaaan
dude12312414 has joined #osdev
dude12312414 has quit [Remote host closed the connection]
<jafarlihi>
So I should use size_t instead of int?
<heat>
yes
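A minimal sketch of the point, assuming ordinary C (not from the chat): size_t is the unsigned type sized to index any object, so it doesn't go negative or truncate on buffers larger than INT_MAX the way a plain int index can.

    #include <stddef.h>

    /* size_t is wide enough for any object size; an int index would be
     * signed and could truncate on buffers larger than INT_MAX. */
    size_t count_zero_bytes(const unsigned char *buf, size_t len)
    {
        size_t count = 0;
        for (size_t i = 0; i < len; i++)
            if (buf[i] == 0)
                count++;
        return count;
    }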
romzx has joined #osdev
<mrvn>
.oO(in 32bit ocaml strings are limited to 16MB)
<mrvn>
maximum size the GC can deal with. Bigger things need external C storage.
<mrvn>
i.e. indirection
poisone has quit [Remote host closed the connection]
jafarlihi has quit [Quit: WeeChat 3.7.1]
wolfshappen has quit [Quit: later]
<sham1>
How can the GC only deal with up to 16 MiB
<heat>
because that's all the memory you'll ever need
<GeDaMo>
Maybe uses a byte for tagging?
JerryXiao has joined #osdev
<sham1>
Well the tag shouldn't matter for the size of the actual object, the size ought to be a separate field in the object header
poisone has joined #osdev
[_] has joined #osdev
bauen1 has joined #osdev
[itchyjunk] has quit [Ping timeout: 260 seconds]
[_] is now known as [itchyjunk]
tacco has joined #osdev
srjek_ has joined #osdev
srjek|home has quit [Ping timeout: 260 seconds]
bauen1 has quit [Ping timeout: 260 seconds]
bauen1 has joined #osdev
<zid>
oh ben eater is back
<mrvn>
heat: every value has the size of (void*) with the lowest bit being a tag. Blocks have a header of 1 value with GC data containing the block size, a few coloring bits and tags. 22 bits are the block size, meaning 16MB maximum.
<mrvn>
With just 32bit for the object header there just isn't that much space for the block size. Using 2 words for the object header would use more memory for everything while basically nobody has strings over 16MB. For large blobs of data you use BigArray.
<mrvn>
BigArray is reference counted and allows slicing. Basically a std::shared_ptr.
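A rough sketch of the 32-bit header layout mrvn is describing (bit widths as he gives them; illustrative, not the OCaml runtime source): the header word holds the block size in words, a couple of GC color bits, and an 8-bit tag, so 22 size bits cap a block at 2^22 words.

    #include <stdint.h>

    #define TAG_BITS   8u
    #define COLOR_BITS 2u
    #define SIZE_BITS  22u                      /* 32 - 8 - 2 */

    /* the size in words lives in the top 22 bits of the 32-bit header */
    static inline uint32_t block_size_words(uint32_t header)
    {
        return header >> (TAG_BITS + COLOR_BITS);
    }
    /* max block: (2^22 - 1) words * 4 bytes/word, roughly 16 MB, hence the limit */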
xenos1984 has quit [Ping timeout: 246 seconds]
xenos1984 has joined #osdev
eschaton has quit [Ping timeout: 260 seconds]
eschaton_ has joined #osdev
selve has quit [Remote host closed the connection]
selve has joined #osdev
selve has quit [Remote host closed the connection]
selve has joined #osdev
raggi has quit [Quit: upgrades]
raggi has joined #osdev
raggi has quit [Client Quit]
raggi has joined #osdev
selve has quit [Remote host closed the connection]
selve has joined #osdev
selve has quit [Remote host closed the connection]
selve has joined #osdev
tacco_ has joined #osdev
tacco has quit [Ping timeout: 260 seconds]
bauen1 has quit [Ping timeout: 260 seconds]
bauen1 has joined #osdev
vdamewood has quit [Read error: Connection reset by peer]
vdamewood has joined #osdev
netbsduser has joined #osdev
xenos1984 has quit [Ping timeout: 246 seconds]
ZombieChicken has joined #osdev
<heat>
mjg_, at what point do jmps become cheaper than a bunch of nops?
<heat>
you mentioned that yesterday
<mjg_>
what
<mjg_>
they are never cheaper
<mjg_>
i'm sayin depending on how far you have to jump from an asm goto
<mjg_>
you may get away with a smaller nop sled
<mjg_>
than 5 bytes
<mjg_>
but for some reason people blindly patch it with 5
<heat>
my code patches with all sorts of nop sizes
<GeDaMo>
There are larger NOPs
<mjg_>
of course, there are multibyte nops
<heat>
for N bytes my code writes N/5 5-byte nops, (N%5)/4 4-byte nops, etc
<mjg_>
but that is a waste if you get away with fewer nops
<mrvn>
For 11 bytes, is nop5, nop5, nop1 or nop5, nop1, nop5 better?
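A hedged sketch of the greedy fill heat describes: cover N bytes with the largest NOPs first. The byte patterns are the standard x86 multi-byte NOP encodings up to 5 bytes; emit_nops() is a made-up helper, not heat's actual code.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    static const uint8_t nop1[] = { 0x90 };
    static const uint8_t nop2[] = { 0x66, 0x90 };
    static const uint8_t nop3[] = { 0x0f, 0x1f, 0x00 };
    static const uint8_t nop4[] = { 0x0f, 0x1f, 0x40, 0x00 };
    static const uint8_t nop5[] = { 0x0f, 0x1f, 0x44, 0x00, 0x00 };

    static void emit_nops(uint8_t *dst, size_t n)
    {
        static const struct { const uint8_t *bytes; size_t len; } nops[] = {
            { nop5, 5 }, { nop4, 4 }, { nop3, 3 }, { nop2, 2 }, { nop1, 1 },
        };
        for (size_t i = 0; n > 0; ) {
            if (nops[i].len > n) {              /* too big, try the next size down */
                i++;
                continue;
            }
            memcpy(dst, nops[i].bytes, nops[i].len);
            dst += nops[i].len;
            n -= nops[i].len;
        }
    }
    /* for mrvn's 11-byte case this emits nop5, nop5, nop1 */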
<heat>
geist, does it run linux and are you willing to patch your kernel and build something for me
<heat>
mrvn, /shrug
<geist>
well, i have it, but i honestly haven't even fired it up in years
<geist>
it is my old Powermac G5. dual 2.0 GHz
<heat>
ah, forget it then
GeDaMo has quit [Quit: You are becoming what we French call 'Le Fruitcake'.]
<heat>
no need
<geist>
it's in a box
<mrvn>
heat: seeing as you have the benchmark tools installed could you run it?
<geist>
kk, i also have a ppc32 somewhere. a mac mini G4
<heat>
I think i've tested it enough, although not in the ideal situation
<heat>
there's a funny linux elf interpreter loading bug
<mrvn>
food, brb
<heat>
they don't clear more than one bss
<heat>
which is actually showing up in lld 15
poisone has joined #osdev
<heat>
i have a linux kernel patch and im going to send it but this is intimidating
<zid>
page of nops can be faster than a jmp
<zid>
just depends on whether the decoder gets to see it in advance or not
<zid>
cus it'll optimize down into a bunch of nothing in between the ops on either side's uops
linear_cannon has quit [Ping timeout: 255 seconds]
<heat>
a jmp is already faster at 64 bytes here
<zid>
in a loop?
<heat>
yes
<zid>
I wonder if it has some kind of minimum block size or something
<heat>
so 63 nop-bytes and a ret vs 1 32-bit jmp and 58 bytes and a ret
<zid>
so that it can only turn 15 nops into 1 nop, or something
<zid>
rather than infinite nops into no nops
bradd has joined #osdev
<heat>
for all 0x90: nopslide/10 2.90 ns 2.82 ns 264834343
<heat>
nopslide/512 63.0 ns 61.6 ns 11547872
<heat>
nopslide/64 10.0 ns 9.87 ns 70475827
<heat>
nopslide/4096 479 ns 458 ns 1386963
<heat>
nopslide/32768 3921 ns 3899 ns 175492
<heat>
nopslide/65536 9058 ns 9011 ns 85905
<mjg_>
that's a 64 byte slide?
<mjg_>
try just 5 kthx
<heat>
i did 10
<heat>
10 is still faster when nopsliding
<zid>
what are the columns
ZombieChicken has quit [Quit: WeeChat 3.6]
<heat>
wall time cpu time loop iterations
<zid>
average, best, rand()%MAX_INT
ZombieChicken has joined #osdev
<zid>
oh reciprocal
<geist>
oooh
<geist>
heat is gonna patch linux heat is gonna patch linux!
<geist>
<everyone runs to the screen to watch heat fight morpheus>
<heat>
kees cook as morpheus
<zid>
greg kh is the oracle
<mjg_>
is viro agent smith?
<zid>
no, that's google
<heat>
who's matthew garrett
<heat>
mjg_, you pick
<mjg_>
cypher
<zid>
I don't know matthew garrett
<zid>
linus is trinity obvs, the hot one in the cat-suit
<heat>
it's mjg
<heat>
the real one
<mjg_>
right in the heart
<heat>
wdym it's you
<heat>
not that freebsd poser
<heat>
you know i really respect alan cox for writing code for two kernels
<zid>
That's how he got his last name, alan fux
<zid>
also why is everybody called al in the kernel world
<zid>
or greg
<heat>
shut up greg
<heat>
why is everyone mjg
<heat>
that's a better question
<kof123>
cause theres already 2 matthew dillons
<mjg_>
and why is mjg joel spolsky
<zid>
I am Robert Sapolsky
<zid>
I have a WAY better beard.
<geist>
thing to do today: play with proxmox
<zid>
wassat?
<geist>
actually kinda looks neat, a much nicer VM solution than my cobbled together qemu starter scripts
<zid>
wiki says they have three products, email servers, some virtualization thing, and a backup thing
<mjg_>
geist: that's a normie thing to do man
<geist>
oh yeah?
<mjg_>
ye man
<geist>
but does it work well?
<mjg_>
i hear it is great
<mjg_>
never tried myself
<zid>
It is a Debian-based Linux distribution with a modified Ubuntu LTS kernel[3] and allows deployment and management of virtual machines and containers
<zid>
that's MEGA normie
<geist>
oh i see. as in if it works well then that's too easy?
<mjg_>
is this not #osdev
<geist>
yes but lets you dev more!
<zid>
We don't actually deploy anything here to machines
<zid>
we make one machine barely work
<heat>
qemu-system-x86_64 beeeeeeeeeeeest
<zid>
in a very specific hardware config, for one small task
<mjg_>
geist: if i wanted it easy i would program in html!
<heat>
if you wrote a bunch of scripts to do something, never drop them
<heat>
especially for something better
<geist>
okay here's my challenge then to posix hobby os people: stand up a Mastodon instance on your OS
<heat>
hear that openbsd
<mjg_>
loud and clear
<zid>
If it's to do with web I bet it needs 84939 random deps
<mjg_>
or shit did i reveal something :S
<mjg_>
oh shit*
<zid>
if it's not written in a language I've never heard of I'd be surprised
<geist>
i actually have no idea what mastodon is written in, good question
<zid>
Written in Ruby on Rails, JavaScript (React.js, Redux)
<zid>
see, it's a web thing, so it's written in slow bad languages with no types :P
<geist>
so that means it should be easy to stand up on your os, just port ruby!
<zid>
Only web people care about web things, so they write their backends in web languages too
<geist>
which are generally fairly abstracted from the core os
<zid>
C developers are too busy leaving root exploits in fundamental pieces of core software like openssl
<mrvn>
heat: unless I'm mistaken a nopslide/10 is just 2 nops.
<geist>
side note: last night i think i finally grokked linux's PCID implementation
<geist>
it's pretty bizarro, but makes sense.
<geist>
but i *also* see why AMD's new INVLPGB/TLBSYNC thing is an issue for linux. it's fundamentally a different solution than linux's PCID implementation, so would require a lot of work
<mrvn>
geist: what does linux do?
<geist>
notably: linux's PCID implementation does not attempt to permanently or semi-permanently assign a PCID to a process. it simply cycles through literally PCID 1-7 on each core, and then each core tracks up to the last 6 contexts it has loaded, opportunistically reusing them if it can
<geist>
the idea being that 6 is a good number for cache line locality of the array of N entries per cpu, and really realistically storing more than 6 contexts of TLBs is diminishing returns
<geist>
but that means that at any given point in time any given cpu has a completely different notion of what PCID is assigned to what address space, so things like AMD's TLBSYNC cross-process shootdown is fundamentally incompatible
<mrvn>
geist: 1-7 on each core or a set of different 7 IDs per core?
<geist>
1-7 on each core
<geist>
as in each core independently assigns a rotating set of PCIDs as context switches happen, and look in the last 6 to see if it can reuse it
<geist>
(or if it's been shot-down by another cpu because of a generation counter getting rolled on the aspace)
<geist>
ie if the cpu has remembered that i had previously used PCID 3 at generation Y for aspace X
<geist>
when it goes to reload aspace X, if its generation is Y+1, it fully invalidates the aspace when it reloads it
<mrvn>
yeah, you would need to notice if the aspace changed between running a thread.
<mrvn>
so if a process is running on 2 threads that requires an IPI on aspace change, right?
<geist>
right. that's to keep you from having to IPI *all* cores all the time. you still only cross IPI to cores that are known to be running a process, and there the receiving cpu can look in its list of 6 processes and figure out which PCID it has
<mrvn>
as it's unlikely they have the same PCID
<geist>
right, you still do the IPI, because you know it's there. on cpu A where you originate the TLB shootdown you might be using PCID 2 for example, and you know cpu B has it loaded too, so you send an IPI to it
<geist>
cpu B which is using PCID 3 for the same aspace then says 'yep i'm running this' and does a local shootdown
<mrvn>
which also means you have a race because the other core might just be scheduling at the same time.
<mrvn>
must be done carefully
<geist>
for all the cpus that aren't running it, cpu A will bump the generation counter, so the next time those cpus context switch, if they notice the aspace's gen counter has been rolled they do a total PCID TLB dump
<geist>
so you roll the gen counter first before sending the IPI, which solves the race
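A rough sketch of the ordering geist describes (field and helper names invented, not the actual Linux code): bump the aspace generation before IPI-ing the cpus that currently have it loaded, so a core that is context switching concurrently either gets the IPI or sees the rolled generation at reload.

    #include <stdatomic.h>
    #include <stdint.h>

    struct aspace {
        _Atomic uint64_t gen;           /* rolled on every unmap/permission change */
        _Atomic uint64_t active_cpus;   /* bitmap of cpus currently running it */
    };

    void ipi_tlb_flush(uint64_t cpu_mask);      /* hypothetical helper */

    static void shoot_down(struct aspace *as)
    {
        atomic_fetch_add(&as->gen, 1);                   /* 1. roll the generation first */
        uint64_t cpus = atomic_load(&as->active_cpus);   /* 2. then IPI the active cpus */
        ipi_tlb_flush(cpus);
        /* cpus not in the mask notice the stale generation at their next
         * context switch and do a full flush of that PCID instead */
    }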
<mrvn>
geist: the other core then needs to first switch and then check the generation counter
<mrvn>
and all of it atomic
<geist>
nah, it's not that difficult
<geist>
it only needs to dump the TLB if the aspace is in the last 6 it had loaded, if it's not in that list there's nothing to dump
<geist>
but if it's within the last 6 then as it loads it it marks it as active, then does the atomic check, etc. there are 2 or 3 pieces of atomic stuff here that if you do them in the wrong order, worst case you do a full TLB invalidate of a PCID you didn't need to
<geist>
but yeah, it's a little subtle i guess
<geist>
anyway the only real global atomic state here is per aspace: there's still an atomic bitmap of which cpus have the aspace active at that instant
<geist>
everything else is per cpu, which is pretty slick
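Putting geist's description together, a hedged sketch of the per-cpu state (names invented; the real code lives in arch/x86/mm/tlb.c and is packed much tighter): each cpu remembers the last 6 address spaces it loaded, the PCID it used for each, and the generation it last flushed to.

    #include <stdbool.h>
    #include <stdint.h>

    struct mm_struct;               /* linux's rough equivalent of an aspace */

    #define NR_DYN_PCIDS 6          /* small enough that the array fits in a cache line */

    struct pcid_slot {
        struct mm_struct *mm;       /* aspace last loaded with this PCID */
        uint64_t flushed_gen;       /* aspace generation at the last (re)load */
    };

    struct cpu_tlb_state {
        struct pcid_slot slot[NR_DYN_PCIDS];
        unsigned int next;          /* round-robin victim for new aspaces */
    };

    /* On context switch: reuse the PCID if this cpu saw the mm recently and no
     * remote shootdown rolled the generation since; otherwise take the next
     * slot and flush everything that PCID held. */
    static unsigned int choose_pcid(struct cpu_tlb_state *ts, struct mm_struct *mm,
                                    uint64_t mm_gen, bool *need_flush)
    {
        for (unsigned int i = 0; i < NR_DYN_PCIDS; i++) {
            if (ts->slot[i].mm == mm) {
                *need_flush = ts->slot[i].flushed_gen != mm_gen;
                ts->slot[i].flushed_gen = mm_gen;
                return i + 1;                   /* PCIDs 1..6; 0 stays for the kernel */
            }
        }
        unsigned int i = ts->next++ % NR_DYN_PCIDS;
        ts->slot[i].mm = mm;
        ts->slot[i].flushed_gen = mm_gen;
        *need_flush = true;
        return i + 1;
    }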
<mrvn>
I would have stored the last PCID + generation in the task struct and if task->pcid_generation != core->pcid_generation or aspace->generation != task->aspace_generation then you can't reuse the pcid.
<geist>
6 PCIDs was chosen because the array of state there to store the last 6 aspaces on the cpu fits just within a cache line
<geist>
that's a problem because there are multiple cpus
<geist>
so you'd need, per task, an array of entries for *every cpu*
<mrvn>
geist: you set generation=0 when you migrate a task
<geist>
but yes it's a fundamental inversion of the whole thing: instead of assigning a PCID to an aspace globally, you do it per cpu and just use it as a little cache tag
<geist>
downside is algorithms like ARM's ASID or the AMD PCID solution that allows for cheap cross-cpu invalidates
<geist>
fundamentally requires that the same IDs be used across all cpus
<geist>
so you need a fundamentally different solution, generally involving some sort of semi dynamic assignment/reassignment of IDs to processes as it comes up
<mrvn>
geist: the assertion that more than 6 PCIDs has diminishing returns also seems turned around. Would be better to say that with more than 6 PCIDs the probability that any cache entries of the old PCID remain approaches 0
<geist>
sure
<geist>
so the diminishing return is you dont see any real wins in real world benchmarks
<mrvn>
it's not like using more PCIDs costs you anything
<geist>
it does: because there are places where you have to do an O(N) search through an array, per cpu, of the last N pcids you had
<geist>
so you dont want that array to get too big because at some point that cost dominates things
bauen1 has quit [Ping timeout: 260 seconds]
<mrvn>
geist: with the inversion, sure. I meant in the hardware
poisone has quit [Remote host closed the connection]
<geist>
right. the main observation at the time the patch went in is 12 bits is not enough to do a static assignment
<geist>
and even if you do a dynamic assignment, large machines would be hitting it
<mrvn>
A cacheline is 64 bytes, so 6 PCIDs means about 10 bytes per ID. What do they store there?
bauen1 has joined #osdev
<geist>
you're free to look at the source code at this point
<geist>
the details are available
<mrvn>
.oO(But are they understandable :)
<geist>
i understood them
<mrvn>
well, you are geist so that doesn't mean much. :)
<mrvn>
I always hate trying to grok linux arch specific code.
<geist>
anyway it was something like a PCID id + the last gen counter it saved + a pointer to the 'mm struct' which i think is linux's equivalent to an aspace
<geist>
this patch was actually pretty easy to read. they even left a large block of comments and whatnot
<geist>
this data is packed in fairly tight with bitfields and whatnot to try to keep it small
<geist>
because yes i know you're doing math right now and just about to point out that that's > 10 bytes
<mrvn>
Must be. pointer would be 8 bytes otherwise, leaving only 2 for a generation.
<mrvn>
I always want to write: struct { Bla *ptr : 40; int x : 24; };
<mrvn>
or Bla *ptr : 40 : 2; saying to discard the lower 2 bits too
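C won't accept a pointer-typed bitfield, so mrvn's wish has to be done by hand. A hedged sketch under his assumptions (4-byte-aligned pointers, addresses below 2^42; all names made up): drop the low 2 bits and pack the pointer into 40 bits next to a 24-bit field.

    #include <stdint.h>

    struct Bla;                                 /* stand-in for mrvn's Bla */

    struct packed_ref {
        uint64_t word;                          /* [63:24] = ptr >> 2, [23:0] = x */
    };

    static inline struct packed_ref pack(struct Bla *ptr, uint32_t x)
    {
        uint64_t p = ((uintptr_t)ptr >> 2) & ((1ull << 40) - 1);
        return (struct packed_ref){ (p << 24) | (x & 0xffffffu) };
    }

    static inline struct Bla *unpack_ptr(struct packed_ref r)
    {
        return (struct Bla *)(uintptr_t)((r.word >> 24) << 2);
    }

    static inline uint32_t unpack_x(struct packed_ref r)
    {
        return (uint32_t)(r.word & 0xffffffu);
    }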
<geist>
this is not the important detail, mrvn
<mrvn>
no, sorry for going off-topic. :)
<geist>
dont fixate on the minutiae. just accept that tricks were done to make it compact, and you can look at the source if you want
<geist>
the overall trick is the per cpu assignment
<mrvn>
prio to remove TLB shootdowns that looks like a nice trick.
<mrvn>
s/remove/remote/
<geist>
but now i see the issue with trivially extending to the AMD TLBSYNC solution, which is fundamentally the same thing as the ARM one
<geist>
now you have to have two completely different mechanisms to assign PCIDs to aspaces and/or cpus
<geist>
that's one thing to do at the arch level, since the code is not shared there
<mrvn>
geist: if every core would use a disjoint PCID range (assuming you have enough) you could still do remote shootdowns.
<geist>
and then if you do the PCID assignment per aspace thing, you're back to 12 bits not being enough
<geist>
perhaps, but now every core has to know about the assignment space of all the other cores
<geist>
and then that becomes a scaling issue
<geist>
trying to avoid that is basically mandatory for something like linux
<mrvn>
12 bits total, 3 bits for the 6 IDs of a core, 9 bits for cores. So up to 512 cores. More if you pack it better.
<geist>
plus as someone has pointed out, there are already 256 or 512 cpu systems, etc, and there are only 4096 entries in the PCID space. so at 512 cpus you only have 8 PCIDs per cpu
<geist>
and then KPTI requires that you assign 2 per process
<geist>
but again the problem there is in order to do a TLB shootdown across cpus you'd have to know what IDs are being assigned on every other cpu in the system
<geist>
and on a 512 cpu machine that's up to 512 TLBSYNCs per sync, etc. in this case you haven't won at all, it's much slower than a cross IPI to just the cpus that have it currently mapped
<mrvn>
which would turn into a uint16_t pcid[512]; and be rather large.
<geist>
yah
<mrvn>
more with a generation.
<geist>
and then you've effectively capped max cpus to 512 right now with no good solution past that
<mrvn>
With 512 cores you also have to consider at which point an IPI to all is better than individual cores.
<mrvn>
How does that actually work on x86? Do you pass a pointer to a bitmask?
<geist>
yah and note it's an IPI to cpus that have it mapped, not an IPI to all cores (unless it's a massively multithreaded process that's simultaneously running on all cores)
<geist>
which part? the sending of the IPI?
<mrvn>
yes
<geist>
they have some api that's like `mp_sync_task(cpu bitmask, ...)`
<mrvn>
Say I need to IPI 100 cores. Do I do 100 calls?
poisone has joined #osdev
<geist>
how that internally works on whatever version of whatever APIC is currently in existence i dont remember
<geist>
internally it might be a O(popcount(bitmask)) operation
<mrvn>
Ahh, you meant the APIC has an mp_sync_task call?
<geist>
it might, but worst case it doesn't
<geist>
like, say, GICv3: you're basically stuck individually firing for every bit in your bitmask
<geist>
since there's no good way to represent in hardware a bitmap of all the cores when the number of cores can be larger than a word
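A rough sketch of what that loop looks like (helper name and vector argument invented): walk the cpu bitmask and fire one IPI per set bit, so the cost is O(popcount) sends.

    #include <stdint.h>

    #define MAX_CPUS      512
    #define BITS_PER_WORD 64

    void send_ipi_to_cpu(unsigned int cpu, unsigned int vector);   /* hypothetical */

    static void send_ipi_mask(const uint64_t mask[MAX_CPUS / BITS_PER_WORD],
                              unsigned int vector)
    {
        for (unsigned int w = 0; w < MAX_CPUS / BITS_PER_WORD; w++) {
            uint64_t bits = mask[w];
            while (bits) {
                unsigned int bit = (unsigned int)__builtin_ctzll(bits);
                send_ipi_to_cpu(w * BITS_PER_WORD + bit, vector);
                bits &= bits - 1;               /* clear the lowest set bit */
            }
        }
    }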
<mrvn>
I looked at that a bit for the GIC. I was just wondering how x86 did it.
<geist>
i think it's probably the same, but honestly i dont remember what the current APICs do
<mrvn>
x86 likes to pass descriptors to opcodes pointing at structures larger than a word.
<geist>
i think xAPIC and x2APIC changed the mechanism, much like how GICv2 changed with GICv3 so it could address more cpus
<geist>
yeah but the IPI is not part of x86 instruction set per se, it's part of the local APIC, which is accessed via mmio (or MSRs)
<geist>
so its functionally similar to GIC in that it's a piece of hardware
<geist>
'external' peripheral
<geist>
but yeah a descriptor with a N bit mask would be lovely in this case
<geist>
i think the general observation is *generally* one does not run massively parallel processes like that on linux at least, or at least that's not necessarily the best scalable solution past a certain point
<geist>
but then i just made that up. i have no real idea
<geist>
i frequently see rustc processes using 3200% cpu on my machine or something
<mrvn>
geist: The only people I know that go for 64-512 core systems run computing clusters where they run a single process with MPI on 4000 cores or similar.
<geist>
mrvn: oooh what does it do, spin around?
<geist>
yah most of the 512 sized machines i know about run lots of VMs
<geist>
ie, the biggest intel box you can get is i think 486 or some number like that
<mrvn>
geist: yep. Under the cork the copper pipe must be bent clockwise (or anti-clockwise) to give a rotational effect
<geist>
though i think newer AMD systems might approach that now?
<mrvn>
Yeah, VM hosters would be the other group. I don't really deal with that side.
<geist>
448 i think now that i think about it: quad socket 112 thread intel machine
<geist>
but yeah generally you carve that machine up into a bunch of VMs or a bunch of containers each running smaller things
<mrvn>
The contraption is a variant of the old put-put boat. You fill the pipe with water. It evaporates and pushes out the water from the pipe, then it cools down and condenses pulling water back in and repeats. Pushing out is more directional so you get a rotational effect.
<geist>
cute!
<geist>
now i kinda want something like that
bauen1 has quit [Ping timeout: 248 seconds]
<geist>
tell me how it works
<mrvn>
I want to put some figurines on the outside of the cork so you get a shadow puppet show.
<geist>
i suddenly remember that i used to have one of those desktop rock waterfall things
<geist>
was kinda nice except over time it tended to splash everywhere
<mrvn>
geist: a rock sitting in a lake with a fishtank pump pumping it to the top of the rock?
<mrvn>
Do you know the japanese bamboo things? They have a bamboo flute on an axis that gets filled with water. At a certain level it tips over and empties and then swings back making a sound when it hits the endstop.
<geist>
oh it was just a little desktop waterfall with 2 or 3 tiers and a bunch of pebbles you place
<geist>
with some lights. was kinda pleasing. had it in college
<geist>
i think the pump finally died and i tossed it
<mrvn>
Every time I see "We value your privacy" I think "Yeah, sure you do. You are counting the dollars you get for selling it"
ioPan is now known as theWeaver
<mrvn>
geist: You can buy the thing from Grand Illusions but I want to modify it.
<dh`>
my grandparents had one of those things (the portable rock waterfall)
<dh`>
no idea what happened to it
<mrvn>
They are easy to build yourself too
<sham1>
Technically that would count as valuing your privacy
outfox has joined #osdev
theWeaver has quit [Quit: WeeChat 3.5]
ChaosWitch has joined #osdev
ChaosWitch is now known as Stella
Stella is now known as Stella[OotC
Stella[OotC is now known as Stella[OotC]
<geist>
yeah there are a bazillion of them on amazon i just checked
<geist>
(the rock waterfalls)
<heat>
you know something that died pretty quickly
<heat>
gnu gold
<heat>
i kinda want to help out
<heat>
having only one good linker is worse than two good linkers
bauen1 has joined #osdev
<kazinsal>
finally have power again
<kazinsal>
gonna have to go through my fridge and chuck a bunch of stuff I bet, that was almost 18 hours
<heat>
america freedom moment
bradd has quit [Ping timeout: 260 seconds]
bradd has joined #osdev
<kazinsal>
canada, and a major power transformer exploded due to a tree bashing it in half
<heat>
very cold nice people moment
<clever>
kazinsal: did it actually explode? 90% of the time it's just the fuse going off
<clever>
the fuse on those things does sound like a gun
<kazinsal>
yeah, it took part of the pole with it
<kazinsal>
they had to replace the pole as well, so, extended outage
<kazinsal>
the tree probably didn't help with that
<mrvn>
heat: what makes a linker good?
<clever>
ah, yeah, that sounds pretty big
<heat>
mrvn, fast and featureful
<heat>
ld.bfd is very clearly bad, ld.gold was shaping up well (and was the default ld on a bunch of distros) but then google moved to llvm
<heat>
and now ld.lld is far superior to anything else
<heat>
mold is fast but not featureful, also no linker script support, etc
<clever>
a lot of baremetal things i do rely on linker scripts to work at all
<geist>
kazinsal: oh wow, 18 hours
<mrvn>
My linker can link my hello-world and nothing else. Is it good or bad?
<heat>
it's bad
<mjg_>
negative heat
<heat>
but it's also your linker which is cool
<heat>
mjg_, negative heat = cold
<mjg_>
except then you may try to claim you are cool
<mrvn>
It kind of just dumps the compiler output into a binary.
<kazinsal>
geist: yeah, we had some really bad winds here
<mrvn>
mjg_: that's hot
<kazinsal>
honestly the worst wasn't even the outage itself. the worst was the fire alarm running low on battery at 6am and just blaring itself to death for an hour and a half
<mjg_>
:]
<mrvn>
kazinsal: the fire alarm needs the power grid? Not enough battery to run for a year like everyone else's?
gxt has quit [Remote host closed the connection]
<geist>
probably an older fire alarm that usea a 9v backup battery. all of mine do too
<geist>
i think newer stuff (2010+) is generally mandated to have lithium backup batteries nowadays
gxt has joined #osdev
<kazinsal>
it's an apartment building, so big centralized system with a landline for call-home
<kazinsal>
I think the other issue may have been that the battery at the CO finally gave out so the landline went down
<geist>
yah that's generally what happens here with internet and whatnot. even if i run the cable modem off a generator or whatnot, my experience is the cable network usually goes down in an hour or two
<geist>
probably some remote concentrator box that has a battery backup
<geist>
also ugh: since the macos 13 Ventura update both of my macs have a much less stable bluetooth connection to my mouse
<geist>
seems it drops and comes back now like once an hour or so
<geist>
wonder if they had a bluetooth stack rewrite or something
<kazinsal>
yeah, it was different when every CO had a shitload of generator power because they all had rows and rows of 5ESSes in them