<heat>
i've just intensely eye-sweated for a 41 year old man
divine has joined #osdev
divine has quit [Read error: Connection reset by peer]
<gog>
what
<mats1>
should i dial child protective services
Matt|home has quit [Remote host closed the connection]
Iris_Persephone has joined #osdev
<Iris_Persephone>
Hia-
<Iris_Persephone>
what did I walk into
<gog>
the place where programming dreams go to suffer
<gog>
sometimes die
<Iris_Persephone>
So I am in a little bit of a silly scenario
<gog>
what's wrong
<Iris_Persephone>
Nothing strictly _wrong_, just suboptimal
<gog>
ok
<Iris_Persephone>
I have, like, five machines right
<gog>
ok
<Iris_Persephone>
All the ones I use for personal use/development (~2-3) are Windows, but since I'm deciding to get into OSdev, I need to move to something *nix since I don't wanna use Cygwin
<heat>
you can use wsl2
<gog>
yeh that's probably the easiest way to get a linux environment without installing linux
<gog>
well i mean
<gog>
you *are* installing linux
<gog>
just running it in a different way
<heat>
or any vm really
<heat>
the key is that graphics on a VM suck
<heat>
so use ssh
<gog>
virtualbox desktop integration works OK
<gog>
most of the time
<Iris_Persephone>
So right now I am doing things on a circa decade old laptop (one of my test machines) using Linux Mint which is _so old_ that it can't do a parallel make without shutting down due to thermals, which obviously isn't ideal :p
<graphitemaster>
Any make experts around
<gog>
i know a thing or two
<graphitemaster>
I'm running into another bizarre issue :/
<graphitemaster>
I like to think I know make well but every so often something weird happens
<Mutabah>
graphitemaster: Come on, you know how to ask better questions than that :)
<graphitemaster>
LOL
<graphitemaster>
So I have a recipe foo: a b, and the end of this recipe does touch $@ to make a file named foo to indicate that this recipe has run and not to rebuild it, sort of like .PHONY but I like to have that file so I can remove it through other means to mark rebuilds
<graphitemaster>
The only problem is this is not working ..., it's rerunning foo, it works fine when foo has no dep list there after the :
<Mutabah>
Is it deleting `foo` between runs?
<graphitemaster>
Nope
<Mutabah>
You can pass (iirc) `-b` to make to get a verbose debug of how it processed the rules
<heat>
-d
<Mutabah>
yeah, `-d`
<graphitemaster>
Humm
<graphitemaster>
Prerequisite 'src/mbedtls-3.2.1' is older than target 'mbedtls'.
<Mutabah>
(Not sure where I got `-b` from, it's ignored)
<heat>
gog, it works fine but it's always slow in my experience
<graphitemaster>
That's weird because I touch the directory to update the timestamp
<heat>
are you depending on a directory?
<graphitemaster>
Does touching the directory not work
<heat>
you can't do that
<graphitemaster>
:(
<heat>
make is funky when depending on directories
<graphitemaster>
It works fine in other cases for me
<graphitemaster>
I wonder if it's because touch $@ is not working when the recipe name has a / in it
<graphitemaster>
Yeah it does not seem to be updating the modified time
<graphitemaster>
Okay that's a start
<graphitemaster>
Yeah the modified time is only changed when a file or subdirectory is added
<graphitemaster>
So I just had the wrong idea of how touch worked
<graphitemaster>
I'll just throw a random file inside the directory
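A minimal sketch of the stamp-file workaround being settled on here; the build step is a placeholder for whatever the recipe actually does. The point is that a directory's mtime only moves when entries are added or removed, so the marker should be an ordinary file:

```make
# hedged sketch; "do-the-real-work" stands in for the actual recipe body
foo.stamp: a b
	do-the-real-work
	touch $@        # plain file, so touch reliably bumps its mtime

# delete foo.stamp by whatever other means you like to force the recipe to rerun
```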
<Iris_Persephone>
I can't decide whether moving to Linux on my main machine would be worth the hassle, or whether it would make more sense to try and just make things work out on the old laptop
<gog>
my main os is linux and it's not a hassle really
<gog>
depends on your purposes ofc
<graphitemaster>
Unix: Everything is a file, even directories are a file. Me: Tries to use a directory as a file. Unix: No, not like that :surprised-pikachu-face:
<graphitemaster>
Nah this is still not working, sigh
lanodan has quit [Ping timeout: 268 seconds]
Iris_Persephone has quit [Ping timeout: 246 seconds]
gog has quit [Ping timeout: 264 seconds]
Iris_Persephone has joined #osdev
Iris_Persephone has quit [Ping timeout: 244 seconds]
xenos1984 has quit [Read error: Connection reset by peer]
<GreaseMonkey>
"even directories are a file" - one difference between Linux and FreeBSD is that in the latter you can actually cat a directory and it will produce its contents as per readdir... which is NOT what you want
<GreaseMonkey>
anyhow... anyone with any experience with fasm? doing some 16-bit x86 stuff and a bunch of segmentation (which should partially explain why i gave up on nasm for that) and i want to be able to automatically create a singly-linked list via the preprocessor
srjek has quit [Ping timeout: 246 seconds]
[itchyjunk] has quit [Remote host closed the connection]
saltd has quit [Remote host closed the connection]
saltd has joined #osdev
<kof123>
never used fasm, but i have no problems [*] with nasm. there are ways to be explicit etc. [*] except for some macro calling other macro trickery, but may be user error, not a problem atm
<GreaseMonkey>
the problem i have with nasm is that for what i want to do, i *need* segments, and i *need* separate code and data segments
<kof123>
ah, thought you just meant addressing. that stuff i dont muck with (yet)
Iris_Persephone has quit [Remote host closed the connection]
Iris_Persephone has joined #osdev
frkzoid has joined #osdev
Iris_Persephone has quit [Remote host closed the connection]
<GreaseMonkey>
but can i set my offset, and can i get the math to actually behave without complaining about scalars?
<GreaseMonkey>
erm, without complaining about subtracting by scalars
<\Test_User>
ahh that, hmm
<\Test_User>
yeah right, definitely problematic
Iris_Persephone has quit [Remote host closed the connection]
Iris_Persephone has joined #osdev
elastic_dog has quit [Ping timeout: 244 seconds]
elastic_dog has joined #osdev
Iris_Persephone has quit [Ping timeout: 244 seconds]
nexalam__ has joined #osdev
nexalam_ has quit [Ping timeout: 260 seconds]
freakazoid332 has joined #osdev
frkzoid has quit [Ping timeout: 246 seconds]
freakazoid332 has quit [Ping timeout: 244 seconds]
Ram-Z has quit [Remote host closed the connection]
Ram-Z has joined #osdev
sympt has quit [Read error: Connection reset by peer]
<GreaseMonkey>
...ok, managed to get a cyclic list autofilled
<GreaseMonkey>
well... it's a doubly-linked list, but i got one direction autofilled with the assembler, the other direction is currently being filled in at runtime
<mrvn>
Just write it as constexpr in c++.
<GreaseMonkey>
i am not writing a C++ compiler for this OS
<GreaseMonkey>
also if i were to use an external tool for that it'd probably be written in Python
<klange>
but are you going to write a python for this OS?
<GreaseMonkey>
no, it just happens to be one of my go-to languages for scripting stuff
<klange>
sounds like a good reason to write a python, then
ThinkT510 has quit [Quit: WeeChat 3.6]
ThinkT510 has joined #osdev
<mjg>
GreaseMonkey: that feature got disabled (read on dirs)
<GreaseMonkey>
huh, i guess there genuinely wasn't any good reason to leave it enabled
<klange>
If you have only one filesystem, read/write on dirs may make sense as the format is controlled - and in most filesystems made for Unix(-likes), directory contents really are stored just like any other file.
<klange>
(And in fact, if I recall correctly, some systems did in fact have the ABI for dirents, returned by readdir, match up with the actual on-disk content, so readdir was just read with some extra offset adjustment based on the data read?)
<klange>
(but don't quote me on that)
GeDaMo has joined #osdev
<mjg>
GreaseMonkey: it was a security problem to an extent
<klange>
Howso?
<mjg>
GreaseMonkey: you were given de facto binary on disk format and could find trash in there you would not be able to see otherwise
<mjg>
as an unpriv user
<mjg>
trivial example: say dir starts with mode 700, you add and remove bunch of shit
<mjg>
chmod 755
<mjg>
now reading it will reveal some of the names which were there
<mjg>
and which just happen to linger
<klange>
Ah, you mean filesystem drivers mistakenly not cleaning up orphaned data and just skipping over it.
<mjg>
why mistakenly
<mjg>
you freed up an entry, no need to overwrite it
<mjg>
but then if someone read(2)s the dir they see it
<klange>
Call it circular reasoning, but because of what you just said ;)
<mjg>
... even though they should not be able to using regular tools
<klange>
Historically, readdir was implemented in userspace.
<mjg>
don't get me started
<mjg>
unix is full of wtfs
<mjg>
did you know old unix systems had swap config *compiled* into the kernel?
<mjg>
there was a case of learning the hard way where moving the kernel to a new machine would mysteriously corrupt storage
<klange>
Supposedly Sun is responsible for the migration of readdir to a system call, because NFS was sort of the first "different" filesystem.
<mjg>
.. it just happened to have a different partition layout and hardcoded swap would land somewhere in the fs
<mjg>
look man, given how most tooling was getting kernel data from /dev/kmem
<mjg>
readdir does not look that bad, does it
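A hedged sketch of what that userspace readdir looked like on V7-era systems, where a directory really was just a file of fixed-size records and deleted entries lingered with a zero inode number (the exact struct layout varied by system, and modern Linux rejects read(2) on a directory, so this is for flavour only):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

struct old_direct {            /* roughly the V7 on-disk directory entry */
    unsigned short d_ino;      /* 0 marks a deleted (but still present) slot */
    char           d_name[14];
};

int main(void)
{
    struct old_direct d;
    int fd = open(".", O_RDONLY);        /* read(2) straight off the directory */

    while (read(fd, &d, sizeof d) == (ssize_t)sizeof d)
        if (d.d_ino != 0)                /* skip the lingering deleted names */
            printf("%.14s\n", d.d_name);

    close(fd);
    return 0;
}
```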
d5k has joined #osdev
d5k has quit [Client Quit]
gildasio has quit [Remote host closed the connection]
gildasio has joined #osdev
scoobydoo has quit [Ping timeout: 246 seconds]
scoobydoo has joined #osdev
gildasio has quit [Remote host closed the connection]
gildasio has joined #osdev
scoobydoo has quit [Ping timeout: 260 seconds]
scoobydoo has joined #osdev
zaquest has quit [Read error: Connection reset by peer]
gog has joined #osdev
scoobydoo has quit [Ping timeout: 246 seconds]
scoobydoo has joined #osdev
zaquest has joined #osdev
srjek has joined #osdev
scoobydoo_ has joined #osdev
scoobydoo has quit [Ping timeout: 260 seconds]
scoobydoo_ is now known as scoobydoo
srjek has quit [Ping timeout: 244 seconds]
srjek has joined #osdev
srjek has quit [Ping timeout: 244 seconds]
<jjuran>
In OpenBSD, /dev/kmem is still the suggested approach to display graphics on the console.
<mjg>
did they clean up netstat et al to not use it?
gildasio has quit [Ping timeout: 258 seconds]
gildasio has joined #osdev
opal has quit [Remote host closed the connection]
[itchyjunk] has joined #osdev
opal has joined #osdev
gxt has quit [Ping timeout: 258 seconds]
gxt has joined #osdev
saltd has quit [Remote host closed the connection]
vdamewood has quit [Read error: Connection reset by peer]
vdamewood has joined #osdev
SpikeHeron has quit [Quit: WeeChat 3.6]
elastic_dog has quit [Ping timeout: 260 seconds]
elastic_dog has joined #osdev
saltd has joined #osdev
heat has joined #osdev
<heat>
damn
<heat>
we were shitting on old unix and i wasn't here :/
<heat>
and openbsd too
StoaPhil has joined #osdev
<zid>
awww poor heat
<zid>
oh whoops
<zid>
did emerge @world and it failed at rust because I forgot to mount /var/tmp/portage to a temporary bigger drive
saltd has quit [Quit: joins libera]
saltd has joined #osdev
frkzoid has joined #osdev
StoaPhil has quit [Quit: WeeChat 3.6]
<gog>
zid is a masochist confirmed, uses gentoo
<sham1>
What other platform would be so good for OSDev
<sham1>
After all, you're already compiling things, might as well compile your own OS
<gog>
literally anything else is better than using gentoo from a UX perspective specifically :P
<gog>
but i don't distro shame
<gog>
if it works glhf :D
<gog>
same reason i never update
<gog>
updating is for chumps
<zid>
gentoo's nice cus it means I have headers and stuff for everything
<zid>
without having to fuck around installing trees of dev packages etc
<zid>
and I can trivially install packages with weird ./configure options
<gog>
yes
<gog>
very customizable was one of the reasons i used it for a long time
<zid>
idk what the fuck you're supposed to do on say, ubuntu, if the stock package has the wrong ./configure
<sham1>
I mean, I think you can get the same thing with NixOS or even GuixSD
<gog>
that is a benefit esp when managing toolchains
<gog>
like if you need some weird specific arm toolchain
<gog>
but the shipped ones don't have it
<geist>
i keep both around (ubuntu based things and gentoo) but i tend to just use ubuntu
<zid>
rossdev <3
<zid>
crossdev too
<geist>
but i have a few key tools i build myself. like cross gcc and qemu
<zid>
crossdev -s0 -t mips-elf-none
<geist>
since the distros are always out of date
<zid>
bam, gcc 12 for playstation and n64
<sham1>
Doesn't crossdev have some weird Linuxian patches?
StoaPhil has joined #osdev
<zid>
I'm running it on linux, not making canadians
<sham1>
I see
<gog>
heh linux can run on n64
<zid>
good luck running gcc on it though
<gog>
yeh
<gog>
it's not built for that for sure
<zid>
You can't even really run gcc on x86 anymore
<geist>
maybe even Dreamcast. i do remember folks had hacked netbsd on it pretty easily
<zid>
it doesn't have enough address space for the amount of ram it'd want to use
<geist>
biggest downside being it only had 16MB ram
<sham1>
n64? One can run it probably on Amiga
<gog>
i heard about dreamcast linux before i think
<zid>
dreamcast runs windows
<gog>
dreamcast is still big with modders
<geist>
it's a pretty generic SH box
<zid>
out of the box
<sham1>
Well, at least could. Not sure if Linux supports the Amiga anymore or if that's nowadays just NetBSD
<gog>
is dreamcast x86?
<zid>
no
<geist>
superH
<gog>
ohh was it wince?
<zid>
but old versions of windows had actual ports
<geist>
SH-4 specifically
<gog>
or a full on NT?
<zid>
CE
<geist>
right, it was wince
<gog>
ah ok
<sham1>
I don't think any consoles were really x86 until the XBox
<zid>
lol 'wince'
<zid>
yea xbox, then it skipped a gen, then every console
<zid>
(ps3/360 were cell/ppc)
<gog>
did NT get ported to other platforms before arm?
<gog>
i know alpha
<geist>
that was back when SH was still a player in the handheld space too. a lot of the early pocketpc handhelds were SH. was SH vs strongarm for a while
<geist>
until arm slowly won
<GeDaMo>
I think NT was on everything in the 90s
<geist>
gog: yes. alpha, mips, PPC
<zid>
a lot of arcade gear is SH too
<zid>
in terms of making game consoles
<zid>
so a lot of /software/ already expected to be running on a SH
<geist>
SH is an alright little machine. SH-4 was the first non x86 i ever ported my first OS to (newos)
<gog>
that makes sense since it was sega
<zid>
now as you can imagine, arcade gear is x86 again
<geist>
it's about as opposite from x86 as you can get. 32bit, i think big endian, 16 bit compressed instructions, risc
<geist>
had one of those TLB-miss-exception based MMUs
<sham1>
Big endian? Eugh
<geist>
i think, may be little, or runnable in little. lots of risc machines it's just a control register you whack and then it's in the other endian
<gog>
bi endian
<gog>
nice
<zid>
bi endian or trans endian?
<geist>
yah, as ARM has been etc
<geist>
running arm in bigendian just generally doesn't come up here
<geist>
but a few folks do that, mostly porting some old networking software from powerpc
<clever>
the pistorm guys are running an amiga/m68k JIT under big-endian aarch64
<geist>
yah, that'd probably be a good use of it
<clever>
i dont know of any other projects that use BE
<zid>
can I tie it into my task switching
<zid>
so I can have big endian processes
<clever>
but i want to get BE linux running on the rpi, to test byte-swap stuff
<geist>
in general my experience is most things are big endian *except* x86 and then all of the modern arches nowadays that intend to be compatible with x86 (ARM, riscv)
<geist>
so LE won by sheer attrition
<clever>
for example, zfs is also bi-endian, a lot of on-disk structures are written in NATIVE byte order, and if the magic# is backwards, you byteswap on load
<clever>
but to test that code properly, i need a BE machine to write those structures
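A hedged sketch of the native-order-plus-byteswap trick being described; the struct and field names here are made up for illustration, not the real zfs definitions:

```c
#include <stdint.h>

#define ONDISK_MAGIC 0x00bab10cULL        /* illustrative magic value */

struct ondisk_header {
    uint64_t magic;                       /* written in the writer's native order */
    uint64_t nblocks;
};

/* Returns 0 on success; byteswaps in place if the writer had the other
 * endianness, detected because the magic reads back reversed. */
static int fixup_header(struct ondisk_header *h)
{
    if (h->magic == ONDISK_MAGIC)
        return 0;
    if (h->magic == __builtin_bswap64(ONDISK_MAGIC)) {
        h->magic   = ONDISK_MAGIC;
        h->nblocks = __builtin_bswap64(h->nblocks);
        return 0;
    }
    return -1;                            /* not a header we recognise */
}
```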
<zid>
I've sent gog into an existential crisis
<clever>
or just say no to BE support :P
<geist>
yah BFS had that rule too, since BeOS ran on PPC and x86. whatever the machine was at the time that formatted it got to pick the endian
<gog>
zid: nah i got over that :P
<zid>
welcome back
<clever>
but zfs isnt setting the entire disk to le or be, zfs is dynamically creating every object in the current native order
<clever>
and at read time, it may have to deal with a mix of orders
<geist>
anyway SH is fun. one day i'll toy with porting LK to it, but i dont have a physical machine to run on except a dreamcast and meh.
<clever>
depending on whatever cpu last wrote that specific object
<clever>
i also want to try running aarch64 BE LK, to test all of the byte-swaps in something like ext4
<geist>
makes sense. i wonder if it started off big endian and then they added the LE path when sun started working with x86 more
<clever>
are there any LE's i missed, and its only working because native==LE?
<clever>
zfs does show signs of starting off BE, a large chunk of the structures are BE only
<clever>
and only the performance sensitive ones have been moved to bi-endian
<heat>
bonk
<clever>
xfs is also weird, the journal is native byte order, and linux lacks the ability to replay a journal from the wrong byte order
<heat>
>since the distros are always out of date
<clever>
so if your only BE machine dies, your LE system cant replay the journal :P
* heat
laughs in arch lunix
<geist>
that's a general decision when you build a FS: is it ever intended to be run on removable media?
<geist>
if not, you can generally assume it'll only be used on the machine that formatted it
<geist>
or at least it is *a* decision you could make, may not be a good one
<clever>
in that xfs case, i was using a sparc machine as my nas
<clever>
and then it died, so i went back to x86
<heat>
the only sane way to do things is to use little endian and convert to it if you need to
<geist>
XFS is designed as a high end server thing for IRIX, i doubt at the time they had assumed it'd be on removable media or on anything but a SGI box
<geist>
for disks you'd have used EFS i think (which i think was just a variant of FFS)
<clever>
one kind of weird design feature of ext4, is that a lot of the records are self-checksummed
<geist>
oh haha just read that HP ended up buying irix in the long run (via a few sales post SGI)
<clever>
often just shoving a 4 byte checksum into a hole caused by alignment issues
<geist>
thus reinforcing my statement that HP is where all OSes go to die
<clever>
but having seen zfs internals, that just feels wrong on so many levels
<heat>
it's not weird though
<clever>
if bugs lead to you reading the wrong block, the checksum within that block is still valid!
<heat>
it makes total sense
<clever>
heat: zfs instead has the checksum next to the block# pointer
<heat>
but there are no block checksums
<geist>
yah seems reasonable to me. would be nice if data structures had a generic structure, but at least having a checksum is a win
<clever>
so if you read the wrong block, or the block got overwritten, the checksum is no longer valid
<geist>
which seems like the point?
<geist>
oh oh i see. yes
<clever>
yeah
<heat>
>if you read the wrong block <-- ext4 isn't protecting against buggy code
<clever>
heat: or if your hdd fails and writes the block to the wrong location
<clever>
mechanical drives can do that, when the head motor loses power mid-write
<heat>
again, there are no block checksums in ext4
<geist>
NT has a cheaper but 80s style version of it: FILE records that span more than one sector have in the header of the FILE record a list of bytes that are replaced in subsequent sectors of the same data structure
<clever>
yeah
<geist>
and in those sectors a known byte (i think the last) is replaced with i think a generation counter, which is stored in the first block
<geist>
so as you read in the sector you check that the gen count matches, and then stuff the byte from the header in its place
<clever>
heat: random control structures, like an inode, or a block of a dir listing, have a checksum within that record
<geist>
kinda cheesy, but functional
<heat>
ext4 has extent tree checksums, superblock checksums, block group descriptor checksums, directory checksums
<heat>
inode checksums
<clever>
yep
<geist>
its to protect against a half written, multi sector data structure
<geist>
without an expensive checksum
<clever>
yeah
<clever>
while zfs is better, ensuring the data you read, is what the record was pointing to
<clever>
protecting against any type of corruption
<heat>
ext4 isn't really competing with zfs
<heat>
ext4 is "take classic UNIX filesystem design and make it go fast"
<geist>
yah gotta remember zfs was a really Big Iron thing when it came out
<geist>
it was expensive to run on anything but big high end hardware for like 10 years
<maksy>
I can't get my bootloader to jump into the loaded kernel. Qemu just stalls.
<geist>
eventually consumer hardware caught up
<heat>
zfs, btrfs, xfs, etc are all still a good bit away from ext4 in terms of performance
<geist>
maksy: ooh, okay. how are you doing it?
<clever>
zfs does also support multiple different checksum algos, and you can pick whatever you have hw accel for
<clever>
so it doesnt cost too much in terms of cpu
<geist>
yah a real osdev question lets work on this one
<zid>
ah 'qemu stalls'
<heat>
why are you "relocating the mbr"?
<clever>
i also got some stuff to watch so sure!, bbl
<maksy>
I already asked about the problem, and it turned out that I hadn't set up gdt properly. It should be working now, but the kernel still won't get running
<zid>
a stall is easy to find with qemu, if that's your symptom
<zid>
just type info registers
<geist>
right, so the first thing i'd do is use the qemu console to inspect where the cpu is
<geist>
`info registers` is a good one
<geist>
should show you if the cpu is stuck or in a loop, or just completely off in the weeds
<zid>
Yea it may just be busy executing 4GB of 00 00 00.. somewhere
<zid>
or it may be in a loop you thought should have exited
<heat>
what's wrong with your monitor what
<geist>
perfect, i was also going to suggest -no-shutdown -no-reboot but you already have that
<heat>
sending your shit through a unix socket is something I didn't know was possible
<geist>
can you get to the qemu monitor with that?
<geist>
heat: oh yeah you can run it to a socket, etc. it's very customizable
dude12312414 has quit [Remote host closed the connection]
<geist>
can do the same thing with serial ports too
<zid>
yea the monitor stuff is really nicely well featured, bit of a pain to figure out though
<zid>
"idk, do what most people do here?" isn't really one of the switches :P
<heat>
-monitor stdio
<heat>
ez
<zid>
yea that's what I use
<geist>
sure but if they can get to the monitor console, it doesn't matter, which is why i was asking if they can
<heat>
the default behavior (on the GUI) is horrible
<geist>
heat: it's worse than that, also depends on the host OS, how it was compiled, etc. sometimes you get a menu, sometimes you dont, sometimes you get scrollback, etc
<geist>
since that gets into host compilation of qemu features
<zid>
but I know it's different if you compile it for X vs gtk+ vs SDL or whatever
<zid>
whether the window is usable or not
<geist>
right, i have a whole list of libs to install for it to detect it
<geist>
also `libvte` is important if you want nice console behavior within the window
<geist>
gtk+ + libvte
<geist>
pure SDL window just gets you the output of the main display and nothing else
<zid>
I assume gdb would be very unhappy with real-mode code?
<zid>
you could bisect down to where it goes nuts by throwing some jmp . in and seeing which ones it gets stuck on and which ones never run I guess..
<gog>
i've done it before and it kinda works
<zid>
or just use bochs, tbh
<zid>
this is the one use case where bochs is superior
<gog>
but disassembly gets weird
<zid>
it has a built in real mode debugger, and the magic breakpoints etc
<heat>
use qemu
<gog>
yeh
<heat>
x/8i $ip
<heat>
qemu can disassembly shizzle ell
<heat>
well*
<gog>
neat
<heat>
*heat
bauen1 has joined #osdev
<geist>
yah i generally recommend the old skool jmp . + 'see where the cpu sticks' using the monitor
<gog>
heat neat
<geist>
once you get into the swing of thigns you can find stuff pretty quickly
<geist>
and that strategy works basically on whatever you're running on (including real hardware)
<heat>
noooooooooo not jmp .
<heat>
be power efficient: hlt!
<zid>
what does x86 do with hlt + cli, deadlock?
<zid>
gameboy does weird things
<heat>
no
<heat>
you can get out of a cli + hlt state with NMIs or SMIs
<zid>
SMIs are total hackers
<heat>
the true safe halt() is "cli; 1: hlt; jmp 1b"
<maksy>
when I change KERNEL_OFFSET on the line "call KERNEL_OFFSET" I get an exception, so I guess that line gets executed
<geist>
yah some emulators exit when they see that (i think qemu can?) but that's because they know there's nothing that can generate NMI or SMI
<heat>
I don't know if qemu can, but qemu can definitely generate NMIs and SMIs
<geist>
maksy: did you write all of this code?
<geist>
heat: sure, but for a given configuration it can know if it can or can't, so it can be configured to bail
<maksy>
maybe I'll install bochs next
<geist>
maksy: you need to get a bit more directed at what you're debugging
<geist>
fiddling with variables doesn't really *tell* you anything
<geist>
you need to really get in there and debug why it fails
<heat>
💯
<maksy>
geist: well I've read all the tutorials I've found online and checked other projects so not sure if it's all mine :p
<heat>
can u just multiboot
<heat>
thank
<geist>
anyway we can help you help yourself, but can't debug it for you
<geist>
what we're telling you is tools to get in there and debug it, and you have fantastic tools available already, just have to learn how to use them
<geist>
bochs isn't really going to tell you anything more at the moment, because you haven't really exhausted what you have in qemu right now
<heat>
and bochs isn't a magic bullet, and bochs won't help you figure out what's going wrong if you don't even fully understand what you wrote
<maksy>
so I kinda have written it and I think I understand what it does
<geist>
excellent, so that means you're in a good place to use qemu to see where it gets stuck, etc
<zid>
Work forwards through it verifying it's in the state you think it should be, in chunks
<geist>
right
<zid>
until you narrow down when that it isn't
<geist>
💯
<zid>
So if this were me I might go "The code loads and the first instruction gets run" by shoving a jmp . after the first instruction
<zid>
then if that checks out
<zid>
"The thing does what it's supposed to" and shove one at the end too (before jump to kernel or whatever)
<zid>
if that fails, I know the fenceposts I am working with for where the problem lies
<zid>
repeat
<heat>
💯
<clever>
thats similar to how i had debugged the mmu setup in LK a while back
<geist>
also you can inspect the state of the registers at the point too, make sure they are set up the way you think, etc
<clever>
i wrote a tiny asm macro, that could print a single byte to the yart
<clever>
uart*
<geist>
this sort of trick tends to only work that well for early boot failures like this, but it works very well
<clever>
and then just spammed it and bisected
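A hedged sketch of that "print one marker byte and bisect" helper, written here for a PC-style 16550 UART at COM1 (the original was an asm macro for different hardware):

```c
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t v)
{
    __asm__ __volatile__("outb %0, %1" : : "a"(v), "Nd"(port));
}

static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ __volatile__("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

/* Drop a one-character breadcrumb on the serial port, then keep going.
 * Sprinkle calls through early boot and bisect on which letters appear. */
static void debug_mark(char c)
{
    while ((inb(0x3f8 + 5) & 0x20) == 0)   /* wait for THR-empty bit in the LSR */
        ;
    outb(0x3f8, (uint8_t)c);
}
```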
<geist>
and it involves manually walking over the code, which is very useful for really truly understanding it
<geist>
even if you're very experienced, etc. you forget and/or make mistakes. sometimes having to really deeply debug something is the best way to really understand it
<bslsk05>
nurh.org: Debugging A Kernel In A Virtual Machine – Nur Hussein
<vin>
I need some help understanding a peculiar performance behavior. On a two socket machine, I created a tmpfs on node 2's memory (tmpfs mpol bind) and a bunch of threads spawned on node 1 read to a local buffer sequentially. I make sure these buffers are created on node 0 using numactl -l. I expect this to have higher latency/lower bandwidth compared to binding these threads to node 2. However, I see no
<vin>
difference. Any thoughts?
<clever>
vin: prefetch in the caches?
maksy has quit [Quit: WeeChat 3.4]
<vin>
clever: Before each run I do a "sync; echo 3 > /proc/sys/vm/drop_caches;"
<geist>
if it's a tmpfs there isn't any cache to drop, i'd expect
<clever>
yeah
<heat>
you don't need sync, and yes ^^
<geist>
since tmpfs is simply a bunch of in memory files hanging off dirents
<clever>
and its prefetch in the L1/L2 cache, not the linux caches
<heat>
drop_caches syncs for you
<vin>
How I create the tmpfs ` mount -t tmpfs -o size=60g,mpol=bind:1 tmpfs /mnt/cxl`
<geist>
but anyway if your test isn't reeally right up at the end of the bandwidth of the system you might not see anything
<geist>
i'd expect the inter-socket bandwidth to be at least greater than the memory bus
<geist>
so i'd assume you can easily max out whatever dimm channels are on the other socket
<vin>
How I run my program `numactl --cpunodebind=1 -l ./tmmap` I change cpunodebind between 0 and 1 and there is no performance difference
<geist>
so maybe a latency thing, but if the cpu has enough prefetch it can soak all that up
<geist>
basically i'm not sure your test is sufficient to detect any thing. depends precisely on what its doing
<geist>
reading a buffer sequentially gives the cpu an optimal situation to prefetch and soak up any latency it may see
<vin>
Good point. Let me try random. Where each thread randomly memcpy's a block from the file to a local buffer.
<geist>
if you had a test case that was randomly bopping around such that it's testing pure latency you might see something
<heat>
how are you mmaping?
<geist>
and even memcpy it might be a long enough run of data that the prefetching is enough to soak it up
<heat>
make sure you're not page faulting your ass off
<geist>
ie a bunch of random memcopies of 64k may still mostly hide the latency
<vin>
heat: I am doing PROT_READ | PROT_WRITE | PROT_SHARED
<heat>
hm?
<heat>
you mean PROT_READ | PROT_WRITE, MAP_SHARED?
<vin>
yea my bad
<heat>
add MAP_POPULATE
<heat>
that will stop you from slowly page faulting everything
<vin>
geist: Maybe. When threads bound to same node, I get 17.1 GB/s and when they are not I get 16.8 GB/s
<vin>
heat: no MAP_POPULATE
<vin>
I don't want to prefault
<heat>
why?
<geist>
vin: yeah that tracks. i'm assuming the inter cpu bandwidth is at least greater than that, so the bandwidth isn't capped based on local vs remote node
<heat>
tiny little hiccups along the way will just screw your measurements up
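A hedged sketch of the mapping heat is suggesting, so the faults all happen up front instead of inside the timed loop (MAP_POPULATE is Linux-specific):

```c
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

void *map_for_benchmark(int fd, size_t len)
{
    /* pre-fault the whole range at mmap() time so the read loop measures
     * memory bandwidth rather than page-fault handling */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_POPULATE, fd, 0);
    return p == MAP_FAILED ? NULL : p;
}
```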
<mrvn>
Maybe you should use another NUMA node to populate the memory and then access a bunch of unrelated memory to flush the caches.
<mrvn>
Also consider that memory access itself is different for random and sequential.
<mrvn>
(caches == cpu caches)
<vin>
Interesting, I see a 6 GB/s bandwidth difference this time. Threads accessing local tmpfs is 6 GB/s faster than far tmpfs. I have measured the latencies for each run, which I am plotting now. I can share them if you are interested.
<mrvn>
Maybe measure the time to fetch a single cache line with memory barriers and cpu cycle counters?
<mrvn>
How do you control where the tmpfs will put its memory?
<vin>
mpol:bind mrvn
<heat>
mjg, any good resources for freebsd's vm system? particularly the dirtying and writeback of inodes, pages and bufs
<bslsk05>
github.com: lcd.c: Don't draw sprites past the edge of the screen · zid/gameboy@39640ba · GitHub
<zid>
boring and functional
<mjg>
i don't think anybody disputes usefulness of the one liner upfront
<geist>
i've found if you're good about it, the tags can actually make the line *shorter* because the sentence already has the context established
<mjg>
the question is how much more text (if any) is needed
<geist>
oh totally
<zid>
yea I like the tag system
<mrvn>
Do you put the filename in the commit message? That's already in the diff and GUIs show affected files. Why waste space for that in the message?
<zid>
I always do filename.c: blah or subsystem: blah
<geist>
zid: yah your example is just about right for simple stuff
<mrvn>
zid: I prefer the latter
<mjg>
zid: that's roughly what i'm doing
<geist>
if i refactor something i tend to try to leave a bit more context, since i'm thinking that someone down the line that now has a broken build can see what to change to, etc
<dh`>
one of the problems with git is that by default the history doesn't give you the file or subtree that the change applies to, so that context for the commit message is lost
<mrvn>
dh`: huh? every commit has parents
<dh`>
this is something people who grew up on cvs don't adjust to well
<dh`>
git log does not show which files a changeset affects
<dh`>
without some non-default options nobody can remember
<dh`>
(or maybe at all, wouldn't put that past git's ui)
<geist>
oh sure, but it's *there*
<mrvn>
dh`: git log is also totaly useless with merges
<geist>
also why i tend to use stuff like tig or gitk
<mrvn>
I like "qgit"
<dh`>
mrvn: yes I've discovered that
<geist>
i like to see at least some sort of lines as to what is related to what
<dh`>
that really irritates me.
<mrvn>
I kind of want "git log -p | diffstat"
<geist>
that being said if it's a global config for log, set it and be done with it
<mrvn>
Show what files changed how much but not all the diff.
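For reference, the non-default options being alluded to; mrvn's "git log -p | diffstat" already exists as a flag:

```
git log --stat          # per-commit diffstat: which files changed and how much
git log --name-status   # just the touched paths, with A/M/D markers
git config --global alias.lg 'log --stat'   # make it the one you actually type
```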
<geist>
and since the git config format is so easy to work with, copy it between machines
<geist>
*heart* utilities that have simple config files you can just edit with a text editor
<mrvn>
geist: myapp.xml?
<geist>
NEIN!
<mrvn>
*duck*
<dh`>
this is the kind of thing I tend to write in commit messages:
<mrvn>
click on "1 files" and it adds a box at the bottom wiht a rotating "processing" thingy and then the box goes away.
<mrvn>
Looks to me like the web interface is broken or confused by that commit
<geist>
hmm, okay the bsds are off the hook. maybe it was some other project i was complaining about that has bad messages
<dh`>
we don't run freshbsd
<dh`>
(I'm not sure who does)
<geist>
gcc maybe. lemme see
<dh`>
but the processing chain from cvs is fragile
<dh`>
fsf projects are notorious for useless changelog messages
<dh`>
like, their standards actively make it impossible to write good ones
<geist>
ah no gcc seems okay
<dh`>
anyway one of these days we'll finally ditch cvs and some of this will improve
<mrvn>
dh`: you still sue cvs?
<mrvn>
use
<dh`>
migrating has been immensely difficult
<mrvn>
dh`: migrating the content or your workflow?
<dh`>
yes
<mrvn>
there are tools for both of that for git.
<dh`>
yes, there are tools.
<dh`>
_now_ we have an automated conversion from cvs that works most of the time
<mrvn>
Although if you keep using cvs syntax then what was the point for switching to git?
<geist>
one of the weirder ones is the haiku project. they have a system that generates a new numerically incrementing tag on every CL
<dh`>
that took a good deal of work, both to make the tool not just tip over and to clean the repository to make it convertible
<geist>
so there are like 50k tags in git now as a result. really stresses out any viewers of it
<geist>
but it was because they migrated from svn and really got to like the incrementing version number
<mrvn>
geist: we do that at work in one project and it's a pain
<dh`>
hg has local version numbers, it's one of many ways it's superior to git
<geist>
mrvn: yeah this is precisely why git pack-refs exists i guess
<mrvn>
if two people build at the same time you git conflicting tags
<geist>
but still fouls up any ui that wants to show tags
<mrvn>
s/git/get/
<geist>
oh in this case the haiku stuff the tags are generated on the server, so at least they're always monotonically incrementing
<dh`>
the netbsd repository is both extremely large in breadth (that is, it's a whole OS) and depth (it goes back to 1993)
<dh`>
and when we started this just caused the available conversion tools to bomb out
<mjg>
geist: they can count commits, freebsd is doing it
<mjg>
geist: but that requires force pushes to be whacked
<mrvn>
dh`: I think one conversion tool just checks out every version in cvs and commits it in git. Imagine how many years that will take for netbsd.
<mjg>
git does not scale with tags
* zid
force pushes a typo fix from 1999 over mjg's master
<geist>
mjg: yep. though it scales better than you'd think, but it's mostly external uis that get upset
* mjg
observes zid get perm denied and a "try rm $(which git)" message
<CompanionCube>
aren't there multiple conversions of the netbsd repository though?
<dh`>
mrvn: just identifying versions is a problem
<mjg>
geist: well at a certain workplace there was a process which resulted in metric fuckton of tags
<dh`>
remember cvs is a tree full of rcs files, not an actual database
<mjg>
geist: aand it was DOG slow
<dh`>
identifying parts of the same commit in different files is itself nontrivial
<mrvn>
dh`: and every file has its own version
<dh`>
especially since we have had a few cases where "single" changes had to be committed in batches to avoid making cvs itself tip over
<CompanionCube>
yep, you can see this in the RCSIDs as well with the per-file version numbers and ,v suffix
<dh`>
anyway besides the technical challenges it's also been a political nightmare
<dh`>
which I shouldn't say too much about since iirc this channel has a logger
<mjg>
:)
<geist>
wise
<mrvn>
dh`: Most projects switched because cvs/svn just didn't perform
<dh`>
cvs absolutely does not perform, we are quite familiar with that
<mjg>
dh`: fwiw, for better or worse, freebsd just went with the switch to git and flamewars died off
<mrvn>
Less a "we want to switch" and more a "we have to switch"
<geist>
thats why i didn't want to call out particular commits or whatnot a while ago
<geist>
that stuff has a tendency to get hooverd up in google
<dh`>
before the advent of SSDs and huge amounts of ram it used to take hours to tag a release
<mjg>
it's an old adage that ultimately someone needs to do the deed
<geist>
yah i remember the freebsd switch war went on for a while didn't it?
<mrvn>
I bet most of it is "Was der Bauer nicht kennt, das isst er nicht" (what the farmer doesn't know, he won't eat)
<mjg>
*prior*, sure, there was a contingent of people who did not really have legit arguments, but were opposed to change on its principle
<mjg>
well there was a cvs -> svn -> git switch route, with svn having quite a few years of life in the project
<mjg>
and being seen as less of a system shock than git
<geist>
yah and iirc it worked reasonably well
<mjg>
it was tolerable, but everyone i knew was developing locally in git anyway
<geist>
at the time i think a 2010 era mid range machine would probably deal with svn on a freebsd size project pretty smoothly
<mjg>
i have no opinion whether svn was a sensible choice at the time
<dh`>
data loss was always a concern with svn
<mjg>
but the point was, ultimately someone decided to bite the bullet, and do the switch
saltd has quit [Remote host closed the connection]
<geist>
i do remember one of the downsides about then was the performance of git on large repos. was at Palm at the time and we were mostly svn based but a few of the teams (mine included) were trying to use git for our local repos
<mjg>
there is no going back now, so adjust to it
<geist>
and were getting lots of pushback based on things like performance
<dh`>
with an OS-sized tree any op that needs to scan the whole tree for changes is inherently slow
<geist>
svn is fairly efficient at syncing with the server since it just grabs what it needs, but it keeps a local copy of the current head. so it was easy for the client to locally decide what it needs and then just grab the delta
<dh`>
that is a problem shared by basically everything, the difference being that git and hg and svn have many more ops that need to scan the tree than cvs does
<mjg>
svn log was total crap, at least by default
<mjg>
i don't know if it supports any form of local caching
<geist>
yah the invention was the local head that svn kept a cache of (for better or worse). so it locally knew what was different. still had to stat but at least didn't involve any roundtrips
<mrvn>
geist: but svn didn't have the history local, right?
<dh`>
cvs update on netbsd is still a "go make tea" operation
<geist>
good question. i forget. probably not which i guess is why it's slower
<dh`>
even with fast networks and ssds
<geist>
yah gotta remember svn vs cvs is night and day, but a similar model path wise
<mrvn>
With git you just download and copy everything and then everything is local.
<dh`>
but cvs is just amazingly stupid and bad
<geist>
i do remember getting into an argument with a guy at palm when i was saying it downloaded the whole history and all the files
<dh`>
mjg, never ktrace cvs, you'll have a heart attack
<mjg>
reverse psychology?
<geist>
and he was like "<eyes roll> of course it doesn't"
<geist>
he was like "i workedon cvs before you were born, you can't do that" etc etc
<geist>
(git that is)
<mjg>
i love the old greybeards
<mjg>
strong claims supported by deep conviction
<dh`>
cvs could totally download the whole history, there's nothing preventing it other than nobody wants to work on cvs
<mjg>
of course bullshit if you validate them
<mrvn>
geist: lol. then how does checkout work offline if it didn't download everything?
<geist>
yah you could i suppose ssh in and just rsync the whole history locally
<mjg>
dh`: i'm pretty sure nbjoerg mentioned some form of caching for cvs log?
<geist>
since at the end of the day it's just a fat FS with a bunch of ,v files
<geist>
flat FS
<clever>
back when i was a noob, i tried writing a web game in perl, using a flat fs as a db
<dh`>
mjg: we sometimes rsync a copy of the repo for local use but that's not a cvs feature
<mjg>
dh`: sure, but the point was you can make it happen
<geist>
hmm, does `cvs log` require a server roundtrip? I assume it does because it'd have to look at the ,v files of the files
<clever>
half way thru writing the first page, ftp'ing files and refreshing on some random free-hosting site
<geist>
unless there's a copy of them?
<clever>
it just goes 404!
<dh`>
geist: yup
<mjg>
geist: normally it does
<clever>
the entire site was deleted by the admins, and all my work went *poof* :P
<mjg>
so does svn
<mjg>
which is, like, fuck off man
<geist>
yah okay. was thinking it might cache the head log in the subdirs .cvs dir or whatnot
<dh`>
nope
<geist>
er CVS i forget. been a while
<mjg>
cvs would be way less slow if it did some directory-wide ops
<mjg>
instead of literally everythiing per file
<geist>
i only really had to use it once. when i was at apple in 2005 the XNU stuff was still in CVS. was actively being migrated to svn, but hadn't completed yet
<mjg>
quite frankly super peculiar to me that such a network stinker like cvs was a thing with slow networks
<geist>
they were still working on the script to import, which would take days
<dh`>
what lives in CVS/: the location of the repository, the path of this dir in the repository, the tag you've checked out, the list of files and versions you have
<mjg>
i would expect local caching to be forced by shit connections
<geist>
dh`: ah yeah. that's right
<dh`>
it's all text, which is nice if your tree gets corrupted
<geist>
and that's why you can say switch a subdir to a different tag or whatnot
<geist>
since it's all local
<dh`>
but it's ass slow
<dh`>
and there is exactly zero that's non-local
<geist>
yah up until fairly recently i used to keep a copy of the netbsd and openbsd CVS checkout
<geist>
and sync it every so often. eventually switched to the git mirror since it was just taking too long
<dh`>
it is quite possible to corrupt your working tree by introducing extra directories or extra subtrees
<geist>
also a fun thing to do: load up netbsd on an old sparc or whatnot and then cvs sync
<dh`>
I have a couple dozen checked-out netbsd trees
<dh`>
but then, I actually work on it
<geist>
or even on the vax. did a cvs sync for lulz, but the SSH overhead dominated everything
<geist>
like 95% SSH on top of an already slow system :)
<mjg>
there is something wrong with the way netbsd maintains the github repo though
<mjg>
freebsd used to do an export from svn and it was just a stream of extra commits, git pull looked precisely like you would expect
<mjg>
netbsd seems to regen the branch somehow(?)
<geist>
honestly TLS and ssh are a major reason old machines aren't that usable anymore
<mjg>
you can't git pull
<geist>
you can still run vi, mutt, etc on one of these old machines but every socket it has to make to talk to something nowadays just grinds to a halt
<geist>
or sshing into or out of one of them.
<mjg>
well it could still be a terminal to a headless sucker
<dh`>
it's a conversion that's done over and over again, so sometimes it force-pushes
<geist>
if you ssh out of it sure, as long as you're willing to wait 2 or 3 minutes for the ssh connection to go through
CryptoDavid has joined #osdev
<mjg>
geist: telnet or rsh to your local bastion host
<mjg>
geist: before you expose yourself to the baddies
<geist>
that's what i do. the only reason to keep telnet around
<mjg>
even then, what do you expect to vi on such a machine
<mjg>
for example i would not compile squat on it
<geist>
oh sure
<mjg>
basically the compilers which are era-appropriate are not usable today
<geist>
i mean there's no *reason* to use these old machines, but it's a question of if there's any useful things you can do, even if it doesn't make sense
<mjg>
ye i think they are an all around loss
<geist>
but even that list of things you can effectively do goes away as connecting to other machines get more expensive, is what i'm saying
FreeFull has joined #osdev
<mjg>
even if you "outsource" everything with telnet et al
<mjg>
you want a sensible screen with sensible resolution, and probably more than 1
<geist>
and yeah this is why i use older netbsds like 3.1
<mjg>
they are screwed if only for that reason
<geist>
which iirc is like early 2000, so it's already heavyweight for an early 90s workstation
<mjg>
so there goes you productivity, even if the box itself would be able to keep up
<mjg>
your
<dh`>
I've never really understood the point of trying to run current software on ancient hardware
<geist>
lulz
<geist>
no practical reason
<zid>
does 'the thing I am too broke to replace' count
<geist>
*except* if you needed modern, say, openssh to connect to something else
<zid>
or does it have to be like, a vax
<dh`>
sure but I don't _get_ it, you can run the same software on a $300 machine from walmart
<geist>
i'd put the vax in the 'ancient hardware' category
<zid>
yea that's why I used it as an example
<geist>
well,either the novelty of ancient hardware gets you going in the morning or it doesn't
<geist>
if it doesn't, yes it's pointless
<geist>
if it does, it's fun
<dh`>
running old software on old (or new) hardware is different: old software actually does different things
<geist>
sure
<dh`>
that's just where I stand though
<geist>
anyay gotta go. social get togegther in a bit. have fun!
<dh`>
netbsd has a lot of retrocomputing users and most of them see it differently
<mrvn>
dh`: you can run the same software on a $30 RPi and save $60 on the electric bill
<dh`>
that too
<mjg>
it's just a kink
<mjg>
there are people who use amigas with accell cards as a daily driver
<zid>
I have the kink of not upgrading windows until it's physically impossible to run the older version
<mjg>
there is no pragmatic reason for any of it
<zid>
My pragmatic reason is they keep moving shit and adding new 'features' (that I will either disable, or fail to disable and be annoyed). And laziness.
<mjg>
i'm fortunate enough to not need windows
<mrvn>
zid: worse, they add features I don't want that can't be disabled.
<zid>
linux is getting bad in this regard too
<mrvn>
and they remove features I need
<dh`>
at this point the primary barrier for that kind of retrocomputing is that compilers just don't fit on old machines
<zid>
currently for X you have the option of systemd, or elogind + polkit + a bunch of other crap which ends up including spidermonkey because polkit decided to use json for its config files
<mjg>
but do they generate correct code for them?
<zid>
despite not being written in javascript
<mjg>
even if you were to cross compile
<mrvn>
dh`: yeah. Please give me a modern c++ compiler for a C64.
<zid>
I am now legacy using linux as well as windows
<zid>
cus nothing modern will work right without systemd or spidermonkey
<dh`>
mrvn: indeed
<mrvn>
zid: that's not for X but for desktops
<mrvn>
dh`: I don't think you can do a lot of templates with just 64k of address space
<mjg>
:)
<dh`>
I imagine that you can treat templates as normal polymorphic types and write a compiler that will be not-quite-standards-compliant but operable in a reasonable amount of ram
<mrvn>
dh`: no, templates are not polymorphic like that
<dh`>
but because it's C++ it's automatically a four-year project for a ten-person team
<mrvn>
dh`: template deduction doesn't work like that
<dh`>
doesn't it? I know it's weirdly broken around the edges but I would expect most code to work
<dh`>
but maybe not
<dh`>
I swore off C++ some 15 years ago
<mrvn>
dh`: how would SFINAE work?
<mrvn>
"When substituting the explicitly specified or deduced type for the template parameter fails, the specialization is discarded from the overload set instead of causing a compile error. " You have to keep trying all possibilities in the right order till one doesn't give an error.
<mrvn>
You can't just go: "vector<T>" Ok, so whatever T is doesn't matter, here is the code for vector.
saltd has joined #osdev
<dh`>
that's like function overload resolution, it doesn't prevent you from treating each function in a normal way
<Griwes>
Worse, you have to try them all and at the end see if there's one that's better than the others that are left :P
<mrvn>
Griwes: "in the right order"
<dh`>
it just prevents you from trying to have a single parametric-polymorphic function
<dh`>
but idk
<Griwes>
Ordering comes *after* sfinae :p
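A small illustration of the rule mrvn quotes; this is a toy, not how any real compiler or library does it. The template overload is silently dropped for types without a nested value_type, so the fallback is picked instead of a hard error:

```cpp
#include <iostream>
#include <vector>

// considered only when T::value_type exists; otherwise substitution fails
// and this overload is discarded from the set, not diagnosed
template <typename T>
typename T::value_type first(const T &c) { return *c.begin(); }

long first(...) { return -1; }   // fallback overload

int main() {
    std::vector<int> v{42, 7};
    std::cout << first(v)   << "\n";  // 42: substitution succeeds for vector<int>
    std::cout << first(3.5) << "\n";  // -1: double has no value_type
}
```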
<dh`>
I'll write a C compiler for fun but I'd have to be paid to tackle C++
<dh`>
with one exception, I'd probably work on a tool whose purpose is to convert C++ code to something else for permanent migration purposes
<mrvn>
dh`: c++ compilers don't do that at all. Even if the code would allow a single parametric-polymorphic function they still just duplicate the template every time. Then later they check if the generated code has duplicate functions and merge them.
<mrvn>
dh`: insanity
<dh`>
yes, they do, and iirc it's required; the conjecture is that not doing that would still mostly work
<mrvn>
I wish C++ had a syntax for saying: template<T but I don't care what it is>
<dh`>
just don't use C++, it's by far the best route
<mrvn>
then I would use ocaml and there it doesn't have a template<T and I do care what T is>
<mrvn>
or at least not one without tons of syntax.
<dh`>
you can do it with modules and functors, it just becomes very heavyweight
Iris_Persephone has joined #osdev
<mrvn>
yeah. I would like a "This is a functor, go figure it out yourself" syntax.
<dh`>
typeclasses are the best answer I know of, but unfortunately they're afaik still only available in haskell
<mrvn>
type classes go that way but I don't want to write haskel.
<dh`>
(and in coq, which doesn't help much)
<mrvn>
It's hard to have type classes without adding overhead like a v-table or duplications like templates.
<dh`>
if you have thing(T) and you want to be able to do more with the T than just pass it around, you need some kind of indirect reference to the operations on it
<mrvn>
dh`: in ocaml that would be using modules. And if you don't want the indirection then functors.
<dh`>
in practice you still get the indirection
<dh`>
unless you do strictly things that the compiler can devirtualize, but that works in all approaches too
<mrvn>
dh`: a functor has the concrete type you instantiate the functor with.
<dh`>
only when it's applied
<dh`>
the functor itself can be compiled separately; for example, the maps and sets in the ocaml stdlib
<dh`>
(I don't know how this works in ocaml's backend implementation)
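For the record, the stdlib example dh` mentions looks like this when applied; Map.Make is a functor taking a module that supplies the key type and its compare:

```ocaml
(* hedged sketch: applying the stdlib's Map.Make functor to a concrete key type *)
module IntMap = Map.Make (struct
  type t = int
  let compare = compare
end)

let m = IntMap.add 1 "one" (IntMap.singleton 2 "two")
let () = print_endline (IntMap.find 1 m)   (* prints "one" *)
```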
<mrvn>
that's not compiled as in binary code. That's just intermediate language.
<mrvn>
ocaml basically always does LTO.
<mrvn>
The backend has always done cross module optimizations. For the little optimization it does.
<mrvn>
But overall it's an implementation detail whether you get code generated when a functor or module is applied or some generic function with indirections.
<dh`>
idk, with ocamlopt you get a .o out as well as the .cmx file
<mrvn>
dh`: yes. it's a mix
<mrvn>
dh`: iirc if all the types are known or polymorphic you get code but functors you get metacode
<dh`>
disassembling something simple suggests that there's code for the functor in there
<mrvn>
dh`: does the function you get code for depend on the type of the argument to the functor?
<dh`>
but it's hard to tell what it's actually doing
<mrvn>
did you use flambda or the old optimizer?
<dh`>
flambda is still not on by default, right? so the old stuff
<dh`>
it is hard to see exactly what it's doing but there's a callq *rsi in the middle of it
<dh`>
which is presumptively where it calls the function in the functor argument module
<mrvn>
dh`: it can also have both. Maybe the inliner decides not to inline and then it uses the pre-generated indirect function
<dh`>
right
<dh`>
see: "devirtualization'
<dh`>
anyway, it's not important
<mrvn>
Now for something completely different...
<mrvn>
You know how you can't measure the speed of light just going one way? You always have to measure a round trip. The speed could be c/2 one way and infinite the other way.
<mrvn>
Does that have to be uniform or could it be there is a point in space-time so the speed of light towards that point is c/2 and away from it infinite?
<mrvn>
e.g. the origin of the big bang
<saltd>
+me@here
<dh`>
I don't think that's true; you can establish a known distance, and you can synchronize clocks across it by creating two clocks at one end and carrying one of them slowly to the other side
<dh`>
this is only valid in your lab reference frame, but that's sufficient
<mrvn>
dh`: by carrying the clock the flow of time changes and the clocks are out of sync
<dh`>
hence "slowly"
<mrvn>
dh`: that doesn't seem to matter
<dh`>
why not? it's only the acceleration that affects it
<dh`>
or rather I suppose it also runs slightly slower while it's moving too, but you can bound those effects
<dh`>
at least if you accept relativity as a framework for this test
<mrvn>
it runs slower for a longer time
<mrvn>
At the speeds you can move the clock I guess the slowdown is linear with the speed so speed * time = out-of-sync-ness
<mrvn>
i.e. distance
<dh`>
also, you can correct for it by carrying it out and back
<dh`>
and seeing how far it's drifted
<mrvn>
dh`: how? To correct for it you need to know the speed of light
<dh`>
no
<mrvn>
yes.
<mrvn>
You know that taking it to the other side and back makes it 1s slower. Was that slowdown only on the way there or only on the way back?
<dh`>
you carry it out and back and it's off by .00000002 seconds and if all the motions are the same each time, dividing it by two will give you the offset for carrying it one way
<dh`>
if you start proposing that nothing at all is isotropic you rapidly lose the ability to do any experiments at all
<mrvn>
nope. If the speed of light one way is infinite then the clock doesn't slow down that way.
<dh`>
hmm
<dh`>
you can still bound the effect
<dh`>
then you know that within that tolerance C is isotropic and then repeat
<mrvn>
by doing a round trip. We know the round trip has speed c.
<dh`>
no?
<mrvn>
dh`: the experiment you propose works out for any pair of speed of light as long as the round trip adds up to c.
<mrvn>
really confusing but that's what relativity seems to say.
<dh`>
you carry the second clock 186282 miles each way and it drifts by say one part in 10^8
<dh`>
resync it, carry it to the other end, use it to time the one-way speed of light
<clever>
dh`: just moving the clock will change the rate the clock ticks at
<clever>
how do you re-sync it? that sync signal had to travel at the speed of light?
<dh`>
the error introduced by moving the clock is limited to one part in 10^8
<mrvn>
dh`: but was that on the way there or on the way back or 60/40?
<dh`>
you resync it when you have the two clocks right next to each other, like I said
<dh`>
it doesn't matter which way it was.
<clever>
but the math says direction can matter, and there is no way to know
<mrvn>
dh`: the error bounds you get is the maximum value you want to measure.
<dh`>
sigh
<zid>
I like squirrels
<dh`>
1. start by measuring 299792458 meters; 2. procure two identical clocks; 3. stand at one end and sync them; 4. carry one clock at say 1 m/s to the other end and back; 5. compare the clocks, note the divergence; 6. resync the clocks; 7. carry one clock at 1 m/s to the other end; 8. use both clocks to time transmission of a signal; 9. know that the error introduced by the transmission time is bounded by the divergence you previously measured
<zid>
clever: direction can *exist*, it'll never *matter*
<dh`>
s/transmission time/carrying/
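A quick back-of-the-envelope version of dh`'s bound, assuming the usual isotropic time dilation (which is exactly the convention mrvn disputes below); the 1 m/s speed and one-light-second distance are the figures from the procedure above:

/* Slow clock transport under ordinary (isotropic) special relativity:
 * carry a clock one light-second at 1 m/s and see how far it lags.
 * Under an anisotropic-c convention the dilation formula changes, which
 * is mrvn's objection; this only illustrates the isotropic case. */
#include <stdio.h>

int main(void)
{
    const double c = 299792458.0;   /* m/s */
    const double d = 299792458.0;   /* one light-second of distance */
    const double v = 1.0;           /* m/s, "slow transport" */

    double t_carry = d / v;                        /* lab-frame carry time */
    /* gamma - 1 ~= v^2/(2c^2) for v << c (the exact expression would
     * underflow in double precision at 1 m/s) */
    double drift = t_carry * (v * v) / (2.0 * c * c);

    printf("carry time: %.3e s\n", t_carry);
    printf("clock lag:  %.3e s\n", drift);         /* ~1.7e-9 s */
    printf("one-way light time being measured: %.3f s\n", d / c);
    return 0;
}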
<zid>
because you can't measure it, so it doesn't matter definitionally, because it can't have an observable effect
<mrvn>
dh`: then you measure a speed of c both ways even if the actual speed is c/2 and infinite because the drift of the clock will cancel the effect.
<dh`>
no, the clock can't drift that much
<dh`>
unless you flush special and general relativity entirely
<mrvn>
dh`: the math says it will drift exactly that much.
<bslsk05>
'Why No One Has Measured The Speed Of Light' by Veritasium (00:19:05)
<zid>
I think the canonical example, as above, is mars comms
<zid>
you can't tell if the speed of light is 0ns to mars, and 2 minutes back
<zid>
or 2 minutes to mars, and 0ns back
<zid>
both sides see 2m delay
<clever>
dh`: but lets say for example, that the speed of light is directional, and when you carried the clock one way at 1m/sec, it was slow relative to light
<zid>
and there's nothing you can do to prove one or the other, just that mars is 2 mins away
<dh`>
mrvn: no, it'll drift on the order of 1 part in C
<clever>
dh`: but when you carried the clock back the other way at 1 m/s, you were moving it at 90% the speed of light
<clever>
mathematically, the drift the clock will have gained is identical to if the speed of light wasn't directional
<mrvn>
dh`: 1 part in C isn't a constant.
<zid>
note you moving away is the same as the other end moving towards you
<zid>
err, words is hard
<zid>
other side moving away.
<dh`>
clever: that doesn't make sense
<mrvn>
dh`: indeed. but that's all the math tells us
<zid>
dh`: I don't understand your argument, why does it matter *who* is moving away?
<dh`>
zid, it doesn't
<clever>
dh`: the video i linked above explains all of this
<zid>
your system is assymetric
<zid>
asymmetric
<dh`>
it has to be asymmetric to break the symmetry mrvn is claiming
<zid>
If it's measurable, there must be a way to measure, if you can measure it, there's an asymmetry in the system
<zid>
and your example of the symmetry break was moving away/toward
<zid>
so why does it matter which side moves
<mrvn>
zid: so if a twin gets into a rocket and flies away and then flies back the twin that stays is younger because time flowed slower?
<dh`>
the twin paradox is resolved by general relativity
<zid>
twin paradox isn't a paradox by any useful definition and doesn't need "resolving", it's a fact of spacetime
<zid>
you trade space for time and back
<mrvn>
zid: or are you saying they are the same age because that's the only solution where it's irrelevant which one moved?
<zid>
you wouldn't notice unless someone *told* you about general relativity, sorry pet peeve
xenos1984 has quit [Ping timeout: 260 seconds]
<mrvn>
clever, zid: So does the directionality have to be uniform or not?
<clever>
good question
zaquest has quit [Remote host closed the connection]
<dh`>
anyway in the thing I proposed the point is that the drift of the clock is small and that limits the error of the one-way measurement of C
<dh`>
it doesn't matter what the drift is or what component of it comes from moving either way
<mrvn>
dh`: except when the magnitude you want to measure is the same as the drift.
<dh`>
but it's not.
<dh`>
the expected drift is something like one part in C, which is not consistent with the premise that C is half in one direction and infinite in the other
<clever>
dh`: skip to ~10:37 in the above video
<clever>
10:40
<mrvn>
clever: that's about using the speed of light to sync the clocks though
<clever>
yeah, and that accounts for any idea you can come up with where you're moving one or both clocks
<mrvn>
dh`: the drift is proportional to v/(c-v) or something and c isn't constant.
<dh`>
yes so?
<zid>
You've just rephrased the same issue if you try to sync the clocks
<zid>
you now have to deal with the *clock* moving in a certain direction, rather than the light
<dh`>
if the supposed anisotropy of C affects the drift you can measure it from the drift
<zid>
and you don't know if that's undoing or doing the change in C
<zid>
same as for the light
* dh`
gives up
<mrvn>
zid: but dh`'s argument is that the drift is much smaller than what you are trying to measure.
<clever>
dh`: not even Einstein could solve this :P
<zid>
or lorentz, possibly more importantly?
<mrvn>
Lets say you move the clock to mars and back and the drift is 1s. Then you know that moving the clock to Mars will add at most 1s drift. So when the clock says a laser takes 1 minute to reach Mars that's +/-1s.
<mrvn>
so speed of light can't be c/2 and infinite anymore. Why is that wrong?
<Griwes>
it's not often that #osdev turns into a conversation already resolved by Veritasium :V
<mrvn>
dh`: that's your argument, right?
<clever>
Griwes: i know! :D
<dh`>
yes
<zid>
To be fair, veritasium doesn't understand how electricity works
<Griwes>
he does
<zid>
so it's not unreasonable to believe he doesn't also know how the speed of light works
<clever>
zid: how many vids did that spawn? lol
<zid>
he just happens to, in this case
<zid>
clever: many
<zid>
he wasn't *wrong* he just completely neglected to admit capacitors exist :P
<Griwes>
and the ones that actually did experiments show his answer to be the correct one :P
<bslsk05>
'Why does WATER change the speed of electricity?' by AlphaPhoenix (00:24:25)
<zid>
"engineers have no way to resolve this issue, as engineers only deal in abstractions" was his main point
<zid>
but you can.. perfectly model it, with.. a capacitor
<mrvn>
11:09 - "You might think you can move the clocks really really slowly ...
<clever>
AlphaPhoenix needs more subs
<zid>
yea I like his channel
frkzoid has quit [Ping timeout: 244 seconds]
srjek has joined #osdev
freakazoid332 has joined #osdev
<zid>
Griwes: yea he was never *wrong* that the light bulb would light up instantly, but he picked an intentionally(?) obfuscated scenario then lied that capacitors don't exist
<clever>
and didnt define how much current was needed for the bulb to light up
<clever>
AlphaPhoenix has done some of his own measurements on the speed of a single impulse in a wire, and also mentioned how a series of pulses would start getting into fft and ideal sine wave stuff
<zid>
Other ways you can model it: Transmission effect, air-core transformer, etc
<zid>
we have a *bunch* of ways to model it
<zid>
"In that sense, he hasn't really explained how it's a misconception. It's only wrong in the way that all models are "wrong" but often still useful. (ie. https://en.wikipedia.org/wiki/All_models_are_wrong)"
<bslsk05>
en.wikipedia.org: All models are wrong - Wikipedia
<zid>
is a decent quote
<zid>
In practice, this is why people talk about "reflections" and crap in circuits
<zid>
because we model it using a model that isn't physically accurate, but is instead a series of useful abstractions, not that the abstractions are incapable of handling it
<mrvn>
dh`: 11:20 in the video. The time dilation with directional speed of light is horribly complex.
<clever>
zid: i think that is part of what termination resistors are for, if your transmission line has a 75ohm impedance, and you slap a 75ohm resistor on the end, its functionally the same as the line continuing on to infinity
<clever>
but ive also seen similar concepts come up in fiber and OTDR, internal self reflection
<mrvn>
clever: isn't that to bleed off charge so the wire doesn't keep accumulating electrons?
<mrvn>
(and therefore noise)
<zid>
yea, the reflection is just the light-speed catching up to the capacitance/reactance/transmission-effect part as far as I know
<clever>
mrvn: no, its because of funky AC mechanics, where an impulse on the wire, will reflect from an unterminated end
<mrvn>
clever: right, that too
<zid>
if you line the wires up like =======
<zid>
then it's just a huge capacitor
<zid>
with the plates 1m apart
<zid>
but also a short circuit in 2 seconds time
<mrvn>
zid: and if you twist them then they are a coil which makes it even worse
<clever>
with an OTDR and self reflection, basically, every 1cm of fiber, is imperfect, and 0.01% of the light is reflected backwards
<clever>
if you fire a pulse into the fiber, youll then get a long smeared out pulse back, as every 1cm segment of the fiber, returns 0.01% of the remaining light back at you
<clever>
and as that remaining light decays, the reflection you get decays
<zid>
There's not really such a thing as a 'reflection' in electronics, that's just what it looks like once you hit the light-speed version of the circuit
<zid>
rather than the slow-speed version
<zid>
cus they're different circuits
<clever>
but, if there is a bad joint in the fibers, there will be a sudden dip in that reflected light
<zid>
light genuinely has reflections though :P
<clever>
and an OTDR machine can measure all of that, and tell you the distance to the damage
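For the distance part, the arithmetic is just half the round-trip echo time scaled by the speed of light in glass; a tiny sketch, where the ~1.47 refractive index and the 10 µs echo are assumed example numbers rather than anything from the discussion:

/* OTDR distance-to-event estimate: fire a pulse, time the reflection,
 * halve it (the light goes out and back) and scale by the speed of
 * light in the fiber.  n ~ 1.47 is an assumed ballpark for silica fiber. */
#include <stdio.h>

int main(void)
{
    const double c = 299792458.0;  /* m/s in vacuum */
    const double n = 1.47;         /* assumed group index of the fiber */
    double t_echo = 10e-6;         /* example: reflection seen after 10 us */

    double distance = (c / n) * t_echo / 2.0;
    printf("event is roughly %.0f m down the fiber\n", distance); /* ~1020 m */
    return 0;
}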
<mrvn>
zid: electrons don't bounce off the end of the wire and come back?
<clever>
mrvn: i think its more, that you have a current flowing thru the wire, and due to inductance, the current doesnt want to change
<clever>
and if there is an unterminated end, that current and inductance winds up forcing extra charge to build up on a segment of wire that doesnt go anywhere
<clever>
causing the voltage to spike
<clever>
potentially high enough that it can reverse the flow of current
<clever>
and now you have an impulse flowing backwards, a reflection
<mrvn>
i.e. the electrons bounce back at the end and push back
<zid>
i.e your end thinks it's closed circuit, far end thinks it's open circuit
<zid>
you stole or pumped some electrons out/in
<clever>
if you instead have a termination resistor on the end of the line, that allows the charge between the 2 wires to equalize, and convert to heat within the resistor
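The textbook way to put the matched-termination point is the reflection coefficient at the end of the line; a small sketch reusing the 75-ohm figure from above:

/* Reflection coefficient at the end of a transmission line:
 *   gamma = (Z_load - Z0) / (Z_load + Z0)
 * Z_load == Z0 (matched termination) gives 0: no reflection, the line
 * behaves as if it continued on forever.  An open end (Z_load -> inf)
 * gives +1, a short gives -1, i.e. the impulse comes straight back. */
#include <stdio.h>

static double reflection(double z0, double zload)
{
    return (zload - z0) / (zload + z0);
}

int main(void)
{
    double z0 = 75.0;                                      /* 75-ohm line */
    printf("matched 75R: %+.2f\n", reflection(z0, 75.0));  /*  0 */
    printf("open end:    %+.2f\n", reflection(z0, 1e12));  /* ~+1 */
    printf("short:       %+.2f\n", reflection(z0, 0.0));   /*  -1 */
    return 0;
}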
<zid>
now it's a big charged cap and will push back, once light-speed sorts it all out
<zid>
if you wanna use mechanics rather than electronics
<zid>
electronics is fucky and weird and you have to use different abstractions at different lengths, frequencies, thicknesses etc because reality is too complicated to model :p
heat has joined #osdev
<clever>
yep
<mrvn>
zid: and number of electrons
<clever>
there are also 2 different models ive seen for dealing with differential signaling
<zid>
A lot of the time picking up some new electronics is just learning whatever dumb model they're using
<clever>
the first model i had seen, the tx end is driving a D+ and D- wire, the receiver then just ties D+ and D- together with a resistor, and has an opamp measuring the voltage across the resistor
<clever>
and its meant to be a constant current loop
<zid>
Like, they end up making up terms like pullup resistor or whatever and come up with some reason why they exist and what they do. But none of it is actually *true*, it's just part of their model
<clever>
but now that i type that, i can also see how thats basically just a termination resistor on a transmission line
<clever>
the second model ive seen, is to basically just drive D+ and D- in opposite directions, receive it on the far end, but the receiver is basically just 2 gpio pins?
<clever>
it could just be that the 2nd model is a cheaper way to implement the same thing?
<zid>
And that's why so many arguments about electronics start imo, because everybody's models say weird things at the edges, and people aren't trained in physics to be able to resolve which is making shit up
<mrvn>
clever: 2 gpio pins doesn't work without a common ground.
<clever>
mrvn: yeah, i'm assuming a common ground as well, with a floating ground you risk having both inputs at +1000v, and then your signal arcs over to gnd, lol
<mrvn>
clever: that would be extreme. But your signal could also be +5V and +7V instead of +/-1V
<clever>
yeah
saltd has quit [Remote host closed the connection]
<clever>
but if your input pins are only rated for say 0.5v above vcc
<clever>
then even your example will fry the chip
<mrvn>
You do want to measure the difference in the 2 wires compared to each other and not ground. But you also want to bleed off excess charge so you don't get the +1000V case.
<clever>
but then again, the same problem can exist with the opamp model i said above
<clever>
oh, maybe this is why some systems (pcie) have a series capacitor on both wires in the pair
<mrvn>
clever: the opamp shouldn't care about the 7V as long as you don't get a signal arc over.
<clever>
so the dc offset is blocked
<clever>
but the ac signal can pass
<zid>
it's all AC baby
<zid>
Just which abstraction you wanna pretend is real today
<clever>
but with a capacitor in series with D+ and D-, the rx end is going to bleed "dc" charge thru the gates, and be in the gnd/vcc region
<clever>
and the absolute voltage difference between the tx and rx sides could be higher
<clever>
the tx side, can only ever pull the data lines low
\Test_User has joined #osdev
<zid>
magic
<clever>
the rx side, is responsible for providing pullups
<zid>
(there's the pullup magic)
<clever>
and you send a symbol by pulling just one line from the pair low
<zid>
Now describe what the pullups are doing without resorting to picking two other models it's incompatible with ;)
<zid>
err s/restorting to//
<mrvn>
bleeding of charge
<zid>
I should really learn to type one day
<clever>
zid: i would say they are prodiving a default level of charge to the system, for when its not being driven low, but then ac and time of flight gets into the mix and it gets complicated
<clever>
providing*
<clever>
i also need to learn to type! lol
<mrvn>
clever: isn't the cable length limited so the time of flight is below the reaction time of the pullup?
<clever>
thats a fuzzy thing i would have to study further
<clever>
if the tx end was driving both high and low, then i can see how you could have multiple packets of both high and low in-flight on the wire at once
<clever>
and as long as they arrive intact on the other end, it doesnt matter how long it is
<clever>
but with the pulls on the rx side, that may not happen
<mrvn>
clever: you could still send waves of electrons down the wire.
saltd has joined #osdev
<clever>
yeah, the low periods, are a burst of electrons being fired down the wire
<clever>
and the high periods are silence
<clever>
then when those patterns of electrons and silence hit the pull resistor, you get lows and highs
<clever>
so yeah, the time of flight doesnt matter, and as long as enough electrons reach the rx end, the signal can be recovered
<clever>
and you could have multiple symbols in-flight at once
<mrvn>
and the pullup lets a limited amount of electrons flow through so it dampens out low frequencies?
<mrvn>
(3rd model needed)
<clever>
the other complex part, is those packets of electrons, may not want to go the direction you intend
<clever>
if you fire packet of electrons into the wire, and then let it go silent for a bit
<clever>
some of those electrons may travel backwards, towards the lower charge region behind them
<clever>
and then from the rx end, it looks less like a square wave, and more like a sine wave
<clever>
which gets into how square waves are basically impossible in the analog domain
<clever>
its just a sum of many sines
<mrvn>
clever: now you're talking. sending multiple signals at different frequencies on the same wire.
<clever>
so if your symbol rate has too high of a frequency, for that transmission line, the data is lost
<clever>
i'm still speaking in terms of an entirely binary transmission media, QAM takes things to a whole other level
<clever>
how familiar are you with hdmi as encoded on the wire?
<mrvn>
enough and not at all
<clever>
its basically just a 8:10 encoding scheme
<clever>
there are 256 specially chosen 10bit symbols, picked to keep the line DC-balanced and minimise transitions
<clever>
and then you fire that 10bit symbol out the differential pair at some baud rate
<zid>
wigglies go wiggle-wiggle
<zid>
and then, bits
<clever>
3 of those data lanes then give you 8/8/8 rgb
<clever>
but with one extra layer on the side, during blanking periods (hblank/vblank), its instead using a different 2:10 encoding scheme
<clever>
so you get a 6bit value per "pixel time" in blanking periods
<clever>
2bits of that are for hsync/vsync
<clever>
some are used for hdmi audio
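A sketch of what that 2:10 decode can look like for the lane carrying hsync/vsync. The four 10-bit control words below are the TMDS control codes as commonly quoted for DVI, written here from memory, so treat them as an assumption to verify against the spec:

/* Sketch of the control-period decode for the TMDS lane that carries
 * hsync/vsync during blanking.  Control-code values are an assumption
 * (recalled from the DVI/TMDS spec), not something stated above. */
#include <stdint.h>
#include <stdio.h>

/* returns 0..3 = {vsync,hsync} bits, or -1 if the symbol is not a
 * control code (i.e. we are in the active video region) */
static int decode_ctl(uint16_t sym10)
{
    switch (sym10 & 0x3ff) {
    case 0x354: return 0; /* 0b1101010100: hsync=0 vsync=0 */
    case 0x0ab: return 1; /* 0b0010101011: hsync=1 vsync=0 */
    case 0x154: return 2; /* 0b0101010100: hsync=0 vsync=1 */
    case 0x2ab: return 3; /* 0b1010101011: hsync=1 vsync=1 */
    default:    return -1;
    }
}

int main(void)
{
    int ctl = decode_ctl(0x0ab);
    if (ctl >= 0)
        printf("blanking: hsync=%d vsync=%d\n", ctl & 1, (ctl >> 1) & 1);
    else
        printf("active video symbol\n");
    return 0;
}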
<zid>
can we put teletext back in
<clever>
you could, but you would need a special hdmi receiver that knows where to look
<mrvn>
and closed captions
<zid>
Yes, we should add it to the HDMI spec
* saltd
loading ...
<mrvn>
directors comments, text for the hearing impaired, locale support
<clever>
and you would also need a special hdmi transmitter, to set those extra bits
<zid>
yes, we should add it to the hdmi spec
<mrvn>
x MB/s video, 2x MB/s extra channels. :)
<zid>
It'll just be one of the things the devices support, like audio return path and stuff
<zid>
and I will boycott anything that doesn't provide teletext support
<mrvn>
zid: hdmi supports mics on the screen?
<clever>
mrvn: the audio return channel (ARC) is more for sound systems
<clever>
so the tv can send the audio backwards down an hdmi cable to a sound system
<clever>
and then you dont need to deal with rca cables or connecting every single sound source to the sound system
<zid>
aka reimplementing SCART one pin at a time
<clever>
yep, lol
<clever>
SCART had ARC first!
<zid>
Pin 1Audio output (right)
<zid>
Pin 2Audio input (right)
<zid>
hdmi doesn't even have aspect ratio pins, smh
<clever>
it also solves another issue, optical/spdif(rca) is limited in bandwidth, and cant handle newer audio codecs
freakazoid332 has quit [Ping timeout: 246 seconds]
<clever>
either direct hdmi (every device must go thru the sound system) or ARC, is the only way to get those codecs into the sound system
<mrvn>
clever: seems kind of stupid. If you send the audio to the TV then why can't you just grab it on the sending side instead of having it come back?
<clever>
i currently have 6 RCA cables wired up to my sound box :P
<zid>
We used to just hook everything into the VCR
<zid>
and the VCR had ARC built in cus scart
frkzoid has joined #osdev
<clever>
mrvn: then you need to connect every sound source (pc, xbox, playstation, bluray) up to your sound system
<clever>
mrvn: and oh no, your sound system only has 3 hdmi in
<clever>
the 5 hdmi on your tv are now useless
<zid>
but then everything went hdmi and you had no way to actually get the audio to the TV
<mrvn>
clever: oh you mean you send the sound from Input1 back down Input2
<zid>
it just terminated at the vcr (or soundbar, etc)
<clever>
mrvn: yep, thats ARC, sending audio backwards to input2, the sound-system
<mrvn>
clever: that way makes sense
<clever>
so now the extra 5 inputs on the tv can actually be used
<mrvn>
But seriously. It's a digital signal. Just add addressing.
<zid>
glorious 80s technology, that we just lost for a decade for no reason
<mrvn>
frames, channels.
<clever>
zid: north america never had it :P
<zid>
Yea just toss it into the pile with gun control, cheese, etc
<clever>
mrvn: with ARC, you can use all 3 inputs on the sound system, plus 4 inputs on the tv, and 1 ARC port on the tv for the sound box
<Griwes>
mrvn, but that removes the nice property of hdmi where you can just look at the signal at a random point, figure out where you are almost immediately, and just pick up the signal on the fly
archenoth has quit [Ping timeout: 244 seconds]
<clever>
and if the sound system is off, the tv still gets audio for internal speakers
<clever>
without ARC, you need to run analog or optical cables, from every source to the sound system
<clever>
and optical cant do 5.1 pcm
<Griwes>
that's really the main difference between DP and HDMI, HDMI is much less sophisticated which allows you to just... read or write it lol
<clever>
and analog is analog :P
<mrvn>
Griwes: why? you just wait for a frame gap and then you know where you are.
<zid>
DVI-I for life
<mrvn>
Everything should just have TP.
<heat>
mjg, yeah so I was just wondering how exactly freebsd tracks dirty shit and then how it writes things back
<Griwes>
"wait for a frame gap" kinda sounds like I now need more stateful logic :P
<clever>
Griwes: yeah, with hdmi/dvi, you can detect the pixel clock basically instantly, within ~2 pixels, and you can then start capturing frames basically at the next vsync
<heat>
i've seen VOP_PUTPAGES(iirc) in the freebsd man pages but it wasn't very clear how things work
<mrvn>
Griwes: same as the hsync/vsync symbols
<zid>
wait, is that osdev I smell?
<zid>
heathen
<zid>
we're talking about veritasium being right, veritasium being wrong, differential signalling, and audio return path, and scart, not osdev
Iris_Persephone has quit [Ping timeout: 246 seconds]
<clever>
zid: shall i not mention the zfs driver i'm writing then?
<zid>
Check the calendar first
<zid>
next hour is cats
<Griwes>
mrvn, but how do I read a "channel" now? with hdmi I just bitblast whatever turns bits into video and audio :P
<mrvn>
Hey, they left some mirrors on the moon. Lets send a laser there and measure when it gets back.
<mjg>
heat: see vm_object_page_clean
<heat>
basically the questions being 1) Linux tracks dirty pages by appending dirty inodes to a list in the writeback system of the block device. Does freebsd follow this principle, or do you queue vm_objects?
<mrvn>
Griwes: your NIC handles that and your TCP stack does the rest. :)
<zid>
wrong cats
<Griwes>
NIC? what's a NIC? I just have a serdes and a tmds encoder/decoder :P
<heat>
ah ok, so the pager seems to do it?
<mrvn>
Griwes: lets get rid of the cable and let everything speak wifi.
<heat>
although i'm looking at the 386bsd source accidentally :P
<Griwes>
what's wifi? all I have is a serdes-- yeah I've said that already
<Griwes>
...well okay several serdeses, 3 for the video signals and, uhh, one or two for audio
<mjg>
heat: there is a vnode scan to find dirty vm objs
<Griwes>
I don't really know the audio part of hdmi, I've only done video
<clever>
Griwes: hdmi also has some fun stuff for clock recovery
<mjg>
heat: grep for periodic_msync
<heat>
that seems wasteful?
<clever>
Griwes: the pixel clock is sent as a 50% duty cycle differential clock, 1 full cycle per clock, but the color is 10 symbols per pixel clock
<clever>
Griwes: so a 10x PLL generates a symbol clock from the pixel clock
<heat>
and a bottleneck? I assume you're locking a huge global inode list or so
<clever>
but!, that pll has multiple taps, that are say 30, 60, 90, 120 degrees out of phase
<clever>
the hdmi rx core, will then try each clock, for each rx channel
<mjg>
heat: only possibly dirty cases are scanned
<clever>
Griwes: to account for the rx channel being slightly mismatched in length
<mjg>
heat: it's not optimal but not very bad either
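Roughly, the shape mjg is describing is a periodic pass over vnodes that might be dirty, cleaning their backing objects. A made-up sketch of that idea; none of these names are FreeBSD's, and the real thing (vm_object_page_clean, the periodic_msync path) is far more involved:

/* Illustration only: a periodic sync pass over "possibly dirty" vnodes. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct vm_object { int id; };

struct vnode {
    struct vnode    *next;           /* per-mount vnode list */
    bool             might_be_dirty; /* set when one of its pages is dirtied */
    struct vm_object obj;
};

/* stand-in for something like vm_object_page_clean(): walk the object's
 * pages and start writeback for the dirty ones */
static void object_page_clean(struct vm_object *obj)
{
    printf("cleaning object %d\n", obj->id);
}

static void periodic_sync_pass(struct vnode *list)
{
    for (struct vnode *vp = list; vp != NULL; vp = vp->next) {
        if (!vp->might_be_dirty)     /* "only possibly dirty cases are scanned" */
            continue;
        object_page_clean(&vp->obj);
        vp->might_be_dirty = false;  /* re-armed by the write/fault path */
    }
}

int main(void)
{
    struct vnode b = { NULL, true,  { 2 } };
    struct vnode a = { &b,   false, { 1 } };
    periodic_sync_pass(&a);
    return 0;
}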
<Griwes>
look, my beng project was literally just driving the 6 differential color pins and it worked, alright? :P
<clever>
Griwes: hdmi audio, is just sent in one of the color channels, as a 1 or 2bit/pixel (i forget) signal, during vblank
<saltd>
Day changed to Sunday, 25. Sep 2022
<Griwes>
oh that's clever
<mrvn>
Oh yeah. How much longer can you make the red wire before all the red information shifts a pixel?
<Griwes>
right because the sync signals are only sent on one of the color pairs
<clever>
mrvn: good question
<clever>
Griwes: kinda, during vblank, its using a different 2:10 encoding, on all 3 color pairs
<clever>
so while hsync/vsync are on a single color, the fact that its vblank is present on all 3 color channels
<mrvn>
clever: so the vblank would let a different-length wire get back in sync?
<clever>
mrvn: and even if you get a whole pixel out of alignment on the raw 10bit capture fifo, you could detect when blanking ends and re-sync things
<clever>
typo above, s/vblank/blanking/
<heat>
mjg: also, I may as well ask: what do you need page locking for?
<Griwes>
ah so it's using the same bit inputs to the encoding as the h/v sync signals?
<Griwes>
damn that's clever
<mrvn>
then the question becomes how much out of sync can each wire be before the other side says the signal is garbage?
<heat>
I've noticed linux does a boatload of locking for struct pages. freebsd seems to do a lot of locking too
<clever>
mrvn: yeah, the hdmi specs must define limits of what is allowed, and then the hdmi rx core has to meet that
<clever>
Griwes: yeah, during the active region, its sending 3x8bit per pixel, during the blanking regions its 3x2bit per pixel
<clever>
Griwes: hsync, vsync, and audio are on i think 3 of those 6 bits
<clever>
and the receiver can know if its the 8bit or 2bit encoding
<Griwes>
yeah
<heat>
oh wow so you use a hashtable of mutexes for page locking
<clever>
so its effectively a 30bit output, with either 24 or 6 bits being valid for any given pixel
Iris_Persephone has joined #osdev
<heat>
anyway right now my problem is that I don't have any locking for my struct pages (although I've built a futex-style mechanism for them like linux has)
<mjg>
depends what you mean by page locking
<heat>
my code does a best-attempt at atomically setting and clearing dirty and writeback bits in the flags but that doesn't really work I think
<heat>
vm_page_lock
<mjg>
almost all of changes to page state are done with atomics
<mjg>
there is a dedicated hand rolled lock to protect contents of the page itself
<mjg>
see vm_page_sbusy et al
<mjg>
used for example to satisfy read(2) from the page cache
<mjg>
i don't know what the regular page lock is still used for, i suspect it would be for queue placement
<heat>
queue placement?
<mjg>
would you look at that, that's also using atomics
<mjg>
see vm_page_activate
<mjg>
so no, i don't know what, if anything, the page lock is used for
<mrvn>
it's easy to find out. Just comment it out and check what fails to compile :)
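For reference, the "hashtable of mutexes" trick heat mentions above usually looks something like this: don't embed a lock in every struct page, hash the page's address into a fixed pool of locks instead. Purely illustrative, not either kernel's actual layout:

/* Lock striping for struct page: hash the page pointer to a bucket. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

struct page { uint64_t flags; };

#define PAGE_LOCK_BUCKETS 256
static pthread_mutex_t page_locks[PAGE_LOCK_BUCKETS];

/* hash the page's address down to a bucket; >> 6 just strips low bits
 * that are identical for neighbouring elements of a page array */
static pthread_mutex_t *page_lockp(struct page *pg)
{
    uintptr_t h = (uintptr_t)pg >> 6;
    return &page_locks[h % PAGE_LOCK_BUCKETS];
}

static void page_lock(struct page *pg)   { pthread_mutex_lock(page_lockp(pg)); }
static void page_unlock(struct page *pg) { pthread_mutex_unlock(page_lockp(pg)); }

int main(void)
{
    for (int i = 0; i < PAGE_LOCK_BUCKETS; i++)
        pthread_mutex_init(&page_locks[i], NULL);

    struct page pg = { 0 };
    page_lock(&pg);
    pg.flags |= 1;        /* e.g. set a dirty bit while holding the lock */
    page_unlock(&pg);
    printf("flags = %llu\n", (unsigned long long)pg.flags);
    return 0;
}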
<clever>
Griwes: something i dont know though, is how ARC works...
<mrvn>
clever: does it send the audio of the input displayed at the time back on all inputs?
<clever>
and prior to that, 14 was just reserved or ethernet
<heat>
mjg, what's an activated page?
<heat>
dirty?
<clever>
and 19 was a hotplug detect (to know the far end of the cable is connected) or ethernet
<mjg>
heat: no, afair it has to do with the pageout scanner
* mjg
is not a vm person :>
<clever>
mrvn: so it looks like the tv is sending the active audio, backwards, on a dedicated diff pair, that was previously unused pins
gordea has quit [Quit: gordea]
<heat>
lol
<heat>
is freebsd's vfs not in bed with mm?
<mjg>
parts of it are, that's stuff i know well enough :>
<mjg>
you may notice i mostly deal with this fuckingl ayer is doing above it, so to speak and within itself
<mjg>
lemme rewrite the above
<mjg>
you may notice i mostly deal with what this fucking layer is doing vs its consumers and internally for its own purposes
<heat>
fuck this layer
<heat>
virtual memory is poopy
<mjg>
for example there is 0 vm in path lookup :>
gildasio has quit [Ping timeout: 258 seconds]
<heat>
use real physical memory
<mjg>
this mach vm is pretty crazy though, i'm sure i already ranted about it
<mjg>
all this vm obj stuff is full of surprise object "repossession" from under you
<heat>
yeah, the cow rant
<mjg>
it's not even the possibly long obj chains, but mostly that any of them can suddenly get reallocated and fuck you
gildasio has joined #osdev
<mjg>
so you have to lock the sucker, check if perhaps it got fucked and only then proceed
<mjg>
if it got fucked, fault handling unwinds itself and tries again
<mjg>
and yes, it does happen in practice
<heat>
wdym?
<clever>
mjg: that reminds me of the relocatable heap in palmos and the rpi's official firmware
<clever>
basically, any object on the relocatable heap, is referenced by an opaque token
<mjg>
heat: say obj1 -> obj2 -> obj3
<clever>
the lock function returns the current physical addr and bumps the refcnt
<mjg>
heat: the vm may conclude obj2 can be whacked from the chain
<clever>
the unlock function decs the refcnt
<clever>
but, any object with 0 references, can be freely moved, to defrag physical memory
<mjg>
heat: as you process the fault and go from obj1 to obj2, you may find obj2 is now marked "dead"
<mjg>
heat: see vm_fault_object
<mjg>
and the loop called by it
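Stripped to its bones, the pattern mjg is describing is "lock the object, check whether it was killed while you weren't looking, otherwise unwind and restart the fault". A toy sketch with invented names, not the vm_fault_object code itself:

/* Toy version of the lock/recheck/retry shape of a shadow-chain fault walk. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct vm_obj {
    pthread_mutex_t lock;
    bool dead;                /* set when the VM decides to whack this object */
    struct vm_obj *backing;   /* next object in the shadow chain */
};

/* walk the chain looking for the page; returns false if an object turned
 * out to be dead under us, in which case the caller restarts the fault */
static bool fault_walk(struct vm_obj *obj)
{
    for (; obj != NULL; obj = obj->backing) {
        pthread_mutex_lock(&obj->lock);
        if (obj->dead) {                     /* repossessed while we were unlocked */
            pthread_mutex_unlock(&obj->lock);
            return false;
        }
        /* ... look for the page here; dropping the lock to do I/O is what
         * opens the window for the object to be killed ... */
        pthread_mutex_unlock(&obj->lock);
    }
    return true;
}

static void handle_fault(struct vm_obj *top)
{
    while (!fault_walk(top))
        ;   /* unwind and try again */
}

int main(void)
{
    struct vm_obj o2 = { PTHREAD_MUTEX_INITIALIZER, false, NULL };
    struct vm_obj o1 = { PTHREAD_MUTEX_INITIALIZER, false, &o2 };
    handle_fault(&o1);
    printf("fault handled\n");
    return 0;
}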
<clever>
and i also recently learned, there is a performance flag, when you unlock an object, you can tell the system that the data itself is no longer of value
<mrvn>
mjg: this is so much easier without threads or shared memory
<clever>
then it will just skip the memmove() next time it relocates the object
<mjg>
mrvn: but is it any fun? :>
<clever>
and only update the addr/size tracking
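A rough sketch of the handle-based relocatable heap clever is describing (PalmOS-style): callers hold an opaque handle, lock it to get the current address, and while the refcount is zero the compactor is free to move the block. The names and layout here are invented for illustration:

/* Handle-based movable allocation: lock pins, unlock unpins, and an
 * unlocked block can be relocated (with the memmove skipped when the
 * caller said the data no longer matters). */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct handle {
    void  *addr;     /* current location; only valid while locked */
    size_t size;
    int    refcnt;   /* 0 => the allocator may relocate this block */
    int    discard;  /* "data no longer of value": skip the memmove */
};

static void *handle_lock(struct handle *h)
{
    h->refcnt++;
    return h->addr;
}

static void handle_unlock(struct handle *h, int data_still_valid)
{
    h->refcnt--;
    h->discard = !data_still_valid;
}

/* called by the compactor when it wants to move an unlocked block */
static int handle_relocate(struct handle *h, void *newaddr)
{
    if (h->refcnt != 0)
        return -1;                    /* pinned: someone holds the address */
    if (!h->discard)
        memmove(newaddr, h->addr, h->size);
    h->addr = newaddr;                /* addr/size tracking updated either way */
    return 0;
}

int main(void)
{
    char a[16], b[16];
    struct handle h = { a, sizeof(a), 0, 0 };

    char *p = handle_lock(&h);
    strcpy(p, "hello");
    handle_unlock(&h, 1);             /* contents still matter */

    handle_relocate(&h, b);           /* compactor moves it to b */
    printf("%s\n", (char *)handle_lock(&h));
    handle_unlock(&h, 1);
    return 0;
}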
<mjg>
fwiw i have an ok enough fix for it for my purposes in the works
<mrvn>
mjg: sure. just different
<mjg>
heat: note how the call to vm_fault_getpages is preceded by a drop of the current obj lock
<mjg>
heat: ... which gives vm license to fuck the obj up
<mrvn>
mjg: one thing that's challenging for me is making mapping, unmapping and moving a page from process A to B fast.
<heat>
mjg, vm_fault_getpages doesn't seem to be a thing?
<bslsk05>
'12 what is RCU 2013 Paul McKenny at IISc' by YuTeh Shen (01:10:25)
<mjg>
watch this motherfucker:
<mjg>
disabled meltdown and whatnot mitigations to better illustrate the degradation
<mjg>
multithreaded dup + close
<mjg>
# cpuset -l 2,4,6 ./dup1_threads -n -t 2
<mjg>
min:1985959 max:3162026 total:5147985
<mjg>
min:1858573 max:3315486 total:5174059
<mjg>
min:1886596 max:3254005 total:5140601
<mjg>
[snip]
<mjg>
and now with one core from another socket
<mjg>
# cpuset -l 2,4,100 ./dup1_threads -n -t 2
<mjg>
min:435523 max:462468 total:897991
<mjg>
min:443116 max:453170 total:896286
<mjg>
min:446126 max:449652 total:895778
<mjg>
so ye, it is FUCKING SLOW
<mjg>
in fact numa is such a problem that serious companies often explicitly order single-socket systems so that they don't have to alter their software to deal with it
<mjg>
[on the other hand numa can be great if you have software which understands it and you need more computing power than a single socket system can deliver]
<mjg>
for a non-microbenchmark, the old yeller from the illumos bugreport builds the linux kernel under freebsd emulation in < 15 minutes
<mjg>
doing the same under illumos which also can run linux binaries and while using the zfs pool, it is over 17 minutes
<mjg>
and that's what the code is doing twice per loop
<heat>
linux is around 75k on tmpfs
<mjg>
so ye, i would say bad :-P
<heat>
would this be a good target for a flamegraph?
<mjg>
yes provided you don't have surprise sleeps in there
<heat>
might have locks
<mjg>
but they would not contend, would they
<heat>
file creation will
<mjg>
against who?
<mjg>
in the single-threaded case
<heat>
no one
<mjg>
which i presume is what you ran
<heat>
no
<heat>
I ran multi-threaded
<heat>
-t 4
<mjg>
oh ok
<mjg>
then try single for starters
<heat>
lets see how fast I can knock out a flamegraph
<mjg>
if you alloc, say, 16 entries * 8 bytes per stacktrace * 4 threads, that's 512 bytes per sample
<mjg>
1000 samples per second * 10 seconds would give 5120000 bytes, or just shy of 5 meg to allocate in total
<mjg>
dump it into a file at the end
<mjg>
just grab IPs as they are, no resolving while sampling
<mjg>
and zero pad so that trace is always 16 entries, then it should be trivial to post process
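mjg's sizing, written out. The record layout (raw IPs, zero-padded to 16 entries, no symbol resolution while sampling) is what he describes above; the rest is just the multiplication:

/* Sample-buffer sizing for the fixed-width stacktrace records. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define TRACE_DEPTH 16
#define NTHREADS    4
#define SAMPLE_HZ   1000
#define RUN_SECONDS 10

struct stack_sample {
    uint64_t ip[TRACE_DEPTH];   /* raw IPs, zero-padded when shallower */
};

int main(void)
{
    size_t per_tick = sizeof(struct stack_sample) * NTHREADS;  /* 512 bytes */
    size_t total    = per_tick * SAMPLE_HZ * RUN_SECONDS;

    printf("per sampling tick: %zu bytes\n", per_tick);
    printf("whole run:         %zu bytes (about %zu MB)\n",
           total, total / 1000000);   /* ~5 MB, dumped to a file at the end */
    return 0;
}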
<heat>
i was thinking about grabbing a sha of the stack trace and dedup those maybe
<mjg>
would be my suggestion
<mjg>
you would be interfering with the workload more than you need
<mjg>
aka probe effect
<heat>
yeah probably
<mjg>
see above, 5 meg is nothing and in 10 seconds you should already get a decent picture
<mjg>
i would say, provided you don't run into bugs in your allocator, go for 60 seconds even
<mjg>
you can probably truncate ips to 4 bytes and still be able to correctly resolve them
<heat>
that's a cool idea given the kernel is in the -2GB
<mrvn>
heat: are you insane? Do you know how slow sha is?
<heat>
yes and yes
archenoth has joined #osdev
<mrvn>
hehe
<mjg>
i would do a request to a webservice
FreeFull has quit []
<mrvn>
heat: I like rsync as example how to detect duplicates with hashes properly. It first does a rolling adler32 checksum. That's pretty weak but easily updated byte by byte in a rolling window. On match it then does the more expensive md4 checksum.
<mrvn>
For stacks why compute the hash of the whole backtrace if the last entry of the backtrace isn't identical?
<mjg>
why compute hashes to begin with
<mrvn>
mjg: if you have 1 million backtraces and want to find duplicates what do you do?
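One way to read mrvn's point as code: key the dedup on the cheapest thing (the leaf IP) and only do the full comparison when that matches, the same cheap-check-then-expensive-check shape as rsync's adler32/md4. A made-up sketch, not a real design:

/* Dedup fixed-width backtraces: bucket on the leaf IP, full memcmp only
 * within a bucket.  Fixed table, no eviction, no NULL checks: toy code. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TRACE_DEPTH 16
#define NBUCKETS    1024

struct trace {
    uint64_t ip[TRACE_DEPTH];   /* zero-padded, ip[0] is the leaf frame */
    uint64_t count;
    struct trace *next;         /* chain of traces sharing a bucket */
};

static struct trace *buckets[NBUCKETS];

static void record(const uint64_t ip[TRACE_DEPTH])
{
    size_t b = ip[0] % NBUCKETS;   /* cheap first-level check: leaf IP only */
    for (struct trace *t = buckets[b]; t; t = t->next) {
        if (memcmp(t->ip, ip, sizeof(t->ip)) == 0) {  /* expensive check on bucket hit */
            t->count++;
            return;
        }
    }
    struct trace *t = calloc(1, sizeof(*t));
    memcpy(t->ip, ip, TRACE_DEPTH * sizeof(uint64_t));
    t->count = 1;
    t->next = buckets[b];
    buckets[b] = t;
}

int main(void)
{
    uint64_t a[TRACE_DEPTH] = { 0x401000, 0x402000 };
    record(a);
    record(a);
    printf("count = %llu\n",
           (unsigned long long)buckets[0x401000 % NBUCKETS]->count);  /* 2 */
    return 0;
}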
<mjg>
albeit now that i asked, i have no idea what dtrace is doing to handle their aggregations
<mjg>
mrvn: i mean for this particular purpose
<mjg>
just filling the space will be enough for the time being
<mjg>
as for a real solution for arbitrarily long runs, i never put any thought into it
<heat>
dragonfly does a circular buffer huh
<mjg>
kind of a cop out, but maybe it's ok
<mjg>
how big is it
<mjg>
oh btw dragonfly's systat -pv is neet
<mjg>
shows some lolo stats *and* sampled IP
<mjg>
neat even
<mjg>
hm he is exporting with a sysctl
<mjg>
well it can be done in a myriad ways
<mjg>
whatever you decide i strongly advise to be as uninvasive as possible when sampling
<mjg>
to that end the idea i presented is imo a decent option