dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
RAMIII has quit [Ping timeout: 240 seconds]
anandn has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
pax_os_ has joined #osdev
anandn has joined #osdev
anandn has quit [Client Quit]
epony has quit [Ping timeout: 240 seconds]
anandn has joined #osdev
sdfgsdfg has joined #osdev
anandn has quit [Client Quit]
dormito has joined #osdev
anandn has joined #osdev
anandn has quit [Client Quit]
FreeFull has quit []
Clockface has quit [Ping timeout: 240 seconds]
heat has joined #osdev
heat has quit [Remote host closed the connection]
heat has joined #osdev
newpy has joined #osdev
<newpy>
I'm not sure if I'm following the cross-compiler instructions correctly
<newpy>
do I download the binutils tar.gz, extract, ./configure & make, then use that to create the compiler?
heat_ has joined #osdev
<Lugar>
ye
<Lugar>
make sure you're building the cross binutils
heat has quit [Ping timeout: 268 seconds]
<newpy>
yes I ran ./configure with --target et al set
<newpy>
which gcc download do I need?
<klange>
Preferably the newest one.
<newpy>
I created a docker container so I don't wreck my main env
<newpy>
it came with gcc 9.3.0
<klange>
That's mostly irrelevant.
<Lugar>
that version is pretty new
<klange>
Barring a handful of times older gccs have been unable to compile newer gccs.
<klange>
We have a couple of different guides for cross-compilers; if you're following the one that instructs you to build a bare elf target, the latest version is always worth grabbing.
<klange>
If you're following one of the sections on adding your own target, the files move around between different versions of gcc so you may need a particular release to match up with the instructions.
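For reference, the bare-elf route boils down to roughly the following; version numbers, the install prefix and the target triplet are placeholders to adjust:
    export PREFIX="$HOME/opt/cross"
    export TARGET=i686-elf
    export PATH="$PREFIX/bin:$PATH"

    # binutils first
    mkdir build-binutils && cd build-binutils
    ../binutils-2.xx/configure --target=$TARGET --prefix="$PREFIX" \
        --with-sysroot --disable-nls --disable-werror
    make && make install

    # then gcc, freestanding, without headers or a libc
    cd .. && mkdir build-gcc && cd build-gcc
    ../gcc-1x.x/configure --target=$TARGET --prefix="$PREFIX" \
        --disable-nls --enable-languages=c,c++ --without-headers
    make all-gcc && make all-target-libgcc
    make install-gcc && make install-target-libgcc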
<heat_>
what error do you get?
<heat_>
what's the actual problem?
heat_ is now known as heat
<geist>
also get a pretty new binutils too. usually the most current
<heat>
and its not 2.9 ;)
<klange>
A perennial issue is people mistakenly downloading very old versions of binutils because of how the directory sort worked
<geist>
yah, exactly
<heat>
its a classic
* kazinsal
blinks and realizes people are still building their own -elf-generic crosscompilers
<heat>
c l a n g
<kazinsal>
I just use the script and then alt tab and come back to an installed toolchain after an FFXIV dungeon
<newpy>
I think it's going ok
<newpy>
just wanted to double-check, make all-gcc is taking a bit
<newpy>
oh, which script is that?
<klange>
gcc is very big, regularly takes a half hour
<geist>
there are many scripts. lots of folks put them together
<geist>
i have one, but didn't want to push my stuff on others
<newpy>
ah yea I think I saw a page of them earlier, but paradox of choice kicked in
<geist>
well, depends on if you want to go through the effort to learn/etc or if you just wanna toolchain now
<newpy>
not sure if this is on-topic, but is there an OS text any of you recommend over others?
<heat>
it's not hard to build a trivial cross compiler
<heat>
nyes
<newpy>
and tar -xf can handle it?
<geist>
yes
<heat>
also nyes
<geist>
if not you can decompress it separately, etc
<heat>
.xz gets you smaller files but its way slower to decompress
<geist>
i wouldn't say way slower, but slower. it's much faster than bzip2 to decompress, for example
<geist>
somewhere in the middle
<Lugar>
unless your pc is from 1994 xz should be fine to decompress
<heat>
and your tar may not support some formats; gzip is the best one for portability
<heat>
negative, i've seen some slow xz decompression on my rpi
<geist>
sure, i didn't say it wasn't slower.
<heat>
anyway , .zst is the best
<heat>
use it
<Lugar>
facts
<heat>
all praise zstd
<geist>
fine. anyway, tar supporting it is just a convenience, you can always decompress and stream
<geist>
xzcat foo.tar.xz | tar xv, for example
<geist>
or xz -dc, etc
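Concretely, the streaming equivalents for the common formats look something like this, and they work even when the local tar doesn't recognise the compression:
    gzip  -dc foo.tar.gz  | tar xv
    bzip2 -dc foo.tar.bz2 | tar xv
    xz    -dc foo.tar.xz  | tar xv
    zstd  -dc foo.tar.zst | tar xv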
<heat>
yes
<geist>
tend to have to do that if using an older distro or some of the BSDs or whatnot
<geist>
their tars dont necessarily know about newer formats
<heat>
or sadly, zstd
<heat>
ubuntu's GNU tar doesn't know what zstd is
<geist>
what a convenient format
<heat>
ubuntu being stuck on old software isn't zstd's fault *shrug*
<geist>
i always triple-base64 it too
<geist>
that gets you maximum convenience
<newpy>
this snark is too high-level for me
<geist>
newpy: in general lots of stuff is distributed .gz .xz and .bz2 nowadays. gz for max compatibility and the others because they compress better
<geist>
best of all worlds. if you have a 90s era machine or a raspberry pi, use gz
<newpy>
I'm just using Ubuntu WSL, testing on Bochs
<geist>
bzip2 was the hotness for most of the 2000s, but i think there's no real compelling reason to use it anymore over newer stuff like xz and zstd
<newpy>
should I be testing on virtualbox or something instead?
<geist>
when just getting started, bochs is fine
<newpy>
ok
<newpy>
I was manually using gcc and ld flags to get it to work
<newpy>
then came across osdev
<geist>
it tends to be a bit simple for the machine it's emulating. qemu is also a good option if you want emulation
<klange>
you should test on everything you can manage to get your hands on, but bochs will be helpful early on
<geist>
eventually you'll graduate to real machines or virtualbox or vmware or whatnot if you want
<Lugar>
for some reason i could never get bochs to work for me. ive always used qemu
<geist>
yah, you'll want to do that pretty quickly, but when just getting started its a distraction
<geist>
yah i favor qemu, but then i also like to do non-x86 stuff, and bochs is an x86 PC emulator only
<geist>
whereas qemu is cross platform
<klange>
bochs's config format and general operation remain a bit tricky to get right
<geist>
agreed
<heat>
newpy, WSL1 or 2?
<newpy>
WSL2
<heat>
ah great
<geist>
yah no problem
<heat>
it's still slower than native but its very usable
<heat>
WSL1 feels like you're running the system off an SD card
<geist>
otoh when you're just getting started, all your compiles and whatnot will be instantaneous so it's not a big deal even if it's an archaic machine or os
<geist>
you have to spend some time typing in enough stuff for the speed to really get to you
<newpy>
yea right now it's just handy to be able to code and test on the same box
<geist>
like heck, early on your makefile can be just cc *.c for the most part and it doesn't matter that you have to rebuild all the time
<newpy>
yea I just had a simple .ld script, playing around with loading from boot
<geist>
cool, sounds like you're on your way!
<heat>
i've just passed through my nic to a vm
<heat>
i feel like this is a solid way to get a few more drivers
<geist>
noice
<newpy>
heat, you're net booting or?
<heat>
no, just regular booting lol
<heat>
i've been theorising the past few days that I could just passthrough a regular PCI device like people do with GPUs, and yeah, it works
<newpy>
ah
<heat>
I was thinking about porting wlan + my wifi adapter's driver to my OS but I'm kind of deciding against that atm
<heat>
the driver would need to be imported from linux and the rest of the stack, yeah, I'd need to get it from somewhere
<heat>
at least I'm not really feeling writing a whole wlan stack
<heat>
probably one of those things where you really need to do a deep dive and know what you're doing
<heat>
and I know 0 about wireless in general
<heat>
other than "it has channels, it has frequencies, it works"
<newpy>
ok got i686-elf-gcc to report --version
<Lugar>
yay :]
<Lugar>
now start coding
<newpy>
yessir
<geist>
heat: which vmm are yo using to pass things through?
<heat>
geist, qemu through virt-manager
<geist>
ah interesting
<heat>
it's ridiculously easy, you just add a PCI Host Device and choose the pci device
<heat>
*and hope everything IOMMU related works*
<heat>
the i915 also has a feature where you can passthrough a virtual intel iGPU to a VM, no unbinding required from the drivers
<heat>
which is cool if you want to try it out inside a VM but the emulation is a bit primitive in some aspects
<heat>
in the sense that wrong accesses you may be doing can actively trigger OOPSes and warnings in the host's dmesg log
<geist>
i wonder what the virt manager does in this case. there has to be some way to instruct linux to relinquish the device so it can bind to it
<geist>
or maybe that's all done via the kvm interface
<heat>
yep, there are a few ways you can do it
<heat>
essentially you force the driver to unbind itself from the pci device and bind vfio-pci to it
<heat>
all using sysfs
<geist>
ah
<heat>
then it's just a simple -device vfio-something-something with a few options and you're done
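The sysfs dance heat describes is roughly the following; the PCI address and vendor/device IDs are placeholders for whatever lspci -nn reports for the device being passed through:
    modprobe vfio-pci
    # detach the device from whatever driver currently owns it
    echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
    # tell vfio-pci to claim devices with this vendor:device ID
    echo 8086 1533 > /sys/bus/pci/drivers/vfio-pci/new_id
    # then hand it to the guest
    qemu-system-x86_64 ... -device vfio-pci,host=03:00.0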
<geist>
might have to give that a try. i have a machine i can totally do that with since it has the extra i210 in it i was hacking on
<geist>
but it also has a linux install
Lugar has quit [Ping timeout: 256 seconds]
<heat>
yeah
<heat>
honestly virt-manager is super cool since it lets you easily do stuff that would otherwise require some tinkering and googling
freakazoid333 has joined #osdev
[itchyjunk] has quit [Ping timeout: 256 seconds]
[itchyjunk] has joined #osdev
nyah has quit [Remote host closed the connection]
sdfgsdfg has quit [Quit: ZzzZ]
<geist>
yeah, i really should spend more time with it
<bslsk05>
'Computer Chronicles - Concurrent CP/M' by Tech Perspectives (00:08:53)
<Jari-->
I am wondering if it shared the CPU among the tasks equally, with prioritizing. The output appears slow.
<Jari-->
Probably because of MS-DOS-oriented CPU usage: the CPU is basically split, not prioritized, among the processes' tasks.
<heat>
old CPUs were slow
<heat>
old OSes were slow
<heat>
a few days ago I was trying out SVR4 and it was sloooooooooooow
<heat>
anything IO at least
<heat>
also it rebuilt the unix kernel like 6 or 7 times while installing stuff
<heat>
for some reason, it does that
<heat>
and I'm honestly scared to find out why it needs to do that
Clockface has joined #osdev
bradd has quit [Remote host closed the connection]
<Jari-->
heat: yeah
bradd has joined #osdev
epony has joined #osdev
Matt|home has quit [Ping timeout: 256 seconds]
<CompanionCube>
heat: maybe it installed kernel bits?
<heat>
a quick google I did told me some similar SCO system relinks stuff for configuration purposes(??)
<heat>
so that's probably what its doing
<heat>
because the kernel's source definitely isn't there
<Jari-->
the memory management for traditional UNIX systems ain't perfect
<CompanionCube>
heat: iirc current openbsd also relinks the kernel regularly
<epony>
Jari--, we're 40 years after that
<Jari-->
epony: I mean it is really complex, some systems today in use are basically flat
<heat>
CompanionCube, yes but isn't that part of its kernel-internal ASLR?
<heat>
i remember something similar to that on the kernel and the libc I think
<heat>
also: linux is also working on that feature for the kernel
<epony>
Jari--, that's not UNIX-like then
<CompanionCube>
heat: yep, that's it
<heat>
an optional ASLR where it re-links things and shuffles them around
<Jari-->
if my heart lets me do it, I will continue on JTMOS... been heart-sick for some time, but quitting smoking pretty much fixed all problems... 5th day without
<CompanionCube>
it relinks the kernel every boot
<heat>
Jari--: damn :/ stay strong
<Jari-->
epony: oh I mean non-standard UNIX clones or even MS-DOS / CP/M clones.. mostly mean MS-DOS in this case
<Jari-->
heat: many people think it is mental, but no way, it is usually the heart... and they tell the docs "I have anxiety" - that's usually the heart. I asked my surgeon, it is so.
<epony>
commerce and adaptations are.. just non-standards territory
<geist>
yeah old unix kernels you usually did that: relink the kernel for your particular hardware config
<geist>
same with non unix kernels, i think you did similar things for various DEC based oses. was common
<heat>
geist, what did they link-in/out?
<geist>
basically you statically config that you have N disk drives, etc
<geist>
makes sense when your kernel is like 20KB
<geist>
dunno if it actually skipped linking drivers in too, but probably
<heat>
because I saw it relink the kernel when I installed a package from one of the floppies
<geist>
certainly it was generally in an era before there were kernel modules
<clever>
in the really old days of rpi, the base address for the gpu firmware was set at link time
<epony>
a kernel tuned to the HW is not that wrong of an idea anyway (especially if it's automated) and includes system randomisations that make it non-static (known faults)
<clever>
and they released something like 20 builds of the firmware
<geist>
and if you didn't have a bus to detect it, then you basically have to hard link it that way, or at least have some sort of config
<heat>
geist, hey the svr4 is actually pretty advanced, it even has ELF shared objects!
<clever>
each a different offset from the end of ram, so the firmware has more or less ram available
<clever>
because they lacked the ability to relink it at runtime
<geist>
heat: sure. but the kernel was probably still a monolith
<clever>
or boottime
<geist>
actually that is a good question: when did kernel loadable drivers come along. i've heard VMS had it back in the 80s, but probably not initially i bet
<geist>
i'm guessing by late 80s some of the advanced unices started doing loadable drivers
<geist>
sunos, etc
<epony>
the modern systems have machine dependent and independent separation, also system interfaces and device specific separation
<epony>
so that "kernel arch" problem set is diminishing
<geist>
yah
<geist>
even fiddling with an old linux 1.0 era on a 386 or 486 you get into the 'kernel cant detect device' problems you get with MS-DOS
<geist>
you want your soundblaster to work? gotta configure it in the linux kernel config or on the command line
<geist>
also i distinctly remember spending lots of time manually configuring netbsd kernels for various sparcstations
<epony>
yes, each system walks the evolutionary path to some modern variant
<geist>
you'd go in and edit the netbsd config file to include just the drivers you need. and some of them were hard coded
<geist>
'there is a pcnet32 at address X'
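Those hard-coded entries looked roughly like the following in a NetBSD-style kernel config file; syntax here is from memory and illustrative rather than exact:
    # pin the device to a known bus location instead of relying on probing
    ne0   at isa? port 0x280 irq 9      # NE2000 ethernet
    le*   at sbus? slot ? offset ?      # Lance ethernet on a sparcstation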
<clever>
that sounds like typical ISA bus problems, there is no real way to auto-detect some devices
<CompanionCube>
geist: apparently sunos 4 has kernel modules at least
<geist>
exactly
<geist>
CompanionCube: yah probably late 80s?
<clever>
device-tree solves the same thing on arm
<epony>
how direct and quick that walk is, depends more on the hardware epoch and the team size (rather than 'design' choices and preferences)
<geist>
yah sunos 4 was 1988+
<geist>
sunos 5 (ie, solaris) was SVR4 based and came along in 1992
<heat>
clever, you use ACPI for that now
<clever>
heat: yep, but can that report ISA cards you added to the system?
<clever>
how would the bios discover the card?
<heat>
>checks the date
<heat>
>2022
<geist>
you can imagine the problem immediately shows up with any hw manufacturer like DEC or Sun or whatnot when they start having lots of devices in the field
<heat>
no ISA cards added.
<clever>
heat: you're avoiding the problem by just not having an isa slot :P
<geist>
and then they need to start getting fancier about device detection
<clever>
but thats not fixing the problem
<epony>
so a modern functional system needs to support some 20-25 years of arches (which still run in production) and that defines the epoch
<heat>
the ISA bus was a bad idea
<heat>
ACPI fixes it
<geist>
clever: honestly i never looked into isapnp, but there *was* a scheme for it
<heat>
those same PNP numbers are still used by ACPI
<geist>
i dont know precisely how it worked, but there was some sort of scheme for isapnp devices to publish their stuff and be autoconfigured
<epony>
a post-modern one supports the last 5-10 years and that's it.. 64bit stuck in a compiler feature sets
<geist>
usually later stuff. soundblasters in the early-to-mid 90s, etc
<clever>
it feels like the only way to do isapnp, is to publish some config beside some magic numbers
<clever>
and then just read the entire addr space looking for magic numbers
<clever>
but what about mmio, where reads have side-effects?
<geist>
yah i never looked into how it works, but i've seen it sort of work. for example if you run the soundblaster PNP utility on some 486 i have it sees the SB but it also sees the 3com network card
<geist>
which is also isapnp
<geist>
so it must have done some sort of bus scan to determine all the pnp things
<clever>
i remember windows 95 having a button to manually trigger a scan, and a warning that it can crash the system
<geist>
yah. early linux (and probably current linux) has some isapnp stuff too
<geist>
'i found your soundblaster at port X via isapnp' kinda stuff
<clever>
i also have an isa serial card in the other room, its just peppered with jumpers
<clever>
so you can manually configure everything
<heat>
does that code have cobwebs? :P
<CompanionCube>
geist: the BSDs seem to mostly cite sunos 4.1.3 as a basis for their modules
<clever>
found that isa serial card
<heat>
i was going to say someone out there is still running modern linux on an i386 but sadly it needs an i486 because of cmpxchg :(
<clever>
2 banks of jumpers to configure the IO base addr, for port1 and port2, as one of com1/com2/com3/com4
<Jari-->
make[1]: *** No rule to make target 'shellmainfunc.o', needed by 'all'. Stop.
<Jari-->
it compiles well though
<clever>
and then a huge 3x10 jumper bank, to set the irq for port1/port2 to one of 3,4,2,5,7,10,11,12,15 lol
<Jari-->
hire some nerd to fix my shell
<heat>
i wonder if dtb or acpi will ever win over the other or if it's just going to stay fragmented as always
<heat>
fwiw windows 10 on arm64 requires acpi
<clever>
heat: i like device-tree a lot more, but ive not really used acpi
<heat>
so your dtb will be made into AML inside the firmware
<heat>
i.e if you use rpi-uefi on a rpi you'll get your dtb in AML form
<heat>
i've never used device tree before but I still like it a lot more :P
<heat>
acpi requires a big import of code that's not mine
<clever>
device-tree is typically read with libfdt, but the format is pretty simple and you could write your own reader
<heat>
libfdt is way way smaller than acpica
<clever>
acpi bytecode is also just crazy
<geist>
yah libfdt i entirely endorse
<geist>
it's simple, doesn't do what you don't tell it to, and compiles nicely small
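For poking at the format from the shell, dtc (the device-tree compiler) converts between source and the flattened blob that libfdt consumes:
    dtc -I dts -O dtb -o board.dtb board.dts    # source -> flattened blob
    dtc -I dtb -O dts board.dtb                 # decompile a blob back to source for inspection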
<heat>
103797 total <-- my acpica import
<CompanionCube>
iirc ARM uses a simpler 'hardware reduced' mode, dunno *how* simpler
<heat>
without headers
<heat>
CompanionCube, that's optional I think
<heat>
well, for one, acpica replaces any sort of chipset-specific drivers you may need
<heat>
which is the only plus I can find for it lol
<Jari-->
heat: geist: CompanionCube: clever: & friends || Personally I think you take a small piece of the cake [kernel] and digest that... this is what I should be doing, day by day, etc.
<heat>
don't eat cake, eat bread()
<Jari-->
not easy to get the whole picture of a complex multitasking microkernel for sure
<heat>
ba dum
<heat>
tss
<geist>
gdt
<Jari-->
lots of testing work awaits, bug fixing, etc.
<Jari-->
I am going to copy and paste the device API / system from the kernel and make it work/compile on top of Linux.
<Jari-->
that would require a ramdisk driver for the simulated operating system
heat has quit [Ping timeout: 240 seconds]
[itchyjunk] has quit [Remote host closed the connection]
newpy has quit [Quit: Leaving]
<Jari-->
Show me your linker source code.
<geist>
hmm?
<Jari-->
Run-time.
<Jari-->
They are actually doing setpixel calls on window drawing in some GUIs.
<Jari-->
Real-time video, where the window drawing is visible at the same time.
zid has quit [Read error: Connection reset by peer]
zid has joined #osdev
<kingoffrance>
the one time i tried isapnp was because of an irq conflict or some such under linux or bsd. ran a dos util (helpfully, the OEM that included the isa card did not include this utility, had to download it from the vendor) and toggled some thing in the BIOS. point: in my brief experience it was easier just to do things manually
<kingoffrance>
plug and pray :)
<kingoffrance>
it's more the OEM's fault maybe, old systems actually did have somewhat good documentation
<kingoffrance>
if the default config with windows 95 didn't need that, you can't blame them for not bothering
<kingoffrance>
you have already gone to "unsupported" territory
<kingoffrance>
same situation can be compared to dos drivers for "integrated" sound. may exist, may not -- if its not the supported OS, good luck :)
sdfgsdfg has joined #osdev
srjek has quit [Ping timeout: 240 seconds]
<kingoffrance>
i believe i still had to change a jumper for "isapnp" mode anyways as well... "convenient" perhaps only if things were already set up that way
<clever>
the only difference i can observe in the compiled code, is each printf call is one opcode shorter, when dealing with the ptr to the format string
<zid>
I've got an answer but it's wrong
<clever>
what exactly is changing there?
<zid>
it's using a smaller encoding because you told it to use a tiny memory model with less dynamic range
Lycurgus has joined #osdev
<clever>
but what exactly was `add x0, x0, :lo12:.LC0` doing, to give a bigger encoding?
<zid>
lo12
<clever>
and what is lo12?
<zid>
low 12 bits only
<clever>
ah
<zid>
presumably if it ended up farther away than 12 bits you'd get a linker error saying the relocation could not be fulfilled
<zid>
'relocation against XBUM_32_12_ADDR_JR could not blah blah' or whatever
<clever>
but that makes even less sense
<clever>
ah, adr vs adrp
<clever>
the tiny side is using `adr x0, .LC0`, i assume that the linker is just going to fill in an addr as an immediate?
<clever>
so it basically compiles down to `mov x0, imm`?
<zid>
I mean, all addrs are immediates or relative offsets to begin with
<zid>
the question is can you get away with using a shorter relative offset encoding or immediate encoding by knowing various bits will be 0
<clever>
or even `add x0, pc, offset`
<zid>
and the -mcmodel option has informed it that it will definitely be close enough
<zid>
to have the high bits be 0
<clever>
but then, the question is, what is `adrp x0, .LC0` doing differently, to give you bits 12 and up?
<zid>
can't you just turn the machine code column on?
<zid>
godbolt broke itself in ff again yesterday and I haven't fixed it yet and I was too lazy to open chrome and paste the link, rip
<clever>
ah, hadnt seen that button before
<zid>
ReferenceError: queueMicrotask is not defined
<zid>
whatever.. that means
<zid>
some node wankery I guess
<zid>
ah no it's normal html
<GeDaMo>
godbolt stopped working for me a while ago but I use an old version of Firefox
<zid>
same
<zid>
I noticed a couple days ago
<clever>
so the default model, loads 0x40_0000 into x0, then adds 0x928, to get the final addr
<GeDaMo>
I'm sick of the upgrade treadmill :P
<zid>
it used to be broken but I had an addon to fix it
<zid>
something to do with regex
<clever>
while the tiny model just directly loads 0x40_0910 into x0
<GeDaMo>
Oh yeah, I had that a while ago too
<zid>
GitHub/GitLab Web Components Polyfill
<clever>
both reach the same goal, but i suspect the tiny method has less reach, so yeah, the linker could fail there
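A quick way to reproduce the difference locally, assuming an aarch64 cross gcc is installed (file names are placeholders):
    aarch64-linux-gnu-gcc -O2 -S -mcmodel=small hello.c -o small.s   # adrp + add :lo12: pair
    aarch64-linux-gnu-gcc -O2 -S -mcmodel=tiny  hello.c -o tiny.s    # single adr, roughly +/-1MB reach
    diff small.s tiny.s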
<zid>
to fix github too
<clever>
this feels like something the optimizer should be doing, but it would require blurring the lines between compiling and linking
<zid>
yea that's always annoyed me personally
<zid>
that it codegens with the rel8 rel32 etc already selected in the instructions
<zid>
then the linker can only do very basic relocations to patch it up
<zid>
I guess for x86 it isn't a huge deal though as you're never going to break 4GB regardless, but it can be a pain yea on things like arm where you get 12 bit offsets etc
<zid>
(and rel8 is always too small)
<clever>
my original question (before i noticed the above thing), was about that x86-64 mode, with 32bit pointers
<zid>
x32?
<clever>
but mcmodel is more about the codegen and linker contract, then pointer side
<zid>
than
<clever>
yeah, thats the name i was forgetting
<clever>
does aarch64 have anything like x32?
<zid>
x32 is rad I keep wanting to reinstall my VM with it
<clever>
i was also thinking, could you modify the asm rules some, so i can have a statement like "put a pointer to X in reg Y, i dont care how many opcodes you need"
<clever>
and let the linker decide later on, what the best option is
<clever>
but the problem there, is that the assembler hard-codes some byte offsets as it turns asm into binary
<zid>
The problem is you'd also have to convey "This this this and this reg are free, this reg might have the same high bits, ..."
<zid>
because the main point of these smaller encodings is that you OR them onto pre-existing high-bits
<clever>
i can only see something like llvm being able to solve this problem
<clever>
where you use IR to state the above
<clever>
and then when translating the IR to binary, you are linking at the same time, and can pick the right model
<clever>
but even then, it would require blurring the lines between the IR->binary step and the linker
<clever>
the only other solution i can think of, is something like the old apple fat binary trick
<clever>
where the compiler/assembler produces 2 versions of every function (and use -ffunction-sections)
<zid>
Just leave it to LTO and let god sort it out later
<zid>
like we do for everything else
<clever>
and the linker picks the shorter one
<clever>
ah, would LTO solve this whole problem as well?
<zid>
should do right? the linker only gets the GIMPLE/LLVMIR/whatever and does all the codegen at link time
<clever>
yeah, i can see that working
terminalpusher has quit [Ping timeout: 256 seconds]
<clever>
zid: for x32 to work, i can see 2 basic things being needed, 1: always use 32bit load/store for managing any pointers in ram (i assume that clears the top 32bits?), 2: ensure the kernel ABI will expect 32bit pointers everywhere, and things like mmap only use the lower 4gig of the addr space
<clever>
does that sound right?
<zid>
you just set sizeof(void *) to 4
<clever>
yep
<clever>
but the kernel also needs to expect that, or structs in ioctl()'s will be all wrong
<zid>
yup, that's the other change, an ABI change for syscalls
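On the x86-64 side all of that already exists behind a compiler flag plus a kernel option; a sketch, assuming an x32-enabled multilib toolchain and a kernel built with CONFIG_X86_X32:
    gcc -mx32 -o hello hello.c
    file hello        # reports an ELF 32-bit executable for the x86-64 machine type
    ./hello           # runs only if the kernel accepts the x32 syscall ABI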
zaquest has quit [Remote host closed the connection]
<clever>
and the kernel must limit mmap to the lower 4gig, or your allocated ram wont fit into your void*
<zid>
It already knows about NUMA domains and stuff
<zid>
not code it doesn't already have
<clever>
i remember the mmap man page having an x86 only flag, to request that memory be in the lower 4gig
<clever>
and i had mentioned it, on the subject of fixing some rpi firmware issues
<clever>
basically, the messages passed to the closed firmware, had a 32bit field for a userland void*
<clever>
so when you get the message back, you can find your own state in userland, and react properly
<clever>
and a 64bit void* just wont fit in that field, and they didnt want to deal with fixing that
<clever>
my proposed solution, was to just ask mmap to give you something in the lower 4gig of userland, so you can still use a 32bit field to hold its pointer
<clever>
but that mmap flag was x86 only
<clever>
RPF instead went with the far more complex (but better?) solution, of moving all of that interfacing with closed-source into kernel space, and exposing it to userland over existing standard apis like kms and v4l
<clever>
for some things like 2d video out, there are 2 solutions
<clever>
fkms provides a standard kms api, wrapped around the closed-source firmware, and the kernel can just use opaque tokens in the 32bit field, instead of pointers, and look them up elsewhere
<clever>
the kms route, puts linux in complete control of the hw, and the firmware is just not involved anymore
<clever>
for h264 and isp tasks, they opted to just move the problem into kernel space, and expose things over v4l
<clever>
and then v4l maintainers deal with the 32bit vs 64bit compatibility problems, which they already did
the_lanetly_052_ has quit [Ping timeout: 256 seconds]
<clever>
so in theory, aarch64 could have an x32 mode, but it would require changing gcc to make void* 4bytes, creating a new syscall abi, and making the kernel support that abi?
<clever>
the apt-file command i think can also do that
<bauen1>
newpy: and in this case it is grub-common
<newpy>
ty
<zid>
oh rad, I found a bizarre link on imgur that sets a cookie that opts you out of the redesign
<zid>
so it works again now
<GeDaMo>
I just wrote a greasemonkey script for imgur to display the image URL from the page header :P
[itchyjunk] has joined #osdev
Lycurgus has quit [Quit: Exeunt]
<newpy>
hmm, I think I followed the barebones tutorial correctly, but when I tried to run the iso in virtualbox it said FATAL: No bootable medium found! System halted.
<zid>
no idea for vbox
<sham1>
Did you create the boot disc image properly
<sham1>
Did you actually mount it
<GeDaMo>
Are you using the right file? :P
<zid>
file it and try mount it
<zid>
might be a decent qa test
<newpy>
I created the iso according to the tutorial
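For reference, the ISO step of the barebones tutorial is essentially the following; file names here are placeholders:
    mkdir -p isodir/boot/grub
    cp mykernel.elf isodir/boot/mykernel.elf
    # isodir/boot/grub/grub.cfg just needs a menuentry with a multiboot line, e.g.
    #   menuentry "myos" { multiboot /boot/mykernel.elf }
    grub-mkrescue -o myos.iso isodir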
<sham1>
And got no errors I presume
<sham1>
Hmm, odd
<Lugar>
maybe its a virtualbox issue. try running it in qemu
<sham1>
The Bare Bones tutorial ought to be pretty watertight
<zid>
If you do do qemu more people can help
<newpy>
installing qemu now
<newpy>
could also be that I didn't make the cross-compiler correctly
<clever>
newpy: even if your kernel was bad, grub should have still booted, and given its own error
<newpy>
ah ok, only other issue might be that I created the VM incorrectly (chose Other/Unknown for type)
<newpy>
(but also tried Linux)
<sham1>
The OS type is just some presets and an icon
<Lugar>
i didnt know that :|
<zid>
for qemu it's just -cdrom blah.iso :p
<sham1>
Now you know, and knowing is half the battle
<zid>
I think for vmware it sets up the device tree differently at least
<zid>
pick win98 and it'll add an ide drive or whatever I assume :p
<sham1>
Clearly a SATA drive
<sham1>
Or fuck it. nVME
<sham1>
Err, NVMe
dude12312414 has quit [Remote host closed the connection]
dude12312414 has joined #osdev
<newpy>
qemu-system-i386: Error loading uncompressed kernel without PVH ELF Note
<Lugar>
huh
<zid>
what does 'file blah.iso' give
<zid>
and if you try to mount it what does that give
<zid>
cus it sounds.. fucked
<Lugar>
yeah
<Lugar>
it sounds like you are loading the iso file with the -kernel parameter
<zid>
ooh yea that's a good shout
<newpy>
I used `qemu-system-i386 -kernel mykernel.elf` per the instructions
<Lugar>
are you using multiboot1 or multiboot2
* zid
checks the barebones page
<zid>
okay so yea it does suggest that, and it suggests it because it's multiboot, so I guess it failed to find the multiboot
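For clarity, the two invocations do different things; a sketch, with names matching the earlier commands:
    qemu-system-i386 -kernel mykernel.elf   # qemu itself parses the multiboot header out of the ELF
    qemu-system-i386 -cdrom myos.iso        # boots the grub-mkrescue image instead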
<zid>
okay so it looks like at the time this tutorials page was made
<zid>
his was a fork with 'improvements'
<zid>
but the normal one has since been updated massively and his hasn't
<newpy>
yea going to try the official tutorial and see if that works
<zid>
send me the elf :p
<zid>
base64 and paste it to a pastebin works
while has joined #osdev
zaquest has joined #osdev
<sham1>
"If this is your first operating system project, you should do a 32-bit kernel first." Heh, interesting advice indeed
<Lugar>
Too bad i listened to that
<Lugar>
i had to rework my whole codebase to switch to x86_64
<zid>
x86 is dead, long live x86 (64)
<sham1>
It's not like creating a loader for a 64-bit kernel is that difficult. Besides, one gets neat benefits like being able to put the kernel in the -2GiB of the 64-bit address space without stupid linker hacks
<zid>
64bit is freeing in terms of not having to give a shit about vm space
<sham1>
Similar benefit also exists for 32-bit kernels
<zid>
and also.. on x86.. x86_64 is just way simpler
<zid>
once you get over the additional hurdle of 'enabling EFER.LM'
<gog>
RIP-relative addressing is bae
<sham1>
And the NX bit
<sham1>
I love the NX bit
<gog>
yes
<gog>
NX is bae
<zid>
I have it enabled but I've never tested it :P
<gog>
i've tested it inadvertently
<zid>
gj
<gog>
had it enabled for a whole PDPTE
<gog>
it worked
<sham1>
Nice
<zid>
enable it on your whole PML4 or riot
<zid>
I mean, isn't that how it's supposed to work though
<zid>
you set it on every layer, and it's W^X that matters
<sham1>
I'd have to check the manuals, but I'd think that the most specific applicable paging structure might win in terms of "is this executable"
<zid>
it's executable if it isn't writeable, is what NX enforces though
<zid>
so *everything* should have that bit set
<sham1>
I thought that you can do W|X
<sham1>
It's just that you shouldn't
<zid>
that's the case where you wouldn't set it
<zid>
but the parent tables would still have it set
<sham1>
Indeed
<zid>
I forget if.. I actually bothered to set the bit on anything but the PML4, thinking about it
<zid>
I think my mmap just does | PT_PRESENT automatically, not PT_NX | PT_PRESENT
<gog>
i thought it inherited the permissions of the next level up because that's how it worked when i had my little accidental test
<zid>
so everything past boot won't be nx enabled
<gog>
every page in the 2GB covered by the PD was NX
<zid>
oh hmm that also sounds reasonable
<zid>
if only there was a manual :( Guess we'll never know
<bslsk05>
github.com: qemu/multiboot.c at b1fd92137e4d485adeec8e9f292f928ff335b76c · qemu/qemu · GitHub
<gog>
i've had trouble with it if PA != VA
<zid>
That's why it doesn't boot in qemu, newpy
<sham1>
Can someone tell me what the purpose of the paddr even is
<zid>
unless those are ints?
<sham1>
Does anything actually use it
<zid>
in which case it does go to 64k, but maybe it isn't aligned
<zid>
but it is aligned, shrug
<gog>
grub uses it
<sham1>
But why
<zid>
it's the.. file offset
<gog>
/shrug
<zid>
pretty handy imo, knowing where to load things from :P
<sham1>
But that's the file offset. I mean the physical segment addresses
<zid>
there's no segmentation in elf
<sham1>
Program header
<zid>
paddr = fileoffset
<zid>
file is loaded into memory, then mapped somewhere
<sham1>
Oh is it
<sham1>
Huh
<zid>
you sorta wanna know where it is in memory before you can map it
<zid>
which means knowing all the offsets into the file
<newpy>
zid, only checks first 8k?
<zid>
could be yea
<zid>
it *might* be checking 64k depending on how big that pointer is
<newpy>
so when I say . = 1M; it's starting way past 8k right?
<zid>
that's VMA
<newpy>
oh ok
<sham1>
Really? The p_addr? I thought that the file offset is p_offset
<zid>
your multiboot is at 0x4000 in the file
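Easy to check from the shell where each section lands in the file; the multiboot spec wants the header within the first 8K of the image:
    objdump -h mykernel.elf     # the "File off" column shows each section's offset in the file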
* geist
waves
<zid>
note that elf is useful for ROMs and shit too
<gog>
you can make it set a p_paddr if you do the AT() attributes on sections
* gog
waves at geist
<newpy>
I'm not too familiar with .ld scripts, am I doing something wrong there?
<zid>
where you might have a weirdly constructed rom
<gog>
yeah i think that's mostly what it's for
<gog>
platforms that care about that kind of thing
<newpy>
I think I'll just use the official barebones tutorial and see if that works
<newpy>
not sure what this zerester guy did
<zid>
newpy: no idea, but you have .text first at 0x1000, then some crap, then some .rodata at 0x4000 with multiboot in it
<sham1>
Yeah. There's p_offset and then p_paddr
<sham1>
And of course p_vaddr which is the actually useful one
<geist>
right, the paddr stuff is usually ignored by most loaders, *except* loaders that are doing more low level embedded stuff or things like grub when not using multiboot
<sham1>
Hmrmrm
<geist>
and unless you go out of your way PA == VA in most binaries
<zid>
A lot of systems give a shit about where you copy the images to the physical memory, it helps not to try to load it to 2GB
<zid>
then map it to 1MB
<zid>
when you have.. 16MB of ram
<geist>
exactly. the canonical example is linking your kernel to run at say 3GB but load at 2MB. in that case you set your paddr segments starting at 2MB and VA at 3GB
<geist>
grub when using ELF (vs multiboot) will honor that
<geist>
as well as qemu when using -kernel, etc
<zid>
newpy: anyway, the linker script for this is weird apparently, you have two .rodata and they're in a slightly strange order
<geist>
but a loader in say linux when loading an app will not pay attention to PA stuff and only use VA
<gog>
physical address _hint_
<zid>
you should probably make a .header section that's always first, or move your multiboot header into .text which is currently first
<zid>
(but linked to be 2nd in memory, weirdly)
<clever>
the other case where ive seen paddr and vaddr differ, is with .data living in rom (or flash), and you are copying it to ram on bootup
<geist>
a simple example is to do something like .text.multiboot in some ASM file
<clever>
the paddr is where the linker places it in the .bin you flash to the hw
<geist>
and then in your linker script call that out first so it goes near the start
<clever>
but the vaddr is what the linker inserts into the executable opcodes
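readelf shows offset, vaddr and paddr side by side, which makes the distinction concrete; the values here are illustrative only:
    readelf -l kernel.elf
    #  Type   Offset    VirtAddr            PhysAddr            ...
    #  LOAD   0x001000  0xffffffff80100000  0x0000000000100000  ...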
<sham1>
I don't like the stuff where one loads the kernel in a physical place and then fixes up the virtual addresses. As far as I am concerned, that's gross
<zid>
yea mine just does .text { .text (multiboot.o); .text (*.o); }
<zid>
I might have written those inside out..
<zid>
I did infact, write them inside out
<zid>
my kernel does .text : { boot.o (.text); * (.text); }
<sham1>
I'll rather just do separate loading thing that grub can place wherever and then I can load the actual kernel wherever
<zid>
"The text section from boot.o, then the text section from everything else"
<gog>
maybe they're like me and insist that anything that isn't code should not live in the same page as things that are code
<bslsk05>
github.com: lk/kernel.ld at master · littlekernel/lk · GitHub
<geist>
where in that case if you put the multiboot header in a .text.boot section it'll get sorted first when getting combined with other text segments
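A minimal sketch of that arrangement; section and symbol names here are illustrative, not lifted from either linked script:
    ENTRY(_start)
    SECTIONS
    {
        . = 1M;
        .text : {
            KEEP(*(.text.boot))    /* multiboot header / early entry code sorts to the front of the file */
            *(.text .text.*)
        }
        .rodata : { *(.rodata .rodata.*) }
        .data   : { *(.data .data.*) }
        .bss    : { *(.bss .bss.*) *(COMMON) }
    }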
<zid>
the ELF itself doesn't care whether the multiboot header is in .text or not, because it'll start executing from the entry point in the header
<bslsk05>
github.com: boros/linker.ld at master · zid/boros · GitHub
<geist>
what i was going to do was explicitly *not* recommend doing the file call out stuff in the linker script
<geist>
but whatever works for ya
<geist>
since zid is giving you a counter example
<zid>
I don't mind for a single .o dep, I know there's a workaround but I learned it afterwards and it does work
biblio has joined #osdev
<zid>
either way you need a hardcode, either on a 'special' section, or a 'special' filename
<zid>
whichever you pick is up to you
<geist>
for a single i guess it's fine, i just find it generally easier to use special sections
<geist>
and iirc the LLVM linker doesn't understand the file thing
<zid>
if you ignore it you need a special link line that puts boot.o first
<geist>
that eventually sealed it when we were porting the linker scripts in zircon over to llvm
<zid>
there's no real way to FULLY avoid it
<geist>
sure there is. see the script i linked before
<zid>
You need the special name,.
<geist>
you simply arrange for the special section to go first
<zid>
Special name (section), special name (filename), special link (order)
<zid>
pick 1
<gog>
wooo rudimentary vm allocations
<geist>
first one
<zid>
:D
<geist>
it's more stable
<zid>
but then how will I troll windows users by naming two files Boot.o and boot.o
<gog>
now i have some refactoring to do because i made a mess :|
<geist>
order of link line you should never rely on. i dont think any linker guarantees it
<zid>
and Boot.o says "Stop using windows"
<geist>
there are any number of switches that would cause the linker to rearrange things
<zid>
yea I'd never pick the third
<zid>
There's just no 4th option where you need 0 special
<geist>
ugh i've got an old gentoo install (last booted in 2019) on this old pentium 3 machine that is a huge mess of not being able to roll forward
<zid>
yea fuck that
<geist>
i guess i should just reinstall, since emerge takes like 5 minutes to just decide that i can't build
<zid>
nuke it, there've been 418 python updates since then
<geist>
trying to debug it is too slow
<geist>
exactly, it's the python stuff
<zid>
There's a reason git shifted from shell scripts to C
* zid
stares at emerge
while has quit [Ping timeout: 256 seconds]
<zid>
"Let's pick the thing with the worst possible dependency management, and write our depenedancy manager in it"
<geist>
it's a bit of a paradox that in my experience gentoo is a great distro for old machines, because you end up with an extremely thin system (openrc, only a handful of daemons, etc)
<geist>
except you have to build everything which takes forever on old machines
<zid>
yea my VM is openrc and no daemons besides sshd and ntp
<zid>
It's getting pretty hard to get xfce working now though ngl
<zid>
have to install bizarre greeters and login managers and shit manually
<clever>
zid: nixos just wrote their own functional language for the package manager, and then guix stripped that half out and used guile scheme to replace it, lol
<geist>
but my other experience is gentoo really requires constant updates or you end up in trouble
<zid>
I've revived an old install a bunch of times
<zid>
but it was never pretty
<geist>
re: the python slow on old machines thing you really feel it with ubuntu or debian too
<geist>
apt-get and whatnot also sits there and churns for minutes thinking
<clever>
geist: yep, ive got a gentoo box i havent updated in 8 years, and i'm dreading if i ever need to touch emerge again, lol
<zid>
yea don't bother after 8 years unless you have a fetish and 128GB of ram and 32 cores
<gog>
is paludis still a thing?
<gog>
i remember a bunch of drama in like 2010 over that
<clever>
zid: 2gig of ram, 2 cores, lol
<zid>
that can't even build software at all anymore, sorry
<clever>
and i'm fearing problems where something like libc updates first, and borks everything else
<zid>
gcc won't even look at a .cpp file for less than 4GB
<geist>
for some reason i still have a place in my heart for this old p3. dual pentium 3 500mhz (katmai) 1GB ram
<geist>
it's still a pretty usable machine, except you *really* feel the slowness of stuff written in python or modern compilers
<geist>
which clearly run an order of magnitude slower than older ones
<geist>
one of the reasons i really keep it around is i think it's the only machine i have left around that has both 3.3 and 5v PCI in it
<geist>
and i have some old 5v PCI scsi cards
<geist>
so if i need to read a scsi disk, this is it
<clever>
i remember my gentoo machine having problems at one point, when just LINKING firefox needed over 3gig of virtual memory
<clever>
i was on a 32bit cpu....
<clever>
swap cant fix that
<geist>
yah i just run this thing headless. not going to bother with a ui
<zid>
discord direct embedded it I didn't realise it was imageshack 2.0
<geist>
Huh you’re using a discord to irc proxy?
<zid>
no
<zid>
I lifted it off discord
<geist>
Ah got it
GeDaMo has quit [Remote host closed the connection]
<sham1>
Oh great. The opponent left again
<clever>
geist: what do you make of the backtrace in the gist i linked above?
<sham1>
How am I supposed to chess when people just leave without resigning
<sham1>
Why am I punished for people leaving by having to wait like a minute
<clever>
sham1: i saw somebody join a room i manage a few days ago, 17 seconds after joining they said hello, 5 seconds after saying hello, in the very same second i replied, they left
<clever>
as-in, they said hello, and then gave up after only 5 seconds
<sham1>
F
<geist>
clever: bunch of c++
<geist>
i dunno
<clever>
a: why is this even segfaulting, b: why is such a delicate task being handled in the master process, where it can cripple the entire session?
<geist>
because i work at google i'm supposed to know/care about this?
<geist>
i can tell you working in the guts of chrome is one of the least interesting things i can think of doing right now
<zid>
I'll chess you as long as you promise not to gloat when you win
<zid>
because you play chess
<clever>
thought you might know where i could file a ticket, i'm not familiar with all of the bug trackers google has
<geist>
okay, actually i think there are lots more uninteresting things
<geist>
oh, there should be some pretty easy to find public trackers for that
<geist>
which i honestly have no idea
<geist>
people do actually look at stuff, i can tell you that