<geist>
NieDzejkob_: possibly you read about delayed allocation
<geist>
which is a fairly standard strategy nowadays
<clever>
I can see how that works on a few FSes I've used: write the data to the journal first (if data journaling is enabled), and/or only keep it in the RAM write-cache
<clever>
and then allocate and write in larger chunks
<clever>
which is where frequent fsync()s could cause more fragmentation
<geist>
right
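As a rough C illustration of the pattern being described (the path and sizes here are made up): syncing after every small write forces the filesystem to allocate each chunk as it arrives, while a single fsync() at the end lets delayed allocation batch the writes into larger extents.

    #include <fcntl.h>
    #include <unistd.h>

    /* Illustrative only: append 4 KiB chunks to a file. */
    static void append_chunks(const char *path, int sync_each_chunk)
    {
        char buf[4096] = {0};
        int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0)
            return;
        for (int i = 0; i < 1024; i++) {
            if (write(fd, buf, sizeof(buf)) < 0)
                break;
            if (sync_each_chunk)
                fsync(fd);   /* forces the FS to allocate each chunk now */
        }
        fsync(fd);           /* one sync at the end lets the FS coalesce */
        close(fd);
    }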
<clever>
zfs recordsize also acts as a per-file block size
<clever>
dynamically adjusted based on what is best for that file
<clever>
and if you modify something smaller than that block, it involves a read/modify/write cycle
<clever>
the default limit is 128KB, I believe
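A small, hedged C sketch of how an application can see that block size: fstat()'s st_blksize is the kernel's preferred-I/O-size hint, and on ZFS it generally tracks the file's record size (that ZFS mapping is an assumption here); writes sized and aligned to it avoid the partial-record read/modify/write path.

    #include <sys/stat.h>

    /* Return the preferred I/O size for an open file, or -1 on error. */
    static long preferred_io_size(int fd)
    {
        struct stat st;
        if (fstat(fd, &st) != 0)
            return -1;
        return (long)st.st_blksize;   /* e.g. up to 128 KiB by default on ZFS */
    }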
ElectronApps has joined #osdev
<clever>
2021-07-28 21:57:06 < patdk-lap> if you want serialized files on the filesystem you don't use a cow based filesystem
<clever>
2021-07-28 21:57:21 < patdk-lap> use xfs/ext4 where it tries very hard to keep file-level fragmentation from happening
<clever>
2021-07-28 21:58:19 < patdk-lap> where xfs/ext4 is the other way around, they speed up read access at the expense of writes
tacco has quit []
jeramiah has quit [Ping timeout: 252 seconds]
sts-q has quit [Ping timeout: 265 seconds]
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
jeramiah has joined #osdev
ElectronApps has quit [Ping timeout: 240 seconds]
ElectronApps has joined #osdev
srjek has quit [Ping timeout: 252 seconds]
Matt|home has joined #osdev
nyah has quit [Ping timeout: 265 seconds]
bradd has joined #osdev
trufas has quit [Ping timeout: 265 seconds]
trufas has joined #osdev
mahmutov_ has quit [Ping timeout: 258 seconds]
iorem has joined #osdev
Terlisimo1 has quit [Quit: Connection reset by beer]
<Mutabah>
`undefined reference to `inportb'` - Here's your problem
<Mutabah>
:D
<klange>
yeah I've got a lot of stuff floating around in some drivers that is x86-specific and should either not be there or should be #ifdefed away
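The usual shape of that #ifdef, sketched with a hypothetical inportb wrapper (not klange's actual header; the non-x86 fallback is a placeholder):

    #include <stdint.h>

    /* Port I/O only exists on x86; other architectures get a stub here
       (a real driver would use MMIO or simply not be built at all). */
    static inline uint8_t inportb(uint16_t port)
    {
    #if defined(__x86_64__) || defined(__i386__)
        uint8_t value;
        asm volatile("inb %1, %0" : "=a"(value) : "Nd"(port));
        return value;
    #else
        (void)port;
        return 0;
    #endif
    }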
<klange>
I'm mostly just poking around at gcc/binutils right now, trying to do what I did for x86-64 and get things set up correctly and then I'll work on the actual porting once I can build appropriate binaries
<klange>
I think I missed something for slibgcc...
GeDaMo has joined #osdev
iorem has joined #osdev
<klange>
> /home/klange/Projects/workspace/toaru-aarch64/util/build/gcc/./gcc/nm: 106: exec: -pg: not found
<klange>
something's definitely borked, I'll come back to this later...
<j`ey>
klange: this is why I was fighting gcc build a week or two ago :P
<klange>
ah there we go, failing on bad reference to memset and memcpy because the libc.so I built a few hours ago doesn't have those (only had x86-64 assembly versions ;) )
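Generic C fallbacks for those two symbols are enough to unblock a new port until tuned assembly versions exist; a minimal sketch (not klange's libc):

    #include <stddef.h>

    void *memset(void *dst, int c, size_t n)
    {
        unsigned char *d = dst;
        while (n--)
            *d++ = (unsigned char)c;
        return dst;
    }

    void *memcpy(void *restrict dst, const void *restrict src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;
        while (n--)
            *d++ = *s++;
        return dst;
    }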
<klange>
I think I've got the binutils and gcc configs right now
<klange>
> /home/klange/Projects/workspace/toaru-aarch64/util/local/lib/gcc/aarch64-unknown-toaru/10.3.0/../../../../aarch64-unknown-toaru/bin/ld: /home/klange/Projects/workspace/toaru-aarch64/util/local/lib/gcc/aarch64-unknown-toaru/10.3.0/crtbeginS.o: relocation R_AARCH64_ADR_PREL_PG_HI21 against symbol `__dso_handle' which may bind externally can not be used when making a shared object; recompile with -fPIC
iorem has quit [Ping timeout: 258 seconds]
<klange>
oh that was made suspiciously long ago...
ElectronApps has quit [Read error: Connection reset by peer]
ElectronApps has joined #osdev
Burgundy has joined #osdev
sortie has joined #osdev
dennis95 has joined #osdev
dormito has quit [Ping timeout: 268 seconds]
Bonstra has quit [Ping timeout: 255 seconds]
<klange>
I seem to have a bootstrap problem wherein crtbegin is built by libgcc, but libgcc needs a libc unless you configure it specially... ugh, how did this work for the x86-64 build
<klange>
oh maybe it's because this is also trying to build an ilp32 target, go away, I don't want you
<klange>
Maybe I'll look at this again later, when I actually have something to test any of it on anyway...
<sortie>
libgcc, at least statically, only needs the libc headers
<sortie>
I'm unclear if making libgcc_s.so (or what it's called) requires -lc
<klange>
Either it does, or specifically this ilp32 build of it does - given my "build gcc, build libc, build libgcc" script _seems_ to work on x86-64 I think it may actually be the ilp32 build?
<klange>
I don't know why it's explicitly looking for -lc, I can probably just try to multilib some garbage for it, but the better solution would be to tell it to stop doing that in the first place, I don't think I want an ilp32 multilib build? Is this something that's going to be problematic if I don't have it?
<bslsk05>
wiki.osdev.org: Hosted GCC Cross-Compiler - OSDev Wiki
<sortie>
If you follow this procedure, I'd be interested in the config.log for libgcc during all-target-libgcc
<sortie>
Whether it goes “wooah this compiler has DYNAMIC LINKING but I can't make executables”
<klange>
I can dig up some old logs from when I got it working on x86-64, which seems to have been reproducible across several re-runs on different machines.
<klange>
Most recently the Ubuntu install that is the secondary OS on my ThinkPad :)
<sortie>
I disabled libgcc multilib in my gcc port, although I'm kinda tempted to restore it, so gcc -m32 and gcc -m64 become a cheaper cross-compiler than building the full thing
<klange>
I'm not even sure where it's getting enabled in this case, possibly because I copy-pasted some BSD's aarch64 configs for binutils and it figured it out from what the ld can do?
<klange>
And/or it's just the default for aarch64 and I need yet another option somewhere in the config scripts to turn it off.
<sortie>
The multilib is explicitly configured in the gcc config
<sortie>
Well, maybe not explicitly; it may have implicit defaults, but it's in the gcc config, I remember changing it. gcc 5 tho
<klange>
There is a rather lengthy default set of options for aarch64-*-* or whatever, compared to x86.
<klange>
I guess they assume anything targeting it is modern and robust and wants all of these options by default, so it saves space in the file to default them on.
<klange>
Anyway, I have _a_ gcc that seems to work enough that I can finagle it into producing things, so maybe I can manage to get a kernel binary built - do we have a viable aarch64 barebones or something similar yet?
dormito has joined #osdev
<klange>
maybe I should just turn off all the shared libgcc stuff for this, I only even bothered figuring it out on x86-64 because it allowed libstdc++ to be built shared as well, which was helpful for reducing the distributable size of C++ stuff even though I don't personally do any C++ stuff.
<sortie>
Yeah libgcc is so small it's fine to do it static
<sortie>
Although I would be very interested in the right way of bootstrapping it shared since that has some pretty wide implications potentially
<klange>
I should... run my x86-64 toolchain build script again and see if this is actually correct...
<klange>
The longest part of this is that I pull gcc and binutils from git.
<klange>
And even if I try to reduce clone depth it is a _lot_ of objects...
<klange>
When did I build this docker container, was it before or after I removed the '--with-newlib' hack...
<klange>
It's entirely possible when I installed this on my ThinkPad I ran the script, it failed somewhere and I didn't really notice, and then I ran it again and later steps having been completed made earlier steps pass...
<klange>
We'll find out in a few hours while this takes forever to pull.
ids1024 has quit [Ping timeout: 240 seconds]
paulusASol has quit [Quit: Bridge terminating on SIGTERM]
hgoel[m] has quit [Quit: Bridge terminating on SIGTERM]
happy-dude has quit [Quit: Bridge terminating on SIGTERM]
paulusASol has joined #osdev
ids1024 has joined #osdev
happy-dude has joined #osdev
hgoel[m] has joined #osdev
<klange>
ahhhhhhhh, lol
<klange>
Okay, so libgcc_s definitely wants -lc, and libc.so definitely wants crtbeginS.o/crtendS.o, but install-target-libgcc doesn't actually care that libgcc hasn't been built and will happily install the crt*'s that _were_ built
<klange>
and then we can install libc.so... and go back and build libgcc_s for realsies...
<klange>
then I build an empty libm, then I go back and build libstdc++
<klange>
Sounds like I might want to gate it behind %{!shared:%{!symbolic:-lc}} or something similar
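That spec fragment is typically wired in through a gcc target header; a hedged sketch of what that looks like (the exact header and any surrounding toaru-specific spec content are assumptions):

    /* In a gcc target config header (gcc/config/<target>.h style):
       only add -lc when linking a normal executable, not for -shared
       or -symbolic links, which is what breaks the libgcc_s bootstrap. */
    #undef LIB_SPEC
    #define LIB_SPEC "%{!shared:%{!symbolic:-lc}}"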
ElectronApps has quit [Remote host closed the connection]
ElectronApps has joined #osdev
pony has quit [Ping timeout: 256 seconds]
pony has joined #osdev
Bonstra has joined #osdev
<klange>
That does seem to have helped in the aarch64 build, but I think I still need a LINK_SPEC flag to select the ABI so the linker produces the right final output when building the multilib libgcc, or, as said earlier, turn that off
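A hedged sketch of the kind of LINK_SPEC change meant here, using the stock aarch64elf/aarch64elf32 linker emulations (whether those are the right emulations for this target is an assumption):

    /* Same style of gcc target header: pick the ld emulation from -mabi so
       multilib ilp32 objects get linked with a matching output format. */
    #undef LINK_SPEC
    #define LINK_SPEC \
      "%{mabi=ilp32:-m aarch64elf32} %{!mabi=ilp32:-m aarch64elf} %{shared:-shared}"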
Bonstra has quit [Quit: Pouf c'est tout !]
iorem has joined #osdev
isaacwoods has joined #osdev
Bonstra has joined #osdev
<klange>
I'll revisit this on a rainy day, let's go back to frontend stuff... toast notifications and widget stuff...
ahalaney has joined #osdev
nyah has joined #osdev
srjek has joined #osdev
srjek has quit [Ping timeout: 256 seconds]
nur has quit [Quit: Leaving]
j00ru has quit [Ping timeout: 258 seconds]
j00ru has joined #osdev
ElectronApps has quit [Read error: Connection reset by peer]
jeramiah has quit [Ping timeout: 265 seconds]
nur has joined #osdev
<vin>
I was thinking of concurrent approaches to reading variable-sized files into a buffer in memory. If I divide the buffer evenly amongst the available cores (e.g. 32 GB by 32 threads), then each thread can read 1 GB from the file, but if the file is smaller, cores are wasted, and if the file is larger and not a multiple of the buffer size, cores are wasted again.
<vin>
Another approach would be to divide the file evenly amongst the threads, but this can leave arbitrary chunks of data that haven't been read yet.
<zid>
or just.. every thread mmaps the file
<zid>
what do cores have to do with reading files into memory anyhow, do you have storage that's faster than your cpu? O_O
<zid>
If you have any 50GB/s ssds lmk
<vin>
zid: I am striving to make use of all the cpu resources when possible.
<vin>
And yes there are persistent devices that give you 40GB/s now
<zid>
how are you triggering this? knocking cpus out of their idle loops if they're in one? I can't see how it'd be beneficial to resched cpus that were actually doing something
<zid>
I only have two memory ports so using more than 2 cpus to do this would do nothing, probably matters a lot more on multisocket systems
<zid>
(as well as 2 cpus being enough to saturate my memory bw anyway)
<gog>
if the underlying storage device has bus mastering all you do is schedule the transfer into the buffer
<gog>
if a CPU is just spinning on I/O it's not really doing useful work
<vin>
The CPUs are idle - I have a thread pool initiated to do this very task. Also, right now I am on a single-socket machine with 2 iMCs, but from my previous experiments I have been able to get 30 GB/s by utilizing all 32 threads to read data from the device.
<zid>
I can get 30GB/s with one cpu, is this one of those weedy many core xeons but they're all 1GHz
<vin>
zid: 30 GB/s from your persistent device?
<zid>
if the device supported that, sure
<zid>
I can memcpy to vram and stuff at that speed
Brnocrist has quit [Ping timeout: 255 seconds]
<vin>
I have intel Optane with Xeon 5218.
<zid>
ooh optane, nice
<zid>
you'll need the threads to hit the million iops those can handle, but not for raw throughput
<vin>
Wow but how did you get 30 GB/s with a single cpu though?
<zid>
because memory is fast?
<zid>
specs on a random 2011 xeon are 80GB/s memory bw
<zid>
rep movsb can do 30GB/s
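For concreteness, rep movsb is the x86 string-copy instruction; driven from GNU C it is just a few lines of inline asm (a generic sketch, not tied to anyone's codebase):

    #include <stddef.h>

    /* Copy n bytes with rep movsb; the constraints pin dst/src/n to
       RDI/RSI/RCX, which is what the instruction expects. */
    static inline void rep_movsb_copy(void *dst, const void *src, size_t n)
    {
        asm volatile("rep movsb"
                     : "+D"(dst), "+S"(src), "+c"(n)
                     : : "memory");
    }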
<vin>
shouldn't your ssd also be equally fast? I meant I got 30 GB/s copying data from persistent store to DRAM
<zid>
I said "If I had a device that could do that, yes"
<zid>
only thing I have that'd take those speeds externally are gpu
<zid>
you're just filling buffers either way though
<zid>
weird, ark doesn't have a memory bw figure for the 5218, it's bloody.. six channel though? dang
<vin>
Okay... so yes I was looking for a technique that won't waste cores irrespective of the file or buffer size.
<vin>
Yes 6 channels
<zid>
It's not the amd trick is it, multiple dies?
nismbu has quit [Ping timeout: 265 seconds]
<vin>
No it's single
<zid>
nope, just an absolutely enormous single die, heh
<zid>
amd's trick is to claim their cpus are "8 channel" but it's 4 cpus and some glue
<zid>
so you can only get 2 channel speeds
<sham1>
Hi all!
<vin>
I guess for now I will just assume the files are equal to or a multiple of the buffer size -- that way it's most efficient concurrently.
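One generic way to decouple the split from both the core count and the file size is to cut the file into fixed-size chunks and let the thread pool pull chunk indices from a shared counter, so any number of threads stays busy until the chunks run out; a POSIX sketch under those assumptions (chunk size, thread cap, and names are illustrative, and error handling is omitted):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <unistd.h>

    #define CHUNK (1UL << 20)      /* 1 MiB per unit of work; tune to the device */

    struct job {
        int fd;                    /* file opened by the caller */
        char *buf;                 /* destination buffer, at least file_size bytes */
        off_t file_size;
        atomic_ulong next;         /* index of the next unclaimed chunk */
    };

    static void *reader(void *arg)
    {
        struct job *j = arg;
        for (;;) {
            unsigned long i = atomic_fetch_add(&j->next, 1);
            off_t off = (off_t)i * CHUNK;
            if (off >= j->file_size)
                break;
            size_t len = (size_t)(j->file_size - off < (off_t)CHUNK
                                  ? j->file_size - off : (off_t)CHUNK);
            /* pread() takes an explicit offset, so threads never race on
               a shared file position. */
            (void)pread(j->fd, j->buf + off, len, off);
        }
        return NULL;
    }

    /* Spawning more threads than chunks is harmless: extras exit immediately. */
    static void read_file_parallel(struct job *j, int nthreads)
    {
        pthread_t tid[64];
        if (nthreads > 64)
            nthreads = 64;
        for (int t = 0; t < nthreads; t++)
            pthread_create(&tid[t], NULL, reader, j);
        for (int t = 0; t < nthreads; t++)
            pthread_join(tid[t], NULL);
    }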
<jimbzy>
I thought you might get a kick out of that ;)
<gog>
lol
<gog>
needs an amp and a subwoofer
<jimbzy>
Hah!
<jimbzy>
I could do that.
<zid>
no spinning rims?
<jimbzy>
zid, That's what my wife said, too.
<zid>
Not trying anywhere near as hard
<jimbzy>
Then she said, "Don't you freaking dare..."
iorem has quit [Quit: Connection closed]
<GeDaMo>
No Mr Fusion? :P
nismbu has joined #osdev
regreg has joined #osdev
Brnocrist has joined #osdev
tacco has joined #osdev
dennis95 has quit [Quit: Leaving]
yomon has joined #osdev
Burgundy has quit [Ping timeout: 268 seconds]
mahmutov has joined #osdev
mahmutov has quit [Ping timeout: 256 seconds]
maurer has quit [Quit: WeeChat 1.5]
mahmutov has joined #osdev
Bonstra has quit [Quit: Pouf c'est tout !]
Bonstra has joined #osdev
dormito has quit [Quit: WeeChat 3.1]
dormito has joined #osdev
dutch has quit [Quit: WeeChat 3.0.1]
GeDaMo has quit [Quit: Leaving.]
dutch has joined #osdev
srjek has joined #osdev
devcpu has joined #osdev
solar_sea_ has quit [Ping timeout: 265 seconds]
dormito has quit [Ping timeout: 252 seconds]
gog has quit [Ping timeout: 240 seconds]
immibis has joined #osdev
tacco has quit []
mahmutov has quit [Remote host closed the connection]
mahmutov has joined #osdev
PapaFrog has quit [Ping timeout: 256 seconds]
<NieDzejkob_>
argh. what would it take to control the backlight brightness on a laptop? can I count on (or hope for) something in the ACPI tables, or do I need to write a chipset driver?
<zid>
the acpi bytecode crap can probably handle it for you
<zid>
you might need to read the windows versions and translate though
<zid>
often you end up with WINDOWS { do fancy reads and writes to i2c bus } LINUX { nop }
<NieDzejkob_>
ugh wtf, I haven't looked at ACPI closely enough yet to know that they can ship per-OS implementations. Why would the host OS ever matter?!
<zid>
how they've configured the devices, one assumes, I've not looked deeply into it either
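For reference, the ACPI-bytecode route usually means the video output device's _BCL (list of supported brightness levels) and _BCM (set level) methods; with an ACPICA-based kernel, evaluating _BCM looks roughly like this (finding the right handle and handling the _OSI/per-OS quirks are left out, and using ACPICA at all is an assumption):

    #include "acpi.h"   /* ACPICA */

    /* Set the backlight by evaluating _BCM on the video output device;
       'level' should be one of the values _BCL reported. */
    static ACPI_STATUS set_backlight(ACPI_HANDLE video_output, UINT32 level)
    {
        ACPI_OBJECT arg;
        ACPI_OBJECT_LIST args;

        arg.Type = ACPI_TYPE_INTEGER;
        arg.Integer.Value = level;
        args.Count = 1;
        args.Pointer = &arg;

        return AcpiEvaluateObject(video_output, "_BCM", &args, NULL);
    }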
regreg has quit [Ping timeout: 252 seconds]
grange_c has quit [Ping timeout: 255 seconds]
jeramiah has joined #osdev
yomon has quit [Remote host closed the connection]
Burgundy has joined #osdev
grange_c has joined #osdev
ahalaney has quit [Remote host closed the connection]
Burgundy has quit [Ping timeout: 256 seconds]
mhall has quit [Quit: Connection closed for inactivity]