<immibis>
now that you mention it I am sure the 40gbps infiniband NICs in my desktop computer do not have enough PCIe speed for that. Not that it matters because the only infiniband communication there is to do in my house is those nics between each other
<immibis>
at some point i thought it would be cool to learn about infiniband, never did...
dutch has joined #osdev
<zid>
ironically better off with an amd machine than an intel one to actually have those pci-e lanes available
<geist>
ooh making progress. allocated a MSI, got exactly one interrupt, so i think that's progress
<kazinsal>
nice! I should bang out MSI support this weekend
<kazinsal>
(I say, having previously said "I should bang out x86-64 support this weekend" and "I should bang out user mode this weekend" etc etc)
<geist>
not sure why i'm not getting a second irq though, but i dont see any particular special support needed for MSI vs legacy IRQ
<geist>
the manual tends to clump MSI as being virtually identical to legacy mode
<kazinsal>
yeah, that's odd. I was under the impression that MSI had no real EOI mechanism other than whatever the device's own internal EOI is
<geist>
yah
<kazinsal>
probably can't hurt to blat an EOI to your local APIC anyways
skipwich has quit [Quit: DISCONNECT]
skipwich has joined #osdev
skipwich has quit [Client Quit]
CryptoDavid has quit [Quit: Connection closed for inactivity]
<geist>
yah that's what i'm wondering
<geist>
tracing through the qemu code now
<kazinsal>
been thinking about the latency costs of legacy PIC routing vs. MSI in a virtual machine and it must be a significant amount of overhead per interrupt
dude12312414 has joined #osdev
<geist>
yah thats it. blatting an EOI did it
<geist>
hrm.
<geist>
i vaguely remember some talk about auto eoi, so maybe there's some feature there
<gog>
pretty sure the host controller of whatever device is configured for just writes to an IRR on the LAPIC
<gog>
so it makes sense that you'd EOI to the LAPIC too
<kazinsal>
for MSI the hypervisor really just needs to write the MSI data to the right address and the LAPIC emulation will take over (which may be virtualized by the host CPU? not sure)
<zid>
there's an auto eoi in the lapic isn't there
<kazinsal>
for PIC emulation it must be spending hundreds of cycles per interrupt just fiddling with bits inside the emulated PIC
<geist>
yah also with x2apic you can use an MSR to EOI it (if needed) which should be at least simpler for a VM to interpret
<geist>
and/or fully emulate
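(for reference, a minimal sketch of the two EOI paths being talked about here, assuming the usual xAPIC defaults -- MMIO base 0xFEE00000 with the EOI register at offset 0xB0 -- and the x2APIC EOI MSR 0x80B; the helper names are made up for the example:)

    #include <stdint.h>

    static inline void wrmsr(uint32_t msr, uint64_t val) {
        __asm__ volatile("wrmsr" :: "c"(msr), "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
    }

    /* xAPIC: a write of any value to the EOI register acks the in-service interrupt */
    static void lapic_eoi_mmio(void) {
        *(volatile uint32_t *)(uintptr_t)(0xFEE00000UL + 0xB0) = 0;
    }

    /* x2APIC: same ack, but as an MSR write, which is simpler for a hypervisor
     * to trap or emulate than an MMIO access */
    static void lapic_eoi_x2apic(void) {
        wrmsr(0x80B, 0);   /* IA32_X2APIC_EOI */
    }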
<kazinsal>
probably ends up being a lighter vmexit even to do the classic LAPIC stuff than the PIC stuff
<geist>
reading the qemu code the MSI quite literally ends up packaging up an interrupt and doing a memory bus write over to the lapic code, which can detect that it's an MSI
<geist>
so it pretty much emulates more or less the Real Thing
<gog>
nice
<kazinsal>
yeah, that's got to be pretty simple and quick then
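(in other words the device really just gets handed an address/data pair to write: per the Intel SDM layout the address selects the LAPIC range and destination, and the data carries the vector. A hedged sketch of the encoding -- fixed delivery mode, edge triggered, nothing OS-specific:)

    #include <stdint.h>

    /* Build the address/data pair a PCI MSI capability gets programmed with.
     * The destination APIC id goes in bits 19:12 of the address, and the
     * vector is the low byte of the data. */
    static void msi_compose(uint8_t dest_apic_id, uint8_t vector,
                            uint64_t *addr, uint32_t *data) {
        *addr = 0xFEE00000ULL | ((uint64_t)dest_apic_id << 12);
        *data = vector;   /* delivery mode = fixed, edge triggered */
    }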
<geist>
but, yeah i vaguely remember some auto eoi logic somewhere
<geist>
i actually haven't really written a real lapic driver here, so trying to avoid doing so
<kazinsal>
likewise
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
<kazinsal>
anything I need to be quick is going to be PCIe, so will support MSI and MSI-X
<kazinsal>
anything I don't need to be quick can sit on the PIC and take a few extra microseconds per second of my CPU time
<geist>
hmm, yeah i dont see any reference to autoeoi, so not sure where i saw that
<geist>
looks like you gotta EOI the lapic, which of course makes total sense. I'm not sure how it'd work otherwise
<geist>
also it being edge triggered means you should EOI first before running the handler, since the act of running the handler will clear the state and you might lose an edge
<geist>
gotta finally start adding that infrastructure too
<geist>
though now that i think about it, it does beg the question: why dont you have to EOI the lapic if you're using a PIC based interrupt? EOIing the pic is sufficient
<geist>
there must be some behind the scenes mechanism where the emulated PIC tells the ioapic in passthru mode to ack the lapic
<kazinsal>
the PIC emulation probably does it yeah
<geist>
kinda makes sense. the intel manual talks about the APIC busses and there is an EOI message out on it
<gog>
i'm guessing the PIC holds the INTA line if the IMR isn't set for that vector
<gog>
if it is it might pass it through?
<geist>
i think it's because the ioapic is sitting there pretending to be a PIC
<gog>
OR the IMR has to be ffh
<gog>
that too
<geist>
and it knows how to tell lapic what to do, since they both speak apic
<geist>
but since MSI bypasses all that machinery you gotta go right to the source to tell it EOI
<clever>
the source being things like the pci-e card? where it will MSI again to signal the condition being cleared?
<geist>
well, source is a bad use there, source in this case is the lapic, which isn't really a source
<clever>
ah, more like the irq controller that manages that irq#
CaCode has quit [Ping timeout: 268 seconds]
<geist>
clever: in this case the interrupts are edge triggered, so the e1000e will set an interrupt when the ICR (interrupt cause register) goes from 0 to !0
<geist>
and in that case it fires an edge triggered MSI and moves on. reading the ICR clears the bits to 0, and thus arms it for the next transition
<clever>
and you want to clear the condition in the lapic, then handle things, and if a new event occurs between you handling and returning, that sets a new condition flag in the lapic
<clever>
makes sense
<geist>
yah the lapic is latching the edge so you have to EOI it so that it knows to move on to the next thing
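(putting those pieces together, a rough sketch of that ordering for an edge-triggered e1000e MSI handler -- assuming the Interrupt Cause Read register at its usual 0x00C0 offset; e1000_regs and lapic_eoi() are hypothetical helpers:)

    #include <stdint.h>

    #define E1000_ICR 0x00C0   /* Interrupt Cause Read register, read-to-clear */

    extern volatile uint8_t *e1000_regs;   /* hypothetical mapped BAR0 */
    extern void lapic_eoi(void);           /* hypothetical: ack the local APIC */

    void e1000e_msi_handler(void) {
        /* EOI first: the MSI is edge triggered, so ack the LAPIC before touching
         * the device, otherwise an edge raised while we're in here could be lost */
        lapic_eoi();

        /* reading ICR clears it back to 0, re-arming the 0 -> !0 transition
         * that generates the next MSI */
        uint32_t causes = *(volatile uint32_t *)(e1000_regs + E1000_ICR);

        if (causes) {
            /* dispatch on the cause bits (RX, TX, link change, ...) */
        }
    }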
<geist>
kazinsal: was just looking in the zircon code and noticed the paravirtualized EOI thing. forgot about that
gog` has joined #osdev
<geist>
it's a feature of the KVM paravirt interface. basically you do an atomic set of a shared memory byte between you and the hypervisor so you can EOI without actually taking a vmexit
gog has quit [Ping timeout: 256 seconds]
<clever>
and how does the host then notice that and do the real EOI?
<clever>
or is the lapic emulated, and the host just queries that atomic the next time an edge occurs?
<geist>
i think that's it. would have to read it, i didn't write that code but i remember it existing
wand has joined #osdev
<clever>
i can see how that atomic var, could be the raw "irq triggered" state for the emulated lapic
<geist>
it's one of these cases where it's a sloppy, lazy interface between the guest and the host but it avoids extra work
<clever>
if the value is !0 when control hits the host, jump to the interrupt handler
<geist>
possible it won't advertise it if the host has a fully emulated lapic (apic-v, etc)?
<clever>
set it to !0 upon any event
<geist>
since that might do a better job
<clever>
and the guest sets it back to 0
<clever>
then you just need to actually interrupt the guest
<geist>
it seems the mechanism is for the guest to set it back to 0, yes
<geist>
so it's possible the value in it is the current handling IRQ or whatnot
<geist>
our code doesn't seem to care, it just atomic sets it back to 0
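(that matches the KVM side: the guest registers one byte of its memory via MSR_KVM_PV_EOI_EN (0x4b564d04), the host sets bit 0 in it when an injected interrupt can be EOI'd lazily, and the guest's EOI path atomically clears the bit and skips the trapping APIC write if it was set. A guest-side sketch, with the wrmsr/virt_to_phys/lapic_eoi_mmio helpers assumed rather than shown:)

    #include <stdint.h>

    #define MSR_KVM_PV_EOI_EN 0x4b564d04   /* KVM paravirt EOI enable MSR */

    extern void wrmsr(uint32_t msr, uint64_t val);
    extern uint64_t virt_to_phys(volatile void *p);
    extern void lapic_eoi_mmio(void);       /* the normal (trapping) EOI path */

    /* one byte shared with the hypervisor, per cpu; bit 0 = "EOI pending" */
    static volatile uint8_t pv_eoi_flag;

    void pv_eoi_init(void) {
        /* hand the host the physical address of the flag, low bit = enable */
        wrmsr(MSR_KVM_PV_EOI_EN, virt_to_phys(&pv_eoi_flag) | 1);
    }

    void lapic_eoi(void) {
        /* if the host marked this interrupt as lazily ack-able, clearing the
         * bit is the whole EOI -- no vmexit needed */
        if (__atomic_fetch_and(&pv_eoi_flag, (uint8_t)~1u, __ATOMIC_SEQ_CST) & 1)
            return;
        lapic_eoi_mmio();
    }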
<clever>
i'm also wondering, is this kvm (the linux /dev/kvm api) or qemu's kvm implementation?
<geist>
probably a bit of both, but i bet most of this is handled in the kernel's kvm
<geist>
the split between the two is blurry in places
<bslsk05>
pastebin.com: something called a udev rebuild and my current mere was clobberedthair be mons - Pastebin.com
freakazoid343 has joined #osdev
<geist>
oh no the mere was clobbered!
scoobydoo has quit [Ping timeout: 240 seconds]
scoobydoo has joined #osdev
ajoberstar has quit [Remote host closed the connection]
ajoberstar has joined #osdev
radens has quit [Quit: Connection closed for inactivity]
ajoberstar has quit [Client Quit]
ElectronApps has joined #osdev
sdfgsdfg has quit [Quit: ZzzZ]
Electron has joined #osdev
ElectronApps has quit [Ping timeout: 240 seconds]
mahmutov has quit [Ping timeout: 268 seconds]
CaCode has joined #osdev
xing_song has quit [Read error: Connection reset by peer]
xing_song has joined #osdev
<kazinsal>
geist: ever seen a motherboard chipset on a PCIe card before? trying to make heads and/or tails of a card that claims to be an intel C620 chipset on a stick.
<kazinsal>
unfortunately I don't actually *have* one to test things on
<geist>
on PCI yes
<geist>
that was a fairly common thing: get a PC on a card that you could stick in <some non PC thing>
<geist>
stick in your unix workstation so you could run DOS stuff, etc
<kazinsal>
that's what's interesting though, it doesn't claim to even have a CPU on it
<geist>
ah
<kazinsal>
just a chipset
<kazinsal>
it seems to be claiming to be useful as an IPsec/SSL coprocessor
<kazinsal>
so I'm wondering if you can just bang commands into a BAR to talk to it as if it's just any other PCIe device
<geist>
yah probably the idea is to just drive the chipset's crypto accelerator parts directly
xing_song has quit [Read error: Connection reset by peer]
xing_song has joined #osdev
skipwich has joined #osdev
dude12312414 has joined #osdev
<hbag>
its almost 6 am and im drunk and stoned, this sounds like a perfectly reasonable time to read the osdev wiki
<rustyy>
there is nothing wrong with sleep)) wiki can wait, it is not going anywhere
dude12312414 has quit [Remote host closed the connection]
dude12312414 has joined #osdev
<geist>
well... that's debatable
<kazinsal>
the wiki is not going anywhere, provided we've sacrificed the sufficient amount of goat's blood for the week and provided that chase is around to read his emails if we haven't
xing_song has quit [Read error: Connection reset by peer]
xing_song has joined #osdev
sdfgsdfg has joined #osdev
skipwich has quit [Quit: DISCONNECT]
<Belxjander>
Computational Limits ?
<Affliction>
kazinsal: I hear AMD's AM4 southbridges are "just another PCI device"
<kazinsal>
oh yeah, most chipsets that provide functionality like that are
<Affliction>
which kind of makes sense, since an AM4 CPU is apparently self-sufficient without one (the ABxxx boards)
<kazinsal>
I just found this one interesting as it's a chipset on a stick so you can add a 100 Gbit/s QuickAssist accelerator to an existing machine
<Belxjander>
hrmmm
<Belxjander>
so what "PC on a card" devices are on the market which can be slotted into a PCI backplane ?
<Belxjander>
not PCIe
<Belxjander>
might be kind of fun to try getting one of those working in my PPC machine
rustyy has quit [Quit: leaving]
<bradd>
Belxjander: I have an old sunpci-II which is pci. think an a6 cpu or somesuch
<Affliction>
hm, legacy PCI does have card/card communication, could provide interesting clustering possibilities?
<bradd>
(i.e. its old)
<Affliction>
but when I said 'PCI' above, I meant PCIe.
<bradd>
havent used it in years, but with solaris, you can run the card in a graphical window
gdd1 has quit [Ping timeout: 268 seconds]
rustyy has joined #osdev
dude12312414 has quit [Remote host closed the connection]
dude12312414 has joined #osdev
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
<Ameisen>
the avr-gcc backend has an annoying issue. They made r1 the 'zero register' - it's re-zeroed after any instruction that clobbers it, so that it always holds zero. However, I'm guessing that this is being done as a very last step in the backend, because it's overly aggressive.
<Ameisen>
You'll see a multiply, r1 is cleared, then another multiply, then another clear.
<zid>
sounds about right
<Ameisen>
I need to look into the gcc optimizer more; I cannot imagine that this cannot be propagated as a property of some sort instead, so that it's cleared after use only if it isn't going to be clobbered again - that is, cleared after the _last_ time it may have been written to, prior to being read.
<clever>
i would have assumed it gets cleared once in the crt0 before main(), and then never written to again
<clever>
and as long as your .S files co-operate, everything just works
<Ameisen>
crt0? main? in avr?
<zid>
It's not on the c level though clever
<clever>
but if its in the clobbered registers of the ABI....
<zid>
it's being used as a sort of assembly macro register like mips has
<Ameisen>
it's an avr-gcc specific-thing
<Ameisen>
they basically said "let's make r1 the zero register because it improves codegen overall"
<Ameisen>
but I think that they literally just jammed it in as part of the mul instruction and such
<Ameisen>
which clobbers r0
<Ameisen>
r0:41
<Ameisen>
damn typing. r0:r1
<Ameisen>
the naive solution would be, I guess, to scan through the generated assembly and try to remove extraneous things such as that, but that strikes me as suboptimal
<Ameisen>
Oh, they do the same thing in ISRs as well - r1 is always zeroed, _even if r1 isn't used in the interrupt_
<Ameisen>
clever - certain instructions, such as mul, put their result into the r0:r1 register pair. They opted for r1 to be the 'zero register'.
<clever>
that just sounds like a bad choice
<Ameisen>
I concur.
<Ameisen>
I would have gone with r3.
<Ameisen>
higher registers are generally allocated first, so they went with a very low register... but they went with one where the instruction _always_ uses it as an output.
<Ameisen>
but, even in that case, their implementation is very naive and just generates a lot of intermediate clears even when unnecessary.
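(to make the pattern concrete -- this is the shape of the problem being described, not actual output from any particular avr-gcc version: every hardware mul leaves its product in r1:r0, and the backend restores the zero register immediately after each one, even when the very next instruction clobbers it again:)

    #include <stdint.h>

    uint16_t dot2(uint8_t a, uint8_t b, uint8_t c, uint8_t d) {
        /* roughly the emitted shape (illustrative operands, not real output):
         *     mul  rA, rB        ; product in r1:r0, r1 no longer zero
         *     movw ...
         *     clr  r1            ; zero register restored -- but...
         *     mul  rC, rD        ; ...immediately clobbered again
         *     add  ... / adc ...
         *     clr  r1            ; only this final clear was actually needed
         */
        return (uint16_t)a * b + (uint16_t)c * d;
    }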
<clever>
what was that special term...., related to functional programming
<clever>
ah, ssa, single static assignment
<Ameisen>
well, I suspect that the hard-wired zero-register logic used for architectures like MIPS in GCC can be repurposed for that for AVR, the question is how do you introduce something into the IL that says 'by the way, this needs to be cleared after this instruction before use or before the function returns'
<clever>
this feels like a job for SSA, and removing some static register assignments
<Ameisen>
Probably. The main issue is that it's sort of an inversion of the optimizer's normal logic here
<Ameisen>
It usually doesn't have to reset a register after an instruction.
<Ameisen>
Though there are cases in other architectures where that is necessary, so it might make sense to look at how they're doing it.
<Ameisen>
clearing status flags or such
<Ameisen>
or, rather, marking r1 as 'dirty' or such.
<clever>
yeah
<Ameisen>
I have no idea what llvm-avr is doing in some cases, of course. It adds a clr r25 to every function.
<clever>
it feels like you should first reduce the code down to an SSA form, and then begin deciding on opcodes to complete each step
<Ameisen>
well, that's the job of the optimizer, generally.
<clever>
and for each opcode, have a set of restrictions on which registers can be used, and what registers they dirty
<Ameisen>
Ideally I wouldn't have to reimplement a ton of GCC's innards.
<Ameisen>
I suspect it can already do this
bradd has quit [Ping timeout: 250 seconds]
<clever>
the overall design of llvm does sound like an appealing one
<Ameisen>
It is; it's just that avr-gcc has had a LOT of work put into it by Atmel.
bradd has joined #osdev
<Ameisen>
it's basically not being worked on anymore, but llvm-avr has a lot of catching up to do
<bslsk05>
www.stephendiehl.com: Implementing a JIT Compiler with Haskell and LLVM ( Stephen Diehl )
<clever>
this is a series of blog posts on how to write your own compiler in haskell
<clever>
everything from parsing, generating an ast, and then turning it into llvm-ir, so you can compile it at runtime and execute it
<Ameisen>
oooooh, I think I know why llvm always does clr r25 in these tests.
<Ameisen>
It's a stupid reason.
<Ameisen>
They defined, in their ABI file, that the r24:r25 pair will be the return register pair
<clever>
ah
<Ameisen>
so even if your return value is 8-bit, it still has to clear r25
<clever>
so its the upper 8bits of the return value
<Ameisen>
yeah. Even if the return type shouldn't have 8 upper bits
<Ameisen>
GCC didn't define their return that way
<clever>
personally, i would ignore r25 if i'm not expecting a 16bit result, and i would also leave r25 as undefined
<Ameisen>
GCC actually has the return register be the same as the first argument's, so a function that just returns the value is just 'ret'
<Ameisen>
I don't think that llvm-avr's backend is that intelligent yet
<Ameisen>
part of the problem in both GCC and LLVM is that... neither compiler really understands the register pairs in any meaningful sense, so actually defining how they work is a lot of hacks
<clever>
i'm seeing the same kind of issue in trying to figure out how to properly support the vector core on the VPU
<clever>
at the simplest level, the vector register bank is just an uint8_t[64][64]
<clever>
when you refer to a vector of data, you supply the x and y coords, a direction (row or column) and a bit width
<Ameisen>
AVR register pairs aren't too different in concept from x86 Rl/Rh, except that they are basically reversed - you normally access them via the _sub_register, with the pair being for specific use-cases
<clever>
for example, a row of data at 0,0 refers to cells 0,0 thru 0,15 (always vectors of 16)
<clever>
a column of data at 0,0 instead refers to cells 0,0 thru 15,0
<clever>
following along so far?
<Ameisen>
sounds like GPU registers
<clever>
yeah
<clever>
where things get more complex, is with 16bit vectors
<Ameisen>
doesn't show the zero register stuff though
<clever>
a 16bit row at 0,0 will get the lower 8bits from 0,0 thru 0,15 but the upper 8bits from 0,16 thru 0,31
mahmutov has joined #osdev
<clever>
so its very much like AH + AL == AX on x86
<clever>
at the cost of halving your registers, you double the register size
<Ameisen>
x86 has the advantage that it has had a lot more work put into its backend on both compilers for some reason :D
<clever>
and a 32bit row at 0,0 will combine 4 vectors of 16, starting at columns 0, 16, 32, and 48
<clever>
so right there, you have 2 problems
<Ameisen>
I remember reading in an article from the '80s that x86 is just a fad and that m68k is the future, so I dunno what those compiler developers are doing.
<clever>
if you are working on a mix of 8bit and 16bit data, you need to keep track of which pairs of vectors overlap, and not do the wrong thing
<clever>
if you are working with columns and rows at the same time, then it winds up using the 0th element from 16 different vectors
<clever>
ignoring the column feature will massively harm the performance of certain algos
<Ameisen>
I _think_ you can actually define that behavior with LLVM's register definitions
<Ameisen>
though it would be annoying
<clever>
also, the repeat logic, makes things extra hard
<clever>
instead of saying that you have a row at 0,0
<clever>
you can say you have 32 rows, at 0++,0
<clever>
it will then operate on the entire block from 0,0 to 31,15
<clever>
so its basically a vector of 512 elements now
<clever>
due to that, you cant just blindly assign names to each coord, and treat them like normal registers
<clever>
because you can take any set of 1/2/4/8/16/32/64 consecutive vectors, and treat them as one bigger vector
<clever>
so now you have to write rules, saying that 1/2 can be paired, 3/4 can be paired, 5/6 can be paired, for every power of 2 from 2 to 64
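(a small C model of the addressing rules described above -- just a restatement of the description, not the real VPU encoding; the column behaviour for wide elements and repeats is a guess by symmetry, and indices are assumed in range:)

    #include <stdint.h>

    static uint8_t vrf[64][64];   /* the 64x64-byte vector register file */

    enum dir { ROW, COL };

    /* Byte `part` (0..width_bytes-1) of element `elem` (0..15) of repeat `rep`
     * of a vector operand at (y, x).  Elements step along x for rows and along
     * y for columns; each extra byte of a wide element comes from 16 positions
     * further over, which is why 16-bit vectors pair registers up; repeats
     * stack whole vectors on consecutive rows (or columns). */
    static uint8_t *vrf_elem(enum dir d, unsigned y, unsigned x,
                             unsigned rep, unsigned elem, unsigned part) {
        if (d == ROW)
            return &vrf[y + rep][x + elem + 16 * part];
        else
            return &vrf[y + elem + 16 * part][x + rep];
    }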
<clever>
Ameisen: i think the root problem here, is that this vector core is too good, from what ive seen in other code (like arm neon), there are relatively few vector registers, so its an extremely load/store heavy job
<clever>
Ameisen: but the VPU has so many registers, you can load the entire dataset at once, and then compute without any load/store operations
<Ameisen>
for me, the main issue is that AVR isn't exactly fast, so the few instructions that get added are surprisingly detrimental
<Ameisen>
on a positive note, a ruggeduino came in, so I can experiment without starting fires
<Ameisen>
sram module as well (adding 512 KiB, only accessible via banking)
<clever>
the VPU runs at 500mhz, and most opcodes take ~1-21 clock cycles, so its screaming along compared to an AVR
<Ameisen>
the atmega2560 has additional external addressing pins, and a dedicated banking register
<Ameisen>
AVR's standard clockrate is 16 MHz, though the atmegas can generally run at 20
<Ameisen>
though it's also 8-bit, so it's not only much slower, but takes way more instructions to do things
<clever>
yep, ive read the atmega128 datasheet cover to cover before, years ago
<Ameisen>
I'm slowly working on designing a 'proper' OS for AVR
<clever>
ive still got an atmega in the other room, handling ds18b20's and managing my furnace
<Ameisen>
though it's probably already going to be running programs in the 100s of KHz range
CaCode has quit [Ping timeout: 256 seconds]
<Ameisen>
somehow the atmega datasheet is more complex than MIPS documents in many cases
<Ameisen>
it's very strange.
<Ameisen>
probably because it's describing a specific implementation rather than a specification
<clever>
the arm docs can get well into 2000 pages long
<Ameisen>
for certain 'coprocessor' implementations over SPI.. because anything I plug in will be so vastly faster than AVR...
<Ameisen>
I'm pretty sure that I don't really need to do full SPI synchronization (like acknowledgement of data and such)
<Ameisen>
by the time AVR gets to writing the next byte, the other processor will have already received, processed, done stuff with, and waited an eternity.
<Ameisen>
so the AVR should be able to just pump data out as fast as it can.
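(on the classic AVRs that really is the whole story -- a sketch of a master-side transmit loop using the stock SPI registers; the only wait is for the shift register itself, there's no protocol-level ack. Assumes SPI was already configured as master via SPCR and /SS is handled by the caller:)

    #include <avr/io.h>
    #include <stdint.h>

    /* push a buffer out over hardware SPI as fast as the bus allows */
    static void spi_send(const uint8_t *buf, uint16_t len) {
        while (len--) {
            SPDR = *buf++;                    /* load the shift register */
            while (!(SPSR & (1 << SPIF)))     /* wait for the transfer to finish */
                ;
        }
    }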
<Ameisen>
I was just loosely considering how using something like an ARM Cortex M4 or 7 would work as a floating-point or advanced integer math coprocessor.
<clever>
heard of the rp2040 MCU?
mctpyt has quit [Ping timeout: 256 seconds]
<clever>
125mhz dual cortex-m0+, ~256kb of ram
<clever>
up to 16mb of flash
<Ameisen>
not quite as 'fun' as implementing an OS onto AVR, though.
<Ameisen>
m0 still isn't super powerful, but waaaaay more capable than avr8
<Ameisen>
and dual proc... heh
<j`ey>
I mean, it's not meant to be super powerful :p
<Ameisen>
it's super powerful relative to the other thing.
<Ameisen>
:D
<Ameisen>
"Here is a full multitasking operating system with memory protection running on an abacus"
<Ameisen>
I more want the IO functionalities of a chip like that.
the_lanetly_052 has joined #osdev
<Ameisen>
I've had my own avr-gcc fork for a long time, but getting these things working properly is rather important in this case
<j`ey>
why not upstream stuff?
<Ameisen>
there's already going to be inline assembly everywhere, because I was completely unable to get GCC to output optimal code for basic functions representing basic operations.
<Ameisen>
because there are no active AVR maintainers on GCC
<Ameisen>
they all left because the GCC maintainers were actively hostile towards them
<j`ey>
oh
<Ameisen>
That's apparently why __flash and other ISO/IEC TR 18037 things aren't supported in g++
<Ameisen>
because the extension was defined for C and not C++, the g++ maintainers rejected any work on it
<Ameisen>
because, you know, gcc and g++ have _no_ extensions of their own
ThinkT510 has quit [Quit: WeeChat 3.3]
bradd has quit [Ping timeout: 240 seconds]
sdfgsdfg has quit [Quit: ZzzZ]
bradd has joined #osdev
ThinkT510 has joined #osdev
Affliction has quit [Quit: Read error: Connection reset by beer]
xing_song has quit [Read error: Connection reset by peer]
xing_song has joined #osdev
Affliction has joined #osdev
lg has quit [Ping timeout: 260 seconds]
GeDaMo has joined #osdev
xing_song1 has joined #osdev
xing_song has quit [Ping timeout: 240 seconds]
xing_song1 is now known as xing_song
CaCode has joined #osdev
lg has joined #osdev
sdfgsdfg has joined #osdev
mahmutov has quit [Ping timeout: 256 seconds]
Burgundy has joined #osdev
xing_song has quit [Read error: Connection reset by peer]
xing_song has joined #osdev
sdfgsdfg has quit [Quit: ZzzZ]
gog has joined #osdev
gog` has quit [Ping timeout: 252 seconds]
bauen1 has joined #osdev
amazigh has quit [Quit: WeeChat 2.8]
bauen1 has quit [Read error: Connection reset by peer]
xing_song has quit [Read error: Connection reset by peer]
xing_song has joined #osdev
dutch has quit [Quit: WeeChat 3.3]
amazigh has joined #osdev
mahmutov has joined #osdev
freakazoid343 has quit [Ping timeout: 260 seconds]
bauen1 has joined #osdev
dennis95 has joined #osdev
bauen1 has quit [Read error: Connection reset by peer]
CaCode has quit [Quit: Leaving]
pretty_dumm_guy has joined #osdev
xing_song has quit [Read error: Connection reset by peer]
xing_song1 has joined #osdev
xing_song1 is now known as xing_song
Electron has quit [Remote host closed the connection]
the_lanetly_052 has quit [Ping timeout: 245 seconds]
xing_song has quit [Remote host closed the connection]
mahmutov has quit [Ping timeout: 240 seconds]
xing_song has joined #osdev
mahmutov has joined #osdev
nj0rd has quit [Quit: WeeChat 3.3]
nj0rd has joined #osdev
dennis95 has quit [Quit: Leaving]
amazigh has quit [Quit: WeeChat 2.8]
bauen1 has joined #osdev
joomla5 has joined #osdev
<joomla5>
Why are functions like malloc and free even needed? If each process has access to the entire virtual address space then why can't an OS just let a process use all the addresses without allocating them first? I guess that way paging will take a huge amount of space on disk and context switching becomes a nightmare?
<gog>
that and the program has to be picking blocks of memory that aren't in use by some other part of the program so there needs to be a way to track that and it makes more sense for it to be in a library than every program doing its own heap
amazigh has joined #osdev
<gog>
and malloc() and free() in *nix-like systems are library functions rather than system calls, whereas brk() and sbrk() are system calls that adjust the top of the heap
<gog>
or the top of the data segment rather
<kingoffrance>
yeah, its really just to c89 or not to c89 IMO
<kingoffrance>
you can do anything, and provide an interface to make other people happy...or not
<gog>
ok the man page says that sbrk() is a library function that invokes brk() as a system call
<gog>
for Linux using glibc
<joomla5>
makes sense
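(a minimal sketch of that split, assuming a Unix-ish sbrk(): the allocator itself is plain library bookkeeping and only crosses into the kernel when it runs out of heap -- the crudest possible bump allocator, no real free():)

    #include <stddef.h>
    #include <unistd.h>

    /* grow the heap via the kernel only when the current chunk runs out;
     * everything else is pure userspace bookkeeping */
    void *bump_malloc(size_t n) {
        static char *cur, *end;
        n = (n + 15) & ~(size_t)15;            /* keep returns 16-byte aligned */
        if ((size_t)(end - cur) < n) {
            size_t grow = n > 65536 ? n : 65536;
            char *p = sbrk(grow);              /* the actual system call boundary */
            if (p == (char *)-1)
                return NULL;
            cur = p;
            end = p + grow;
        }
        void *r = cur;
        cur += n;
        return r;
    }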
<kingoffrance>
yeah netbsd wanrs mixing with malloc etc. may be "non-portable" i think that is a fancy way of saying "all bets are off"
<kingoffrance>
*warns
bauen1 has quit [Read error: Connection reset by peer]
<zid>
How do you free, if you demand fault
<zid>
'just call free' defeats your entire argument :p
<gog>
munmap()
* gog
mmap()s zid to her address space
<zid>
dirty bugger as usual
<j`ey>
youre not her type zid :P
<zid>
Are you saying my willy is too big? :(
<j`ey>
im saying it exists at all
<gog>
nah i'm good with that
<zid>
I wanna make a joke but idk how it'll be taken so I can't, rip
<gog>
the key is gender, not parts :p
<zid>
I kinda like the parts ngl
<gog>
i'm gonna go cook dinner before i get myself and zid banned
<gog>
:p
<zid>
I didn't know it was possible to get banned
<j`ey>
.. mart?
<zid>
so whatever you were about to say would have been AMAZING
<immibis>
joomla5: actually the OS only cares about mmap, you do have to tell the OS you want to use pages before you use them. Of course someone could design an OS that automatically maps pages on first use
<zid>
except you still need to free them
<GeDaMo>
You could unmap on lack of demand :P
<zid>
That's actually a real thing my friend did to fix an OOM issue with the software stack their product was running
<zid>
It was running some leaky java app so he just scanned the heap every now and then and free'd anything that hadn't re-set its dirty bit in a while
ajoberstar has joined #osdev
wootehfoot has joined #osdev
<gog>
not so much amazing as too far past PG-13 :p
<zid>
That's what amazing means
FishByte has joined #osdev
<gog>
oh i see
libc has joined #osdev
<libc>
hi. what would give me better performance in software raid 0 configuration
<libc>
more cpu's or less cpu's with more speed ?
<zid>
measure it
<immibis>
are you asking a linux question?
<libc>
it's not much of a linux question although the configuration will be used in linux os yes
<zid>
Did you measure it yet?
<libc>
zid: no i don't have two disks in my laptop
<libc>
i want to make a better buy at Hetzner
<zid>
two disks? You could do that with literally anything
<zid>
a pentium 3
<libc>
never say i couldn't
<immibis>
how much are you going to be using your hard drives that it will actually matter?
<immibis>
i would generally estimate that more CPUs with less speed are good for server applications (many concurrent requests) and less CPUs with more speed are good for other purposes (including servers that don't do many concurrent requests e.g. minecraft)
<zid>
Less with more speed are always better, unless you run out of total cpu time available
<zid>
because latency
<gog>
for two disks in software raid 0 your performance bottleneck isn't going to be the CPU
<zid>
unless you have less than a pentium 3, as mentioned
<libc>
im reading that software raid doesn't actually provide faster access is it true ?
<libc>
and it makes sense
FishByte has left #osdev [Leaving]
libc has left #osdev [WeeChat 3.2]
<gog>
for two disks in a striped set possibly not
<gog>
oh
<gog>
they're gone
<zid>
it's always the webchat
<GeDaMo>
They're still in ##programming
libc has joined #osdev
<gog>
oh hi again
<libc>
ok i think it is faster
<zid>
Thank god
<libc>
i was wrong
<libc>
gog: miao !
<gog>
:o
<libc>
wow been a long time
<libc>
ROFL
<gog>
do i know you
<libc>
yes
<libc>
you just don't know it et
<libc>
yet
<zid>
gog: You have weird friends
<geist>
what was faster?
<libc>
geist: oh hi
<geist>
hoy
<libc>
the expert is here at last
* geist
looks around, "where?"
<libc>
geist: i was trying to determine whether software raid actually increase performance over no raid single drive
<gog>
zid: most of my friends are queer and neurodivergent folk so yeah i can see that
<geist>
libc: sure
<zid>
gog: Do I count as neurodivergent, I don't like going outside and I know how computers work
<geist>
but depends on what you're measuring, but usually it's same or better for most metrics
joomla5 has quit []
<gog>
idk i'm not a doctor
<geist>
and of course depends on what kinda raid you're doing
<libc>
geist: i think it's better because I/O request is done without waiting for only one drive to come up with the info
<libc>
and since the load is now on the CPU it is faster
<libc>
hows my theory ?
<libc>
raid 0
<geist>
possibly. but depends on precisely what kind of raid you're doing
<geist>
we should st... okay
<zid>
speaking of raid, my favourite setup is to raid 1 a slow disk and a fast disk together, then use --write-mostly onto the fast disk
<zid>
it'll just slowly sync the changes to the slow disk, and you get full write speed
<geist>
raid 0 is indeed usually a bit quicker on most metrics yes
<zid>
raid0 is infact, n times as quick, assuming you didn't bottleneck somewhere
<libc>
even in software raid right ?
<geist>
absolutely
<zid>
software raid adds 0 overhead
<zid>
to raid 0
<geist>
software raid really doesn't mean much anymore, IMO. hardware raid nowadays usually just means some other cpu is doing it for you, but you get things like battery backup, etc
<zid>
hw raid only really matters for parity
<geist>
yah and even that a modern machine barely breaks a sweat
<zid>
which raid 0 doesn't have
<libc>
i want to check my theory..
<geist>
but. yeah raid 0 should be faster, but dont expect an amazing boost in performance
<libc>
the software raid 0 is faster because we don't need to wait for a single disk I/O for all the operations right ?
<zid>
raid 5 in software *can* be an issue if you're using super weak cpus with a *lot* of throughput, but you'd struggle to attach 128 SAS drives to a pentium 3
<geist>
it's possible random seeks may be a little slower since now you have to wait for sometimes the worst case of both disks to seek to the spot
<geist>
libc: not entirely sure what situation you're trying to describe there
<zid>
raid 0 is faster because your PC is a fuck load faster than a hard drive, so using two drives makes it save your file twice as fast, by saving each half to each drive
<zid>
that's it.
<libc>
im trying to make sense of how raid 0 is faster than no raid at all ( single drive )
<geist>
exactly, raid0 definitely can have faster throughput since you can now saturate two drives
<zid>
I have a 1GB file, I have a HDD with a write speed of 100MB/s. It takes me 10 seconds to save the file.
<geist>
okay, so the obvious one is that. you now can shove, say 100MB/sec to two drives in parallel
<geist>
zids got it
<libc>
not exactly parallel
<zid>
I have a 1GB file, I have two hard drives with a write speed of 100MB/s each. It takes me 5 seconds to save 500MB to both drives at the same time.
<zid>
two drives = twice as fast
<libc>
can 2 cpu's do the writes ?
<libc>
the I/O
<libc>
?
<geist>
no. writes to drives are asynchronous
<gog>
oh
<zid>
you're on completely the wrong plane of performance
<zid>
100MB/s is *nothing*
<geist>
cpu queues a write, drive tells you when its done
<zid>
my cpu can do 60GB/s of writes.
<zid>
from 2010
<geist>
so with something like raid 0 the cpu can easily queue two simultaneous transactions as long as you're writing more than one stripe at a time
<geist>
and then wait for them to complete
<libc>
what i mean is this :
<zid>
It's less than a percent of what my cpu can do
<geist>
same with reading, you can queue two reads and wait for both disks to provide it
<libc>
you can't use two cpu's to increase I/O performance on raid since it's like encryption ... the data on one cpu is dependent on the other
<libc>
or this is wrong ?
<geist>
but all of these speed things assume you're transferring more than one stripe of data. if you're reading a single sector off your stripe, you get no benefit at all
<zid>
libc: I could encrypt it, decrypt it, make it dance the fandango, the problem is /hard drives are slow/
<immibis>
libc: CPU speed is practically irrelevant. the drive is the bottleneck, not the CPU... unless you are running some really fast SSDs at really fast speeds
<geist>
nope. multiple cpus has absolutely nothing to do with it
<immibis>
although we do have really fast SSDs at really fast speeds in consumer products now......
<zid>
My cpu is a *thousand* times as fast as my hard drive.
<geist>
or more specifically, multiple cpus has nothing to do with raid per se. keeping cpus in sync is a problem with multiple cpus in the first place, but raid doesn't change the picture
<immibis>
zid: what type of drive?
<zid>
western digital black
<zid>
hard drive, not ssd
<immibis>
coincidentally the same brand of drive I am using right now, and it maxes out at around 150 MB/s. Now, were you aware NVMe SSDs can achieve more like 3500 MB/s?
<geist>
libc: keep in mind that modern disk interfaces dont really use the cpu to move the data. cpus just direct the interface to start a transfer, and the interface uses direct DMA to move data around
<geist>
so it's really all about how many transactions you can queue per second, across all of the disks in the system
<zid>
immibis: Great, still 4% :P
<libc>
geist: hmm, ok
<immibis>
what geist said is correct. The CPU isn't sitting there babysitting the drive
rustyy has quit [Quit: leaving]
<immibis>
that used to be how it worked, like, decades ago
<geist>
a spinny hard disk can usually do a few thousand transactions per second, best case (1ms seek time literally gets you about 1000 possible seeks/sec)
<geist>
a modern SSD using something like nvme can do 100k or so, so it starts to get interesting, because seek times are zero
<libc>
so it just gives an instruction of what data to read/write ?
<geist>
correct
<geist>
it's the OSes job to try to more efficiently queue up larger transactions and take advantage of the parallelism of things
<zid>
You might be generating the data to write though
<geist>
so if you read a block out of a file, the OS may go ahead and queue up the next 1MB of the file because it thinks you might read it
<geist>
vs reading/writing data exactly as the application requests it
<geist>
thats very slow
<immibis>
on an NVMe SSD it might be reasonable to not prefetch
<immibis>
although i am not sure what the latency is like, just the bandwidth
<geist>
exactly
<immibis>
it's probably only a small factor slower than reading from RAM. crazy stuff
<geist>
that's the OSes job, to know the relative performance of the devices and adjust accordingly
<zid>
Trying to think of a silly example of 'need a bigger cpu to raid two hdds together'. Arguing whether you should build 20 or 40 newspaper factories to send your message attached to 2 pigeons instead of 1 pigeon.
<immibis>
anyone who follows Linus Tech Tips may note they did make a big deal about CPU speed in their file server... but they have like 50+ drives and 100Gbps speeds
<libc>
i have another question
<zid>
yea my cpu will quite happily deal with 128+ hard drives per core. or.. like 20 ssds
<immibis>
zid: also filesystem and network overhead. the CPU probably has to copy the data at least once to get it into a network packet
<immibis>
unless everything in the stack is very smart
<zid>
nod, that's why I don't like modern intels
<zid>
too many cores I can't use for anything useful, and even less memory bw than I have
<immibis>
network devices can certainly do scatter-gather I/O and large packets, but it relies on the software being smart enough
<libc>
assuming i have a file that is spread in raid0 between two disks
<libc>
how does the system know that it has the other part of the file on the other disk, and how does it know where to start looking ( and even whether it is actually the correct data )
<libc>
i mean all these searches must take time
<libc>
sry curiosity got to me today
<zid>
the raid software knows
<zid>
because you've set the disks up 'in raid'
<zid>
That's what it means to have done that
<immibis>
it's pretty much the same way the system knows where the file is on one disk, except on two disks instead
<zid>
you've told the raid software to make a raid array
<zid>
so ofc it knows
scoobydoo has quit [Read error: Connection timed out]
<immibis>
actually raid creates a "virtual disk" and then the filesystem (which keeps track of where the files are) doesn't even know that it's working on 2 disks instead of just 1
<libc>
so there must be an additional space overhead to query where the other data is right ?
<libc>
some database if you will
<immibis>
the filesystem does that
<zid>
It's almost certainly just a divide by 2
<zid>
because your file is now half the size and takes up half as many blocks on each disk
<libc>
immibis: hmm, ok thanks
<immibis>
and yes, the filesystem uses space overhead
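(that 'divide by 2' in general form -- a sketch of the logical-to-physical mapping a raid 0 layer does, striping fixed-size chunks round-robin across the members; the chunk size and names are illustrative:)

    #include <stdint.h>

    #define CHUNK_BLOCKS 128   /* blocks per stripe chunk, e.g. 64 KiB of 512-byte blocks */
    #define NDISKS       2

    struct raid0_target { unsigned disk; uint64_t block; };

    /* map a block address on the virtual raid0 device to (member disk, block
     * on that disk); with 2 disks this is exactly the "divide by 2", just done
     * a chunk at a time */
    static struct raid0_target raid0_map(uint64_t vblock) {
        uint64_t chunk  = vblock / CHUNK_BLOCKS;   /* which chunk of the virtual device */
        uint64_t offset = vblock % CHUNK_BLOCKS;   /* offset inside that chunk */
        struct raid0_target t = {
            .disk  = (unsigned)(chunk % NDISKS),              /* round-robin across members */
            .block = (chunk / NDISKS) * CHUNK_BLOCKS + offset,
        };
        return t;
    }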
scoobydoo has joined #osdev
GeDaMo has quit [Remote host closed the connection]
Oli has joined #osdev
<libc>
thanks all of you
<libc>
given your experience, where do you recommend a beginner start reading about the interplay between the os and computer hardware ?
<zid>
I don't
<libc>
lol
<zid>
I recommend learning to write software, it will come naturally, then you will be able to *use* that information
<zid>
otherwise it's just factoids you don't really understand
<libc>
smart advice
Oli has quit [Quit: leaving]
<libc>
[[R
mahmutov has quit [Ping timeout: 256 seconds]
wand has quit [Remote host closed the connection]
wand has joined #osdev
mahmutov has joined #osdev
libc is now known as [[[R]]]]
[[[R]]]] is now known as Sauvinn
LostFrog has joined #osdev
PapaFrog has quit [Read error: Connection reset by peer]
Sauvinn is now known as libc
libc has quit [Quit: WeeChat 3.2]
sdfgsdfg has joined #osdev
bauen1 has joined #osdev
bauen1 has quit [Read error: Connection reset by peer]
nyah has joined #osdev
pretty_dumm_guy has quit [Quit: WeeChat 3.4]
mahmutov has quit [Ping timeout: 268 seconds]
bauen1 has joined #osdev
bauen1 has quit [Read error: Connection reset by peer]
vdamewood has joined #osdev
<Bitweasil>
libc, Tanenbaum's books are a good start...
biblio has joined #osdev
bauen1 has joined #osdev
<Belxjander>
Ameisen: the 68K CPU series had D0-7, A0-7 and FP0-7 registers per core... A=Address D=Data and FP=Float... where you could move data freely between A & D registers and memory... with the FP registers being explicitly floating point
<Belxjander>
dammit...
<Belxjander>
old conversation again
<blockhead>
back in the day, my amiga had a 68040. good times. tried to learn assembler but never got the hang og it. spoiled by z80 on a CP/M machine: simpler. :o
<blockhead>
s/og/of/
rorx has quit [Ping timeout: 268 seconds]
ajoberstar has quit [Quit: ERC (IRC client for Emacs 27.1)]
biblio_ has joined #osdev
biblio_ has quit [Remote host closed the connection]
biblio has quit [Read error: Connection reset by peer]
<moon-child>
Bitweasil: they left
bauen1 has quit [Ping timeout: 256 seconds]
zaquest has quit [Remote host closed the connection]