gildasio has quit [Remote host closed the connection]
gildasio has joined #osdev
Vercas has quit [Ping timeout: 240 seconds]
sonny has joined #osdev
X-Scale has joined #osdev
sonny has left #osdev [#osdev]
gog has quit [Ping timeout: 276 seconds]
zaquest has quit [Remote host closed the connection]
zaquest has joined #osdev
gamozo has joined #osdev
* gamozo
waves
<gamozo>
What's everyone working on?
<klange>
work
<Mutabah>
It is currently Work [TM] O'Clock.
<Mutabah>
But in non-work times, getting back into USB
<gamozo>
That sounds fun! What parts of USB are you lookin to do?
<gamozo>
I've never done NVMe before and I think that's on my mind
<Mutabah>
Polishing MSC and HID support, might see about adding a UHCI driver too (currently only have OHCI)
<Mutabah>
really should also try E/X
<Mutabah>
(I.e. USB2/USB3)
smeso has quit [Quit: smeso]
<gamozo>
I actually haven't done much USB beyond... looking through wireshark logs. Have the newer interfaces gotten nicer/better to use with software?
<Mutabah>
iirc, yes.
<Mutabah>
Although, the newer standards and backwards compat bring their own complexity
<Mutabah>
and offloading adds complexity
<gamozo>
Yeah, that's pretty fair. It's honestly pretty impressive how deep USB's backwards compat is at this point aha
<gamozo>
I still have ancient USB 1.1 devices I'll just plug in and... use
<Mutabah>
Most keyboards/mice are 1.1
<Mutabah>
no need for anything faster
<gamozo>
Huh, I wonder how much a USB 2.0 controller is compared to a USB 1.1 in terms of like, silicon/asic cost/complexity
<Mutabah>
well, it has a substantially faster clock
smeso has joined #osdev
<Mutabah>
probably not an issue nowadays, but if you have a pre-existing device controller, why redesign it
<gamozo>
Yeah, makes sense. I've always been really curious about the logistics of low-end chips
<gamozo>
I always love when I see a new like, 4000s series logic chip show up on the market. Always kinda neat to see why someone starts up fab of such a new basic chip.
heat_ has joined #osdev
heat has quit [Read error: Connection reset by peer]
_whitelogger has joined #osdev
sikkiladho has joined #osdev
heat_ has quit [Remote host closed the connection]
<sikkiladho>
can we cast a function pointer to void without getting the warning?
heat has joined #osdev
<Mutabah>
Depends on the compiler, and depends on the warning
<kingoffrance>
im assuming they meant void *, not invoking the function and (void)-ing the return value
xenos1984 has quit [Read error: Connection reset by peer]
<kingoffrance>
"Due to the problem noted here, a future version may either add a new function to return function pointers, or the current interface may be deprecated in favor of two new functions: one that returns data pointers and the other that returns function pointers."
<kingoffrance>
did they ever do that? lol
<kingoffrance>
manpages (bsd and linux) say came from sunos
<kingoffrance>
so "posix" perhaps just followed sun
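A common way to sidestep that warning (the same representation problem the dlsym note above is about) is to memcpy between the two pointer kinds instead of casting. This is only a sketch with hypothetical names, relying on the POSIX guarantee that object and function pointers share a size; ISO C/C++ itself makes no such promise:

```cpp
#include <cstring>

// Hypothetical example function to take the address of.
static int add_one(int x) { return x + 1; }

using fn_t = int (*)(int);

int call_via_void_pointer(int x) {
    fn_t fn = add_one;
    void *obj = nullptr;
    // POSIX platforms guarantee the sizes match; check it anyway.
    static_assert(sizeof obj == sizeof fn, "pointer sizes must match");
    std::memcpy(&obj, &fn, sizeof fn);    // function pointer -> void *
    fn_t back = nullptr;
    std::memcpy(&back, &obj, sizeof obj); // ... and back again
    return back(x);
}
```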
sonny has quit [Remote host closed the connection]
sonny has joined #osdev
srjek has quit [Ping timeout: 240 seconds]
sonny has left #osdev [#osdev]
heat has quit [Read error: Connection reset by peer]
heat has joined #osdev
heat has quit [Remote host closed the connection]
heat has joined #osdev
<kazinsal>
ha. I'm reading the source for the FreeBSD BPF JIT and it's even simpler than I thought it would be
<kazinsal>
it's just a two-pass compiler written with a bunch of macros that emit machine code
<moon-child>
tcc: 'you guys are getting more than one pass??'
<klange>
passes are for weenies, emit instructions directly as you parse
myon98 has quit [Quit: Bouncer maintainance...]
<moon-child>
yep that's basically what tcc does
<moon-child>
does have a separate pass for tokenising
<kazinsal>
yeah the only reason freebsd does two passes is because BPF's instructions are fixed length so converting jump offsets to work with emitted x86 code needs a second pass to fix up the offsets
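The two-pass fixup kazinsal describes can be sketched with a toy IR (names and layout are hypothetical, not FreeBSD's actual code): pass 1 lays out the variable-length output and records each instruction's offset, pass 2 resolves jump displacements once every offset is known.

```cpp
#include <cstddef>
#include <vector>

// Toy fixed-length IR: `target` is an IR index for jump instructions,
// `emitted_size` is how many output bytes the instruction translates to.
struct Insn { bool is_jump; int target; int emitted_size; };

std::vector<int> resolve_jumps(const std::vector<Insn> &ir) {
    // Pass 1: compute the output offset of every IR instruction.
    std::vector<int> off(ir.size() + 1, 0);
    for (std::size_t i = 0; i < ir.size(); i++)
        off[i + 1] = off[i] + ir[i].emitted_size;

    // Pass 2: with all offsets known, compute byte displacements,
    // relative to the instruction following the jump (x86 style).
    std::vector<int> disp;
    for (std::size_t i = 0; i < ir.size(); i++)
        if (ir[i].is_jump)
            disp.push_back(off[ir[i].target] - off[i + 1]);
    return disp;
}
```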
<heat>
is it BPF or eBPF?
<kazinsal>
BPF, eBPF is a linux thing
<heat>
wouldn't be surprised to learn freebsd does it too :)
<heat>
I think linux does extensive verification to make sure it's not getting pwned by a bpf/ebpf program
<moon-child>
wiki sez windows has ebpf too
<kazinsal>
yeah, that's one of the downsides to how thoroughly extended eBPF is
* kingoffrance
.oO( TIL 0x8900 outputting 'S' 'h' 'u' 't' 'd' 'o' 'w' 'n' will shut down bochs? )
<kazinsal>
and yeah, there is a user-mode implementation of eBPF for Windows
<kazinsal>
think it has a kernel-mode driver component as well
<kazinsal>
a few years ago at BSDCan someone did a talk/paper on eBPF in FreeBSD but I don't think there's been much progress since then out of lack of interest
<heat>
surprising
<heat>
like half of linux networking is just eBPF strapped to the kernel :P
<kazinsal>
kind of makes me wonder how much of IOS-XE is implemented in hacky eBPF
<kazinsal>
since it's basically Cisco IOS as a series of daemons on Linux
<kazinsal>
but it's also got integrated docker and kvm-on-IOS support and stuff
<kazinsal>
so there's probably a good bit of eBPF hooking involved
<heat>
bpf + AF_PACKET?
heat has quit [Ping timeout: 252 seconds]
Likorn has joined #osdev
Jari-- has joined #osdev
nyah has joined #osdev
<geist>
oh hey so anyone here like PCI?
<geist>
like anyone here think they understand it?
<geist>
found a fun thing at work with a Dell laptop. first time i've ever seen it (on x86)
<kazinsal>
as in the bus or as in compliance standards? :P
<geist>
as in think you've seen it all but are surprised
<kazinsal>
lay it on me
<geist>
seen a laptop with a second pci *segment*
<geist>
ie, 0000:00.0 + as you expect
<geist>
and then a *second* segment 1000:00.0
<kazinsal>
oh whoa
<geist>
(actually in this case it's curiously 1000:e1.0
<geist>
)
<geist>
nothing funny like thunderbolt (which would actually make sense)
<geist>
just another tiger lake root port + a single nvme device
<kazinsal>
well now that makes config space a bit more annoying
<geist>
right?
<geist>
it's valid, i just honestly didn't think intel hardware had it in it
<kazinsal>
thank god for having a hojillion bytes of virtual address space
<geist>
i think all the segmentation stuff is actually described in ACPI, etc
<geist>
i didn't check but presumably it's a separate ECAM, though why it doesn't start over at bus 0 i dunno
<kazinsal>
yeah, you'd need to allocate another 256 megs of ECAM space
<kazinsal>
since I think you only get one segment group per ECAM
<geist>
i think so too
<kazinsal>
admittedly I haven't implemented ECAM
<geist>
possible they do something cheesy like say the ECAMs are on top of each other, so that they can get away with a single ECAM
Jari-- has quit [Read error: No route to host]
<geist>
and thus it's really a separate pci root port that somehow is considered another segment
<kazinsal>
yeah, I guess if your BDF numbers can fit in there nicely
<kazinsal>
but dang
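The ECAM indexing being discussed can be sketched like this (the function name and base value are hypothetical): each segment group gets its own flat window, 4 KiB of config space per function, which is where the 256 MiB per segment comes from, and a second segment is just a second base address (reported per segment in the ACPI MCFG table).

```cpp
#include <cstdint>

// ECAM address of a config register: base + bus/device/function index,
// 4 KiB per function. 256 buses * 32 devices * 8 functions * 4 KiB = 256 MiB.
constexpr std::uint64_t ecam_addr(std::uint64_t seg_base, unsigned bus,
                                  unsigned dev, unsigned func, unsigned reg) {
    return seg_base
         + (static_cast<std::uint64_t>(bus)  << 20)  // 256 buses
         + (static_cast<std::uint64_t>(dev)  << 15)  // 32 devices
         + (static_cast<std::uint64_t>(func) << 12)  // 8 functions
         + reg;                                      // 4 KiB of registers
}
```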
<geist>
i dont personally have access to the laptop (Dell Latitude 5420)
<kazinsal>
that's neat
<geist>
but it's just bog standard Tiger Lake
<kazinsal>
...hold on, let me check what my work laptop is
<geist>
so had no idea you could configure it that way
<kazinsal>
ah, 5320
Jari-- has joined #osdev
<kazinsal>
still kinda curious though. one moment
<geist>
needless to say it messes up fuchsia's pci implementation, hence why was looking at it
<kazinsal>
argh, why did my laptop reboot
<kazinsal>
I may or may not have not saved things. damn sysadmin pushing stuff down over intune
<Jari-->
I reboot on aptitude upgrade
<geist>
i usually do only if some libs are floating around. i find em with 'sudo lsof | grep DEL'
<Jari-->
I should design my 32-bit custom Fat File System to work on 64-bit fat table.
<Jari-->
That shouldn't be hard to rewrite.
<geist>
that starts to get pretty excessive though right? since to really need it you'd need more than 4bil entries, which is by definition already 4GB of disk space
<geist>
well, actually 16GB
<kazinsal>
grr, can't see PCIe segment group ID from wmic
<kazinsal>
but I do also have an interesting jump from bus 0 to bus 113
<geist>
i created a 2TB FAT32 test image and by then it's already got like a 256MB FAT i believe, pretty slow to scan
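The arithmetic behind those numbers can be sketched (function name is illustrative): the FAT holds one entry per cluster, so its size is cluster count times entry width. A 2TB volume with 32 KiB clusters and 4-byte FAT32 entries gives exactly the 256 MiB table mentioned above; a hypothetical 8-byte "FAT64" entry doubles it.

```cpp
#include <cstdint>

// Size in bytes of a FAT for a given volume: one entry per cluster.
constexpr std::uint64_t fat_bytes(std::uint64_t disk_bytes,
                                  std::uint64_t cluster_bytes,
                                  std::uint64_t entry_bytes) {
    return disk_bytes / cluster_bytes * entry_bytes;
}
```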
<Jari-->
geist: if I want to use modern disk drives
<geist>
kazinsal: yah that's the other thing this dell does because of thunderbolt
<kazinsal>
113:0:0 is something realtek, hrm
<geist>
all on segment [0000].... bus 0, bridge to bus 1-31, second bridge to bus 32-71 or something
<geist>
basically the two bridges are thunderbolt controllers so they reserve like 30 something busses up front
<kazinsal>
expresscard reader? I didn't know this thing had one of those
<geist>
and then there's a device at like bus 71
<kazinsal>
wait, no, it doesn't have an expresscard port. but it has an expresscard reader chip. nice market segmentation dell
<geist>
actually that's the same thing kazinsal. hex 71 is 113 decimal
<geist>
so it's likely this machine is set up the same way. does it have nvme? if so what bus number is it on?
<kazinsal>
I think it does, one sec
<geist>
that's whats on the separate segment on this one
<kazinsal>
NVMe is saying it's on 3:0:0
<geist>
ah
<kazinsal>
Windows is reporting it as "NVMe BC711 NVMe SK hynix 256GB"
<geist>
yah pretty standard
<kazinsal>
let me see if I can actually run a proper lspci type util on this thing without the corporate antivirus ratting on me
<geist>
BUSTED
<kazinsal>
the windows lspci port isn't showing me the segment ID unfortunately
<kazinsal>
but some interesting stuff here
<geist>
yeah?
<kazinsal>
a bunch of low level bridge information for that expresscard bridge
<geist>
possible the segmentation numbering is linux side, but i dont see how they can interpret it any other way
<kazinsal>
pretty much every device on this thing has a subsystem vendor of Dell (obviously) and the same subsystem device ID of 0A1F
<kazinsal>
interestingly this lspci port doesn't seem to understand 64-bit BARs well so kind of SOL on the info of those
<kazinsal>
I suspect Windows remaps them >4G
<geist>
yeah
<kazinsal>
also interesting is that there's an Intel Corporation Device A0EF at 00:14.2 that claims to be of the class "RAM memory"
<kazinsal>
"Tiger Lake-LP Shared SRAM"
<geist>
oh that's fun!
<geist>
how big is the bar for that?
<kazinsal>
doesn't say, there's two 64-bit non-prefetchable BARs that this port of pciutils doesn't grok
<geist>
was going to say look at the bridge, but since it's on bus 0 it is special
<kazinsal>
found a newer binary, let me see if this one will tell me
<kazinsal>
okay, this thing's saying there's no BARs. let's ask windows directly instead
<kazinsal>
CPU in question is an i5-1145G7 btw
<geist>
i think that's the same as the 5420
<kazinsal>
alright so it looks like the device may be unconfigured in Windows, which is saying it's a "PCI standard RAM controller".
<kazinsal>
I don't know what the standard for that class is but there's no driver loaded for it so... who knows.
scoobydoo_ has quit [Read error: Connection reset by peer]
scoobydoob has joined #osdev
sikkiladho has quit [Quit: Connection closed for inactivity]
<kazinsal>
looks like each root port has 544 MiB of address space assigned to it
<kazinsal>
contiguously
<kazinsal>
ah, plus a bit extra underneath the root complex's address space
<kazinsal>
which is oddly also where the RAID "chip" and the card reader live. oh no, this topology just got a lot more complex
<geist>
interesting yeah
<geist>
i also remember seeing on this one something funny like a raid 'nvme' device on bus 00
<geist>
like it's a faked out raid thing that the cpu implements
<kazinsal>
interesting. bluetooth is hanging off XHCI
<kazinsal>
wifi is not
wand has quit [Ping timeout: 240 seconds]
wand has joined #osdev
Likorn has quit [Quit: WeeChat 3.4.1]
<mrvn>
kazinsal: 512MB + 32MB contiguously?
<kazinsal>
Yeah
Jari-- has quit [Quit: hmm]
SGautam has joined #osdev
Burgundy has joined #osdev
Starfoxxes has quit [Ping timeout: 260 seconds]
Vercas has joined #osdev
Vercas has quit [Client Quit]
Starfoxxes has joined #osdev
Vercas has joined #osdev
dh` has quit [Ping timeout: 246 seconds]
Starfoxxes has quit [Ping timeout: 248 seconds]
bliminse has quit [Quit: leaving]
Starfoxxes has joined #osdev
pretty_dumm_guy has joined #osdev
GeDaMo has joined #osdev
the_lanetly_052 has joined #osdev
bliminse has joined #osdev
bauen1 has quit [Ping timeout: 248 seconds]
the_lanetly_052 has quit [Max SendQ exceeded]
the_lanetly_052 has joined #osdev
the_lanetly_052_ has quit [Ping timeout: 248 seconds]
kingoffrance has quit [Ping timeout: 240 seconds]
kingoffrance has joined #osdev
gog has joined #osdev
oriansj has left #osdev [ERC (IRC client for Emacs 27.1)]
SGautam has quit [Quit: Connection closed for inactivity]
gamozo has quit [Ping timeout: 252 seconds]
gamozo has joined #osdev
bauen1 has joined #osdev
heat has joined #osdev
<heat>
geist, fuck yeah pci segments
<heat>
this proves I was right when I implemented full pci-e segment support
<heat>
haterz said it couldn't be done
crm has joined #osdev
orthoplex64 has quit [Ping timeout: 240 seconds]
<mrvn>
hah, he was just setting a challenge
heat has quit [Read error: Connection reset by peer]
heat_ has joined #osdev
dennis95 has joined #osdev
HeTo has quit [Ping timeout: 240 seconds]
HeTo has joined #osdev
pretty_dumm_guy has quit [Quit: WeeChat 3.5]
sikkiladho has joined #osdev
Likorn has joined #osdev
craigo has quit [Ping timeout: 256 seconds]
eryjus has quit [Ping timeout: 248 seconds]
heat has joined #osdev
heat_ has quit [Read error: Connection reset by peer]
srjek has joined #osdev
sonny has joined #osdev
bradd has quit [Ping timeout: 240 seconds]
bradd has joined #osdev
sonny has quit [Remote host closed the connection]
sonny has joined #osdev
vdamewood has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
sonny has quit [Remote host closed the connection]
the_lanetly_052 has quit [Ping timeout: 256 seconds]
pretty_dumm_guy has joined #osdev
sikkiladho has quit [Quit: Connection closed for inactivity]
SGautam has joined #osdev
bradd has quit [Ping timeout: 248 seconds]
bradd has joined #osdev
knusbaum has quit [Ping timeout: 256 seconds]
knusbaum has joined #osdev
dennis95 has quit [Quit: Leaving]
mahmutov has joined #osdev
the_lanetly_052 has joined #osdev
eryjus has joined #osdev
_eryjus has joined #osdev
eryjus has quit [Ping timeout: 256 seconds]
<gorgonical>
I was re-reading about OSes written in/on languages that use VMs, like Erlang. HydrOS is a concept that uses a microkernel in C and adds some built-ins to allow Erlang code to interface with the hardware better.
<gorgonical>
But all of that assumes you have a working BEAM implementation on your platform, incl. a C runtime and library. How much is that "cheating"?
<bslsk05>
mirage/mirage - MirageOS is a library operating system that constructs unikernels (213 forks/1868 stargazers/ISC)
<GeDaMo>
I believe Squeak Smalltalk is written in itself but it can generate the C for the VM
<gorgonical>
mrvn: These unikernels target the hypervisor? Abstracts away some/all of the difficulty of hardware management?
Dyskos has joined #osdev
<mrvn>
gorgonical: yes, they just support the xen virtual hardware.
<gorgonical>
An interesting concept. I have a vague understanding that OCaml can have a more direct interface with the hardware. Isn't there a way to compile ocaml to native code?
<mrvn>
there is and that is what they do.
<gorgonical>
Fascinating
<gorgonical>
OCaml is such an interesting language
<mrvn>
You always need some glue in asm or C to connect higher level languages to the hardware. That much cheating is unavoidable.
<gorgonical>
I agree. I don't think it's cheating to use C e.g. to create your ASM stubs and to do very low-level things like load the GDT/IDT. But an entire C runtime to host a VM is not exactly what I had in mind
<gorgonical>
Mind you, the core logic of the OS kernel *is* written in Erlang. The process management, paging, etc. is all written in Erlang. So that's where the argument can be made, I think
<mrvn>
For ocaml there is a module ctypes that handles the glue for you, though. Can't remember if mirage uses that.
<mrvn>
I think XEN helps you with the GDT/IDT too, as in you don't have to deal with that at all. You provide a separate entry point to XEN that it calls for exceptions and interrupts and such.
<gorgonical>
Wow
<gorgonical>
I haven't dabbled much in paravirtualized stuff like Xen
<gorgonical>
I mean, makes sense
<mrvn>
The xen paravirtual interface really takes away tons and tons of the hardware and replaces that with virtual interfaces.
<gorgonical>
That's sort of the whole selling point of these virtual machines isn't it? I know at least a few projects that say explicitly it only works on QEMU under a specific profile
<mrvn>
And by not having to emulate all the crappy hardware interfaces, it's so much faster.
<mrvn>
Well, paravirtualized stuff is from a time before hardware VM support. It's kind of gotten lost in time now. The hardware has eliminated a lot of the inefficiencies that paravirtualized worked around.
<mrvn>
qemu uses kvm
<j`ey>
or hvf on macOS
<heat>
you can also take away the hardware in qemu
<heat>
use virtio everywhere
<heat>
kvm extensions (or whatever those are called) also exist
<heat>
kvm-clock and others
SGautam has quit [Quit: Connection closed for inactivity]
bliminse has quit [Quit: leaving]
wootehfoot has joined #osdev
bliminse has joined #osdev
<geist>
mrvn: or somewhat as i've mentioned before, riscv takes a different strategy and basically assumes paravirtualization is there always and thus the SBI firmware interface can be paravirtualized
vimal has joined #osdev
<mrvn>
xen paravirtualization abstracts the page tables and such too.
<mrvn>
Why is there no BYTE_BIT in limits.h?
<geist>
yah. reminds me i should fiddle with that again
<geist>
hmm, the compiler might provide that?
<geist>
is it a builtin #define?
<mrvn>
enum class byte : unsigned char {} ; since c++17
<mrvn>
it's what you should use for buffers
<geist>
right
<mrvn>
"A byte is only a collection of bits, and the only operators defined for it are the bitwise ones. "
<geist>
i think the advantage there is it doesn't promote or convert itself to int over using unsigned char
<mrvn>
yep. no arithmetic by accident
<geist>
but at least all of us that have dealt with arm are pretty well aware of unsigned char vs char
<geist>
unless you've had to deal with one of the few arches that defines it the other way you probably never would really bump into that weird edge case of C
<mrvn>
ppc too
<mrvn>
std::byte also has an effect on aliasing. A char * can alias anything in a function call while std::byte* can only alias other std::byte*.
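The "no arithmetic by accident" property above can be shown in a few lines (the helper name is made up for illustration): std::byte (C++17) defines only the bitwise operators, so stray arithmetic on raw buffer data is a compile error instead of a silent promotion to int.

```cpp
#include <cstddef>

// Combine two 4-bit values into one byte using only the operators
// std::byte actually provides (<<, |).
constexpr std::byte pack_nibbles(unsigned hi, unsigned lo) {
    return (static_cast<std::byte>(hi) << 4) | static_cast<std::byte>(lo);
    // static_cast<std::byte>(hi) + 1  // would not compile: no operator+
}
```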
vimal has quit [Remote host closed the connection]
xenos1984 has quit [Read error: Connection reset by peer]
<heat>
I don't get std::byte
<heat>
it's truly useless
<geist>
vs what?
<heat>
unsigned char?
<geist>
go back and re-read a bit of what mrvn wrote
<heat>
oops I can't read
<heat>
yeah good point
<heat>
anyway
<heat>
EMERGENCY
<heat>
NVIDIA IS PUBLISHING THEIR LINUX DRIVERS AS OPEN SOURCE
<bslsk05>
github.com: open-gpu-kernel-modules/kern_bus_gm200.c at main · NVIDIA/open-gpu-kernel-modules · GitHub
<klys>
...getting closer
wootehfoot has quit [Quit: Leaving]
<klange>
There isn't going to be anything gtx960-related in here that isn't tangential
<klys>
gtx9xx are maxwell
<klange>
The important stuff is only turing or later - that's RTX
<klys>
so will I have a working driver from this code? you seem like you may know already
<heat>
i think so
<heat>
it's not here just for show
<klys>
ok 960 is gm206
<klys>
the latest trouble I encountered when asking around at freedesktop/nouveau is that they can't control the fans
<klange>
> In this open-source release, support for GeForce and Workstation GPUs is alpha quality. GeForce and Workstation users can use this driver on Turing and NVIDIA Ampere architecture GPUs to run Linux desktops
<klange>
You are not going to get a working driver for a Maxwell device from this source release.
<klys>
I guess
<klange>
It's not even aimed at desktop users. It's for datacenter GPUs first and foremost.
<klys>
thanks klange
<klange>
Hopefully there's enough leftover stuff for those older GPUs that nouveau can work off of to fill in gaps.
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
bauen1 has quit [Read error: Connection reset by peer]
<Griwes>
You won't get maxwell because maxwell lacks the piece of hardware that's needed for this to be viable
<Griwes>
Also really glad I can finally talk about this :P
<klys>
uvm? icpu?
<klys>
now what feature do you mean to say
<klange>
they moved all the proprietary bits from software into a riscv core on the GPU, so of course they can release sources now, none of the really interesting parts are in there
<Griwes>
Yes, but it also means you need to do so much less for a driver to work
<klys>
this kind of thing isn't a `driver' all the way through. it's performing gl instructions on cores.
<klys>
so, it's an operating system, in a way
<klys>
the only thing it really lacks is a timer driven interrupt controller
<klys>
otherwise you could target the device
mahmutov has quit [Ping timeout: 240 seconds]
<klys>
graphitemaster's take ought to be insightful I'm hoping to hear from him
bauen1 has joined #osdev
joe9 has quit [Remote host closed the connection]
Burgundy has quit [Ping timeout: 252 seconds]
<graphitemaster>
it's the kernel mode driver, the gl driver is all userspace driver software which is still closed
<graphitemaster>
and yes, most of the special sauce has been moved into firmware which runs on risc-v cpus on the gpu itself and they just provide the firmware blobs now
<graphitemaster>
but the good news is that nouveau can now use the firmware and kmd source code to implement reclocking finally
<graphitemaster>
aside, nv also has an open source vulkan driver in the works (user space) so that may be released soonish
troseman has joined #osdev
Clockface has joined #osdev
bradd_ has quit [Ping timeout: 240 seconds]
mrvn has quit [Ping timeout: 240 seconds]
bradd has joined #osdev
jack_rabbit has joined #osdev
knusbaum has quit [Ping timeout: 276 seconds]
pretty_dumm_guy has quit [Quit: WeeChat 3.5]
craigo has joined #osdev
janemba has quit [Ping timeout: 256 seconds]
nyah has quit [Quit: leaving]
janemba has joined #osdev
Starfoxxes has quit [Ping timeout: 248 seconds]
Starfoxxes has joined #osdev
joe9 has joined #osdev
<geist>
heat: yah i've had to deal with this nvrm stuff before
<geist>
it's basically a full OS and hardware abstraction layer
<geist>
i wonder what the firmware looks like on the riscv cores