<zid>
mjg_: That's why you should have talked about graphics devices with me instead
bauen1 has joined #osdev
<fatal1ty>
mjg_: i want to see your gym boyfriend transcend that
<mjg_>
zid: i have no clue about graphics, devices or otherwise
<zid>
same tbh
<mjg_>
ok in that case let's engage in a heated flamewar
<mjg_>
pick a vendor you like
<zid>
I think my best bet is just to keep using the one I am using, bochs? Then use double vertical res + scrolling to double buffer it
<zid>
then think of something to do wrt tearing
<zid>
triple buffering maybe..
<mjg_>
ye this is a common rookie mistake, you should do the [redacted] instead
<fatal1ty>
rofl
<fatal1ty>
indeed
<zid>
only problem with triple buffering is that I don't have enough vram to do it on the gpu so I'd have to memcpy :D
<zid>
could do vga_mem_mb though by the looks of it to up it
<fatal1ty>
you also could buy a new gpu
<fatal1ty>
by the looks of it
<zid>
or stick to 1280x1024
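The approach zid sketches, double the vertical resolution plus panning between the two halves, can be illustrated against the Bochs/QEMU "dispi" interface. This is a freestanding, non-runnable sketch: `outw()` is assumed to be the kernel's own port-I/O helper, and the register indices are the standard Bochs dispi ones (index/data ports 0x1CE/0x1CF).

```c
#include <stdint.h>

#define VBE_DISPI_IOPORT_INDEX      0x01CE
#define VBE_DISPI_IOPORT_DATA       0x01CF
#define VBE_DISPI_INDEX_VIRT_HEIGHT 7
#define VBE_DISPI_INDEX_Y_OFFSET    9

extern void outw(uint16_t port, uint16_t val); /* kernel's own helper */

static void dispi_write(uint16_t index, uint16_t val) {
    outw(VBE_DISPI_IOPORT_INDEX, index);
    outw(VBE_DISPI_IOPORT_DATA, val);
}

/* Allocate a framebuffer twice as tall as the visible mode... */
void setup_double_buffer(uint16_t height) {
    dispi_write(VBE_DISPI_INDEX_VIRT_HEIGHT, height * 2);
}

/* ...then "flip" by panning the display to whichever half was just drawn.
 * half is 0 (top) or 1 (bottom). */
void show_half(uint16_t height, int half) {
    dispi_write(VBE_DISPI_INDEX_Y_OFFSET, half ? height : 0);
}
```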
justHaunted is now known as DeliriumTremens_
xenos1984 has quit [Quit: Leaving.]
Jari-- has quit [Ping timeout: 252 seconds]
<fatal1ty>
zid: what if i told you that you and i will work on something one day ?
wootehfoot has joined #osdev
<fatal1ty>
ok enough with fooling around
<fatal1ty>
i got a job to do
<fatal1ty>
cya
<Matt|home>
Okay. It has been a very long time since I read up on this, but I'm doing a project with binary files right now. If I remember correctly, processes get loaded into memory and are given mapped virtual addresses which correspond to real physical addresses, at least on x86. Does each process have access to the entire memory space, and how does it pull information from other processes? For example: say you have two separate C programs
<Matt|home>
running, each using a char variable stored in their respective memory spaces on the stack. Those chars are invisible to each process right? Each process is given its own stack that can't interact with the other?
<fatal1ty>
no
<fatal1ty>
some memory spaces are restricted
<Matt|home>
like kernel space
<fatal1ty>
yes
<fatal1ty>
and read only
<fatal1ty>
.text and stuff
<Matt|home>
But you can attempt to access every single numerical address on the computer with a program. It'll just throw an error if you try to access it
<fatal1ty>
you can attempt to fly and pretend that you are a bird, but it'll probably end badly, yeah ...
<GeDaMo>
Each process has its own address space
<GeDaMo>
It has a page map which maps virtual addresses to physical memory
<fatal1ty>
come on dude i got this ...
<Matt|home>
Let me try asking it this way: Regardless of whether each process has its own address space, you can still act as though you have the entire address space at your disposal. so addresses 0 to a billion or whatever on a 4 gigabyte system
<fatal1ty>
and then you guys always complain that im useless here ..
<GeDaMo>
Yes
<GeDaMo>
If you try to access a virtual address which isn't mapped, it causes a page fault
<mjg_>
your address space is not necessarily 4G on a 32-bit arch
<Matt|home>
right. im trying to keep it simple
<fatal1ty>
it is on x86
<Matt|home>
the processes CAN interact with each other - they can clone, get PIDs stuff like that
<fatal1ty>
and they can share
<mjg_>
you can share memory with mmap for example
<GeDaMo>
The kernel can change the page map and it's possible for more than one process to have the same physical memory mapped into their address space
<Matt|home>
If each process acts as though it has access to the entire address space, is there a way to determine what your actual accessible address range is within the process? Or do you just hope that you don't page fault if you try to access something out of that range
<fatal1ty>
the compiler and your OS usually make sure that you will not page fault
<fatal1ty>
unless you try to access dangling pointers
<fatal1ty>
in a given context
<GeDaMo>
You request memory from the kernel but it's up to you to avoid accessing memory you haven't requested
<fatal1ty>
Matt|home: i really hope you gonna do something with this information
<fatal1ty>
as i couldn't ...
<fatal1ty>
it's just terms for me
<Matt|home>
okay. So scenario: 2 concurrent running processes, A and B. A accidentally tries to access a variable B has somewhere in .rodata or whatever. address 'abcdefgh'. what happens. kernel silently maps that virtual space elsewhere? or throws page fault
<GeDaMo>
A can't access B's address space
<Matt|home>
or is it impossible with how mapping works
<fatal1ty>
they both have a different B variable
<fatal1ty>
or A
<fatal1ty>
Matt|home: i think you meant to say concurrent threads
<GeDaMo>
A has 0..Max and B has a separate 0..Max
pie_ has quit []
vancz has quit []
<Matt|home>
okay. im sorry im having trouble wrapping my head around this. If you have a virtual address: 0x00000001 for two separate processes, how are they mapped to two different memory regions
<GeDaMo>
Page table
<Matt|home>
If i remember correctly, the page table is accessed with the first four bits of the address, and the last four is the offset
<GeDaMo>
Each process has a table where virtual addresses are mapped to physical addresses
<fatal1ty>
Matt|home: the MMU will map them physically somewhere else
<GeDaMo>
When the kernel switches between processes, it swaps the page table too
<Matt|home>
right. so that necessarily means that the more processes you have running, the smaller the accessible address space each process will have
<fatal1ty>
yes
<GeDaMo>
No, each process has a full virtual address space
<fatal1ty>
they'll be accessible but reading from disk will occur more often
<fatal1ty>
as physical memory is being used
<GeDaMo>
Virtual memory doesn't have to involve swapping out to disk
<fatal1ty>
there is a special page that reads from disks but i don't know much about this
<Matt|home>
sorry im just really dumb, i'll try to look for a visual representation or something later cuz im having trouble understanding how this works in practice. alright. here's what i was leading up to
elastic_dog is now known as Guest8329
<fatal1ty>
GeDaMo: but are you seeing me osdev'in in the future ?
elastic_dog has joined #osdev
Guest8329 has quit [Ping timeout: 255 seconds]
vancz has joined #osdev
pie_ has joined #osdev
DeliriumTremens_ is now known as justache
<Matt|home>
If I load a binary file into memory with mmap, it'll have an entry point - the address at which program execution theoretically begins. but because of shenanigans, it's not the actual file offset or some such. for ELF binaries for example, it gives the "virtual address"
<Matt|home>
i assume you have to do operations to convert that to get an actual file offset?
<fatal1ty>
you don't load a binary file with mmap
<GeDaMo>
You can
<Matt|home>
you totally can. im working on a binary parser right now
<fatal1ty>
im standing corrected
<fatal1ty>
Matt|home: can i join your journey ?
<fatal1ty>
with the parser ?
<Matt|home>
basically, my question is: if you change the virtual address to point elsewhere, why wouldn't that give you an accurate file location
<GeDaMo>
You can mark pages in the page table to say they are allocated but not mapped to physical memory
<Matt|home>
ah
<fatal1ty>
the pointing can point to a different table and not to a specific file
Burgundy has left #osdev [#osdev]
<GeDaMo>
If the page table has a virtual address mapped to physical memory then an access works with no problem
nanovad has quit [Quit: ZNC 1.7.5+deb4 - https://znc.in]
<GeDaMo>
If it's not mapped to physical memory then it causes a page fault and the kernel can decide what to do about it
<GeDaMo>
E.g. map it to physical memory, load it from disk etc.
<fatal1ty>
GeDaMo: what happens if a file is not completely loaded to memory, and i try to access a part of the file that isn't loaded? what causes the os to fetch the relevant part of the file ( and how does it know what IS the relevant part ) ?
<fatal1ty>
?
<mjg_>
i'm pretty sure this is all explained in your favourite os textbook
<Matt|home>
i'll read up on page tables soon
nanovad has joined #osdev
<mjg_>
with diagrams 'n shit
<Matt|home>
mjg_ : i'm flippin through it right now
<fatal1ty>
i mean some part of the file is loaded and some isn't
<fatal1ty>
i forgot that case ..
<GeDaMo>
fatal1ty: the file start would be at an address so it would be loaded to that address plus the file offset
<fatal1ty>
if i load two files in the same process ... how does the os know if the virtual address im accessing is part of the first file or the second ?
<GeDaMo>
Page table
<GeDaMo>
Each file will be mapped to consecutive virtual addresses
<GeDaMo>
So file A is at address X .. X+A.length and file B is Y .. Y+B.length
<fatal1ty>
but technically A can be so long .. that it will reach Y
<fatal1ty>
if X+A.length == Y
<fatal1ty>
how does the os know to fetch a part from the first file .. or rather i meant, to access the other file.. which is at Y
<fatal1ty>
this is indeed confusing
<fatal1ty>
i guess that the length of the file is known even though it's not completely loaded to memory
<GeDaMo>
You access memory at an address between X and X+A.length, that corresponds to part of file A
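What GeDaMo describes, check which mapping the faulting address falls in, then fetch exactly that page of the file, can be sketched as kernel-side pseudocode. Every type and helper below (`struct mapping`, `read_from_file`, `map_page`, and so on) is invented for illustration and does not come from any real kernel.

```c
struct mapping {            /* one mmap'd region: [start, start+len) */
    unsigned long start, len;
    struct file *file;      /* backing file, or NULL for anonymous */
    unsigned long file_off; /* file offset that start corresponds to */
};

void handle_page_fault(struct mapping *m, unsigned long fault_addr) {
    if (fault_addr < m->start || fault_addr >= m->start + m->len) {
        deliver_segfault();             /* address not mapped: SIGSEGV */
        return;
    }
    unsigned long page = fault_addr & ~0xfffUL;
    void *phys = alloc_phys_page();
    if (m->file)                        /* file-backed: read the right 4 KiB */
        read_from_file(m->file, m->file_off + (page - m->start), phys, 4096);
    else
        zero_page(phys);                /* anonymous: demand-zero */
    map_page(page, phys);               /* update this process's page table */
}
```

Because each mapping records its own base and length, an address in [X, X+A.length) can only ever resolve to file A; a second file mapped overlapping that range would be the kernel's bookkeeping mistake, as GeDaMo says.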
<fatal1ty>
otherwise another file could technically be mapped to somewhere between X and X+A.length
heat has joined #osdev
<GeDaMo>
That would be a mistake
<heat>
mjg_, does freebsd have stackdepot?
<fatal1ty>
does every address between X and X+A.length that isn't currently loaded into memory mean fetching that part from disk ?
<GeDaMo>
Yes
<fatal1ty>
if so why not just load the entire file ... if we need these addresses so much ?
<mjg_>
heat: no
<GeDaMo>
Because that would use physical memory
<fatal1ty>
right ...
<fatal1ty>
thanks
<heat>
mjg_, so you guys don't store stack traces for the allocation/free? that seems kinda shitty
<mjg_>
what?
<heat>
unless you found an INGENIOUS SOLUTION
isaacwoods has joined #osdev
<mjg_>
for alloc/free tracing the allocator artificially bumps the size to accommodate the stacktrace
<heat>
stackdepot is what llvm/linux call for the thing where they store stack traces
<heat>
oh
<mjg_>
so it hangs around the buffer
<heat>
they say it's a bit inefficient
<mjg_>
who said it is efficient. it is good enough for real world debug runs though
<heat>
because the stacks will be mostly the same
<mjg_>
not fit for production for sure
<heat>
i'm 250% sure they built ASAN and KASAN to be production-ish-ready lol
<heat>
the quarantine has smaller percpu quarantines
<mjg_>
*production*?
<mjg_>
wut
<heat>
yeah idk
<mjg_>
i don't know kasan cost, but it has to be non-trivial
<heat>
it seems that they care about minimal overhead
<mjg_>
albeit i see how one could justify it
<heat>
maybe to catch more bugs
<mjg_>
if there are spare cycles, i do think it is a worthwhile addition
<mjg_>
but then again, is this something they run on phones or other battery-powered devices?
<mjg_>
or just their servers
<heat>
how else could you justify percpu queues? percpu -> faster -> more chance of getting a nasty race -> yay?
Burgundy has joined #osdev
<mjg_>
heat: ez. whatever other solutions they have tried might have been too slow for use even with debug
<heat>
nah. they would be usable
<mjg_>
at the end of the day you want debug to be good enough to be used during normal development
<heat>
your kernel can have huge big locks and still work nicely and boot to desktop (see openbsd)
<mjg_>
which interestingly is not always achieved
<mjg_>
well man
<mjg_>
if i want to debug races on something bigger than a 2010 laptop
<heat>
you'll lose a bunch of performance sure
<mjg_>
i can't have something which will floor perf
<heat>
i kinda wonder how well the pcpu magazine works with KASAN
<heat>
it'll alloc faster, sure, but the effectiveness of getting objects back and not needing to lock the slab cache goes away
<mjg_>
tradeoffz
<heat>
this is making me wonder if general CoW page table support is worth it
<energizer>
what are some good opinions to have about garbage collected languages in operating system implementation?
<epony>
no good opinions about that
<epony>
the OS kernel virtual memory is the real garbage collection
Burgundy has quit [Ping timeout: 248 seconds]
<epony>
having that in each program instance is double overhead
<mjg_>
energizer: can you clarify the question
<epony>
it should be (nearly) zero overhead
<epony>
so the GC is out in the trashbin
<mjg_>
you mean gc'ed kernel, userspace, something else
<energizer>
i mean Mezzano
<epony>
so.. from faulty concept to a particular implementation as a justification for "beliefs in fairy tales"
<energizer>
mjg_: let me clarify your question then - what would it mean to have a gc language "in userspace"?
<epony>
was it not your question to have some opinions?
<energizer>
epony: yeah but mjg_ asked for more specifics before answering, which seems fair enough
<epony>
about garbage collection as a last resort to make it "apply" at least somewhere?
<epony>
it seems like a failed concept for system implementations, you should rather check what virtualisation runtimes are doing (the likes of jits and jvms)
<epony>
the purpose of an OS is to handle resources of the machine efficiently and with minimum overhead and provide facilities (services) for their management
<epony>
if collecting resources used and filled with previous data is considered efficient..
<epony>
you should check how efficient Lisp is for kernel implementations too, in that line of crazy thinking