<bslsk05>
github.com: lk-overlay/bootcode.ld at master · librerpi/lk-overlay · GitHub
<zid>
I just use boot.o (.text); *.o (.text)
<clever>
in this example, the previous stage expects a dumb .bin file, and the entry-point is always 0x200 into the file
<zid>
same shit different method
<clever>
and the vector table happens to be 0x200 in length
<clever>
so i have the vector table, then .text.start, .text*, and then whatever
<zid>
but bear in mind it requires you to jmp over your multiboot header to avoid running it, if you're letting it live first in .text
<zid>
if you're not being loaded like an ELF with an entry point, anyway
<zid>
I like my entry point to just be the first byte so that it works either way
<clever>
thats not an option for either of my vc4 modes
<clever>
the .bin mode must have the entrypoint 0x200 into the file, but i could nop-sled my way to it
<clever>
and the .elf mode expects valid elf headers, so those are always in the way, and with an entry-point, who cares?
<clever>
but it is true for my arm mode, which expects a normal arm vector table, complete with a reset vector, at the front of the binary
<zid>
I'd just have .vector : {} .text : {} in that case
<zid>
so that the first byte of .text is still the entry point
<clever>
yep
<clever>
which is what bootcode.ld above is doing
<clever>
i also dont like relying on the file being called boot.o, that doesnt feel safe, and what if another boot.o comes along with an unrelated purpose? paths?
<bslsk05>
github.com: rpi-open-firmware/Makefile at master · librerpi/rpi-open-firmware · GitHub
<zid>
You make it sound like every makefile does %.o : %.c on `find / -name "*.c"`
<clever>
its less that, and more what if some future developer creates a second boot.c, and both boot.S and boot.c have to be linked in
<clever>
not due to a dumb find command, but by choice of a dev
<zid>
that already doesn't work
<zid>
before the name is even important
<clever>
boot.o and cmd/boot.o
<clever>
they dont collide when subdirs come into play
<clever>
but do they collide when the linker script says boot.o is special?
<zid>
if they're in the different dirs, how are they colliding
<zid>
your makefile is doing find ../ -name "*.c" again?
<clever>
i dont know if linker scripts check the dir part
dude12312414 has joined #osdev
vdamewood has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
<zid>
to rephrase it so it's easier to understand, I have like 20 files called main.c exporting main() in some projects (tools, etc) and I've never once accidentally fucked up and used the wrong filename in the wrong output
<clever>
thats not what i'm saying
<clever>
what if you need to include both boot.o(boot.c, the entrypoint) and cmd/boot.o(cmd/boot.c the command) in the same link
<zid>
It is what you're saying though, you just maybe don't realize it
<zid>
Then i've fucked up the link and included main.c from two things.
<clever>
they implement different things, and are supposed to be in the same .elf at the end
<clever>
they implement different things
<zid>
They implement different things, but are *never* supposed to be in the same .elf at the end
<clever>
why?
<clever>
this is a kernel where all of the commands are in the same .elf as the kernel
<zid>
and they can all have a boot.o just fine
<zid>
what they cannot do, is cross the streams
<zid>
and link *their* boot.c into *my* link
<zid>
same as if I had multiple main.c
<zid>
the one for booting is in boot/, if someone sneaks something from ../kernel/submodule/net/irc/boot.c into that link, it's *fucked*, regardless of whether the filename was semantically relevant or not
<clever>
and thats why i dont want the name boot.o to have a special meaning
<bslsk05>
github.com: u-boot/boot.c at master · u-boot/u-boot · GitHub
<bslsk05>
github.com: u-boot/boot.c at master · u-boot/u-boot · GitHub
<clever>
it is perfectly valid to link both of those boot.c files into the same .elf
<clever>
one is arch specific boot code, one is a generic boot command for the REPL
<clever>
the arch boot.c is involved in booting, and then at a much later time, running cmd/boot.c
sjs has joined #osdev
<zid>
I'm going to make you a promise. I will *never* do ld -T../../../../../cmd/link.ld
<zid>
in order to swap the paths of the files
<zid>
nor will I name a file stdio.h
<zid>
but you probably already believed me on that one
<clever>
that reminds me, i'm not linking against lua-aux, because it needs junk like fopen which i lack
<clever>
but i am re-implementing functions from that sub-library, and copying the existing api
<clever>
so the average viewer may look at the code and think i'm using aux, but i'm technically not
<clever>
i'm just using the same function names, because it serves the same purpose, and the existing docs do a perfect job of explaining what it does
dude12312414 has quit [Remote host closed the connection]
dude12312414 has joined #osdev
dude12312414 has quit [Ping timeout: 268 seconds]
dude12312414 has joined #osdev
dude12312414 has quit [Client Quit]
jafarlihi has joined #osdev
vdamewood has joined #osdev
<griddle>
How is everyone this weekend?
<jafarlihi>
Fucked up, can't find things to contribute
<gog>
got some clothes, some mochi
<gog>
bought my wife a jacket
<gog>
groceries
<griddle>
Sounds productive
<gog>
trying to figure out what is going on with my job
vinleod has joined #osdev
<gog>
it's a mess
<gog>
i was told that i would likely be getting a new contract next week for a new position
<gog>
but it has to be before friday
vdamewood has quit [Killed (tungsten.libera.chat (Nickname regained by services))]
vinleod is now known as vdamewood
<gog>
if i don't have a new contract by friday i have to walk
<vdamewood>
Trouble with work?
<gog>
the project i'm on is dead and they gave us notice. i'm supposedly being retained in a new position but the client hasn't said when they want us to stop work
<gog>
so i'm scheduled to work next week, but my contractual last day is thursday
* vdamewood
gives gog a fishy
<gog>
we all agreed we're not working friday unless the notice is rescinded
<gog>
everybody on the team
<gog>
because that would be against the CBA and we could sue
<vdamewood>
Throw the boss in a volcano!
<gog>
thinkin about it
<gog>
he skulked off at 3pm on friday rather than talk to us face to face
<gog>
he made a promise to me but he's not giving me a lot of confidence
<vdamewood>
Promises aren't worth the paper they're written on.
<gog>
yeh for real
<gog>
i've already applied for a bunch of other jobs in the meantime
sprock has quit [Ping timeout: 268 seconds]
<mjg>
lol
<mjg>
this reminds me a little bit of oracle which axed a lot of people with a pre-recorded message in a conf call
jafarlihi has quit [Quit: WeeChat 3.6]
SpikeHeron has quit [Quit: WeeChat 3.6]
<griddle>
I've been playing with a new nerd rectangle (planck keyboard). It's mostly a toy right now. Programming is impossible w/o the muscle memory of where symbols are
<clever>
griddle: i get messed up if some of the keys are off by even half a cm, lol
<griddle>
Currently at 28wpm w/ rust code on monkeytype.com
<clever>
my desktop and laptop keyboards ive gotten used to, but the rf keyboard for my media center is a pain, in multiple ways
* geist
yawns
<geist>
good afternoon folks!
<griddle>
howdy
<clever>
afternoon
<zid>
92 first try
<geist>
how have the last few days been? was at a work event most of the time. i hope everyone was nice and civil with each other
<geist>
and oses were devved
<griddle>
i have not written code in my spare time outside of firmware for this new keyboard lol
<griddle>
pretty sad
<zid>
109 at C
<griddle>
w/ symbols or nah
<zid>
how do I change it
<griddle>
make sure punctuation is on in the top right
<griddle>
punct and numbers
<zid>
ah I see it
<griddle>
I'm doing rust because it's got the most non ascii
<zid>
It just adds random punctuation lol
<zid>
return! main, EOF.
<griddle>
that is valid rust code
poyking16 has quit [Quit: WeeChat 3.6]
<griddle>
just throw !#[] at it till your code compiles
<\Test_User>
"make your own language that errors if any valid ascii character is used"
<zid>
90 with random punctuation
<griddle>
I can understand the appeal of this keyboard, but man is there a learning curve
<griddle>
it forces you to type zxcv w/ the right finger now lol
<zid>
have you considered
<zid>
a full ISO keyboard
<griddle>
space cadet keyboard
<griddle>
7 layer keys including "greek"
<\Test_User>
256-key keyboard and you must memorize the keycodes for everything you want to type
<griddle>
a 16 key board where you just type out the hex
<griddle>
that cat has a better life than anyone ive ever met
<gog>
boat cat boat cat
<zid>
cats and boats and cats and boats
<zid>
that's my new techno song
<gog>
nice
<junon>
So learning about virtual address translation and seeing that the page table indices are encoded in the virtual addresses. Is this specified in the manuals or is this an implementation detail? I've not read about this before, certainly not on osdev I don't think.
<gog>
it's totally ont he wiki
<gog>
and it's specified in the manuals
<junon>
:o wat, I've not read about it, feels like I've read all the pages on this on the wiki haha. Somehow missed this.
<zid>
both
<junon>
It's blowing my mind, I had no idea.
<zid>
it's an implementation detail, and is in the manual of the implementation :P
<mrvn>
They aren't encoded in the virtual address. What's a page table?
<bslsk05>
pages.cs.wisc.edu: Operating Systems: Three Easy Pieces
<junon>
Right, referring to x86_64, sorry
* gog
encodes mrvn into a virtual address
<griddle>
they definitely go into it better than the wiki does
<zid>
You can also just view it as a trie
<mrvn>
junon: beware of 5 level page tables
<zid>
with 512 nodes at each level
<mrvn>
zid: radix tree
<zid>
prefix-tree
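(As an aside on the trie view: with x86_64 4-level paging and 4 KiB pages, the four 9-bit table indices and the 12-bit page offset are literal bit fields of the virtual address. A minimal sketch, with illustrative names:)

```c
#include <stdint.h>

/* x86_64, 4-level paging, 4 KiB pages: each level indexes 512 (2^9) entries. */
static inline unsigned pml4_index(uint64_t va)  { return (va >> 39) & 0x1ff; }
static inline unsigned pdpt_index(uint64_t va)  { return (va >> 30) & 0x1ff; }
static inline unsigned pd_index(uint64_t va)    { return (va >> 21) & 0x1ff; }
static inline unsigned pt_index(uint64_t va)    { return (va >> 12) & 0x1ff; }
static inline unsigned page_offset(uint64_t va) { return va & 0xfff; }
```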
<mrvn>
Can I make a tree where nodes have no parent pointer and still have iterators using constant space and without altering the tree to remember where to go next?
<junon>
During a walk you mean?
<mrvn>
yes
<junon>
Only if your tree is bounded in size.
<mrvn>
then everything is O(1)
<junon>
Iteration is not O(1) even in bounded size.
<junon>
You don't magically iterate everything all at once.
<mrvn>
sure it is. it needs at most 4 billion steps, or whatever the bound is
<junon>
bounded != constant
<junon>
just means upper bound.
<junon>
You can still have a short count, thus the iteration complexity is still O(n)
<GeDaMo>
Wouldn't the amount of space needed be related to the height of the tree?
<mrvn>
GeDaMo: yes
<junon>
Yes.
<mrvn>
junon: O() talks about the cost when n -> infty. If it's bounded you hit the bound and then everything is constant.
<junon>
Depends on how you look at it. You're technically correct, but IMO conveys the wrong idea.
<mrvn>
junon: practically everything is bounded by the cpu only having limited memory and address space but that part you ignore.
<junon>
E.g. a search is not constant time IMO, it's still `N`.
<jjuran>
mrvn: Right, Big O applies to big N
<mrvn>
junon: no, constant time. Takes at most <bound> instructions.
<zid>
big O is a scam by big CS to sell you more N
<junon>
I disagree with the assessment tbh unless the algorithm is optimized to account for bound N.
<junon>
Otherwise it's still N.
<junon>
The bound is irrelevant.
<mrvn>
junon: a binary search in an array <= 1000 elements never needs more than 10 steps.
<junon>
But we're arguing semantics.
<mrvn>
junon: 10 steps is constant time.
<mrvn>
so whenever someone asks about O() notation it's only meaningful without bounds.
<junon>
I understand your viewpoint, I just disagree that the complexity of the algorithm itself changes due to an implementation detail, especially the size of the set.
<junon>
It's still O(whatever (log n I think)), it's just that in this particular case you can guarantee no more than 10 steps.
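(For concreteness on the bounded-n point: a binary search over at most 1000 elements halves the range at most ceil(log2 1000) = 10 times, which is where the "10 steps" figure comes from. A minimal sketch:)

```c
#include <stddef.h>

/* Returns the index of key in sorted array a[0..n), or -1 if absent.
 * For n <= 1000 the loop body runs at most ~10 times. */
int bsearch_int(const int *a, size_t n, int key)
{
    size_t lo = 0, hi = n;                 /* half-open range [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
        if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return (lo < n && a[lo] == key) ? (int)lo : -1;
}
```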
<mrvn>
junon: that's the ugly part. No bounds also means your integers have arbitrary size, kind of. But still you consider addition to be constant time.
<GeDaMo>
Store your tree in a heap
<mrvn>
GeDaMo: that only works for some balanced trees
<zid>
junon: is it the same algo still?
<junon>
zid: My assertion assumes so. I would also assume you could make some optimizations time-wise if you can add guarantees to the underlying datastructure, such as fixed bound, etc. Certainly size complexity can be made constant in many cases.
<zid>
I like the idea of calling trivial n, 1
<zid>
but whether big CS agrees idk
<mrvn>
junon: As said, as soon as O() comes into play you have to consider the algorithm without bounds.
<zid>
a bit like astronomers only caring about magnitudes
<zid>
so 4e7 * 4e7 is 1e14 not 1.6e15 or whatever
<junon>
Yeah. I'm personally not a fan of big-O because it's often used to convey incorrect information. Should be used as a tool, not as an assertion. It's the difference between "binary sorts for this data structure are guaranteed to occur in a fixed window of time" vs. "binary sorting is O(log n)". The context differs.
<mrvn>
zid: Look up how to sort big arrays of integers on a massively parallel system: If n < 1000000 sort on a single core, that's a trivial problem. If larger split into sqrt(n) chunks and sort on sqrt(n) cores.
<zid>
yea agreed
<zid>
plus in actual programs, 99% of algos are swamped by the constant
<mrvn>
Sure it's theoretically the fastest sort there is. But will you ever use it? >1000000 elements is no biggy, and that one step in the divide&conquer needs 1000 cores. The next step would be at 1'000'000'000'000 using 1'000'000 cores.
pretty_dumm_guy has joined #osdev
<zid>
like I wouldn't be surprised to see someone describe a web request as being O(n) in terms of speed, scaling with the size of the web-page
<junon>
To this day I don't think I've ever implemented bubble sort for any particular task at hand in my 12 years career-wise and 20+ years of actually writing code. Yet every company needs to make sure I can do it off the top of my head in order to be deemed good enough.
<mrvn>
zid: a web page needs O(n) requests.
<zid>
I use bubblesort religiously
<dh`>
are we doing this argument again?
<zid>
If it needs a better sorting algo, I'm not the one you should be hiring to do it.
<junon>
Same with big-O. I worked at Big Tech for a while and rarely did we need to actually, practically worry about complexity lol.
<mrvn>
junon: but I bet you have used it a lot. Ever used qsort?
<mrvn>
or std::sort
<junon>
Yes of course, used it many times.
<zid>
bigO is like being able to calculate the expected velocity of your potato cannon
<zid>
most programmers are just using 'the tube they have' and 'the fuel they have' or whatever
<zid>
and that's 100% correct
<mrvn>
junon: for n < 6-32 (depending on lib) it uses what is in effect bubble-sort.
<GeDaMo>
Big-O is a theoretical measure of complexity, not a predictor of real world performance
<zid>
The engineering constraints matter the most, in terms of mind-space
<zid>
and in turn, you THINK about things in terms of engineering constraints
<junon>
Yeah I knew that. Didn't mean bubble sort was bad, just that I've never implemented it from scratch before, because I've never needed to.
<zid>
I've never gone into a project and thought about the complexity of the algs I am using, I just naturally don't do dumb shit
<mrvn>
junon: When you do big data the O() notation becomes more relevant. But that's basically all known solved problems like binary search. So nobody cares about the complexity, only optimizing the performance of the solution.
<junon>
Yep.
<mrvn>
zid: so you just use std::list everywhere not caring that std::vector would give better random access (or vice versa)
<zid>
If I am *aware* there is a critical alg thingy in play, then I can do research or whatever
<mrvn>
junon: you should, it's a rite of passage. :)
<junon>
Most of the big data stuff we did at Big Tech ended up being how to make the already-created data frameworks faster inside of our prod datacenters. We didn't actually re-create e.g. Spark and the like.
<zid>
because even in the "big O matters" case, the constant overhead is *still* the most relevant thing
<zid>
red black and B and crap all have the same semantics, what matters is *specifically* how they interact with your cache and stuff
<mrvn>
zid: but there you can gain a lot by switching algorithms or data structures. O(n^1.8) or O(n^1.7) matters when n is in the billions.
<junon>
Bottlenecks were typically cache misses, HBase being cruddy (or poorly configured), being failed over half the time, or waiting on other teams to unblock us, etc. That's not factored into big-O lol.
<zid>
yup
<zid>
The constant term still dominates, if it doesn't then it's a bug :P
<jjuran>
mrvn: std::list doesn't just have worse complexity for many things, but higher coefficients since you're often calling the memory allocator, so it's worse even for small N
<junon>
then cache coherency with list, too, which also isn't factored into big-O
<mrvn>
But mostly you have stuff like sorting or binary search where everything is O(n log n), no changing that on the big scale. Then you can only tweak the constant factor.
<jjuran>
Also locality
<junon>
cache coherency == locality, no? or am I misusing terms
<zid>
junon: I've got 100 things I need to insert into this list, better spend 4 seconds warming up an AWS instance with a special O(1) insert database!
<mrvn>
junon: coherency is between threads
<zid>
coherence is about making sense
<zid>
locality is about distance
<zid>
as in, incoherent
<zid>
or non-local
<junon>
Oh, then I meant locality, jjuran is right haha
<junon>
I always second guess myself with you lot, probably a good thing.
<jjuran>
junon: I made that same mistake when I interviewed at Google :-P
<junon>
:D something I'll never do again. Both my interviewers were insanely rude.
<jjuran>
One of my four onsite interviewers was seemingly disdainful to the point that I assumed it was a deliberate interviewing tactic to see how applicants would respond
<bslsk05>
'Sorting Algorithms: Speed Is Found In The Minds of People - Andrei Alexandrescu - CppCon 2019' by CppCon (01:29:55)
<zid>
jjuran: Are you sure you're not just hideously ugly?
<mrvn>
.oO(Lets do something stupid, do more work, maybe that will be faster)
<jjuran>
But I applied again the next year anyway. That time, my first phone interviewer called me 38 minutes late
<jjuran>
And then he started getting rude
<jjuran>
zid: I'm well aware of my current psoriatic outbreak, thanks
<mrvn>
jjuran: maybe they were looking for people that could work with the asshole sun of a boss.
<mrvn>
s/sun/son/
<zid>
You should hire a stunt double and see if the interview goes better, then sue
<zid>
retire on those sweet discrimination bucks
GeDaMo has quit [Quit: There is as yet insufficient data for a meaningful answer.]
opal has quit [Ping timeout: 268 seconds]
<junon>
So, in order to allocate the entirety of a 48 bit address space under x86_64's 4 level page table structure, you'd need 513 GiB just to store the page tables?
<mrvn>
junon: also consider that half the address space is commonly used for the kernel and the other half for processes. So half of those 513GB are per process.
<junon>
yeah makes sense, was just curious about the math of it all
<junon>
in the unrealistically extreme cases
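(Rough arithmetic behind that figure, assuming 4 KiB pages and 8-byte entries at every level; a tiny check program:)

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Fully mapping a 48-bit address space with 4 KiB pages: */
    uint64_t pts   = (1ull << 36) * 8; /* 2^36 leaf PTEs     = 512 GiB */
    uint64_t pds   = (1ull << 27) * 8; /* 2^27 PD entries    =   1 GiB */
    uint64_t pdpts = (1ull << 18) * 8; /* 2^18 PDPT entries  =   2 MiB */
    uint64_t pml4  = (1ull << 9)  * 8; /* 512 PML4 entries   =   4 KiB */
    uint64_t total = pts + pds + pdpts + pml4;
    printf("%llu bytes (~513 GiB)\n", (unsigned long long)total);
    return 0;
}
```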
<mjg>
48 is the lower end anyway
<mjg>
see la57
<junon>
yeah ice lake et al
<junon>
I assume that's exposed via cpuid bit and opted into, no?
<geist>
yeah, there's a la57 cpuid bit
<mrvn>
junon: yes, 5 level page table
<geist>
and you have to set a CR4 bit to enable it
<geist>
side note: just learned this and it was a bug in zircon (and still is in LK).
<junon>
:D neat
<geist>
there is a cpuid node up in the 8000.0000 space that gives you the *max* supported physical and virtual bits
<geist>
was interpreting it as the *current* virtual bits
<geist>
so on a la57 cpu it reports 57, even if you're using 48 (or 32 because x86-32)
<geist>
up until la57 since about 2003 an x86-64 capable cpu always reported 48
<mrvn>
or 36
<geist>
no actually. it's *virtual* bits
<geist>
AMD added it with x86-64
<geist>
(hence why it's up in the 0x8000.0000 space)
<geist>
there's another field in it that's physical, and it has more interesting values, basically
<geist>
and that's where you get 34, 36, 40, etc
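(A hedged userspace sketch of that leaf: CPUID 0x80000008 reports the maximum supported widths in EAX, bits 7:0 physical and bits 15:8 virtual, which is why it can say 57 even while 4-level paging is in use; whether LA57 is actually enabled is a separate CR4 bit only the kernel can read.)

```c
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned eax, ebx, ecx, edx;
    /* Extended leaf 0x80000008: address size information (AMD-defined,
     * also implemented by Intel). */
    if (__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
        printf("max physical address bits: %u\n", eax & 0xff);
        printf("max virtual  address bits: %u\n", (eax >> 8) & 0xff);
    }
    return 0;
}
```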
<mrvn>
do you actually care about the physical bits?
<junon>
How do you go about optimizing switching implementations for things in that case? Anything insofar as dispatch for memory accesses would cause a branch for everything, no? Do you instead swap out pages in the kernel to change implementations?
<geist>
mrvn: not generally, but there may be cases where you do
<junon>
Otherwise you have to recompile for specific feature bits, making the kernel non-portable between AMD64 in this case, right?
<geist>
junon: for what specially? most of these features are simply if statements
<mjg>
it can be ifunced and/or hotpatched if you take the time to do it
<geist>
unless it's huge in which case you might patch the kernel, but i personally think that's only useful for big ticket items, like memcpy implementations
<junon>
Right but since this is a hot path doesn't that slow things down?
<j`ey>
junon: Linux patches itself on boot for things like this
<mrvn>
junon: compared to invlpg the cost of a single if(la57) is negligible.
<geist>
in the grand scheme of things maybe, maybe not. but in general if statements to choose two blocks of code are not too bad
<geist>
exactly
<mrvn>
and things like the page fault handler you can set to the right function
<geist>
but things that are very critical like memcpy i think make more sense, because you do frequently call them in cases where the if may be significant
<geist>
compared to the cost of the tests for multiple versions
<junon>
Interesting!
<mrvn>
junon: something like la57 you can safely do via configure + make. It's not like you will run into it everywhere.
<junon>
So are things like memcpy often exposed via the kernel? Or do you mean something like glibc will do the patching for different implementations?
<junon>
mrvn: yeah good point
<geist>
junon: i'm talking about for in-kernel memcpy
<mrvn>
junon: ld.so does that patching
<geist>
though yes, you can also export it via a vdso
<junon>
geist: ahh okay
<geist>
though glibc/etc is usually sufficient
<geist>
thats a case where user space can do its own work for that, so probably not worth making it a kernel/user abi thing
<mrvn>
junon: gcc/clang also support function overloading by cpu flags and hot patch the right code in at use.
<junon>
ahh is that how they do dynamic dispatch? It's actually patching the executable?
<mrvn>
Has anyone tried using that function overloading feature in kernel code?
<junon>
I didn't know the implementation specifics, I figured it was just a jump table.
<mjg>
mrvn: freebsd uses ifunc in the kernel if that's what you mean
<geist>
probably a load time patch of all the jumps (for user code)
<mjg>
linux injects jumps
<geist>
kernel stuff generally does something like 'copy the chosen implementation of X into a sled of nops set aside for it'
<geist>
that's the hard core, zero runtime cost version
<geist>
the overhead is you have to at compile time reserve a space that's >= max(sizeof(all implementations))
<mrvn>
mjg: not ifunc I think. It's something like __attribute ((march="sse4"))
<geist>
downside is it's a little bit of a pain to debug crashes in that area
<mrvn>
mjg: then you get that function when your cpu supports sse4
<mjg>
sounds like ifunc
<mjg>
except they provide their own resolver
<geist>
yah i looked into what gcc does there and it was pretty much exactly what you think
<mrvn>
mjg: yeah
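(A minimal sketch of the GNU ifunc mechanism being described, with made-up function names: the resolver runs once and returns the implementation to use, so later calls jump straight to the chosen function with no per-call branch. gcc's target_clones support reportedly generates roughly this under the hood.)

```c
#include <stddef.h>
#include <string.h>

/* Two stand-in implementations; a real one would differ by instruction set. */
static void *memcpy_generic(void *d, const void *s, size_t n) { return memcpy(d, s, n); }
static void *memcpy_fancy(void *d, const void *s, size_t n)   { return memcpy(d, s, n); }

/* Resolver: picks an implementation once, e.g. via CPU feature detection. */
static void *(*resolve_my_memcpy(void))(void *, const void *, size_t)
{
    return __builtin_cpu_supports("sse4.2") ? memcpy_fancy : memcpy_generic;
}

/* Calls to my_memcpy() get bound to whatever the resolver returned. */
void *my_memcpy(void *d, const void *s, size_t n)
    __attribute__((ifunc("resolve_my_memcpy")));
```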
<junon>
Couldn't you in theory generate code that is all aligned to the same base address and then switch out the pages? Instead of copying?
<geist>
it's a virtual that they provide. it uses cpuid the first time it hits it (or in some init routine) that then sets it
<mjg>
even so, selecting by availability of an instruction set is too primitive
<geist>
junon: probably, but then you have page overhead. probably not worth it
<mrvn>
junon: you can. easy to do with per-function-sections.
<mjg>
glibc has gazillion special cases for uarchs
<mjg>
dealin with their specific problems
<junon>
geist: What do you mean with page overhead?
<mrvn>
junon: drawback is you have to pad all implementations to the same length.
<geist>
well i mean you lose space with internal fragmentation. ie, an 8 byte function uses a page
<mrvn>
geist: you can put more than one function per page
<junon>
Ah true.
<geist>
except the point was to swap it per function
<geist>
anyway, yes you could, but copying is just as good
<mjg>
that said, i'm in favor of kernel-exported memset et al specifically for userspace
<junon>
Yes but they still have to have the same entry points for each implementation.
<geist>
and doesn't futz around with special cases for particular pages
<geist>
yah exactly
<mrvn>
you generally have a lot of functions with the same condition for swapping, e.g. all sse2/sse3/sse4 code
<mjg>
key selling point is containers -- you update the kernel, keep the old shitters around and magically get other improvements
<geist>
yep
<junon>
geist: don't you still have the issue with internal fragmentation even with the copy, though?
<geist>
we use it in the zircon kernel for a few things: specialized memcpies, specialized user_copy routines, etc
<junon>
You still have to allocate max(impl1, impl2, ...) bytes to copy
<mrvn>
mjg: do you use memset for anything but bzero often?
<geist>
sure, but it's less than a page
<junon>
I suppose it's just that you can then pack them sub-page
<junon>
is that what you mean?
<geist>
yes
<geist>
mostly i wouldn't want to do it because it sets up a special case for particular pages
<geist>
which may require some hackery, or making sub mappings, etc
<geist>
based on how the underlying VMM works
<junon>
Right okay, makes a lot of sense.
<geist>
one of those things where the special cases get harder as your VMM gets more sophisticated
<geist>
but is probably trivial when getting started because there's probably nothing that's really tracking what is mapped where
<mjg>
mrvn: normally no
<mrvn>
The biggest problem with providing something like memcpy() by the kernel is that the next generation cpu might need more code. So you kind of have to call memcpy() via a function pointer because you can't give it a fixed address.
<mjg>
mrvn: i do use it for poisoning stuff with debug
<mrvn>
mjg: except there i want memset64 to set a 64bit value instead of repeating a byte. :)
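(What mrvn is asking for is roughly this; the name and signature are made up for illustration:)

```c
#include <stddef.h>
#include <stdint.h>

/* Fill count 64-bit words with a repeating 64-bit pattern,
 * instead of memset()'s single repeated byte. */
void *memset64(void *dst, uint64_t value, size_t count)
{
    uint64_t *p = dst;
    for (size_t i = 0; i < count; i++)
        p[i] = value;
    return dst;
}
```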
<geist>
yah, darwin back in the day when i was working on it (circa 2005) *did* provide memcpy/bzero/etc via their version of the vdso. it was a BSD thing
<geist>
uh, what was it called... there was a name for the equivalent of the vdso
<mjg>
geist: bsds don't do it though :-P
<geist>
something page
<geist>
mjg: yeah but they may have back in 2005
<mjg>
geist: what they did do is share code during build time
<mjg>
oh? i don't recall anything of the sort, will have to double check
<geist>
possibly arch specific too
<mjg>
i'm pretty sure i had seen the amd64 state at least from the get go on freebsd
<mjg>
and it did not do it
<geist>
in this very specific case the kernel provided a pointer to the optimized routines for ARM
<geist>
and PPC
<mjg>
anyhow i support the idea as noted previously
<mrvn>
geist: would make sense for some of the support function in libgcc too. Provide a CPU optimized udivmod3()
<geist>
i'm going blank on what it was called. something like zero page, but it wasn't
<geist>
but the idea was the entry points to the kernel (and support code) are at a fixed address. i think just below 0
<mrvn>
geist: lets call it vdso equivalent
<geist>
and it had a fixed table of things
<geist>
the just below 0 was neat on arm, because you could basically do a `mov pc, #constant negative value`
<mrvn>
AmigaOS has the exec library at 0x4 and that points to the library base with function pointers at negative offsets.
<mrvn>
geist: nice, yes.
<mrvn>
Note to self, place vdso at 0xFFFF...F00000
<mjg>
that's slightly bad because normally it will fit MAP_FAILED accesses
<mjg>
as in mmap fails but they roll with it anyway
<geist>
yah we specifically didn't do that in zircon so it's easy to hard assert in the code that no kernel page is asked to or is mapped as user accessible
<mrvn>
mjg: that's -1. not even pointer aligned.
<geist>
as a second layer of defense against programmer error
<mjg>
mrvn: so?
<mrvn>
mjg: -1 will never be a valid function call offset. So no problem.
<mjg>
buf = mmap(...); crap = buf[0]; will try to read from 0xff....ff
<mjg>
if you have this mmaped for vdso-like purposes it "works"
<mrvn>
mjg: you can not map the last page.
<mrvn>
mjg: or set MAP_FAILED == 0
<mjg>
you can't set to 0 as then you disallow mappings at 0 which some of the stuff depends on
<mrvn>
Really, why support mapping a page to 0 at all? use nullptr as failure.
<mjg>
(mostly exploits :P)
<mrvn>
mjg: Rule 1 for my kernel: Don't do something stupid just because legacy software expects something.
<mjg>
well if you give up on playing unix, sure
<geist>
yah i'd make it impossible to map at 0. zircon for example sets 16MB as the max lower limit of a process
<mjg>
but you are not unix, are you
<geist>
we are not
<geist>
actually i think we had to move it back to 2MB because of linux compatibility
<mrvn>
definitely not. I'm not POSIX or even C compatible.
<mrvn>
no char * strings.
<mrvn>
geist: I think in linux the limit is 64k.
<geist>
depends per arch, but yeah 2MB is what we negotiated down to
<mrvn>
2MB matches one huge page so that makes sense.
<geist>
we can change it at any time, since it's not hard coded into user code
<mrvn>
(x86_64)
<geist>
yah i had picked 16MB because at one point it was the superpage size on arm32
<geist>
so i was trying to find the largest smallest superpage across all the arches i knew at the time
<mrvn>
geist: setting it to something mid page table would be a bit wasteful.
<geist>
sure, but it's not
<geist>
since it's a multiple of 2MB
<mrvn>
just agreeing with you
<mjg>
do you also have stack gaps?
<mrvn>
mjg: every allocation has a guard page
<mjg>
i mean in zircon
<geist>
yes, very much so
<mjg>
how big
<geist>
i dont know off the top of my head. the kernel does not give a shit, but user space adds its own pads
<mrvn>
I have this hardcoded in my address space allocator. every allocation adds a guard page. It's impossible to map something adjacent to the previous mapping.
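(A toy illustration of that policy, with invented names: the address-space allocator always reserves one extra unmapped page after every region, so two mappings can never end up adjacent.)

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

static uintptr_t next_free_va = 0x100000; /* illustrative start of the arena */

/* Toy bump allocator for virtual ranges: each allocation is followed by
 * one guard page that is never handed out or mapped. */
uintptr_t alloc_va_range(size_t len)
{
    uintptr_t base = next_free_va;
    uintptr_t rounded = (len + PAGE_SIZE - 1) & ~(uintptr_t)(PAGE_SIZE - 1);
    next_free_va += rounded + PAGE_SIZE;  /* + guard page */
    return base;
}
```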
<geist>
and no it doesn't work the way you think it does. the zircon VMAR (virtual memory address range) is AFAIK unique to zircon
<mrvn>
geist: can you mapp with fixed address to not have gaps?
<geist>
one of our innovations that i think is kinda neat
<mjg>
geist: is this described somewhere public?
<geist>
mrvn: it is possible to, but you need the proper capabilities to do so
<geist>
mjg: totally. basically the gist is a VMAR is a handle based object that represents a range of an address space (page aligned, etc)
sprock has joined #osdev
<geist>
it can be subdivided into smaller VMARs with less rights
<geist>
but a vmar handle is what you map vmos (memory objects, bag of pages) into
<geist>
so every process starts off with a single large VMAR that represents the entire address space
<geist>
you can just use that if you want, or you can subdivide
<geist>
so you can, for example, allocate say a 64MB vmar for a stack, and then put a 64k mapping in it
<geist>
and now nothing else can map into that vmar unless they have a handle to it
<geist>
it effectively reserves the space for the holder of the vmar
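(A rough sketch of that pattern against the Zircon syscall surface, written from memory; exact option flags and signatures should be checked against the current Zircon docs, and error handling is omitted. The idea: carve a sub-VMAR out of the root VMAR, map into it, then drop the handle so nothing else can map or remap inside the reservation.)

```c
#include <zircon/process.h>
#include <zircon/syscalls.h>

void reserve_and_map(void)
{
    zx_handle_t sub_vmar, vmo;
    zx_vaddr_t base, addr;

    /* Reserve 64 MiB of address space as a child VMAR. */
    zx_vmar_allocate(zx_vmar_root_self(),
                     ZX_VM_CAN_MAP_READ | ZX_VM_CAN_MAP_WRITE,
                     0, 64u << 20, &sub_vmar, &base);

    /* Map a 64 KiB VMO somewhere inside that reservation. */
    zx_vmo_create(64u << 10, 0, &vmo);
    zx_vmar_map(sub_vmar, ZX_VM_PERM_READ | ZX_VM_PERM_WRITE,
                0, vmo, 0, 64u << 10, &addr);

    /* Dropping the sub-VMAR handle locks the rest of the range:
     * without the handle, nothing can map into or modify it. */
    zx_handle_close(sub_vmar);
}
```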
<mrvn>
geist: I do that for peripherals and irqs.
<geist>
the rights you get on a vmar also dictate things like 'can i put a X mapping in here' or 'can i map with SPECIFIC_ADDRESS permissions or do i get a randomized mapping'
<mjg>
nice
<mjg>
finally a not-unix
<geist>
the vmar model has proven to be pretty powerful since we can use it for a variety of cross process and other more interesting things
<geist>
also there's some fun security bits: a loader can, for example, carve out a vmar early on, map all the text/data segments and then literally throw away the handle
<geist>
now it is actually impossible to modify those mappings because you dont have the vmar
<geist>
it effectively locks down the address range
<mjg>
by throw away you mean "close" it?
<geist>
aaaand since vmars are a handle they can be given to another process. so you can build a completely separate loader
<geist>
close it yes
<geist>
you can in the zircon model completely externally load a process without having a thread in it
<geist>
but it's definitely more moving parts. so i can't say it's easier to use
<mjg>
do you have fork + exec or somethign more akin to posix_spawn?
<geist>
latter
<geist>
fork/exec would be fairly difficult to do
<geist>
there is zero intrinsic notion of inheriting anything from your parent
<geist>
there's not even a parent/child process relationship
<mjg>
oh?
<geist>
the kernel does nto care. it's not the kernels job to provide that
<mjg>
when you do a not-unix you don't mess around, do you
<geist>
there's also no concept of user in the kernel
<geist>
no intrinsic rights
<geist>
it's all about if you have a capability of doing something you can use it
<mrvn>
geist: the child/parent don't get a connection of some sort when you start the child?
<geist>
absolutely not
<geist>
this allows us to build completely separate out-of-process loaders or runtimes
<geist>
you can go ask a random server somewhere to build me a process for some sub task
<mrvn>
then how does the parent give the child extra permissions?
<geist>
and it doesn't get assigned to the server, because there's no notion of parent/child at that layer
<geist>
ah, now you're onto it
<geist>
a) there's no parent. b) the 'requestor' has either capabilities to do something or they dont
<geist>
and by virtue of that can either build a new process or not. they also can and probably should reduce the access of the child, or ask a service for extra permissions (net stack, file system access, etc)
<geist>
which exist as IPC channels to various services
<geist>
what there *is* is a notion of a job object
<geist>
a job object contains one or more process, and zero or more sub jobs
<mrvn>
but I want the new process to have an IPC channel back to the process that initiated the creation
<geist>
kernel doesn't construct them, kernel doesn't really care who makes them, they only care that someone asks for a creation of a job or a process has a job with the appropriate rights to do that
<geist>
and then you can do things like kill an entire job and all its children. so it allows user space to build a notion of parent/child if it wants, by constructing jobs and then putting processes in it
<geist>
so the relationship is entirely user space driven
<geist>
mrvn: when you create the process you give it an IPC endpoint that you then feed it copies of handles it needs to get started
<mrvn>
still doesn't allow communicating with the "parent"
<geist>
it's a care package: here's everything you need
<geist>
that's the process bootstrap on zircon: a new process starts off with one thread and an open IPC channel to *something*
<geist>
and that IPC channel had better give it more things to get started
<mrvn>
geist: ok, so you do have a parent/child relation in that one IPC endpoint. :)
<geist>
no. thats a thing you *could* do
<geist>
there's no intrinsic parent child
<geist>
there's process and 'whatever probably made the process'
<geist>
it may or may not be the 'parent' in the classical sense. the parent may have (and in fuchsia probably does) go ask a superserver to make it a process on its behalf
<mrvn>
geist: think surrogate mother :)
<geist>
in which case the superserver has the perms to do it, builds a process, adds it to the requestor's job tree, then hands the master IPC endpoint to the requestor. or doesn't
<geist>
at that point it's all handles to things with capabilities on the handles to do things
<mjg>
so for the sake of argument let's say i'm running a shell server for d00d3z to irc from based on zircon
<mjg>
and one of them is misbehaving
<geist>
the zircon kernel is very much about having the tools to build something without the policy of how to do it baked into the kernel
<mjg>
how do i spot everything that person is running?
<geist>
they'll be under a job tree you probably built
<geist>
which is pretty akin to a process group or a session group
<mrvn>
geist: can you create a job that doesn't have a parent?
<geist>
and you can whack that if you have a handle to it
<geist>
mrvn: no. jobs are intrinsically created from a parent job
<geist>
ie you have a handle to a job with the right to create more out of it
<mrvn>
sounds like jobs are your processes and everything else are just threads.
<geist>
and at the beginning of time when the kernel starts, there is a root job
<mjg>
so what are perf realities of zircon?
<geist>
mrvn: if you want to apply that model, that is a valid way of using it
<mjg>
sounds like a lot of extra work for commonly performed ops
<mrvn>
mjg: it runs a lot faster than all the kernels that don't run on the hardware zircon supports?
<geist>
mjg: it'll probably not run classically posixy things terribly efficiently
<geist>
since a lot of posix is tuned for lots of fast process creation/teardown
<geist>
zircon is very much not designed to go after that model
<geist>
but we currently on fairly low end devices run 500-600 threads across 100 processes or so without much trouble
<mjg>
that much is a given
<geist>
the main artifact of all of the design is it's highly asynchronous. which has its upsides and downsides
<mjg>
but let's say you want to run a webserver from zircon
<mjg>
how does that look like syscall-wise vs linux
<geist>
probably could work that fairly well
<geist>
it'd look like a completely different universe
<mjg>
i mean in syscall counts
<geist>
it is a microkernel afterall, since you're talking to a netstack primarily
<geist>
100% depends on what your server -> netstack IPC looks like
<mjg>
from getting a new client, say giving them a file and waving goodbye
<geist>
hard to say. i'm not trying to be evasive, but as i am saying it totally depends on precisely the implementations of those user space components
<geist>
and i honestly dont know
<mrvn>
The most processes I've managed so far was 1966080.
<geist>
if your answer you're looking for is it's more than linux? yup
<mjg>
geist: np
<geist>
one cannot open() -> splice(fd, socket) -> close()
<mjg>
that's what i was after
<geist>
but then a lot of these operations are parallel and async, because it's not a sync model
<geist>
so its hard to compare
<geist>
like, for example, you could if you wanted to build a user space model where the netstack and the fs servers directly communicate
<geist>
and get a zero copy from the fs buffers right into the ethernet segment
<geist>
there's no mechanism that forbids any of that
<mrvn>
geist: do you have provisions for the FS to ask the netstack how much space to leave for the IP header?
<geist>
or monolithically build a user space that gloms netstacks and fses into the same thing
<geist>
mrvn: i'm sure we dont, but my point is you could if you wanted to
<geist>
as fuchsia currently stands it'd i'm sure totally suck at being a web server, but that's not the design criteria right now
<geist>
i was giving a kernel centric answer because mjg asked me 'you want to run a webserver from zircon'
<geist>
and zircon being the kernel means 'all of user space is fair game'
<mrvn>
geist: i'm kind of split in half there. Should the buffer be in its own page so no information from the header leaks. Or pass on the page with the whole ethernet frame + offset + len for the part the FS should access?
<geist>
i'd tend to say 'net server gets access to the vmo that backs the file, and so it can scatter gather directly into the packet'
<geist>
in which case the whole header stuff doesn't matter because you rely on hardware to assemble the packet
<mrvn>
geist: for sending, sure. receiving is harder.
<geist>
yep
<geist>
but between the netstack and the web server process (if they were separate address spaces) you could build a fairly efficient shared memory scheme
<mrvn>
Can modern NIC hardware split the data into header + payload?
<geist>
but that's probably a one copy at least
<geist>
not that i know of (splitting header + payload)
<geist>
at least based on what i know about e1000, and e1000 being kinda the defacto standard for good modern ethernet nics
<geist>
it dumps everything into a pre-allocated power of 2 buffer
<mrvn>
Odd though. Would be a trivial extension of the fragment gather thing and merge frames stuff.
<geist>
i think it's all about blazing speed
<mrvn>
maybe some feature to make the frame a fixed size?
<geist>
by power of 2 buffers the hardware can instantly compute how many of them it needs so it reserves them upfront while it's processing packets and reassembling them elsewhere
<geist>
(vs what i've seen for lots of RX queue based nics like the realteks where you have a queue of RX buffers of size N)
<geist>
where N can vary between buffers and/or not be a power of 2
<geist>
it was a problem when i wrote an e1000 driver for LK since i had been up until then assuming i could set up a 1500 byte RX buffer, but e1000 wants 1024 or 2048
<clever>
from what ive been able to glean from the genet driver, both rx and tx rings are made up of chunks of flags+bytes
<clever>
and the flags define if this chunk is the start/middle/end of a packet
<clever>
so that gives you scatter/gather, but the scatter side doesnt have any clear splitting controls, just cut whenever it doesn't fit
<geist>
yep. that's fairly standard
<geist>
it does mean the hardware has to get a packet, figure out how big it is, and then go walk the RX queue to find enough buffers to hold it, then dma it out
<geist>
the e1000 one having fixed sized power of 2 packets means it can instantly know how many it needs and just bump the queue forward that many
<clever>
i'm assuming it would have a fifo on the rx hardware, of the next 2-3 rx buffers
<clever>
and can just immediately dma out to those, as it receives bytes
<clever>
and a separate task would read the rx ring and top up that fifo
<geist>
in the case of anything like the e1000 with TSO it's keeping huge fifos because it's actually reassembling packets and handing the kernel large ones
<clever>
i have yet to see how the genet flags rx chunks as consumed
<geist>
so it accumulates fairly large piles of data internally before deciding to turn them into a host packet
<geist>
probably a standard head/tail pointer it pushes forward
<clever>
yeah, if you have hardware defragmentation, that requires much bigger buffers
<geist>
sometimes it's not consumed as much as software sets a bit that says 'owned by hardware'
<geist>
but yeah nic drivers are fun. i generally recommend them as the first serious driver someone should write (after fairly trivial ones like uarts or ide)
<geist>
they can be fairly complex, but it's a fairly constrained problem
<clever>
the last NIC driver i read, was an ancient 100mbit pci nic
<clever>
for the tx side, there was 4 registers to set the addr, and 4 registers to set size+flags
<geist>
and really it's two different drivers, since most nics are effectively a RX and TX path which are almost separate
<clever>
there was no tx ring, just 4 MMIO to queue up a max of 4 packets
<geist>
yep, that was almost certainly a rtl8139
<geist>
which was the defacto 100mbit cheap card
<clever>
yeah, from that family
<mrvn>
geist: for 100MBit you can even wire up the TX and RX wires to different cards.
<geist>
the RX ring on the 8139 is a single up to 64K memory segment that it just tosses packets down, back to back, with a little header
<clever>
i was looking into it, because my upload pipe randomly stalled to a total halt
<clever>
but it turned out to be a pppoe problem
<geist>
so setting up RX is basically a pointer to the 64K + a head + tail register and a go bit
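(A hedged sketch of consuming that style of RX buffer: frames sit back to back in one large buffer, each preceded by a small status+length header, and the driver walks its read offset up to wherever the hardware's write pointer has reached. Field widths and the 4-byte alignment are simplified from memory; the 8139 datasheet is the authority.)

```c
#include <stdint.h>
#include <string.h>

struct rx_pkt_header {
    uint16_t status;   /* receive status bits (ROK, error flags, ...) */
    uint16_t length;   /* frame length, including CRC */
};

/* rx_buf: the single large RX buffer; *offset: driver read pointer;
 * hw_end: how far the NIC has written (derived from its head register). */
void drain_rx(uint8_t *rx_buf, uint32_t buf_size, uint32_t *offset, uint32_t hw_end)
{
    while (*offset != hw_end) {
        struct rx_pkt_header hdr;
        memcpy(&hdr, rx_buf + *offset, sizeof hdr);
        /* hand (rx_buf + *offset + 4, hdr.length) to the network stack here */
        *offset = (*offset + 4u + hdr.length + 3u) & ~3u; /* keep 4-byte alignment */
        if (*offset >= buf_size)
            *offset -= buf_size;   /* wrap around the ring */
    }
}
```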
<clever>
the tx side of a linux NIC driver, is a weird mix of polled and event driven
<clever>
by default, it's in an event driven mode, the network stack just calls the driver, "here is a packet, send it"
<clever>
but, once those 4 hardware buffers are full, such events are just a waste of cpu
<clever>
so the driver flips things into a poll based mode, where the "tx done" irq will ask the network stack for the next packet
<clever>
so it becomes entirely driven/throttled by the tx-done irq's
<clever>
the problem, is that the pppoe virtual NIC, claims to have a 1 packet buffer
<clever>
every time you transmit a packet on the PPPOE link, it goes into polled mode, and then switches back later
<clever>
and the bug, is that it didnt
<clever>
so linux just stopped trying to tx packets
<clever>
*dead*
<clever>
and the only reason it wasnt a fatal bug, is that pppoed (a userland proc) was using write() on a char dev, to send pppoe ping packets
<clever>
bypassing the network stack
<clever>
and those un-jammed it, every 20 seconds
<clever>
geist: i was never able to fix the root problem, but guess how i "fixed" it?
<mrvn>
ping more?
<clever>
mrvn: ding ding ding!
<clever>
i set it to ping every 1 second :P
<geist>
word
<clever>
but the funny thing, is that prior to discovering it was a linux bug, i had put all of the blame on windows
<clever>
every time the bug occurred, windows would mass drop EVERY SINGLE tcp connection
<clever>
linux, would just shrug its shoulders and recover
<junon>
Are page entries' bit 6 set when bit 3 is high? same question for 5/4 respectively.
<clever>
so its obviously a windows problem :P
<geist>
here's a fun thing i learned recently: windows has no intrinsic support for VLANs in the net stack
<geist>
it relies on the ethernet driver creating separate virtual vlan interfaces
<mrvn>
junon: do you think we know every bit in the page table by role?
<clever>
weird
<junon>
mrvn: because you are all gods
<junon>
6=dirty, 3 = write through
<geist>
worse it's not that windows doesn't see packets with a vlan tag set, it's that it sees *all* of them
<junon>
and 5 = accessed, 4 = disable cache
<geist>
so it thinks its on all vlans (if you enable them at the switch at least)
<mrvn>
junon: don't see why they should be related at all
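(For reference, the x86 page-table-entry flag bits being asked about, as commonly defined; they are independent of one another: PWT/PCD are cache-control inputs set by the OS, while Accessed/Dirty are set by the CPU when the page is touched.)

```c
enum {
    PTE_PRESENT       = 1u << 0,
    PTE_WRITABLE      = 1u << 1,
    PTE_USER          = 1u << 2,
    PTE_WRITE_THROUGH = 1u << 3,  /* PWT */
    PTE_CACHE_DISABLE = 1u << 4,  /* PCD */
    PTE_ACCESSED      = 1u << 5,  /* set by the CPU on any access */
    PTE_DIRTY         = 1u << 6,  /* set by the CPU on a write */
};
```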
<clever>
geist: linux is similar, the master interface just strips the vlan tags, but you can create a sub-interface
<clever>
a lot of tools (tcpdump) also strip the vlan tags
elastic_dog has quit [Ping timeout: 244 seconds]
<clever>
wireshark doesnt, but wireshark kinda needs X
<clever>
and the vlan's are only at my router
<clever>
and i keep losing the special flags needed to capture vlan tags
<mrvn>
clever: the router drops the vlan tags when it forwards
<geist>
clever: right, but linux will discard anything that's not the main vlan in the untagged case
<geist>
i basically use a few, mostly to separate work from not work, but i also have for example an ipv6 only one and a test network to hack say LK net stack in
<clever>
the ONT was originally a router (it has 4 gigabit ports) + voip (2 phone ports) + fiber modem
<clever>
but the ISP lobotomized it, only 1 ethernet and 1 phone port work, and it has no config
<clever>
vlan 34+35 comes out of that remaining ethernet port
<clever>
vlan 35 is the internet service, dhcp gives you a public ip, NAT, done
<clever>
vlan 34 is the tv service, dhcp gives you a 10.x.y.z ip, now you need to NAT with 2 uplinks, multi-cast routing, and QoS tagging
<clever>
i never got the tv service working thru linux
<clever>
but, i discovered that if you try to set the isp router to pppoe mode, it does pppoe over vlan 35, lol
<clever>
that stops it from stealing the dhcp lease, so 2 routers can share 1 modem
<clever>
i then run a bloody pppoe server on vlan 35 of the nixos router :P
<clever>
and it bounces off switch1, and feeds internet to the isp_router!
<geist>
yah interesting it's those particular vlans. must be some sort of semi standard, or a vendor specific thing
<geist>
since last time i had fiber it was precisely the same
<clever>
if i just `tcpdump -i enp4s2f0`, i get traffic for both VLAN's
<clever>
or i can `tcpdump -i lan` to get just one
<geist>
okay, time to shut down my server and install a TPM module
<geist>
figured why not, they're like $10
elastic_dog has joined #osdev
<clever>
geist: that reminds me, ive been wondering what protocol a TPM uses to talk to the firmware on the motherboard
<geist>
theres a spec, and it's pretty big
<geist>
it's really the other way around. how does firmware/oses/etc talk to a TPM
<clever>
i would assume there is some kind of DH exchange to prevent mitm?
<geist>
there's a large ass spec for how to do it
<geist>
yeah but it's much more complex than i thought
<mrvn>
big enough to hide the NSA backdoors
<clever>
semi related, ive also had to look into digital tv CAM module stuff
<geist>
some of the complexity is a bunch of attestation to know you're actually talking to the TPM, etc
<geist>
instead of some MITM
<clever>
mpegts has a couple tables in special streams, that tell you how many channels are in the mpegts stream
<clever>
and then what PID each stream is on (audio, video, crypto)
<clever>
the crypto packets, are then shoved off to the CAM as opaque blobs
<mrvn>
clever: your cam doesn't talk x264?
<clever>
and the CAM then returns a decryption key every now and then
<clever>
the host then decrypts the audio/video streams
<clever>
and the fun part, is that those opaque blobs, contain a mix of firmware updates, package/billing updates, and dynamic re-keying stuff
<clever>
so they can OTA update the CAM cards
<clever>
and i had seen a blog post, about how a sat tv company, shipped a multi-stage virus, as 100's of innocent looking updates
<clever>
and only when all of the bits were assembled, did it instantly brick all of the cloned cards, that were being used to steal sat service
<clever>
so the crackers didnt know what it was, until it was too late
<clever>
but, i had also found a research paper on cracking the whole system, skipping a few layers
<clever>
the h264 compression sometimes has large chunks of nulls, and now you have a known-plaintext attack
<clever>
generate a rainbow table, for every possible key the CAM can output, and how a block of nulls would encrypt
<clever>
then you can just lookup the key in the table
<clever>
but it needed something like 1tb of storage for the whole table
<clever>
so i never bothered attempting it
<mrvn>
clever: bad encryption then
<clever>
i think the intent, was to have this layer re-key every minute
<clever>
and use the CAM to decrypt the new key
<clever>
and if it takes over a minute to lookup the key in the table, you can never get realtime playback, even if you can crack it
<clever>
unless you have an array of machines, each cracking a different 1 minute chunk, and youll still have a major lag
<clever>
i had looked into that, because my iptv service, is h264+mp3 over mpegts over rtmp over multicast udp
<mrvn>
clever: looking up a key in a 1TB B+tree takes milliseconds
<clever>
this paper was some 20 years old
<clever>
so try doing that with 2002 tech :P
<mrvn>
clever: 12ms seek time, maybe 10-20 reads. No problem.
<clever>
ah right, index on the ciphertext, that could work
<clever>
the other bit, is knowing when the ciphertext is even a block of nulls, which is just waiting until you see a block of ciphertext repeat
<mrvn>
clever: that's why you add an IV in every block.
<clever>
yeah, thats what they were missing
<clever>
even WEP had IV's, lol
<clever>
the CAM standard was meant to allow you to buy a tv subscription (cable/rf/sat), and get a CAM module, which you can then plug into any compatible box
<clever>
and boom, it just works
<clever>
but my service had no hardware CAM, i assume it was a software component within the win-ce env
<clever>
if i subscribed to the right multicast group, i would get the whole channel streaming in instantly
<clever>
but without the CAM, i had no way to decrypt it
<mrvn>
clever: I'm still annoyed amazon streams viedo without the original audio.
<clever>
?
<mrvn>
I want to watch english films in their original english and not dubbed.
<clever>
ah
<clever>
ive seen other sites being even more stupid, the player doesnt support audio tracks
<clever>
so the dub is added as another season :P
<clever>
so when you finish watching s1, it starts playing the "next" season, S1 again, but in another language!
<Griwes>
why would they make it be another season instead of another series
<clever>
Griwes: ive seen other sites do just that
<clever>
everybody has their own hack, and almost nobody supports multiple tracks properly
* Griwes
is rather happy that Plex does it at least somewhat right
<clever>
and even netflix isnt perfect, some of the audio tracks are region locked
<mrvn>
seems to work fine with amazon, when they have them
<clever>
so depending on where you live (or VPN), your audio track selection differs
<clever>
Griwes: yeah, plex just works, my only complaint is that some of the errors are rather opaque, and its closed-source
<clever>
earlier today, the plex client was convinced it had an indirect connection to the server
<mrvn>
clever: so all they do with the region lock is annoy users and tempt them to pirate films instead of paying.
<clever>
because it couldnt connect to 172.105.98.20 (thats not my lan)
<clever>
any attempt to play a show, just resulted in a vague error
<clever>
mrvn: yeah, when "shut up and take my money" doesnt work, people just pirate
<Griwes>
yeah there's some fiddliness to teaching plex about non-trivial network configs
<clever>
and similarly, me and my dad were trying to test out 5.1 audio on a blu-ray earlier this week
<mrvn>
"Sie können dieses Video momentan
<mrvn>
an Ihrem Standort nicht ansehen."
<mrvn>
grrrrr
<clever>
every time we changed a setting, we had to sit thru 3 minutes of trailers and "please dont steal this"
<Griwes>
but once you get it it's Mostly Fine
<clever>
and it wouldnt let you skip
<Griwes>
lol
<clever>
just shut up and play the damn movie :P
<Griwes>
that sounds like the perfect way to make people steal it tbh
<clever>
yep
<clever>
the other problem, is that the case for the bluray claims 5.1
<clever>
but the HUD in the player says 2.1
<clever>
which is it?
<mrvn>
Griwes: all the stolen versions have those trailers cut off. So only paying customers get to see them.
<clever>
the only way i know to get a clear answer, is to buy a bluray drive for a pc, and make my own private rip of the disk
<clever>
to see exactly what is on the disk
<mrvn>
Griwes: dirty stinking pirates, all those paying customers. shame on them.
<Griwes>
clever, it's clearly 7.2 because you obviously have to sum those numbers
<mrvn>
clever: maybe the HUD doesn't have the "is safely encrypted" but set in the cable so you can't play the high quality content.
<mrvn>
s/but/bit/
<clever>
then give such an error!
<clever>
dont lie to the user!
<clever>
mrvn: its also been a major pain to get digital 5.1 working, nothing i have tried has worked
<clever>
the only thing ive gotten to work so far, is analog 5.1, 3 headphone->rca cables out of the desktop, plugged into 6 rca jacks on the sound system!
<mrvn>
clever: try streaming video from your mobile to your PC
<clever>
if i set the hdmi port to 5.1 mode in pavucontrol, and run speaker-test, then only front-left and front-right can be heard on the tv
<clever>
if i then run an optical cable from the tv to the sound system, same thing, only 2 channels
<clever>
if i run optical directly from the desktop to the sound system, then it can only ever be configured for stereo
<mrvn>
Streaming video under Linux: "nc tv 1234 < file.avi". Streaming on a phone: uhm, maybe, if you buy this apple plugin and an apple tv, or not, pay and we will see
<clever>
there is supposed to be a passthru mode, where the raw digital audio in the file, gets shoved right out the hdmi/optical port
<clever>
and the pc does absolutely no changes to the bits
<clever>
but it doesnt work on anything i do
papaya has joined #osdev
<mrvn>
clever: can't do that. passthru would violate the secure domain.
<mrvn>
You could passthru to a player that doesn't honor the region code or something
<clever>
thats handled at the disk layer usually
<clever>
so you cant even get the ac3 stream until you decrypt
<clever>
and then that ac3 gets ran over hdmi, with the usual hdcp
<mrvn>
clever: only players that are certified not to pass it on after decrypt are given keys for that
<clever>
and thats why piracy wins :P
<clever>
it just works
<mrvn>
I mean, you paid for the content. You aren't allowed to watch it if you leave the country.
<mrvn>
I'm paying for streaming services and still watch pirated because it just works better a lot of the time.
<clever>
oh, that reminds me
<clever>
i used a VPN to try to login on netflix from america a few weeks ago
<clever>
it just lied to my face, and claimed the password was wrong
<clever>
even when i just changed it
<papaya>
I've never had that happen before, lol
<mrvn>
clever: amazon always tells me the video is not available when I use firefox instead of firefox-esr.
<mrvn>
funny thing is: it plays the first second and then gives the error.
<clever>
that reminds me of the fun i had trying to get multicast to work
<clever>
because of the multicast nature, every cablebox in the town is getting the same h264 stream
<clever>
and it would take ages to "tune" waiting for a keyframe
<clever>
so the cable box cheats, and fires up a tcp stream, where a transcode box starts the stream with a keyframe, and gives you a private encode
<clever>
and after ~30 seconds, the cablebox switches from tcp to multicast udp
<clever>
and the tcp shuts down
<clever>
as a result, you cant know if you configured multicast correctly, until you watch several minutes
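A quicker way to test the multicast side, without waiting for the cable box to switch over, is to join the group directly and count packets; a minimal sketch, with the group address and port made up since the real ones depend on the ISP:

    # minimal sketch: join a multicast group and report incoming packets,
    # so you can tell whether IGMP/multicast routing is actually forwarding
    import socket, struct

    GROUP, PORT = "239.1.1.1", 5000        # placeholders for the ISP's stream

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # join on the default interface; pass a specific LAN address instead of 0.0.0.0 to pin it
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    for n in range(100):
        data, src = sock.recvfrom(2048)
        print(f"{n + 1}: {len(data)} bytes from {src[0]}")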
<mrvn>
sucks. that will be hard to work with under linux
<clever>
and i never got multicast forwarding to work right
<mrvn>
me neither
<mrvn>
maybe that's why
<clever>
while investigating the isp router, i did some fancy stuff
<clever>
i setup 2 laptops, 1 to emulate the modem/isp end, and 1 to act as a LAN member
<clever>
so i could then shove traffic thru the router, in both directions, and RE its routing tables
<mrvn>
clever: I did that with one laptop and 2 nics.
<clever>
because it has 2 uplinks and some subnets get routed over a different vlan
<clever>
but, i discovered a nasty surprise
<clever>
there is a 3rd vlan, that is BRIDGED into the private LAN side
<clever>
in theory, the ISP can route that vlan to a virtual machine in a rack
<clever>
and that VM is essentially on the inside of my house, past the firewall the router would have offered
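With a laptop on each side of the router, one way to map which 802.1Q tags get bridged straight through is to blast tagged probes from the "ISP" side and sniff on the LAN side; a sketch using scapy, with the interface name and VLAN range as guesses:

    # sketch: send a broadcast ICMP probe inside each VLAN tag from the uplink-side
    # laptop; any probe that shows up on the LAN-side laptop reveals a bridged VLAN
    from scapy.all import Ether, Dot1Q, IP, ICMP, sendp

    IFACE = "enp0s25"          # placeholder: NIC plugged into the router's uplink port

    for vid in range(1, 100):  # guessed range; real deployments may use higher IDs
        probe = (Ether(dst="ff:ff:ff:ff:ff:ff")
                 / Dot1Q(vlan=vid)
                 / IP(src="10.0.0.2", dst="255.255.255.255")
                 / ICMP())
        sendp(probe, iface=IFACE, verbose=False)
    # on the LAN-side laptop, watch with: tcpdump -e -i <nic> icmp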
<clever>
mrvn: would you agree to that?
<mrvn>
do you want to watch TV?
<mrvn>
or pirate?
<clever>
since my dad moved out, i basically never use the tv service
<clever>
originally, i was trying to get mythtv setup, to DVR stuff
<clever>
but its far easier to just pirate things than pay for the service :P
<dh`>
do you really think the ISP's router can be trusted to be a firewall?
<dh`>
any isp's router?
<mrvn>
or pay for the service and then not feel bad about pirating
<clever>
dh`: exactly why it was serving its own private subnet, just for the tv's
<clever>
and i had my own router for my own systems
<gog>
when you pay for TV service you're technically buying a license to view any media that's offered on the service
<gog>
¯\_(ツ)_/¯
<clever>
i technically still am paying for tv service
<clever>
even though all of the tv hw is unplugged
<clever>
its a bundled package, tv+internet+phone, and every time we try to drop features, the price goes up :P
<gog>
yeh
<mrvn>
clever: making sure you don't access the TV costs more than the TV
<gog>
the most efficient way to allocate resources...
<clever>
mrvn: the UK is even weirder, you need a license to even own a tv, even if you dont have any active service
<clever>
they even make claims about having a "tv detector van" and they drive up&down streets looking for unlicensed tvs
<clever>
that may have worked in the old crt days, with all of the rf leaking out of those beasts
<clever>
but not anymore
<mrvn>
clever: in DE you used to have to pay per radio/TV receiver. Then they added computers. Now it's a flat fee per household because seriously, a household without radio, tv, smartphone?
<clever>
heh
<zid>
you do not need a licence to own a tv
<gog>
the TV license in iceland is charged with your yearly taxes whether you own a TV or not
<gog>
most of the revenue goes toward rúv
<mrvn>
finances the states propaganda channels. I mean independent public news.
<gog>
yes
FreeFull has quit []
<clever>
the other interesting details about my old tv service
<clever>
the "dumb" cable box had basically no internal storage, on boot, it would download firmware over encrypted http, not https
xenos1984 has quit [Read error: Connection reset by peer]
<clever>
the http headers included an hmac, and the http body was a large block of base64'd ciphertext
<clever>
on its own, the dumb box could only do livetv
<zid>
lmk if you feel like making up more facts about the uk :p
<clever>
the "smart" cable box had a sata hdd in it, and also did the firmware over http
<clever>
and the "smart" box acts as both a buffer for pausing livetv, and as the DVR master
<clever>
if a dumb box pauses livetv, the smart box starts recording, and then the recording streams from start->dumb
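The general shape of that firmware download, as described, is an HMAC carried in a header over the body, with the body itself being base64'd ciphertext. A sketch under those assumptions only; the URL, header name, key, and hash choice are all invented, since the real box's scheme is unknown:

    # sketch of the general pattern only: fetch a firmware blob over plain http,
    # check an HMAC carried in a header, then base64-decode the still-encrypted body
    import base64, hashlib, hmac
    import urllib.request

    URL = "http://firmware.example/image.bin"     # placeholder
    KEY = b"shared-secret"                        # placeholder
    HEADER = "X-Firmware-HMAC"                    # hypothetical header name

    with urllib.request.urlopen(URL) as resp:
        body = resp.read()
        claimed = resp.headers.get(HEADER, "")

    digest = hmac.new(KEY, body, hashlib.sha256).hexdigest()   # hash choice is a guess
    print("hmac ok" if hmac.compare_digest(digest, claimed) else "hmac mismatch")

    ciphertext = base64.b64decode(body)            # decryption itself would come next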
<bslsk05>
en.wikipedia.org: TV detector van - Wikipedia
<zid>
not enforced at all
<zid>
That's why some countries add a tax to the devices
<papaya>
They can detect that a TV is nearby picking up broadcast signals...?
<zid>
not since CRTs stopped being a thing
<mrvn>
Like blank CDs/DVDs are taxed to support poor starving music/film artists. But then you aren't allowed to actually put music on them.
<clever>
> The first detector was introduced in 1952. It operated by detecting the magnetic field, rather than any radio signal, of the horizontal line-scanning deflection within the cathode ray tube.
<zid>
CRTs aim the beam with huge electric fields that are fairly detectable
<papaya>
Ah, I see... so more like they detected some sort of radiation from a CRT? But what of CRTs used for other purposes like computers, games, etc.
<clever>
so it was less that it detected you watching a broadcast, and more, that a crt was running
<zid>
then you say fuck off, either way
<zid>
whether you're using it for TV or not
<zid>
they have no actual powers
<papaya>
Sorry if these are dumb questions, I'm legit just... dumbfounded by learning this
<zid>
they basically just trolled poor coal mining towns
<zid>
looking to find people they could guilt
<clever>
pay the fee or suffer the fines!
<clever>
we have a van that can detect liars!
<clever>
ooooo, fear us :P
<zid>
Pay a private company a million a year, if they can get receipts worth more than that, gg
<clever>
> A 2013 study was conducted on television emissions detection by Markus G. Kuhn.[11] This found that emissions from modern sets were still detectable, but that it was increasingly difficult to relate these to the received signal, and thus to correlate a set's emissions with a particular licensed broadcast.
<mrvn>
clever: does that even work with TFTs anymore?
<clever>
mrvn: there are other attacks, against the tuner, not the display frontend
<clever>
there is RF wizardry, where you mix a pure sine of a given freq, with the raw antenna signal, to shift the desired channel up/down in freq, to line up with a special band-pass filter
<clever>
and that pure sine leaks out of your tuner, and can be detected
<clever>
which reveals exactly which channel you're tuned to
<clever>
> Firstly the emissions are simply lower, owing to modern standards for EMI and the increasingly enforced compliance with EMC standards.
<clever>
but modern EMI regulations, demand that your leakage be far weaker
<clever>
so the tuner is properly shielded now
<clever>
that RF leakage also works on am/fm radios
<papaya>
This is all so bizarre to me.
<clever>
even modern SDR techniques still do the same rf wizardry in analog hw
<clever>
and leak what band they are tuned to
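The mixing itself is just multiplication: a signal at f_sig times a local oscillator at f_lo produces components at f_sig - f_lo and f_sig + f_lo, so the band-pass IF filter can stay fixed while f_lo moves per channel, and it's that per-channel f_lo leaking back out that gives the game away. A toy numpy sketch of the effect, with made-up frequencies:

    # toy demo: multiplying two sines yields peaks at the sum and difference frequencies
    import numpy as np

    fs = 1_000_000                       # sample rate (Hz), toy value
    f_sig, f_lo = 200_000, 150_000       # "channel" and local-oscillator frequencies
    t = np.arange(0, 0.01, 1 / fs)

    mixed = np.cos(2 * np.pi * f_sig * t) * np.cos(2 * np.pi * f_lo * t)
    spectrum = np.abs(np.fft.rfft(mixed))
    freqs = np.fft.rfftfreq(len(mixed), 1 / fs)

    # expect peaks near |f_sig - f_lo| = 50 kHz and f_sig + f_lo = 350 kHz
    for f in freqs[spectrum > spectrum.max() / 2]:
        print(f"{f / 1000:.0f} kHz")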
<papaya>
This is more perplexing than learning that eastern Canada packages milk in bags.
<clever>
i grew up on bagged milk :P
<papaya>
Are you in Canada or is this a common thing elsewhere too?
<clever>
atlantic canada
<mrvn>
clever: exists in DE too
<mrvn>
or at least did till 2010 or so
<papaya>
I remember my first time in Canada, in Toronto, and seeing milk bags
<papaya>
and just accepting... yup, they really do exist...
<papaya>
had never seen before tho
<clever>
oh, and here is something that may just melt your brain, ever had a donair?
<mrvn>
Do you have Fla? Pudding in 1l milk cartons.
<zid>
speaking of fluids in bags, wine bottles are heavy and don't pack well, so wine is shipped in shipping containers in giant bladders
<papaya>
Never had a donair but I've had a poutine
<clever>
never heard of fla
<clever>
papaya: do you at least know what a donair is?
<zid>
then they bottle them at the other end
<papaya>
clever: nope
<zid>
I assume it's a variant spelling of doner
<clever>
zid: not just spelling, the recipe is also changed
<bslsk05>
en.wikipedia.org: Doner kebab - Wikipedia
<papaya>
oh is it greek food?
xenos1984 has joined #osdev
<clever>
> It did not catch on with the public, so in 1972[24] he modified the customary pork and lamb recipe by using spiced ground beef, Lebanese flatbread, and inventing the distinctive sweet donair sauce made with condensed milk, vinegar, sugar, and garlic.
<zid>
man, I want a doner wrap now
<clever>
papaya: based on it, but the recipe was modified to suit the locals
<clever>
my uncle ran a pizza/donair place for a while, and it was interesting to see how it was made
<clever>
a massive metal tube, with a shaft down the middle, completely packed with ground beef mixed with spices
<papaya>
I'm not really a big fan of the greek food here
<clever>
then the whole packed log was shoved into a freezer until solid
<clever>
and then loaded onto the rotary roaster
<gog>
oh some döner kebab would slap rn
<clever>
and bits shaved off as needed
<zid>
elephant leg <3
<clever>
the whole freezing step lets you use ground meat, rather than a raw leg