<clever>
klange: that makes me wonder, how can a blurry alpha work? at say an opengl level
<klange>
When you have pixel shaders and can do really fancy stuff, some blurs will adjust the kernel - the radius of the blur, specifically - along with the alpha channel so things get less blurry.
<moon-child>
huh interesting
<moon-child>
I'm having trouble imagining why that would be better than the alternative
<clever>
re-reading the related parts of the v3d docs...
<klange>
The masking I'm doing is on-or-off with the click threshold for the window, which is a client-specified alpha level.
<klange>
That's... not great. It works fine for the click bit, but it results in weird edges around windows with shadows, since the shadows are just opaque enough for most reasonable click thresholds.
<clever>
if i'm reading this doc right, a pixel shader on the rpi can only read the exact pixel it's drawing over, and no neighboring pixels
<klange>
But the window switcher doesn't have shadows, so it only suffers minor issues around the rounded edges
<clever>
so you have no way to get the nearby pixels and blend them together with a kernel
<clever>
the texture reads however are a bit more flexible
<moon-child>
presumably your source is not the same as your destination?...
<klange>
a pixel shader that can't read another texture is not a pixel shader
<clever>
the pixel shader can read from 2 different sources
<clever>
1: the exact pixel it's meant to draw to (for alpha)
<klange>
you render everything behind the blurred object into a texture, and then use that as the source for the blur
<moon-child>
I'm actually kinda surprised you can read any part of the target, it makes sense to provide that capability, though, if it's cheap/free
<clever>
2: a texture, with a controllable xy coord
<clever>
ahh, yeah, so you're using a blurry texture, rather than a blurry alpha
<clever>
that explains things
<klange>
This is how I did everything, and I did it this way because I was explaining how to do it in the forum thread ;)
<klange>
In 3D graphics, you don't even do the optimizations that make things interesting...
<moon-child>
opengl allows you to sample from multiple textures. So how does the pi handle that? It supports opengl, right?
<klange>
Every single frame you draw _everything_, back to front, behind the blur-behind object.
<clever>
moon-child: yeah, the texture setup data (resolution, image addr in ram) are stuffed into the uniform list
<clever>
moon-child: when you specify a texture XY coord, the hardware will automatically fetch the resolution/addr from the uniform fifo
<klange>
Then you use that texture as the source for a blur, mix with the object, and spit that out onto the final texture...
<clever>
so if you want 2 textures, then you just put 2 sets of info into the uniforms
<klange>
which then becomes the blur source for any blur-behind objects in front of _that_...
<klange>
until finally it's the actual final output for the screen
<clever>
but there is no seek, and it pops for each texture pixel read
<clever>
so if you want to read a 5x5 grid of pixels from a texture, you must supply that as 25 textures in the uniform list
<klange>
In software 2D shit, it's necessary to do clipping so you aren't redrawing everything every time, but the trick with blurring is you need to draw a bit more than the actual clip region to account for overhang from the blur radius
<klange>
but then when you do the final blit to screen you need to update only the actual clip regions
<klange>
If you don't do these things you end up with artifacts around the edges of update regions
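klange's clip-expansion trick above can be sketched in plain C. This is a hedged illustration, not klange's actual code: the `rect_t` type and the function name are made up for the example. The blur pass reads `radius` pixels past the damage region, so the region handed to the blur must grow by the radius (clamped to the surface), while the final screen blit still uses the original rect.

```c
#include <assert.h>

/* Hypothetical rect type; names are illustrative, not from any compositor. */
typedef struct { int x, y, w, h; } rect_t;

/* Expand a damage/clip rect by the blur radius so the blur pass has valid
 * neighbors to sample, clamping to the surface bounds.  The *unexpanded*
 * rect is what you still blit to screen at the end. */
static rect_t expand_for_blur(rect_t clip, int radius, int surf_w, int surf_h) {
    int x0 = clip.x - radius;
    int y0 = clip.y - radius;
    int x1 = clip.x + clip.w + radius;
    int y1 = clip.y + clip.h + radius;
    if (x0 < 0) x0 = 0;
    if (y0 < 0) y0 = 0;
    if (x1 > surf_w) x1 = surf_w;
    if (y1 > surf_h) y1 = surf_h;
    rect_t out = { x0, y0, x1 - x0, y1 - y0 };
    return out;
}
```

Skipping this expansion is exactly what produces the edge artifacts mentioned above: pixels just inside the clip get blurred with stale neighbors just outside it.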
<clever>
oh, and the scaling kernel itself, i would also put into the uniforms, interleaved with the texture info
<klange>
My first pass at doing blur behind cheated. It abused the fact that my graphics library only does scanline clipping.
<klange>
So it did only horizontal blurring.
<klange>
It's not a terrible visual effect, though it is very clearly not full blurring.
<clever>
> but the trick with blurring is you need to draw a bit more than the actual clip region
<klange>
This revisit fixed that. It's doing a proper box blur with a nice wide radius, x and y, and does all the clip expansion to account for it.
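A "proper box blur, x and y" like klange describes is usually done as two separable passes. The sketch below is an assumption about the approach, not klange's implementation: single-channel grayscale, naive O(radius) taps per pixel with edge handling by skipping out-of-range samples (real implementations keep a sliding-window sum so cost is independent of radius). Note src and dst are distinct buffers, for the reason clever raises further down.

```c
#include <assert.h>

/* One box-blur pass along x; src and dst must be distinct buffers. */
static void box_blur_h(const unsigned char *src, unsigned char *dst,
                       int w, int h, int radius) {
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int sum = 0, count = 0;
            for (int k = -radius; k <= radius; k++) {
                int xx = x + k;
                if (xx < 0 || xx >= w) continue;  /* skip taps off the edge */
                sum += src[y * w + xx];
                count++;
            }
            dst[y * w + x] = (unsigned char)(sum / count);
        }
    }
}

/* Same pass along y. */
static void box_blur_v(const unsigned char *src, unsigned char *dst,
                       int w, int h, int radius) {
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int sum = 0, count = 0;
            for (int k = -radius; k <= radius; k++) {
                int yy = y + k;
                if (yy < 0 || yy >= h) continue;
                sum += src[yy * w + x];
                count++;
            }
            dst[y * w + x] = (unsigned char)(sum / count);
        }
    }
}

/* Separable box blur: horizontal pass into scratch, vertical pass into dst. */
static void box_blur(const unsigned char *src, unsigned char *scratch,
                     unsigned char *dst, int w, int h, int radius) {
    box_blur_h(src, scratch, w, h, radius);
    box_blur_v(scratch, dst, w, h, radius);
}
```

Separability is also what made the earlier horizontal-only cheat easy: it is just `box_blur_h` with the vertical pass dropped.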
<clever>
this reminds me of scaling bugs on 2d as well
<clever>
if you have an image with alpha, and a harsh transparent white on the border, a snap from 0% alpha to 100% alpha
<clever>
the scaling code will blend that "transparent" white with the colored visible pixels
<clever>
and cause an artifact around the edges
<clever>
and for that reason, you want the "invisible" pixels around the edge of the mask, to match the color of the visible pixels
<clever>
and i can see that also applying to textures a 3d core reads
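The edge fix clever describes (make the invisible border pixels match the color of the visible ones, so filtering never blends in a wrong color) can be sketched in C. The RGBA layout (alpha in the high byte) and the function name are assumptions for illustration; real tools often call this "color bleeding" or "alpha dilation" and run it over several rings of pixels.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* For every fully transparent pixel that touches a visible 4-neighbor,
 * copy that neighbor's RGB while leaving alpha at zero, so bilinear
 * filtering at the mask edge blends toward the right color instead of
 * toward "transparent white".  Reads from src, writes to dst, so earlier
 * writes can't contaminate later reads. */
static void bleed_edge_colors(const uint32_t *src, uint32_t *dst, int w, int h) {
    static const int dx[4] = { -1, 1, 0, 0 };
    static const int dy[4] = { 0, 0, -1, 1 };
    memcpy(dst, src, (size_t)w * h * sizeof(uint32_t));
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            uint32_t p = src[y * w + x];
            if ((p >> 24) != 0) continue;            /* already visible */
            for (int i = 0; i < 4; i++) {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                uint32_t n = src[ny * w + nx];
                if ((n >> 24) != 0) {
                    dst[y * w + x] = n & 0x00FFFFFFu; /* neighbor RGB, alpha 0 */
                    break;
                }
            }
        }
    }
}
```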
<clever>
klange: the other problem i can foresee with what you said, is that i basically have to render the entire scene twice, once with everything behind the blur, then a second time with the overlaid polygons, and now i'm reminded of all of the glass+water bugs in minecraft
<klange>
The best option is to not.
<klange>
But you don't always have the luxury of being able to do a straight painters algorithm in 3d-space...
jjuran has quit [Ping timeout: 272 seconds]
<klange>
If you can, you don't have to render anything twice, just juggle some intermediary buffers :)
<clever>
another issue i can see, is that for each pixel, you want to read a 5x5 pixel block centered on it, then write to the pixel
<clever>
but now you've modified that pixel, and it will damage the 5x5 read for the next pixel over
<clever>
so you need 2 buffers, a source&dest
<clever>
so you can't just draw the 2nd polygon over the existing buffer
<klange>
Yes.
<gog>
did you roll your own support for pixel shaders?
<clever>
you need to copy as you blur, and then for pixels not blurred, you need to copy without blurring?
<clever>
or alpha the blurred thing over the original
<gog>
or is it in software?
<klange>
Nah, just render into a new texture, then blit that texture atop the main one when you're done.
<klange>
That's fast on most things, right? Simple alpha blit? Hell, should be doable with 1-bit alpha...
<clever>
so you render everything behind to buffer1, then you blur part of buffer1 into buffer2(with alpha), then copy buffer2 back onto buffer1, respecting that alpha mask?
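klange's "simple alpha blit" step, sketched minimally in C. This assumes a hypothetical 32-bit pixel layout with alpha in the high byte; with 1-bit alpha the blit degenerates to copy-or-skip with no blending at all, which is why it's cheap.

```c
#include <assert.h>
#include <stdint.h>

/* Blit src over dst wherever src's alpha is set.  Treating any nonzero
 * alpha as "on" makes this the 1-bit-alpha case: no blend math, just a
 * conditional copy. */
static void blit_masked(uint32_t *dst, const uint32_t *src, int n) {
    for (int i = 0; i < n; i++) {
        if (src[i] >> 24)
            dst[i] = src[i];
    }
}
```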
jjuran has joined #osdev
<clever>
yeah, 1bit alpha works there
<clever>
but in a lot of cases, there is latency in switching from gpu to cpu rendering code
<clever>
i would probably instead just let buffer2 be the final buffer, and have a different pixel shader do a non-blurry copy from buffer1->buffer2
<clever>
and let the depth-test decide which pixel shader to run
<clever>
but ive not attempted to run 2 different pixel shaders with interleaving depths
<clever>
i can think of a few naive ways the hw can render things, and problems each would cause
<clever>
for example, if i have 2 pixel shaders, and some polygons for each shader
<clever>
and then i draw everything with shader1, then everything with shader2, respecting the depth-test the whole time
<clever>
then transparent things on shader1 won't show polygons on shader2
<clever>
and boom, you have the glass not rendering water bug from minecraft
<clever>
yeah, the first image is the background layer, 2nd is the overlay with a blurry alpha, 3rd is then the 2 mixed together
<clever>
if 1st&2nd are both textures, then i can see how it can be rendered, but you need to first isolate every polygon that should be in the 1st image
<clever>
for a 2d compositor, that's easy enough, but for a 3d scene...
<clever>
i'm also not exactly sure what the rpi hw would do for a given multi-shader configuration, i would need to create some control lists with 2 shaders, and see what happens
tsraoien has joined #osdev
[itchyjunk] has quit [Quit: Leaving]
tsraoien has quit [Ping timeout: 244 seconds]
lg has quit [Ping timeout: 256 seconds]
xhe has quit [Ping timeout: 272 seconds]
zaquest has quit [Remote host closed the connection]
lg has joined #osdev
gxt has quit [Remote host closed the connection]
xhe has joined #osdev
zaquest has joined #osdev
toluene has quit [Read error: Connection reset by peer]
toluene has joined #osdev
<geist>
wait what happened? klange was talking about some graphics thing and it somehow devolved into clever mentioning how the raspberry pi would do it in hardware?!
<geist>
crazy! this never happens
<geist>
i'll have to bring up how to do it in riscv in the mmu
<clever>
lol
<Mutabah>
I'd show off my fancy USB stack, but currently at dayjob
<clever>
geist: was just trying to understand how opengl would do such effects, and it was less that the v3d isn't capable, and more that i was just thinking about it from the wrong direction
<clever>
and the same limitations exist on other gpu's
<geist>
Mutabah: you should make it your day job!
<geist>
clever: i know, i didn't really even read it
<geist>
i'm just being snarky
<clever>
and yeah, you do do that a lot
<Mutabah>
geist: It was tempting (see comment yesterday about getting semi-targeted google recruitment spam), but I like the separation
<geist>
word
<Mutabah>
And it'd likely require moving to Sydney, and no thank you
<geist>
the sydney folks are good people. aah yeah and that
Vercas has quit [Remote host closed the connection]
gog has quit [Quit: byee]
Vercas has joined #osdev
ajr has joined #osdev
liz has joined #osdev
blockhead has joined #osdev
bauen1 has joined #osdev
Vercas7 has joined #osdev
matt__ has joined #osdev
matt__ is now known as freakazoid333
Vercas has quit [Ping timeout: 268 seconds]
Vercas7 is now known as Vercas
<chibill>
<clever> "about 9 minutes, not as bad as..." <- Also depends on video length, as sometimes a 1080p video that's 15 min long takes like 10 min to process SD (360p) and well over a few hours to get 1080p processed.
CryptoDavid has joined #osdev
<zid`>
Mainly depends how deep the queue is when you upload
<mrvn>
Obviously a 1m video will finish processing faster than a 1h video.
dude12312414 has joined #osdev
<geist>
yah that sircmpwn person used to hang out here
<geist>
i think they were legit. sometimes did some stuff
poprostumieciek has joined #osdev
<j`ey>
geist: that's ddevault
<geist>
but they dissapeared after a while
<ddevault>
aye, I'm sircmpwn
<geist>
didn't they also do the z80 stuff or was that another sir*?
<ddevault>
I wasn't around while I wasn't really doing any os hacking
<geist>
well anyway, i definitely recognize the name
<netbsduser`>
apparently he is inventing his own microkernel now
<netbsduser`>
should be interesting to see how it turns out, devault seems to have a reasonable track record
<Arsen>
I'm expecting it to run wlroots on release!
<geist>
yah looks like it
<Arsen>
(not on any actual foundation, but still...)
<geist>
i wonder if they'll ever come back again?
<Arsen>
he spoke a second ago
<ddevault>
I am right here -_-
<geist>
doesn't seem that sircmpwn has logged into libera
<j`ey>
im so confused
<Arsen>
geist check your ignore list :^)
<geist>
j`ey: is it the heat?
<geist>
make sure you drink lots of water
* kof123
watches geist and netbsd user make whooshing sound above
* geist
bats at the air above their head
<j`ey>
we're past the heatwave now
<geist>
pretty!
<ddevault>
if you ask me for my opinion of kernels written in C++ I'm likely to end up on some ignore lists as a result
<Arsen>
aw that's a shame, managarm's pretty nice ;)
<geist>
(no it was a really bad attempt at some sort of joke that fell flat)
<geist>
mostly the 'oh i remember sircmpwn i didn't realize you changed your name'
<geist>
but yeah didn't sircmpwn do z80 stuff or was that another sir*?
<ddevault>
aye, that was me
<geist>
i think TI calculator
<netbsduser`>
poor Fuchsia
<geist>
ah okay.
dude12312414 has quit [Ping timeout: 268 seconds]
<kof123>
yeah it sucks there's no fuchsia devs here, very insightful netbsduser
<geist>
personal pet peeve, whining: rediscovering someone i remember from a while ago, turns out they're here all the time, they just changed their nick
<geist>
total whining, everyone has a right to change their nick, but it messes up my internal narrative of who is who
<geist>
has happened countless times
<raggi>
How many nicks are you generally signed in with at once? :-p
dude12312414 has joined #osdev
<geist>
ooooh ho good point
<geist>
2. but that's a work vs home separation, and i decided to stop using the work one for precisely that reason
<geist>
it was confusing to people, though it's confusing in an opposite way that it doesn't match my work nick, but that's because i couldn't get @geist at work
<ddevault>
ddevault is the concatenation of my first initial and last name
<ddevault>
so if you're making that argument... :)
<geist>
heh, yeah
<geist>
nah that was just me whining, also i literally woke up like 20 minutes ago. i have coffee now so my brain is working
<geist>
also for kicks i left one of the windows open last night to see if it'd cool off the house. a) yes, it did b) some mosquitoes got in and bit me c) my allergies are flaring up
<geist>
so that was a Bad Idea
<ddevault>
mosquitoes are the actual worst
<geist>
i figured the screens on the windows would help, but i think they're a bit holey
<bslsk05>
gist.github.com: Example of calling into the kernel vdso directly · GitHub
<j`ey>
getpid isn't one on linux
<j`ey>
at least on arm64
<mrvn>
klys: Why do you pass "clock_gettime" to vdso_sym and then never use it?
<mrvn>
auto name = dynstr + sym.st_name;
<mrvn>
if (strcmp(name, "clock_gettime") == 0) {
<mrvn>
char* dynstr = 0; vs. void *ret = NULL;? and line 27 is missing `if (!dynstr) exit(1);`
<mrvn>
or return NULL
terminalpusher has quit [Remote host closed the connection]
<gorgonical>
Is it possible to use rr in conjunction with qemu somehow to record an entire run of a kernel?
<gorgonical>
Sometimes when diagnosing hairy bugs I find myself re-running things a few times and breaking at various points to inspect state, then when I get an idea I have to re-run and go break there, etc.
<mrvn>
you can make it print everything it executes
<mrvn>
the really nice thing would be if it would record changes, e.g. r0: 423 -> 23432, so you can jump in at any time in the trace without having to reconstruct the full state from the beginning.
<zid`>
make a core dump with gdb if you just want global state, or do a full cpu trace if you want every state
<mrvn>
zid`: even with a full trace you still have to search backwards to the last time "r0" was written to to get the old value.
<mrvn>
(afaik)
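mrvn's delta-recording idea can be sketched as a tiny data structure; all the names here are hypothetical. Each trace entry records one register write with its old and new value, so querying "what was r0 at step t" is a backward scan for the last write at or before t (the search mrvn mentions), and keeping the old value is what makes stepping the trace in reverse possible without replaying from the start.

```c
#include <assert.h>
#include <stdint.h>

/* One recorded register write, e.g. "step 1: r0: 423 -> 23432". */
typedef struct {
    int step;
    int reg;
    uint64_t old_val;
    uint64_t new_val;
} delta_t;

/* Value of `reg` as of step `t`: scan backwards for the most recent
 * write at or before t, falling back to the initial value. */
static uint64_t reg_at(const delta_t *trace, int n, int reg,
                       int t, uint64_t initial) {
    for (int i = n - 1; i >= 0; i--) {
        if (trace[i].step <= t && trace[i].reg == reg)
            return trace[i].new_val;
    }
    return initial;
}
```

A real tool (rr, for instance) avoids the O(n) scan with periodic full snapshots plus deltas between them; the idea is the same.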
gildasio has quit [Remote host closed the connection]
<gorgonical>
right. rr gives you a gdb-like interface to explore the trace and you can go to an arbitrary point in the trace
<gorgonical>
That could be really, really useful if you could e.g. capture emulated state under qemu
gog` has joined #osdev
gog has quit [Read error: Connection reset by peer]
<mrvn>
Everybody knows sites like leetcode.com where you are given small problems to code and then it tests your code? I'm kind of wondering how hard it would be to set up something like that for kernel dev. "Here are the specs for the UART, write a driver that does polling I/O", "Now initialize FIFOs", "Write an interrupt-driven driver"
<mrvn>
I feel you learn far more from a tutorial when you actually have to write the code, and solve little problems yourself instead of just getting swamped with the solution.
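The first exercise mrvn imagines ("polled UART driver") could look like this. The 16550-style register layout is an assumption, and the MMIO block is faked as a plain struct with the "wire" draining inline, so the exercise can be checked in user space the way leetcode checks solutions; a real driver would take a `volatile` pointer at the device's physical address and the drain would happen in hardware.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* 16550-ish register block (assumed layout).  In a real kernel this is
 * a volatile pointer to the device's MMIO window. */
typedef struct {
    uint8_t thr;   /* transmit holding register */
    uint8_t lsr;   /* line status register; bit 5 = THR empty */
} fake_uart_t;

#define LSR_THRE (1u << 5)

/* Test double: capture what "went out on the wire". */
static char captured[64];
static int  captured_len;

/* Polled write: spin until the holding register is empty, then store.
 * The two lines after the store simulate the hardware draining the
 * byte instantly; a real device clears and re-sets THRE on its own. */
static void uart_putc(fake_uart_t *u, char c) {
    while (!(u->lsr & LSR_THRE))
        ;                           /* busy-wait: this is the polling */
    u->thr = (uint8_t)c;
    u->lsr &= (uint8_t)~LSR_THRE;   /* byte now "in flight" */
    captured[captured_len++] = c;   /* model: wire drains it */
    u->lsr |= LSR_THRE;             /* model: ready for the next byte */
}

static void uart_puts(fake_uart_t *u, const char *s) {
    while (*s)
        uart_putc(u, *s++);
}
```

The follow-up exercises then fall out naturally: FIFO init means programming the FCR, and the interrupt-driven version replaces the busy-wait with an IER-enabled TX-empty interrupt.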
toluene has quit [Read error: Connection reset by peer]
toluene has joined #osdev
dude12312414 has quit [Remote host closed the connection]
dude12312414 has joined #osdev
dude12312414 has quit [Remote host closed the connection]