michaelni changed the topic of #ffmpeg-devel to: Welcome to the FFmpeg development channel | Questions about using FFmpeg or developing with libav* libs should be asked in #ffmpeg | This channel is publicly logged | FFmpeg 7.1 has been released! | Please read ffmpeg.org/developer.html#Code-of-conduct
Marth64 has joined #ffmpeg-devel
mkver has quit [Ping timeout: 260 seconds]
LainExperiments has quit [Quit: Client closed]
Kei_N_ has joined #ffmpeg-devel
Kei_N has quit [Read error: Connection reset by peer]
Sirtsu550 has joined #ffmpeg-devel
Labnan- has joined #ffmpeg-devel
frankplow_ has joined #ffmpeg-devel
J_Darnley has joined #ffmpeg-devel
Arsen_ has joined #ffmpeg-devel
thresh_ has joined #ffmpeg-devel
pengvado_ has joined #ffmpeg-devel
Sirtsu55 has quit [Quit: Ping timeout (120 seconds)]
Arsen has quit [Quit: No Ping reply in 180 seconds.]
jdarnley has quit [Quit: ZNC 1.8.2+deb2+b1 - https://znc.in]
thresh has quit [Remote host closed the connection]
gnafu has quit [Remote host closed the connection]
philipl has quit [Remote host closed the connection]
pengvado has quit [Remote host closed the connection]
gnafu has joined #ffmpeg-devel
philipl has joined #ffmpeg-devel
LainExperiments has joined #ffmpeg-devel
thilo has quit [Ping timeout: 272 seconds]
thilo has joined #ffmpeg-devel
LainExperiments has quit [Quit: Client closed]
LainExperiments has joined #ffmpeg-devel
pheo has quit [Ping timeout: 276 seconds]
LainExperiments has quit [Quit: Client closed]
realies has quit [Quit: ~]
realies has joined #ffmpeg-devel
LainExperiments has joined #ffmpeg-devel
Mirarora has quit [Ping timeout: 252 seconds]
Mirarora has joined #ffmpeg-devel
System_Error has quit [Remote host closed the connection]
System_Error has joined #ffmpeg-devel
Marth64 has quit [Quit: Leaving]
Marth64 has joined #ffmpeg-devel
<Marth64>
how to set container level cropping metadata? (matroska)
LainExperiments has quit [Quit: Client closed]
<jamrial_>
Marth64: add an AV_PKT_DATA_FRAME_CROPPING entry in stream->codecpar side data
<Marth64>
jamrial_: thx
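A minimal sketch of what jamrial_ describes, for a stream set up before avformat_write_header(). AV_PKT_DATA_FRAME_CROPPING carries four 32-bit little-endian values; the top/bottom/left/right ordering used below is an assumption, check packet.h for the authoritative layout.
    #include <libavutil/intreadwrite.h>
    #include <libavcodec/packet.h>
    #include <libavformat/avformat.h>

    /* Sketch: attach container-level cropping as codecpar side data so the
     * muxer (e.g. matroska) can write it. Value order is assumed, see packet.h. */
    static int set_container_cropping(AVStream *st,
                                      uint32_t top, uint32_t bottom,
                                      uint32_t left, uint32_t right)
    {
        AVPacketSideData *sd =
            av_packet_side_data_new(&st->codecpar->coded_side_data,
                                    &st->codecpar->nb_coded_side_data,
                                    AV_PKT_DATA_FRAME_CROPPING,
                                    4 * sizeof(uint32_t), 0);
        if (!sd)
            return AVERROR(ENOMEM);
        AV_WL32(sd->data + 0,  top);
        AV_WL32(sd->data + 4,  bottom);
        AV_WL32(sd->data + 8,  left);
        AV_WL32(sd->data + 12, right);
        return 0;
    }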
Raz- has quit [Ping timeout: 260 seconds]
stupidoo has quit [Quit: Client closed]
System_Error has quit [Remote host closed the connection]
^Neo has quit [Ping timeout: 252 seconds]
Raz- has joined #ffmpeg-devel
System_Error has joined #ffmpeg-devel
jamrial_ has quit []
Martchus has joined #ffmpeg-devel
Martchus_ has quit [Ping timeout: 265 seconds]
System_Error has quit [Remote host closed the connection]
DodoGTA has joined #ffmpeg-devel
System_Error has joined #ffmpeg-devel
Traneptora has joined #ffmpeg-devel
System_Error has quit [Remote host closed the connection]
System_Error has joined #ffmpeg-devel
dliu has joined #ffmpeg-devel
dliu has quit [Client Quit]
System_Error has quit [Remote host closed the connection]
System_Error has joined #ffmpeg-devel
ngaullier has joined #ffmpeg-devel
mkver has joined #ffmpeg-devel
System_Error has quit [Remote host closed the connection]
System_Error has joined #ffmpeg-devel
abdu has joined #ffmpeg-devel
abdu has quit [Ping timeout: 240 seconds]
abdu has joined #ffmpeg-devel
^Neo has joined #ffmpeg-devel
^Neo has quit [Changing host]
^Neo has joined #ffmpeg-devel
jdek has quit [Quit: Connection closed for inactivity]
^Neo has quit [Ping timeout: 264 seconds]
abdu has quit [Ping timeout: 240 seconds]
abdu has joined #ffmpeg-devel
rvalue- has joined #ffmpeg-devel
rvalue has quit [Ping timeout: 252 seconds]
rvalue- is now known as rvalue
q66 has left #ffmpeg-devel [WeeChat 4.3.5]
abdu has quit [Quit: Client closed]
jamrial has joined #ffmpeg-devel
^Neo has joined #ffmpeg-devel
Everything has joined #ffmpeg-devel
Arsen_ is now known as Arsen
<fflogger>
[newticket] fildens: Ticket #11430 ([ffmpeg] ffmpeg 7.x There are no speed and time statistics if there are data streams in the SRT stream) created https://trac.ffmpeg.org/ticket/11430
pross has quit [Ping timeout: 276 seconds]
CryptoLeader has joined #ffmpeg-devel
<Lynne>
of the many things lavf lacks, a big one is ignoring any packets with timestamps less than the previous packet's
<JEEB>
if that were true then mpeg-ts wouldn't wrap around for the API client
<JEEB>
oh wait
<JEEB>
*ignoring*
<CryptoLeader>
extremely easy to add a bsf that drops such packets
<Marth64>
or duplicate
<JEEB>
sorry, I misread that sentence at first
<Marth64>
packet with duplicate PTS
<CryptoLeader>
less or equal
<Marth64>
ya
<Lynne>
the issue is that clients generally cache packets
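A rough application-side sketch of the kind of dropping being discussed; a real fix would be a bsf or live inside lavf, so the helper name here is purely illustrative. It assumes a caller-allocated last_dts array, one entry per stream, initialised to AV_NOPTS_VALUE.
    #include <libavformat/avformat.h>
    #include <libavcodec/packet.h>

    /* Sketch: return the next packet whose DTS is strictly greater than the
     * previous packet's DTS on the same stream; silently drop the rest. */
    static int read_monotonic(AVFormatContext *ic, AVPacket *pkt, int64_t *last_dts)
    {
        for (;;) {
            int ret = av_read_frame(ic, pkt);
            if (ret < 0)
                return ret;
            int64_t prev = last_dts[pkt->stream_index];
            if (pkt->dts == AV_NOPTS_VALUE || prev == AV_NOPTS_VALUE ||
                pkt->dts > prev) {
                if (pkt->dts != AV_NOPTS_VALUE)
                    last_dts[pkt->stream_index] = pkt->dts;
                return 0;
            }
            av_packet_unref(pkt); /* non-monotonic or duplicate timestamp: drop */
        }
    }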
<BtbN>
Lynne: forgejo will not build your vulkan stuff btw
<BtbN>
I originally wanted to use my images from the FFmpeg-Builds, but had to add nodejs to them first...
<BtbN>
Building with them is also not super straightforward
<Lynne>
yeah, I know, I'm glad to see fate works fine on MRs
<Lynne>
we can add runners which support vulkan, though
<BtbN>
Are there even fate tests for it?
<BtbN>
Building vulkan does not need a runner with a GPU
<BtbN>
Keep in mind runners are not ephemeral, so they rack up constant cost
<BtbN>
And I think you also need to fully trust them
<Marth64>
I recall in GitLab you can make ephemeral dockerized runners, is this not possible in Forgejo?
<Marth64>
(this was 7 years ago when I ran an instance, I am sure times have changed)
<BtbN>
Marth64: I didn't find docs for it for Gitlab either
<BtbN>
You can with self-hosted github runners.
<BtbN>
the problem is also that if you turn the runner off, it won't stay up to date, OS-wise
<Marth64>
BtbN: I'll dig up some old notes to see if anything comes back to me. Finally have some FF time this weekend so I can play around later today
<Marth64>
makes sense though
<BtbN>
The runner stuff also always runs in a container (not doing that is possible, but insanely risky). And accessing GPUs that way is hard and annoying
<Marth64>
oh I get what you mean now. dumb me. the physical runner still needs a GPU
<Lynne>
no fate tests for vulkan, yet
<BtbN>
Hetzner does have GPU instances
<BtbN>
but they're pricey
<Lynne>
the only part that makes sense to test is ffv1_vulkan
<Lynne>
llvmpipe is pretty good
<BtbN>
if they only boot up on-demand, it's manageable tho
<BtbN>
But "boot up" means Create entire VM
<BtbN>
cause just a shut down VM still costs money
<Lynne>
you can use lxc which should make it easy to access GPUs, though it's allegedly less secure than docker
<BtbN>
docker can do GPU stuff, at least nvidia has tooling for it
<Marth64>
Can't it be some box, say, that I host in a basement and maintain? I can pass my spare GPU through docker
<Marth64>
as an example
<BtbN>
It needs quite speedy upload and download speeds; other than that it can be nearly anywhere
<Marth64>
my upload sucks but I can carve out maybe 15Mbps up. my download is gigabit
<Lynne>
upload is basically text-only, isn't it?
<BtbN>
no, it uploads all artifacts and cache
<Marth64>
I can carve out more upload with qos depending on how often it needs to run. GPU is a Quadro P2200
<BtbN>
I need to read more about the security implications of the runners
<BtbN>
You are effectively letting random people execute code on there
<BtbN>
And containers are not perfect
<Marth64>
thx for the heads up
<CryptoLeader>
vf_pf2pf.c with more unrolling of step and offset now has SIMD produced by the compiler; this will be 10816 unrolled special functions. Not sure, but the filter's .o object size will be pretty fat, >10MB
<haasn>
CryptoLeader: I'm a bit worried that the new swscale approach is dead in the water unless we do something to change the optimization situation
<haasn>
because you get a 10x perf diff with AVX and vectorization enabled vs off
<CryptoLeader>
you cannot get SIMD asm without manual work
<BtbN>
Can just build the same file multiple times and then runtime-detect the best version
<haasn>
I remember in the past people were very unhappy about me adding vectorization pragmas to my files
<haasn>
yeah
<CryptoLeader>
vf_pf2pf.c does not need any non-standard pragmas to trigger auto-vectorization from the compiler
<haasn>
CryptoLeader: ffmpeg compiles with -fno-tree-vectorize
<CryptoLeader>
yes, but clang works
<haasn>
ofc a solution can be to use clang instead, but all distros are still sticking with gcc
<Marth64>
BtbN: thx for the whole Forgejo setup overall. Based on your findings, if still desirable, I can run it in a Proxmox VM (with GPU passthrough) and VLAN it away, but ideally access is limited to maintainers
<BtbN>
Well, it'll send PRs there
<CryptoLeader>
haasn: steal work from x86 asm gurus from dav1d project
<haasn>
I could ofc also use intrinsics but as I understood, the only thing ffmpeg people hate more than compiler optimizations is intrinsics
<CryptoLeader>
if you go with the intrinsics path, would there be any benefit vs the current sws and zimg/zscale?
<JEEB>
haasn: I think the no-vectorize can be disabled per-lib if needed
<JEEB>
if swscale wants to take auto-vectorization into use, I think that's fair n' all
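For context, the kind of per-loop hint haasn refers to might look like the sketch below; illustrative only, not actual swscale code. The GCC function attribute is there because, as noted above, FFmpeg builds with -fno-tree-vectorize by default.
    #include <stdint.h>

    /* Illustrative loop: ask the compiler to vectorize a simple conversion
     * even when the build's default flags would leave the vectorizer off. */
    #if defined(__GNUC__) && !defined(__clang__)
    __attribute__((optimize("tree-vectorize")))   /* re-enable GCC's vectorizer here */
    #endif
    static void expand_8_to_16(uint16_t *dst, const uint8_t *src, int w)
    {
    #if defined(__clang__)
    #pragma clang loop vectorize(enable) interleave(enable)
    #endif
        for (int x = 0; x < w; x++)
            dst[x] = src[x] * 257; /* 8-bit -> 16-bit full-range expansion */
    }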
<ePirat>
BtbN, you could only run nightly jobs or so on that runner tho, not MRs
<BtbN>
true, can make special labels for those
<jamrial>
CI needs to run MRs
<BtbN>
Forgejo has one very stupid thing I noticed: It cancels all previous runs on a new push
<BtbN>
that makes sense for PRs, but not for master
<CryptoLeader>
also, an i9-14900HX here gives almost the same performance results with SIMD code and zero SIMD code, probably because of the many cores...
<Marth64>
I'll get the box up this weekend so you can play with it. I need it anyway to test a VBI patch someone sent up; it's got a TV tuner in it. I frankensteined the rig into a new case, it just needs a CPU fan
<Marth64>
I'll send you the VM access info when I can
<BtbN>
https://bpa.st/JDNA this is how I bring up runners on Hetzner VMs. Obviously all the Hetzner-Network-Stuff won't apply.
<Marth64>
it's a beater box, so as long as nothing shady is happening (batch stuff limited to certain labels is fine), I'm good
<BtbN>
you'd need to change the label and figure out how to expose the GPU to docker
<haasn>
okay with explicit loop unrolling/vectorization pragmas it is only 2x slower than with AVX enabled
<Marth64>
that I can do
<BtbN>
ideally it's just installing the nvidia tools and passing --gpu as option
<haasn>
on default (generic) code
<haasn>
so I propose to just compile it twice, once with AVX and once with just MMX/SSE/whatever
<CryptoLeader>
haasn: or take auto-generated compiler simd and improve/steal it into nasm code files
<haasn>
maybe
<haasn>
but I wanted to commit this first without SIMD
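A sketch of the compile-twice idea BtbN and haasn mention: the same source built once with AVX2 codegen flags and once with baseline flags under different suffixes (all names below are made up), then the best version picked at runtime via libavutil's CPU detection.
    #include <stdint.h>
    #include <libavutil/cpu.h>

    /* Hypothetical: ff_pf2pf_convert_c() and ff_pf2pf_convert_avx2() would be
     * the same C file compiled twice with different -m flags and suffixes. */
    void ff_pf2pf_convert_c   (uint8_t *dst, const uint8_t *src, int w);
    void ff_pf2pf_convert_avx2(uint8_t *dst, const uint8_t *src, int w);

    typedef void (*pf2pf_fn)(uint8_t *dst, const uint8_t *src, int w);

    /* Pick the fastest variant the running CPU supports. */
    static pf2pf_fn select_pf2pf(void)
    {
        int cpu_flags = av_get_cpu_flags();
    #if defined(__x86_64__) || defined(__i386__)
        if (cpu_flags & AV_CPU_FLAG_AVX2)
            return ff_pf2pf_convert_avx2;
    #endif
        return ff_pf2pf_convert_c;
    }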
<CryptoLeader>
when rescaling plus bit depth conversion, which is better? doing the scaling before the bit depth conversion, or after?
<haasn>
forcing loop unrolling gets it down to 60% slower
<haasn>
CryptoLeader: better for what? performance? are you using dithering when scaling at low bit depth?
<haasn>
normally you want to expand to a higher intermediate bit depth, do scaling, then dither down to the desired output depth
<haasn>
the intermediate bit depth should be above the output depth
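Spelled out with two plain swscale passes, that pipeline might look roughly like the sketch below; formats and scaler flags are chosen arbitrarily for illustration, and the second pass is where dithering down to the output depth would happen.
    #include <libswscale/swscale.h>
    #include <libavutil/pixfmt.h>

    /* Illustrative only: scale through a 16-bit intermediate, then reduce to
     * 10-bit output. Buffer allocation, sws_scale() calls and error paths omitted. */
    static int make_contexts(int src_w, int src_h, int dst_w, int dst_h,
                             struct SwsContext **up_and_scale,
                             struct SwsContext **dither_down)
    {
        *up_and_scale = sws_getContext(src_w, src_h, AV_PIX_FMT_YUV420P,
                                       dst_w, dst_h, AV_PIX_FMT_YUV420P16,
                                       SWS_LANCZOS, NULL, NULL, NULL);
        *dither_down  = sws_getContext(dst_w, dst_h, AV_PIX_FMT_YUV420P16,
                                       dst_w, dst_h, AV_PIX_FMT_YUV420P10,
                                       SWS_POINT, NULL, NULL, NULL);
        return (*up_and_scale && *dither_down) ? 0 : -1;
    }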
<CryptoLeader>
cool, will remember that. currently the filter does not do scaling and other conversions, but those will hopefully be added later in the same or a different filter...
<CryptoLeader>
the idea is to have support for every unscaled direct pixel format conversion, and to use an intermediate processing step once scaling is needed...
<CryptoLeader>
and for that intermediate step I'm very interested in your LUT approach, even with/for XYZ colorspace!
<Compn>
you should make it future proof
<Compn>
by allowing the user to specify the max bits, 1024x1024x1024x1024
<Compn>
i mean by not limiting yourself to 10bit 12bit 16bit
<Compn>
in your intermediate step
<CryptoLeader>
Compn: there are 26 step&offset combinations for the current pixel formats in lavu pixdesc
<Compn>
'16 bits should be enough for anyone'
<CryptoLeader>
26*26*(endian big/little)*(8bit+16bit+float32+float16+32bit int) bunch of functions to do conversions
<CryptoLeader>
yes, there are some unused functions, because the corresponding pixel format does not exist
<CryptoLeader>
but that is future-proof for new pixel formats added in the future
ngaullier has quit [Remote host closed the connection]
CryptoLeader has quit [Quit: Client closed]
CryptoLeader has joined #ffmpeg-devel
<haasn>
Compn: my swscale-ng design is future-proof by being highly templated; we can trivially just compile the same template at a higher bit depth
<haasn>
if somebody really needs more than 32 bits internal precision
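For anyone unfamiliar with that templating style, it is the usual include-the-same-body-once-per-depth trick used around FFmpeg; the file and macro names below are hypothetical.
    /* pf2pf_template.c -- hypothetical template, instantiated from the filter as:
     *     #define BIT_DEPTH 8
     *     #include "pf2pf_template.c"
     *     #undef  BIT_DEPTH
     *     #define BIT_DEPTH 16
     *     #include "pf2pf_template.c"
     */
    #include <stdint.h>

    #undef pixel
    #undef FN
    #undef FN2
    #undef FN3
    #if BIT_DEPTH > 16
    #   define pixel uint32_t
    #elif BIT_DEPTH > 8
    #   define pixel uint16_t
    #else
    #   define pixel uint8_t
    #endif
    #define FN3(name, depth) name ## _ ## depth
    #define FN2(name, depth) FN3(name, depth)
    #define FN(name)         FN2(name, BIT_DEPTH)

    /* one function body, compiled as copy_row_8, copy_row_16, ... */
    static void FN(copy_row)(pixel *dst, const pixel *src, int w)
    {
        for (int x = 0; x < w; x++)
            dst[x] = src[x];
    }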
Everything has quit [Ping timeout: 252 seconds]
<michaelni>
anyone have a copy of the jpeg-ai draft (6048-1?)? if so I am interested; google failed at finding a downloadable one
<CryptoLeader>
project canceled
<mindfreeze>
michaelni: check DM
System_Error has quit [Remote host closed the connection]
System_Error has joined #ffmpeg-devel
CryptoLeader has quit [Quit: Client closed]
CryptoLeader has joined #ffmpeg-devel
System_Error has quit [Remote host closed the connection]
System_Error has joined #ffmpeg-devel
LainExperiments has joined #ffmpeg-devel
tufei has joined #ffmpeg-devel
tufei has quit [Ping timeout: 264 seconds]
<fflogger>
[editedticket] Gyan: Ticket #11430 ([ffmpeg] ffmpeg 7.x There are no speed and time statistics if there are data streams in the SRT stream) updated https://trac.ffmpeg.org/ticket/11430#comment:1
CryptoLeader has quit [Quit: Client closed]
abdu has joined #ffmpeg-devel
LainExperiments has quit [Ping timeout: 240 seconds]
ccawley2011 has joined #ffmpeg-devel
System_Error has quit [Remote host closed the connection]
System_Error has joined #ffmpeg-devel
ccawley2011_ has joined #ffmpeg-devel
ccawley2011 has quit [Ping timeout: 252 seconds]
System_Error has quit [Remote host closed the connection]
ccawley2011__ has joined #ffmpeg-devel
ccawley2011_ has quit [Ping timeout: 264 seconds]
System_Error has joined #ffmpeg-devel
HarshK23 has quit [Quit: Connection closed for inactivity]
Warcop has joined #ffmpeg-devel
Mirarora has quit [Quit: Mirarora encountered a fatal error and needs to close]
Mirarora has joined #ffmpeg-devel
abdu has quit [Quit: Client closed]
Mirarora has quit [Quit: Mirarora encountered a fatal error and needs to close]
Mirarora has joined #ffmpeg-devel
ccawley2011__ has quit [Ping timeout: 272 seconds]