michaelni changed the topic of #ffmpeg-devel to: Welcome to the FFmpeg development channel | Questions about using FFmpeg or developing with libav* libs should be asked in #ffmpeg | This channel is publicly logged | FFmpeg 7.1 has been released! | Please read ffmpeg.org/developer.html#Code-of-conduct
<cone-947>
ffmpeg Niklas Haas master:59c39a79cafd: tests/swscale: rewrite on top of new API
<cone-947>
ffmpeg Niklas Haas master:3edd1e42b93e: tests/swscale: add a benchmarking mode
<cone-947>
ffmpeg Niklas Haas master:04ce01df0bb2: avfilter/vf_scale: switch to new swscale API
<Lynne>
gratz
rvalue has quit [Read error: Connection reset by peer]
rvalue has joined #ffmpeg-devel
mkver has joined #ffmpeg-devel
<llyyr>
freedom from swscale mines?
snoriman has quit [Quit: WeeChat 4.4.2]
<haasn>
not quite yet, but at least now we can add more features without convoluting the spaghetti more
<llyyr>
neat
<haasn>
swscale won't become good until we dismantle the legacy scaling code altogether
<haasn>
but that won't happen overnight
<Lynne>
self-modifying mmxext asm should be getting scared right about now
<JEEB>
haasn: btw I did get as far as seeing that the log transfers were not in H.262 (MPEG-2 Video) 2000 ed, and then got added in 2007. so I guess they started being added to MPEG-4 stuff between 2000 and 2003.
<haasn>
funky
<haasn>
I think I'll just treat them as a strange legacy SDR curve with a hard-coded contrast ratio, and define them as an OETF only
<haasn>
the main question here is basically how to appropriately scale scene relative values
<haasn>
I was thinking about scaling them such that a value of 1.0 corresponds to diffuse white
<haasn>
so e.g. HLG would go up to a value of 12.0
<haasn>
and PQ would go up to 59.5208
<haasn>
(= OOTF_inv(10000))
<haasn>
or perhaps we should scale it such that OOTF_inv(203) == 1.0
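A minimal sketch of the scaling being discussed, assuming the BT.2100 PQ reference OOTF (near-black linear segment omitted; names are illustrative, not libswscale code). With the scene value pre-scaled so that 1.0 is diffuse white, the OOTF maps 1.0 to 100 cd/m² and 59.5208 = OOTF_inv(10000) to the 10000 cd/m² peak:
```c
#include <math.h>
#include <stdio.h>

/* BT.2100 PQ reference OOTF with the scene value s pre-scaled so that
 * s = 1.0 corresponds to diffuse white, i.e. s = 59.5208 * E in the
 * spec's notation. Linear segment near black omitted for brevity. */
static double pq_ref_ootf(double s)
{
    double ep = 1.099 * pow(s, 0.45) - 0.099; /* BT.709-style OETF    */
    return 100.0 * pow(ep, 2.4);              /* BT.1886 EOTF, Lw=100 */
}

int main(void)
{
    printf("OOTF(1.0)     = %8.2f cd/m2\n", pq_ref_ootf(1.0));     /* ~100   */
    printf("OOTF(59.5208) = %8.2f cd/m2\n", pq_ref_ootf(59.5208)); /* ~10000 */
    return 0;
}
```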
<JEEB>
reminds me that there was some effort of trying to define HLG for still images which apparently had a different definition or something. since there was a thing in the doc archive that was requesting comments on whether that separate definition would bork something for video people
<haasn>
now I'm suddenly not sure how the scaling works there
<haasn>
seems like 203 nits actually maps back to 5.896698068785654 under this definition
<haasn>
a bit awkward
<haasn>
if you look at the way they technically define the OOTF, it's scaled such that 1.0 corresponds to the maximum possible camera exposure (1.0)
<JEEB>
yea
<haasn>
but I want an operation like OETF(OETF_inv(E)) to just make sense (tm) for a mixture of SDR and HDR curves
<haasn>
and give the right result
<haasn>
in a way, the PQ reference OOTF is still defined in a way that assumes a 100 nits reference display
<JEEB>
yea
<haasn>
which makes me wonder if it's just a legacy of the days before 58% was standardized as the sdr white
<JEEB>
yea, compare the 2309 etc dates to BT.2100
<JEEB>
or was it 2390
<haasn>
now I wonder if they released a revised OOTF anywhere
<haasn>
or if we should ignore the spec and just cook our own
<JEEB>
does the HDR software reference suite that some corps made have any examples for that, I wonder?
<haasn>
it seems s-log and v-log are also defined in such a way that 0-1 is the assumed maximum signal range in linear light
<haasn>
and the reference white level is just all over the place
<haasn>
hmm
<haasn>
then maybe let's follow that definition as well
<haasn>
JEEB: do you know which ITU standard introduces the 203 value?
<JEEB>
I think we went through this back in the day, and it was one of the reports
<JEEB>
2390 or one of the related ones
<JEEB>
2017 ed of 2390, R-REP BT.2390-3
<haasn>
ah yes
<haasn>
or else 2408
<JEEB>
that is when the number 203 appears in 10.1.2.3 Scaling
<haasn>
and they suggest literally inserting a constant of 2.0 in between the BT1886 EOTF and the PQ OETF
<haasn>
which would make the reference white level actually 200 instead of 203
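A rough worked example of the mapping quoted here, assuming BT.1886 is approximated as a pure 2.4 power with Lw = 100 cd/m² and the PQ inverse EOTF is used for the encoding step (constants from BT.2100; an illustration, not FFmpeg code):
```c
#include <math.h>
#include <stdio.h>

/* SDR signal -> BT.1886 (pure 2.4 power, Lw = 100) -> x2.0 in linear
 * light -> PQ inverse EOTF, as the BT.2390-3 text suggests. */
static double pq_inv_eotf(double fd)
{
    const double m1 = 2610.0 / 16384.0, m2 = 2523.0 / 4096.0 * 128.0;
    const double c1 = 3424.0 / 4096.0;
    const double c2 = 2413.0 / 4096.0 * 32.0, c3 = 2392.0 / 4096.0 * 32.0;
    double y = pow(fd / 10000.0, m1);
    return pow((c1 + c2 * y) / (1.0 + c3 * y), m2);
}

int main(void)
{
    double nits = 100.0 * pow(1.0, 2.4); /* SDR reference white -> 100 cd/m2 */
    nits *= 2.0;                         /* the inserted constant of 2.0     */
    /* prints roughly 200 cd/m2 -> PQ 0.58, i.e. the "58%" SDR white level */
    printf("SDR white -> %.0f cd/m2 -> PQ %.4f\n", nits, pq_inv_eotf(nits));
    return 0;
}
```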
<JEEB>
then later that stuff gets removed from 2390 altogether
<haasn>
and the value of 203 came from 75% HLG, right
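For reference, a small check of where that 203 figure comes from, assuming the BT.2100 HLG inverse OETF and reference OOTF for a 1000 cd/m² display (gamma 1.2, achromatic pixel); illustrative code only:
```c
#include <math.h>
#include <stdio.h>

/* BT.2100 HLG inverse OETF: non-linear signal -> scene linear in [0,1]. */
static double hlg_inv_oetf(double ep)
{
    const double a = 0.17883277, b = 0.28466892, c = 0.55991073;
    return ep <= 0.5 ? ep * ep / 3.0 : (exp((ep - c) / a) + b) / 12.0;
}

int main(void)
{
    double e  = hlg_inv_oetf(0.75);   /* 75% HLG -> scene linear ~0.265        */
    double fd = 1000.0 * pow(e, 1.2); /* OOTF: alpha * Ys^(gamma-1) * E,
                                         with Ys == E for an achromatic pixel  */
    printf("75%% HLG -> %.1f cd/m2\n", fd); /* ~203 */
    return 0;
}
```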
<haasn>
hmm
<JEEB>
so it first appeared in 2390-3 and then most likely got moved to one of the newer specs
<JEEB>
uhh, not specs
<JEEB>
reports
<JEEB>
haasn: yea 2390-3 points at that, while I recall other documents mentioning actual visual testing
<haasn>
I think that the high degree of uncertainty around the OOTF and proper scaling of scene referred content pushes me in the direction of doing all tone mapping in display referred space after all
<haasn>
Especially given that I want to use IPT colorspace which is designed around an absolute light level
<haasn>
Hmm
<haasn>
This is all quite awkward
<JEEB>
yea so reports 2408 and 2446 are then the two other specs
<JEEB>
2446 being methods of conversion between HDR and SDR, 2408 being guidance for operational practices
<JEEB>
2408 specifically talks about conversion between 203 and 100 (BT.2035) SDR signal formats
<JEEB>
(another document reference :D)
<JEEB>
and 2446 also refers to 203 and points at 2408
<JEEB>
but yea, the 203 number first got added to 2390, and then they noticed that stuff required more documenting, so they split it out into 2408
<JEEB>
fun, it seems like if we take the "perf" (User time in this case) of ffmpeg.c from 2023-08 as 1.0, 2024-09 was 0.90 and now today it was 0.91 with this one use case that I've poked at. of course the jumps in results make me think that I need to rerun this stuff for a few more times, but this matches certain data I've seen overall. next stop: perf
<JEEB>
need to get a flamegraph to compare
b50d has joined #ffmpeg-devel
b50d has quit [Remote host closed the connection]
<haasn>
Figure 10 shows how the non-linear SDR BT.709 or BT.2020 video signal is converted to linear ‘scene light’ by applying the approximate inverse of SDR OETF, E = (E′)², as described in BT.2087. When the SDR source is with the BT.709 colorimetry, the conversion is followed by the colour conversion matrix as described in Recommendation ITU-R BT.2087.
<haasn>
wait what
<haasn>
gamma 2.0 is standard now?
<haasn>
so I think I should follow the recommendations in this document for adapting between the HDR curves and a reference space of scene referred 100 nits SDR
<JEEB>
another reference :D
<haasn>
but it does say that display referred mapping tends to produce better results
<haasn>
that would match my intuition as well
<haasn>
I will support both in libswscale in any case
<JEEB>
and ah, BT.2087 is an actual recommendation where they tell how2 BT.709->BT.2020
<haasn>
the question is just how to properly implement scene referred mapping
<JEEB>
right, I recall seeing this graphic before
<nevcairiel>
that's quite the out-of-context formula they quote there, BT.2087 just simplifies it to a square root and tells you to handle the deviation from the actual OETFs in another step, typical :P
<JEEB>
:D
<nevcairiel>
"The Rec. 709 and Rec. 2020 OETFs are similar to a square root function. The deviation of these OETFs from a 1/2.0-power function including the linear segment near black can be decomposed into the camera adjustment function. So the OETF itself can be regarded as a square root function.
<nevcairiel>
obviously they also only use it to go sort-of linear briefly and then back using the same formula
<nevcairiel>
they also document two cases, the first actually uses 2.40 power gamma
<nevcairiel>
as an approximation of 1886
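For context, a sketch of the "quick" BT.2087 path being described: square the signal to approximate the inverse OETF, apply the usual BT.709-to-BT.2020 linear-light matrix, then take the square root as the approximate 2020 OETF (the near-black linear segment is ignored, as the quoted text says it folds into the camera adjustment; illustrative code, not libswscale):
```c
#include <math.h>

/* Approximate BT.709 -> BT.2020 conversion per the BT.2087 shortcut:
 * E = E'^2, 3x3 primaries matrix in linear light, then sqrt() back. */
static void bt709_to_bt2020_fast(const double in[3], double out[3])
{
    static const double m[3][3] = {
        { 0.6274, 0.3293, 0.0433 },
        { 0.0691, 0.9195, 0.0114 },
        { 0.0164, 0.0880, 0.8956 },
    };
    double lin[3];

    for (int i = 0; i < 3; i++)
        lin[i] = in[i] * in[i];               /* approximate inverse OETF */

    for (int i = 0; i < 3; i++) {
        double v = m[i][0] * lin[0] + m[i][1] * lin[1] + m[i][2] * lin[2];
        out[i] = sqrt(v > 0.0 ? v : 0.0);     /* approximate 2020 OETF    */
    }
}
```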
<haasn>
so can we conclude that nobody has any idea what's going on and every spec is suggesting something different?
<haasn>
I mean here ITU is literally suggesting doing tone mapping by just linear stretching in absolute light
<haasn>
or at least, compensating for the difference between 100 and 203
<haasn>
which is actually an incredibly weird way to phrase it because BT1886 on infinite contrast displays is linear in the output brightness anyway
<haasn>
shrug
<cone-947>
ffmpeg Leo Izen master:3c3bf6c10960: MAINTAINERS: list csp.c and csp.h maintainers
jamrial has joined #ffmpeg-devel
<nevcairiel>
I think things just shouldn't be quoted between recommendations, all 2087 says is "this approximation is fine for a quick linearization when converting from 709 to 2020", and nothing else
^Neo has joined #ffmpeg-devel
^Neo has quit [Changing host]
^Neo has joined #ffmpeg-devel
<elenril>
mkver: ping fffilter
<BBB>
thilo: for your webp patch, maybe you explained this in previous emails and I can't find it, but what is the reason for this change? Is it just cleanliness? Or performance? Or something else?
<BBB>
did we get a response from the meta guy on their loadable avfilter patches?
Workl has quit [Read error: Connection reset by peer]
haihao has quit [Ping timeout: 265 seconds]
haihao_ has joined #ffmpeg-devel
Krowl has joined #ffmpeg-devel
<jamrial>
haasn: gcc asan seems really unhappy about the vf_scale changes
<thilo>
BBB: the patch is about supporting animated webp
<haasn>
probably some pointer math making it unhappy?
<BBB>
thilo: the two messages relate to different threads :) animated webp is cool, thanks for explaining
<jamrial>
haasn: heap-buffer-overflow in query_formats()
<BBB>
thilo: and yes, at vdd there was a presentation from wes (meta) about loadable avfilters
haihao_ has quit [Ping timeout: 245 seconds]
<thilo>
BBB: I see - no surprise, everyone wants loadable filters, codecs, etc... we still don't want them from the FFmpeg side, or has the tide shifted there?
<BBB>
you're right that everyone (I mean, on the outside) wants them
<jamrial>
the main argument against it is that an interface for it would be either very limited, or required to expose too many internals that would make further development a pita
<BBB>
right, elenril mentioned that
<JEEB>
yea, their patch had literal FFmpeg library versions there in symbols etc
<JEEB>
so they would not load the module if there was mismatch
<BBB>
it was worse. they had multiple ffmpeg versions
<BBB>
so they had plugins against multiple versions
<JEEB>
yes, but I think each was built against one exact version of libs
<BBB>
so this would allow loading "the right ones" :)
<BBB>
exactly
<JEEB>
yup
<BBB>
:)
<jamrial>
we ELF now
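A hypothetical sketch of the version-pinned loading scheme being described; the symbol naming and entry-point type here are invented for illustration and are not taken from the actual patch:
```c
#include <dlfcn.h>
#include <stdio.h>
#include <libavfilter/version.h>

typedef int (*plugin_entry_fn)(void);

/* Bake the exact libavfilter version into the symbol name, so a plugin
 * built against a different version simply fails to resolve. */
static plugin_entry_fn load_filter_plugin(const char *path)
{
    char sym[128];
    void *handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
    if (!handle)
        return NULL;
    snprintf(sym, sizeof(sym), "ff_filter_plugin_entry_%d_%d_%d",
             LIBAVFILTER_VERSION_MAJOR, LIBAVFILTER_VERSION_MINOR,
             LIBAVFILTER_VERSION_MICRO);
    return (plugin_entry_fn)dlsym(handle, sym);
}
```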
<thilo>
BBB: if you have an opinion about the interface used in the webp patch (public API vs ff_ functions), please add to the thread. atm, it is thilo yeah and elenril nay for too long
<BBB>
thilo: ... I was afraid you'd ask
<thilo>
sorry, otherwise need to ask TC some time
<BBB>
I get it. I'm not sure I know enough to give intelligent feedback right now
<jamrial>
haasn: it does
<thilo>
BBB: then probably don't add to your stress because of it
<thilo>
BBB: elenril seems to have missed the last argument some versions ago
haihao has quit [Ping timeout: 245 seconds]
haihao has joined #ffmpeg-devel
sm2n has quit [Remote host closed the connection]
OctopusET has quit [Remote host closed the connection]
OctopusET has joined #ffmpeg-devel
sm2n has joined #ffmpeg-devel
<elenril>
thilo: I'm getting to it
<elenril>
there's a lot to look at after being away for a month
<jamrial>
libswscale/loongarch/swscale_init_loongarch.c: In function 'ff_sws_init_range_convert_loongarch':
<jamrial>
libswscale/loongarch/swscale_init_loongarch.c:45:25: error: 'src_range' undeclared (first use in this function)
<haasn>
stupid typo
<haasn>
, instead of .
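For the curious, the class of typo involved looks roughly like this (struct and field names are made up, not the actual swscale code): a comma where a member access was meant turns the tail into a standalone identifier, which is exactly the "undeclared" error above.
```c
struct opts { int src_range; };
struct ctx  { struct opts opts; };

/* error: 'src_range' undeclared (comma operator, not member access) */
int broken(struct ctx *c) { return (c->opts, src_range); }

/* intended */
int fixed(struct ctx *c)  { return c->opts.src_range; }
```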
ngaullier has joined #ffmpeg-devel
ngaullier has quit [Read error: Connection reset by peer]
haihao has quit [Ping timeout: 244 seconds]
haihao has joined #ffmpeg-devel
haihao has quit [Ping timeout: 245 seconds]
Sean_McG has quit [Ping timeout: 260 seconds]
haihao_ has joined #ffmpeg-devel
Sean_McG has joined #ffmpeg-devel
<compnnn>
BBB, not yet re meta filter plugin system
<compnnn>
didn't he require all of those plugin versions because the filter api kept changing? you can blame users for it, but ffapichange is a thing too.
<JEEB>
exactly because it's not a public API
<JEEB>
so the solution makes sense if you don't want to be defining a public api
Marth64 has joined #ffmpeg-devel
Workl8 has quit [Read error: Connection reset by peer]
MrZeus__ has quit [Read error: Connection reset by peer]
MrZeus__ has joined #ffmpeg-devel
<kurosu>
BBB: re external filters, will they have the FF_FILTER_FLAG_TAINTED ? (kidding, the comparison point is avisynth being completely ok with closed source filters)
<Marth64>
I always appreciated the kernel calls things out as tainted
Krowl has joined #ffmpeg-devel
<compnnn>
filters are the future*
<compnnn>
no one cares about converting video game formats. they just want a filter that turns them into a giant cat
<compnnn>
lawyer stuck with cat filter.mp4
<cone-947>
ffmpeg Marvin Scholz master:6b9f4f36f740: swscale/internal: fix typo in loongarch specific code
<cone-947>
ffmpeg Marvin Scholz master:83c1c622a52c: MAINTAINERS: Add myself as Darwin maintainer
<beastd>
Having read everything that has been said since yesterday I must say: There is a non-zero chance that ffcatchat might disrupt business and private communication, paving the way to a future of cat-charged collaboration solving the world's hardest problems.
<beastd>
SCNR ;-P Not sure how small that chance really is and sorry for the marketing speak but I feel it puts both ends of the spectrum nicely into perspective
zsoltiv_ has quit [Ping timeout: 272 seconds]
zsoltiv_ has joined #ffmpeg-devel
Gramner has quit [Ping timeout: 252 seconds]
michaelni has quit [Ping timeout: 252 seconds]
michaelni has joined #ffmpeg-devel
Gramner has joined #ffmpeg-devel
Gramner has quit [Remote host closed the connection]
steven-netint has quit [Ping timeout: 272 seconds]
steven-netint has joined #ffmpeg-devel
Gramner has joined #ffmpeg-devel
<fflogger>
[editedticket] MasterQuestionable: Ticket #11314 ([avformat] "alsa" capturing with "-c:a copy" had audio stuttering in AVI) updated https://trac.ffmpeg.org/ticket/11314#comment:12
Krowl has quit [Read error: Connection reset by peer]
realies has joined #ffmpeg-devel
<realies>
i'm doing dynamic dual pass loudnorm with ffmpeg version N-117857-g2d077f9acd-20241121 and some files make the second pass crash with: Assertion best_input >= 0 failed at fftools/ffmpeg_filter.c:2127 Aborted (core dumped)
<realies>
exact command that crashes it is ffmpeg -hide_banner -nostats -i in.wav -af "loudnorm=i=-19:tp=-1:lra=6:measured_I=-18.87:measured_LRA=7.70:measured_TP=0.00:measured_thresh=-29.82:offset=-0.13:linear=true:print_format=json,aresample=44100:resampler=soxr:precision=33:osf=dbl" out.wav
<realies>
removing aresample from the filters when doing dynamic loudnorm doesn't crash either
<realies>
does that mean running ffmpeg with a gdb prefix?
<JEEB>
the assert is pretty clear, the stuff that's more interesting is the way it got there, so possibly `-v debug -debug_ts` and then `2> funky.log` at the end? :) and then pastebin of choice and link here?
<JEEB>
also I think there was a script in the repo that did this 2pass loudnorm?
<JEEB>
yea, tools/loudnorm.rb
<realies>
i've read a few of those 2pass scripts and have reimplemented it in js, i'll try to get that log
<realies>
strange: [in#0/wav @ 0x61a28d07d740] EOF while reading input [in#0/wav @ 0x61a28d07d740] Terminating thread with return code 0 (success)
<JEEB>
that's quite OK, that means at that point it finished reading the input
<JEEB>
then you have the "decoder" thread some time later
<JEEB>
looking at the demux+tsfixup timestamps it read 65 seconds of stuff, and the output was at 62.6s
<JEEB>
when it asserted
<realies>
if i was in gstreamer land i'd put ,queue, in between the filters 👀
___nick___ has joined #ffmpeg-devel
<realies>
is it some sort of a race condition?
<JEEB>
also just as a note, please try to repro without soxr in the resampling stuff. since most people are not going to be building FFmpeg with soxr as swresample itself is a thing.
<realies>
i was just looking for a high quality resampler and _the_ chat suggested soxr
<JEEB>
sox is a good library, but until someone actually shows data that swresample is in any significant manner worse than sox I'd file that under cargo cult
<realies>
chat says: Remove soxr and use default swresample; Remove the precision=33:osf=dbl parameters since they're not providing benefit for a 16-bit output
<realies>
wasn't aware of the precision part for 16-bit audio
<JEEB>
in general resamplers have internal precision, which is usually some sort of float
<JEEB>
and then the output format is separate
<realies>
i was wondering if it works in 32bit mode like some DAWs
<realies>
or at least higher precision than output precision
<realies>
precisions 33, 32, 31 and 30 break, at 29 it passes fine
<JEEB>
fun, but at least makes it clear that it seems to be a mix of soxr and precision. I wonder if it happens with any input. if instead of `-i input` you utilize just a generated input from one of the signal generators
<JEEB>
`ffmpeg -h filter=sine`, for example
<JEEB>
which you should be able to utilize with filter_complex as input, but that would get rid of the input and filter chain separation so while it's technically better, you may want to utilize the less optimal `-f lavfi -i "sine=option=value:option2=value2"`
<realies>
before i reported the issue it was actually working with _some_ files, not sure what is special about the one that fails but i stopped there, lemme try the different inputs
<JEEB>
that means your magical wave file is no longer required for making things fail
<JEEB>
out of interest, does it actually work, if you remove the input definition (both format and input), and just put everything into one long -filter_complex line?
<realies>
i started poking at the command already, 65.706213 -> 65.70 does not fail
<realies>
3 decimals and it fails
<JEEB>
since you can just add a comma there after the sine and feed it into loudnorm and friends
<JEEB>
and yes, as shocking as it is `ffmpeg` does not require an input. it will just utilize the complex filter chain if there is one
<realies>
^ it was shocking
<realies>
this fails: ffmpeg -filter_complex "sine=frequency=440:sample_rate=44100:samples_per_frame=4096:duration=65.706,loudnorm=i=-19:tp=-1:lra=6:measured_I=-18.87:measured_LRA=7.70:measured_TP=0.00:measured_thresh=-29.82:offset=-0.13:linear=true:print_format=json,aresample=44100:resampler=soxr:osf=dbl:precision=33" output.wav
<JEEB>
wow
<realies>
same Assertion best_input >= 0 failed at fftools/ffmpeg_filter.c:2127
<JEEB>
I mean, it kind of makes sense since the end part of that chain is the same
<realies>
yeah
<realies>
time to stop staring at the screen, thanks a ton for the help!
<JEEB>
np, feel free to make an issue on trac. it's unfortunately soxr specific seemingly, but at least you can make it repro with generated input :)
<JEEB>
so you don't need to provide a sample separately
___nick___ has quit [Ping timeout: 246 seconds]
<realies>
why is it unfortunate? separate library?
___nick___ has joined #ffmpeg-devel
<JEEB>
non-required external dependency which I would guesstimate most people don't build FFmpeg with. and even if it gets enabled during build, it doesn't get utilized unless you specifically ask for it
<realies>
i see, i'll have a look at how to create an issue tomorrow, and possibly shorten the cmd as much as possible
<cone-947>
ffmpeg J. Dekker master:d89fbfd4df6f: avcodec: deprecate sonic
kasper93 has quit [Remote host closed the connection]
Krowl has quit [Read error: Connection reset by peer]
Teukka has quit [Quit: Not to know is bad; not to wish to know is worse. -- African Proverb]
nitroxis has quit []
Teukka has joined #ffmpeg-devel
Teukka has quit [Changing host]
Teukka has joined #ffmpeg-devel
<another|>
compnnn: Maybe my english is not up to par but is "inactive" some sort of insult or derogatory term?
rvalue has quit [Read error: Connection reset by peer]
rvalue has joined #ffmpeg-devel
nitroxis has joined #ffmpeg-devel
<Traneptora>
got an email from someone who claims that mDCv and cLLi are likely going to be renamed to mDCV and cLLI and wanted my thoughts
<JEEB>
I mean that would match cICP, but then again they have tEXt, zTXt, iTXt, pHYs, eXIf
<JEEB>
and yes, then you have bKGD, hIST, sPLT
<JEEB>
so it's already mixed up :D
<compnnn>
another|, depending on who wrote it, and the context around "inactive" when it was written, it has different meanings. it can be used as a fact or as a negative opinion
<Traneptora>
I don't really have any thoughts other than the risk of backwards compat
<compnnn>
i don't mind inactive as a fact. but coming from the person who wrote it, it's annoying shit to me lol
<JEEB>
> Implemented in Adobe Photoshop, Lightroom and Camera Raw, MacOS ProApps, Compressor, Final Cut Pro
<JEEB>
I wonder if that's just cICP which is why they feel safe to change the two other things
<JEEB>
chromium apparently implemented mDCv
<JEEB>
and cLLi
<Traneptora>
so did we. not sure what to tell him though
<JEEB>
well yea, FFmpeg is prominently shown on that very page - I just didn't mention it since we all know we support it
<Traneptora>
it wouldn't be hard to change the implementation; I'm just not sure whether, if the spec changes and we change to conform to it, that's fine with us, and whether we care about backward compat
<JEEB>
but since they are thinking about it they clearly are not seeing those two extra metadata blocks as important as the main cICP
<JEEB>
which is kind of true, since cICP is the primary color information
<Traneptora>
cLLi is arguably more important than mDCv b/c for PQ in cICP it can be helpful
<JEEB>
but if there's multiple implementations already, isn't that kind of eww to start changing something like the identifier without a good reasoning?
<Traneptora>
they want to make it Unsafe To Copy
<Traneptora>
which is the semantic meaning of capitalizing the last letter
<Traneptora>
Unsafe To Copy means it can't be moved elsewhere (unlike, say, tEXt)
<JEEB>
lol
<Traneptora>
I think that's probably accurate cause it affects rendering, it should be before IDAT
<Traneptora>
and the current naming convention implies otherwise
___nick___ has quit [Ping timeout: 276 seconds]
<JEEB>
but effectively it just means that they are reserving both identifiers and already implemented applications would then have to just follow two identifiers for read
<Traneptora>
that's true, but what it means is that applications that *don't* implement it won't necessarily ruin files by "helpfully" putting mDCv at the end
<Traneptora>
if they see mDCV and not mDCv
<JEEB>
yea, I mean that reasoning is not the dumbest
<JEEB>
since it's not just "oh we forgot to make this consistent"
<JEEB>
but has some semantic meaning
<Traneptora>
there's semantic meaning, yea
<Traneptora>
it becomes an issue of overhead on implementations to support both, versus the number of real-world PNG editors that can't read mDCv and will move it anyway
<Traneptora>
I don't really know where that ratio lies
<another|>
michaelni: I think it would be vary valuable for both the community as well as yourself for you to attend in-person events. Please consider FOSDEM 2025.
<JEEB>
Traneptora: since you writing side can just be moved to the new key it's not too bad as it's just the reader part
<JEEB>
*since the writing side
<Traneptora>
writing side is probably like a one-character change yea
<Traneptora>
I guess that's what I'll say
<JEEB>
reader-wise you just need to then utilize both
<Traneptora>
I don't believe it's hard to implement both, it's just less clean
<JEEB>
just another case having the same thing
<JEEB>
case MKTAG: case MKTAG:
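Reading both spellings would look something like this; a sketch along the lines of FFmpeg's existing MKTAG chunk dispatch, where the function and return values are invented for illustration:
```c
#include <stdint.h>
#include "libavutil/macros.h"

/* Hypothetical dispatch accepting both spellings of the chunk identifiers. */
static int handle_colour_chunk(uint32_t chunk_type)
{
    switch (chunk_type) {
    case MKTAG('m', 'D', 'C', 'v'):   /* current identifier           */
    case MKTAG('m', 'D', 'C', 'V'):   /* proposed unsafe-to-copy name */
        return 1; /* would parse mastering display metadata here */
    case MKTAG('c', 'L', 'L', 'i'):
    case MKTAG('c', 'L', 'L', 'I'):
        return 2; /* would parse content light level metadata here */
    default:
        return 0;
    }
}
```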
<JEEB>
it just means that the previously utilized identifiers are now effectively reserved
<JEEB>
so unusable for other blocks
<beastd>
haasn: congrats on all the sws refactorings and improvements landed so far! diff on vf_scale looks nice and impressive. keep up the work. it's in a strange way a bit like a marathon. it ain't over till it's over :)
<Traneptora>
JEEB: every identifier with a capital second letter is reserved
<Traneptora>
that makes it a public chunk defined by W3C/spec.
<Traneptora>
private chunks are required to have a lowercase 2nd letter
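For reference, the property bits behind that convention, per the PNG spec: bit 5 of each byte in the chunk type is the ASCII case bit and doubles as a flag. A small sketch:
```c
#include <stdint.h>
#include <stdio.h>

/* Decode the PNG chunk-type property bits: bit 5 (0x20) of each of the
 * four bytes is the lowercase bit and carries one flag each. */
static void chunk_properties(const uint8_t type[4])
{
    int ancillary    = type[0] & 0x20; /* lowercase 1st letter: not critical    */
    int priv         = type[1] & 0x20; /* lowercase 2nd letter: private chunk   */
    /* type[2] is reserved and must be uppercase in current spec versions       */
    int safe_to_copy = type[3] & 0x20; /* lowercase 4th letter: editors may copy
                                          it even without understanding it      */
    printf("ancillary=%d private=%d safe_to_copy=%d\n",
           !!ancillary, !!priv, !!safe_to_copy);
}

int main(void)
{
    chunk_properties((const uint8_t *)"cLLi"); /* safe to copy   */
    chunk_properties((const uint8_t *)"cLLI"); /* unsafe to copy */
    return 0;
}
```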
<JEEB>
yea, but reserved in the sense that the spec cannot utilize them any more
<JEEB>
since they have once already been defined for something and there are implementations for those
<Traneptora>
strictly speaking yes, but since the spec gives semantic meaning to the capitalization, there are no duplicates up to case
<JEEB>
so yea, the semantic difference at least gives a reasoning that's other than just "oh it looks bad when they're not uppercased the same way :<", but essentially they are then just throwing two identifiers into the wind since they no longer can be utilized by the spec for something else
<Traneptora>
I don't think running out of identifiers is a big deal
<JEEB>
if that's fine by them then so be it, implementation wise it's not too hard
<JEEB>
I'm not saying it's a big deal, that's just a decision they have to make
<JEEB>
not us
<JEEB>
I did not know how many they already utilize nor what the available space right now is
<JEEB>
it's just the effective effect and I decided to list it :)
<JEEB>
in other words, I just pointed out the headache that's on their side
<JEEB>
at least images are not moving in general, so any intelligent enough viewer should be capable of doing gamut/tone mapping calculation over the whole image as there's no realtime requirement
<JEEB>
so even if the two additional blocks get burned for existing implementations, it's not *that* bad (compared to completely incorrect CICP values)
<compnnn>
im wondering if we should split off the ffmpeg project policies to another list
<compnnn>
ffmpeg-devel and ffmpeg-endless
<haasn>
beastd: thanks! this is just the tip of the iceberg, I have many improvements lined up
<haasn>
just the most crucial part to land first
<beastd>
haasn: nice! good to know more is to come. hopefully also not too much to fix in regards to regressions etc.
<cone-947>
ffmpeg Marton Balint master:aea63ea7f574: avformat/framecrc: add AVFMT_NODIMENSIONS flag