michaelni changed the topic of #ffmpeg-devel to: Welcome to the FFmpeg development channel | Questions about using FFmpeg or developing with libav* libs should be asked in #ffmpeg | This channel is publicly logged | FFmpeg 7.0.1 has been released! | Please read ffmpeg.org/developer.html#Code-of-conduct
Teukka has quit [Read error: Connection reset by peer]
<Traneptora>
mkver: regarding the mDCv typo patch, files created by ffmpeg before the patch will have incorrect mDVc tags instead. Do we wish to continue to honor those for compat or should we ignore them?
Teukka has joined #ffmpeg-devel
Teukka has quit [Changing host]
Teukka has joined #ffmpeg-devel
Traneptora has quit [Quit: Quit]
Traneptora has joined #ffmpeg-devel
thilo has quit [Ping timeout: 240 seconds]
<Sean_McG>
Traneptora: was my concern as well
<Traneptora>
I think it comes down to how many incorrectly tagged PNGs were likely to be created during that time
<Traneptora>
since it was a relatively small window
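For context on the chunk tags in question: the correct PNG chunk type is mDCv (mastering display colour volume), while the pre-fix encoder wrote mDVc. If compatibility were wanted, the decoder-side fallback would amount to accepting both spellings; a minimal illustrative sketch (the helper and flag below are hypothetical, not pngdec's actual control flow):

    #include <stdint.h>

    /* 4CC packing macro in the style of FFmpeg's MKTAG */
    #define MKTAG(a, b, c, d) ((a) | ((b) << 8) | ((c) << 16) | ((unsigned)(d) << 24))

    /* Hypothetical helper: recognise the correct chunk type and, if asked,
     * the misspelled one written by pre-fix encoders. */
    static int is_mastering_display_chunk(uint32_t tag, int honor_legacy_typo)
    {
        if (tag == MKTAG('m', 'D', 'C', 'v'))                       /* correct spelling */
            return 1;
        if (honor_legacy_typo && tag == MKTAG('m', 'D', 'V', 'c'))  /* old ffmpeg typo */
            return 1;
        return 0;
    }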
thilo has joined #ffmpeg-devel
thilo has quit [Changing host]
thilo has joined #ffmpeg-devel
Traneptora has quit [Client Quit]
q66 has quit [Quit: WeeChat 4.0.2]
<mkver>
Traneptora: You should push the fix asap to finally close that window.
Lypheo has quit [Quit: Ping timeout (120 seconds)]
Lypheo has joined #ffmpeg-devel
arch1t3cht1 has joined #ffmpeg-devel
arch1t3cht has quit [Ping timeout: 252 seconds]
arch1t3cht1 is now known as arch1t3cht
feiwan1 has quit [Ping timeout: 260 seconds]
feiwan1 has joined #ffmpeg-devel
<Marth64>
Lynne: duration imo. the damage can probably be synthetically restored more easily
jamrial has quit []
averne has quit [Quit: quit]
averne has joined #ffmpeg-devel
Martchus has joined #ffmpeg-devel
Martchus_ has quit [Ping timeout: 255 seconds]
cone-393 has quit [Quit: transmission timeout]
witchymary has quit [Remote host closed the connection]
witchymary has joined #ffmpeg-devel
witchymary has quit [Remote host closed the connection]
witchymary has joined #ffmpeg-devel
Marth64 has quit [Quit: Leaving]
AbleBacon has quit [Read error: Connection reset by peer]
<j-b>
'morning
<JEEB>
mornin'
HarshK23 has joined #ffmpeg-devel
<Lynne>
'afternoon
Krowl has joined #ffmpeg-devel
Krowl has quit [Read error: Connection reset by peer]
rvalue has quit [Read error: Connection reset by peer]
rvalue has joined #ffmpeg-devel
Krowl has joined #ffmpeg-devel
cone-805 has joined #ffmpeg-devel
<cone-805>
ffmpeg Rajiv Harlalka master:fc446eea05b9: tests/fate/filter-audio.mak: add test for atempo audio filter
<cone-805>
ffmpeg Anton Khirnov master:f1664aabb18b: fftools/ffmpeg: rewrite checking whether codec AVOptions have been used
<cone-805>
ffmpeg Anton Khirnov master:901f7e3f72fe: fftools/ffmpeg_mux_init: make encoder_opts local to ost_add()
<cone-805>
ffmpeg Anton Khirnov master:9a7686e5458d: fftools/ffmpeg_mux_init: apply encoder options manually
kekePower has quit [Remote host closed the connection]
* elenril
considers killing graphmonitor
mkver has joined #ffmpeg-devel
<elenril>
mkver: iirc you wanted to comment on yuvj?
<mkver>
elenril: yuvj?
<elenril>
haasn's work on color range negotiation, aimed at killing yuvj pixel formats
<mkver>
Oh, that.
<courmisch>
are block elements above the nonzero count supposed to be zero or do they need to be masked?
<courmisch>
(ffh264)
<elenril>
would be really nice to get that resolved finally
<courmisch>
elenril: 3 days after the end of MMX
<courmisch>
or when chicks will grow teeth as French say
Krowl has quit [Read error: Connection reset by peer]
ccawley2011 has joined #ffmpeg-devel
jamrial has joined #ffmpeg-devel
<courmisch>
is h264_add8x4_idct_sse2 doing 2 DCTs in parallel?
<cone-805>
ffmpeg James Almer master:d241edc2b45f: fftools/ffmpeg_opt: add missing codec type to some options
Krowl has joined #ffmpeg-devel
novaphoenix has quit [Quit: i quit]
novaphoenix has joined #ffmpeg-devel
<kurosu>
courmisch: from my recollection, yes. This is mbaff, aka 2 interlaced interleaved 4x4 blocks
<kurosu>
I think the same exists for another codec (dv?)
<kurosu>
courmisch: for the zeroing, not sure. There are clear_block(s) DSP functions for that, or the IDCT is also in charge of wiping the coeff buffer (splatting 0 all across) for some codecs
<courmisch>
kurosu: but there are 16 IDCTs to be done there. shouldn't there be 16x4 AVX2 and 32x4 AVX512?
sgm has quit [Remote host closed the connection]
sgm has joined #ffmpeg-devel
rossy has quit [Remote host closed the connection]
<kurosu>
What do you mean, 16 IDCTs? 16 1D IDCTs? Also, I'm not sure much was written AVX2-wise for ffh264, registers being that large and blocks that small (max tx block is 8x8, and the motion-compensation one 16x16 afaik)
rossy has joined #ffmpeg-devel
<kurosu>
it really relates to the block size. Maybe 16x8 AVX2 could exist, but not sure how much of a benefit an 8x8 AVX2 would be, e.g. given that the H.264 ones are an unusual, non-matrix-multiply kind of IDCT
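For reference, the transform under discussion is tiny: the H.264 4x4 inverse core is a pure add/shift butterfly with no multiplies, which is why even an SSE2 register already holds two rows of 16-bit coefficients. A scalar sketch of one 1-D pass (the well-known butterfly from the spec, not the actual dsp code):

    #include <stdint.h>

    /* One 1-D pass of the H.264 4x4 inverse integer transform:
     * only additions, subtractions and shifts. */
    static void h264_idct4_1d(const int16_t in[4], int16_t out[4])
    {
        int z0 = in[0] + in[2];
        int z1 = in[0] - in[2];
        int z2 = (in[1] >> 1) - in[3];
        int z3 = in[1] + (in[3] >> 1);

        out[0] = z0 + z3;
        out[1] = z1 + z2;
        out[2] = z1 - z2;
        out[3] = z0 - z3;
    }

The 2-D version runs this over rows and then columns and finally adds ((x + 32) >> 6) into the prediction with clipping, which is why the add is fused into the *_idct_add functions.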
<kurosu>
BBB: does dav1d do several TXs at a time (in case of tx split)? I suspect not, because using the max of the EOBs and its associated code path could result in worse performance than letting most blocks take an individual, much shorter path
Krowl has quit [Read error: Connection reset by peer]
<BBB>
I experimented a bit with that in vp9, and it didn't really help. in most cases, the large transform is chosen anyway
<BBB>
I think tx-split is primarily a thing at very high bitrates / low quantizers, or in intra
<BBB>
and that's just not prevalent enough
kekePower has joined #ffmpeg-devel
<BBB>
we instead did an interleaved itx path where we do two tx lines per txblock iteration in the simd. that is enough to make 4x4 use full 16byte registers or 8x8 full 32byte registers
<BBB>
and that's good enough
<BBB>
(we as in: dav1d)
<courmisch>
BBB: your "no" sounds like a yes??
* courmisch
confused
<courmisch>
kurosu: I can't parse x86 but it looks like it's doing 8 times 2 4x4 IDCTs in parallel in the idct_add16 case
<BBB>
courmisch: no, the answer is no
<BBB>
courmisch: we do not do multiple tx blocks at a time
<BBB>
for example, when you have two neighbouring 8x8 blocks, we do one, then the other
<BBB>
within the tx block, we interleave lines (e.g. do 2 8x8 lines in a single avx2 register)
<BBB>
this allows us to use a full avx2 register even though we only have 8 word coefficients (which would normally be 16byte, not 32)
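To make the interleaving concrete: with 16-bit coefficients an 8-wide row is only 16 bytes, so one 256-bit register naturally holds two consecutive rows. A minimal AVX2 sketch of the packing idea (not dav1d's code; it assumes rows stored contiguously with a stride of 8 coefficients, and the rounding shift merely stands in for real transform work):

    #include <immintrin.h>
    #include <stdint.h>

    /* Process two adjacent rows of an 8x8 block of int16_t per iteration:
     * each 256-bit load covers rows i and i+1 (16 coefficients = 32 bytes). */
    static void two_rows_per_register_demo(int16_t coef[64])
    {
        const __m256i rnd = _mm256_set1_epi16(2);
        for (int i = 0; i < 8; i += 2) {
            __m256i two_rows = _mm256_loadu_si256((const __m256i *)&coef[i * 8]);
            two_rows = _mm256_srai_epi16(_mm256_add_epi16(two_rows, rnd), 2);
            _mm256_storeu_si256((__m256i *)&coef[i * 8], two_rows);
        }
    }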
<kurosu>
BBB's answer concerns dav1d only
<BBB>
yes
<kurosu>
I think courmisch's question was specific to ffh264, and I made a confusing link between ffh264 and dav1d, maybe
Krowl has joined #ffmpeg-devel
<kurosu>
courmisch: looks like I was wrong. It's just to share the code for the inner part of transforms, much like dav1d calls into a jump location also used by smaller transforms
<kurosu>
inner part of residual-adding functions
<kurosu>
then again, I don't remember how mbaff is handled
<courmisch>
4x4 IDCT is about 3x faster than C here, but it uses only half vectors
<kurosu>
I don't remember James Darnley's alias here, or if he still visits, but he seems to be the last person to contribute an algorithmic change to the file (eg 7aa90b4e94147d0512ab28535f6863767b888f19). And before that, I went back to 2012 without finding anything obvious
<kurosu>
BBB did contribute the zeroing inside the IDCT (see eg 62844c3fd66940c7747e9b2bb7804e265319f43f), which also answers that question of yours.
asivery has joined #ffmpeg-devel
<jdarnley>
you rang?
<courmisch>
ofc replacing a DC add with an IDCT sounds like a bad idea
<jdarnley>
oh jeez that idct stuff is a long time ago
<kurosu>
jdarnley: yeah, see courmisch's questions. I initially got the meaning of add8x4_idct wrong; I think it is not exposed nor called from the outside?
<courmisch>
but if 2 IDCTs come at the same price as one, might as well replace IDCT+DC or IDCT+IDCT with parallel IDCT
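For comparison, the DC-only path being weighed here is very cheap: when only the DC coefficient is nonzero, the whole 4x4 inverse transform collapses to adding one rounded value to every pixel. An illustrative scalar sketch (not the exact h264 dsp function):

    #include <stddef.h>
    #include <stdint.h>

    static inline uint8_t clip_u8(int v)
    {
        return v < 0 ? 0 : v > 255 ? 255 : v;
    }

    /* DC-only reconstruction for a 4x4 block: add the same rounded DC
     * value to every prediction pixel, with clipping. */
    static void idct4x4_dc_add(uint8_t *dst, ptrdiff_t stride, int dc_coef)
    {
        int dc = (dc_coef + 32) >> 6;   /* same final rounding as the full IDCT */
        for (int y = 0; y < 4; y++, dst += stride)
            for (int x = 0; x < 4; x++)
                dst[x] = clip_u8(dst[x] + dc);
    }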
<kurosu>
yeah, it's a long time ago...
<kurosu>
7 years for you
Marth64 has joined #ffmpeg-devel
<kurosu>
And the code likely predates AVX2 availability by several years
<asivery>
Hello. I have a quick question regarding submitting FATE tests and samples. The FFmpeg developer documentation says that such requests need to be submitted to 'samples-request'. Is that a mailing list? I can't find any mention of it anywhere.
<jdarnley>
Is this the original question? 13:24 <courmisch> is h264_add8x4_idct_sse2 doing 2 DCTs in parallel?
<JEEB>
asivery: it's an email address @ffmpeg.org. If you're doing this for the first time, it should happen after your patch for the actual test seems to have received a positive response
<kurosu>
jdarnley: indeed, the one I got wrong about this code being for mbaff (I only looked at the function name and thought it was called from the outside)
<kurosu>
"are block elements above the nonzero count supposed to be zero or do they need to be masked?" <- for this one, I found the commits from Ronald that ensure the block is all zeroes after idct (so that it is reused for the next block)
<asivery>
JEEB: I see, so I first need to submit the actual FATE patch to ffmpeg-devel? I submitted two other patches before, but none of them required additional tests. This time it was requested of me to send a sample, but I can't seem to find a clear answer in any docs on what exactly I should do in order to accomplish that. If you don't mind me asking, what exactly should the FATE test consist of? Is it enough to just add the test into the appropriate
<asivery>
tests/fate/*.mak file?
<jdarnley>
I might not remember it well enough but I think you are right.
<JEEB>
asivery: add required modifications to relevant Makefile etc to have `make fate-rsync SAMPLES=/path/to/fate-suite` and `make fate SAMPLES=/path/to/fate-suite` still pass (with the new sample file added into SAMPLES locally)
<jdarnley>
I definitely recall wider registers being almost useless because of how little data there is
<jdarnley>
xmm has uses, ymm less so, but now zmm is so spacious
feiwan1 has quit [Ping timeout: 246 seconds]
sgm has left #ffmpeg-devel [#ffmpeg-devel]
feiwan1 has joined #ffmpeg-devel
<asivery>
JEEB: That's done. Should the FATE patch be submitted together with the actual patch that it tests, or should they be two separate emails to ffmpeg-devel?
<JEEB>
or wait this is not a good example since this just updates existing test results :D
<JEEB>
so yea, maybe for a new test separate?
witchymary has quit [Remote host closed the connection]
<asivery>
Ok, in that case I'll just send two emails - one the actual patch, and the other the FATE test, which will link to the sample files. Would that work?
witchymary has joined #ffmpeg-devel
<JEEB>
if threaded correctly it should show up as a patch set? not sure what tooling you're utilizing, `git send-email` handles this automagically
<asivery>
Unfortunately I can't utilize `git send-email`. I'll create a patch set manually. Thank you for your help and patience
Traneptora has joined #ffmpeg-devel
<Traneptora>
oof, forgot to open my client
<Traneptora>
so I have no idea if you responded, so I'll ask the question again
<Traneptora>
mkver: regarding the mDCv typo patch, files created by ffmpeg before the patch will have incorrect mDVc tags instead. Do we wish to continue to honor those for compat or should we ignore them?
<mkver>
Traneptora: First priority is to no longer create invalid files.
<Traneptora>
aight, so should I just merge the patch?
<mkver>
Yes!
<Traneptora>
got it
<Traneptora>
wanted a LGTM
<cone-805>
ffmpeg Leo Izen master:c1af34c25bd4: avcodec/pngdec: fix mDCv typo
<cone-805>
ffmpeg Leo Izen master:d69e52252320: avcodec/pngenc: fix mDCv typo
<cone-805>
ffmpeg Leo Izen release/7.0:5ce0c378966f: avcodec/pngdec: fix mDCv typo
<cone-805>
ffmpeg Leo Izen release/7.0:89a85efbf1f7: avcodec/pngenc: fix mDCv typo
<JEEB>
I am not sure we need to add compat stuff since mastering display colour volume is not required to interpret the image correctly. it helps with tone and gamut mapping to know that the mastering display could only show X amount of stuff, but it's not fatal
<JEEB>
also Traneptora did you notice there were some reference samples posted earlier? might make sense to add those as FATE tests if we don't have such yet
<Traneptora>
JEEB: I can in the next few days
<Traneptora>
cause $dayjob
System_Error has quit [Remote host closed the connection]
<mkver>
JEEB, Traneptora: We can create samples via our encoder, so no need for new reference samples.
<Traneptora>
we need to encode a stream with that metadata set. can this be done with ffmpeg.c?
<JEEB>
yea
<JEEB>
mkver: the primary idea of reference samples was mostly to verify that our code indeed parses them correctly. if they had been found earlier they could have helped catch this bug earlier. but yes, after validation by means of such samples, self-created things for the actual test are a-OK I think
System_Error has joined #ffmpeg-devel
<Traneptora>
JEEB: how? I didn't find the option by grepping `man ffmpeg-all`
<JEEB>
Traneptora: what do you mean? st2086 metadata gets decoded from input and passed onto the decoded AVFrame. that is then passed into both the encoder avctx as well as the AVFrames
<JEEB>
so if you decode one of the samples I utilized for the st2086 metadata encoder passthrough stuff
<JEEB>
which is already in the FATE suite
<Traneptora>
ah so we have samples already with the metadata, got it
<Traneptora>
that makes sense, I was thinking we'd be generating it on the fly with a CLI switch, but now I see what you mean
<JEEB>
yea, we lack a nice generic string<->side data interface
<JEEB>
so adding new side data isn't that simple atm
<JEEB>
(from cli)
<JEEB>
I think the display matrix stuff is a special case where I added a new option to ffmpeg.c for it. but it should really be more generic.
<JEEB>
but in any case, yes. there are files in FATE suite which already have the metadata
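For the record, the metadata travels as frame side data, which is why no CLI switch is needed: decoding a FATE sample that carries it is enough for the encoder to see it on the incoming AVFrame. A small sketch of checking for it (the side data API and struct are real libavutil interfaces; the helper itself is just illustrative):

    #include <libavutil/frame.h>
    #include <libavutil/mastering_display_metadata.h>

    /* Returns 1 if the decoded frame carries ST 2086 mastering display
     * metadata (the payload the PNG mDCv chunk maps to), 0 otherwise. */
    static int frame_has_mdcv(const AVFrame *frame)
    {
        const AVFrameSideData *sd =
            av_frame_get_side_data(frame, AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
        if (!sd)
            return 0;
        const AVMasteringDisplayMetadata *mdm =
            (const AVMasteringDisplayMetadata *)sd->data;
        return mdm->has_primaries || mdm->has_luminance;
    }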
<cone-805>
ffmpeg Rémi Denis-Courmont master:78e1565f8475: lavc/vc1dsp: fuse multiply-adds in R-V V inv_trans_4
<cone-805>
ffmpeg Rémi Denis-Courmont master:4a2de380b7c2: lavc/vc1dsp: fuse multiply-adds in R-V V inv_trans_8
<haasn>
:')
<courmisch>
easy problem to solve: don't standardise
<courmisch>
think about all the business opportunities lost because there are barely two competing tracks for video codec standards
<courmisch>
compared to the glorious nineties
ramiro has quit [Ping timeout: 256 seconds]
ramiro has joined #ffmpeg-devel
<BBB>
haasn: from my perspective, I'm really only knowledgeable/interested in the native backend, not so much the gpu or external-lib ones
<BBB>
not sure how others feel about it. to each their own, I guess
unlord has quit [Changing host]
unlord has joined #ffmpeg-devel
Krowl has quit [Read error: Connection reset by peer]
<BBB>
I can see how gpu is interesting for some users, I just don't know anything about it, sadly
asivery has quit [Quit: Leaving]
<haasn>
I'm also hesitant to allow auto-choosing e.g. libplacebo when users don't specifically request it
<haasn>
because libplacebo has quite particular semantics and if we just add automagic support for it, an invocation like -vf format=yuv420p,some_gpu_filter might end up auto-inserting vf_scale to convert from yuv420p to whatever hwfmt, which would resolve via the libplacebo backend, which would force full conversion to RGB internally
<haasn>
rather than doing what would be more sane and intended in this case, and simply uploading
<haasn>
I'm also thinking that some of what I said backends are useful for would overlap strongly with the "kernels" we would have to end up defining inside avscale anyway
<haasn>
e.g. we could make a dedicated backend for pure endianness conversions, but at the same time, endianness conversions should also be an input/output kernel
<haasn>
I'm also starting to think that it may be a mistake to try and initially implement avscale.h using swscale internally; because it's incredibly restricting on the avscale API side - I don't want to recreate the swscale API exactly
<haasn>
and this effort may be better spent just focusing on designing the avscale pipeline to begin with
<haasn>
that said, it would be incredibly cheap and easy to get zimg support with the design I have currently
<haasn>
and not allowing users to at least benefit from that seems like a loss; especially since zimg is, from what I've seen/heard, much faster than status quo swscale
<haasn>
I'll have to think carefully about this
<haasn>
maybe the middle ground here is to support different backends internally but don't expose them to the user at all
<haasn>
that way the API isn't burdened by their existence, and we can roll them out whenever we're ready to replace swscale fully
<haasn>
we can also use zscale for any currently unsupported operations and aim to replace them by native implementations over time
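One way to picture the internal-backends-only middle ground: a private vtable that the core probes in preference order, with nothing backend-specific ever surfacing in the public API. Purely a hypothetical sketch of the shape of the idea; none of these names exist in FFmpeg:

    #include <stddef.h>

    struct ScaleJob;                                  /* opaque conversion description */

    /* Hypothetical private backend vtable for an avscale-like library; the
     * core walks the list and picks the first backend that can do the job. */
    typedef struct ScaleBackend {
        const char *name;                             /* for debug logging only */
        int (*supports)(const struct ScaleJob *job);  /* can this backend do it? */
        int (*run)(const struct ScaleJob *job);       /* perform the conversion */
    } ScaleBackend;

    static int run_job(const ScaleBackend *const *backends, size_t nb_backends,
                       const struct ScaleJob *job)
    {
        for (size_t i = 0; i < nb_backends; i++)
            if (backends[i]->supports(job))
                return backends[i]->run(job);
        return -1;                                    /* nothing could handle it */
    }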
j45_ has joined #ffmpeg-devel
<BBB>
as long as it's debuggable/configurable, I guess
j45 has quit [Ping timeout: 260 seconds]
j45_ is now known as j45
j45 has quit [Changing host]
j45 has joined #ffmpeg-devel
ramiro has quit [Ping timeout: 240 seconds]
ramiro has joined #ffmpeg-devel
Krowl has joined #ffmpeg-devel
rvalue has quit [Read error: Connection reset by peer]
rvalue has joined #ffmpeg-devel
MetaNova has quit [Quit: quit]
MetaNova has joined #ffmpeg-devel
AbleBacon has joined #ffmpeg-devel
System_Error has quit [Remote host closed the connection]
* Sean_McG
peeks in
feiwan1 has quit [Ping timeout: 240 seconds]
feiwan1 has joined #ffmpeg-devel
LaserEyess has quit [Quit: fugg]
LaserEyess has joined #ffmpeg-devel
LaserEyess has joined #ffmpeg-devel
LaserEyess has quit [Changing host]
mkver has quit [Ping timeout: 264 seconds]
Krowl has quit [Read error: Connection reset by peer]
Krowl has joined #ffmpeg-devel
* Marth64
waves
<Sean_McG>
hi Marth
System_Error has joined #ffmpeg-devel
cone-805 has quit [Quit: transmission timeout]
<Sean_McG>
I'm not sure what I want to do about the fact that I can't 'git send-email' from my workstation anymore because GMail does not support application passwords. I really don't want to have to switch addresses.
<Sean_McG>
oh, and also who can help unlock my Patchwork account?
<Sean_McG>
I guess I can send the patches as attachments for now, but I don't like doing that
mkver has joined #ffmpeg-devel
Krowl has quit [Read error: Connection reset by peer]
<JEEB>
Sean_McG: I think the application passwords still work?
<Sean_McG>
only if I "un-secure" my account, which I also don't want to do
<JEEB>
whatever that means, since I still have 2fa etc on the usual side
<JEEB>
that said, I would not be surprised if they now try to talk you out of adding such