michaelni changed the topic of #ffmpeg-devel to: Welcome to the FFmpeg development channel | Questions about using FFmpeg or developing with libav* libs should be asked in #ffmpeg | This channel is publicly logged | FFmpeg 7.1 has been released! | Please read ffmpeg.org/developer.html#Code-of-conduct
<jamrial>
BtbN: that file seems to be missing the alpha layer sps/pps pair
<BtbN>
Hmmm, does nvenc expect you to just generate that yourself?
<jamrial>
no idea, but it's needed according to that pdf above, and present in the mov JEEB shared
<BtbN>
It has no controls beyond enabling a binary toggle and feeding it data with an alpha channel
<jamrial>
also, your sample has "Scalability type 3" which our decoder can't currently handle
<Traneptora>
ye, it reports an error
<Traneptora>
wasn't sure if that was due to the alpha or not
<Traneptora>
in either case, maybe our nvenc wrapper could add the sps/pps pair?
<BtbN>
I'd rather not start generating bitstream in the nvenc wrapper
<BtbN>
Also, there already is one, produced by nvenc
<jamrial>
the sample has the slice NALUs for the alpha layer, but the sps/pps pair are missing
<jamrial>
maybe they were not added to extradata?
<BtbN>
How would that even work? Just append them to the extradata?
<jamrial>
somewhere nuh_layer_id > 0 may be getting filtered
<jamrial>
does nvEncGetSequenceParams() not return them?
<BtbN>
if they're missing, it must not
<jamrial>
BtbN: what if you request spsId = 1 and ppsId = 1 in a separate call?
<BtbN>
My understanding of those parameters is that it tells it what to write into the header, not which header to grab
<jamrial>
well, it at least generated alpha layer slices with slice_pic_parameter_set_id = 1
<BtbN>
Would I just append the two buffers?
<jamrial>
yeah
<jamrial>
bah, depends, does it output hvcC?
<jamrial>
or raw annex-b?
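For context on the hvcC-vs-annex-b question: a common heuristic (FFmpeg uses a similar check internally) is that hvcC-style extradata begins with a configurationVersion byte equal to 1, while raw annex-b extradata begins with a 00 00 01 or 00 00 00 01 start code. A minimal sketch of that check; the function name is illustrative, not an FFmpeg API:

```c
#include <stddef.h>
#include <stdint.h>

/* Heuristic: raw annex-b extradata starts with a 00 00 01 or
 * 00 00 00 01 start code, while hvcC extradata starts with a
 * configurationVersion byte of 1. Returns 1 for annex-b, 0 otherwise. */
static int extradata_is_annexb(const uint8_t *data, size_t size)
{
    if (size < 4)
        return 0;
    if (data[0] == 0 && data[1] == 0 &&
        (data[2] == 1 || (data[2] == 0 && data[3] == 1)))
        return 1;
    return 0;
}
```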
<BtbN>
It has a flag I could set, NV_ENC_PIC_FLAG_OUTPUT_SPSPPS
<BtbN>
"Write the sequence and picture header in encoded bitstream of the current picture"
<jamrial>
that will output the sps/pps with the next output picture
<BtbN>
also a "bool outputAnnexBFormat", but only for AV1
<BtbN>
I can't find any concere info on what it outputs
witchymary has quit [Remote host closed the connection]
<jamrial>
BtbN: looks like raw annex-b, so you can append the two buffers
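Since both pairs are raw annex-b, "append the two buffers" really is just a concatenation into one extradata blob. A minimal sketch, assuming both inputs already carry their own start codes; in FFmpeg proper one would use av_malloc() and reserve AV_INPUT_BUFFER_PADDING_SIZE, which this standalone version omits:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Concatenate two raw annex-b NAL buffers (e.g. the base-layer SPS/PPS
 * and the alpha-layer SPS/PPS) into one newly allocated buffer.
 * Returns NULL on allocation failure; *out_size gets the total size. */
static uint8_t *append_annexb(const uint8_t *a, size_t a_size,
                              const uint8_t *b, size_t b_size,
                              size_t *out_size)
{
    uint8_t *buf = malloc(a_size + b_size);
    if (!buf)
        return NULL;
    memcpy(buf, a, a_size);
    memcpy(buf + a_size, b, b_size);
    *out_size = a_size + b_size;
    return buf;
}
```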
<BtbN>
If that does not work, the only other possibility is that it only works with inline SPS/PPS
<BtbN>
Which would be highly annoying
<Traneptora>
haasn: seem to be getting some libplacebo crashes on windows
<Traneptora>
with ffmpeg's libplacebo filter
<Traneptora>
it gets to: [libplacebo @ 0x5b52e8bbfa00] No VkInstance provided, creating one...
<Traneptora>
and then crashes. it's windows so I'm not really sure how to get a log or anything
<jamrial>
BtbN: i tried to set payload.spsId and payload.ppsId to 1 with a normal encoding and it did not error out. the output id in the bitstream for both was 0
<jamrial>
so it either does nothing, or will need extra precautions for non alpha encodings
<beastd>
wbs: there is a single problem left with fate-list-failing. When we call fate-list-failing with no .rep files present, it gives error output about not being able to open the glob pattern. It was there in the first version as well, but I didn't catch it before. IMHO it's more of a cosmetic issue; it could only be problematic for programmatic use of the target.
<wbs>
beastd: yep; would be nice if it would be possible to smooth over somehow
<Lynne>
are there still not enough TC volunteers to make it to 5?
<Lynne>
I volunteer in this case
<beastd>
wbs: thinking about a simple solution... will test it and reply later after dayjob
<Lynne>
(this is your cue to volunteer yourself)
<beastd>
Lynne: I intend to volunteer for TCC
<beastd>
*TC
Chagall has quit [Ping timeout: 260 seconds]
<pross>
will we ever get a multithreaded configure script
<elenril>
how does one multithread in bash
<elenril>
also, configure is substantially faster when pinned to one core
haihao has quit [Ping timeout: 276 seconds]
haihao has joined #ffmpeg-devel
<wbs>
elenril: it's doable to have shell script fork off many child processes; collecting the results of it is a bit more complex though
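The fork-and-collect pattern wbs describes can be sketched in plain POSIX sh: each check runs as a background job writing its result to a file, then a single `wait` synchronizes before the results are gathered. This is a toy illustration; `run_check` and the check names are placeholders, not configure internals:

```shell
#!/bin/sh
# Fork each check into a background job that records its result in a
# file, then wait for all of them and collect the results in order.
run_check() { echo "result-for-$1"; }   # placeholder for a real probe

for t in check_a check_b check_c; do
    run_check "$t" > "$t.res" &
done
wait   # block until every background job has finished

for t in check_a check_b check_c; do
    cat "$t.res"
done
```

The "more complex" part elenril and wbs allude to is exactly this collection step: background jobs cannot set variables in the parent shell, so results have to round-trip through files or pipes.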
<Lynne>
am I misremembering or shouldn't ff_decode_get_hw_frames_ctx call frame_params?
<elenril>
it does
<Lynne>
does it clean up the context afterwards?
<Lynne>
ah, it does not call it with hwaccel_priv_data set even if called within decoder init when hwaccel_priv_data should be allocated
<Lynne>
is this meant to happen, or does hwaccel_priv_data get allocated only if frame_params succeeds when called in the context of decoder init
<elenril>
IIUC hwaccel_priv_data is only allocated later
<Lynne>
does ff_decode_get_hw_frames_ctx call frame_params if frame params was explicitly called beforehand?
<elenril>
strictly speaking it should not matter
<elenril>
but typically if the caller calls avcodec_get_hw_frames_params() manually, they set the resulting frames context on the decoder
<elenril>
so ff_decode_get_hw_frames_ctx() does not get called
<Lynne>
but decoder init calls get_hw_frames_ctx(), which should call frame_params, but I'm not seeing that happen
<Lynne>
(I'm trying to overengineer frame_params being called in the context of decoder init propagating the temporary context onto the main context, because it's just a few lines of code to do)
<elenril>
it only does that if the frames context is not set by the caller
<Traneptora>
the CLI isn't just a frontend to the library?
<BtbN>
jamrial: oh, that format isn't documented. Lovely
<jamrial>
our vuya? or nvidia's ayuv?
<BtbN>
Traneptora: yes, but it's weird
<BtbN>
jamrial: the docs only list the rgb ones, and the weird NV12 one
<jamrial>
ah
<BtbN>
Are you saying it works fine and generates a sensible SPS/PPS?
<BtbN>
Or just "works the same as the other ones"? :D
sunyuechi has quit [Remote host closed the connection]
sunyuechi has joined #ffmpeg-devel
<jamrial>
BtbN: mmh, i get the second sps/pps pair, but no slice nalus? weird
<jamrial>
using ./ffmpeg -lavfi testsrc2,format=argb and ./ffmpeg -lavfi testsrc2,format=vuya here
<BtbN>
Isn't AYUV a 444 format?
<jamrial>
yeah
<BtbN>
The docs say Alpha encoding is only supported in 420 mode.
<jamrial>
yes, it's converted to 420 internally i assume, same as if you feed it rgb
<jamrial>
if i add vuya to the IS_YUV444() list, the wrapper forces 444 and init() fails
<BtbN>
Weird
<BtbN>
Guess it's not fully implemented then
<jamrial>
could be. the sps/pps are generated but then it outputs no slice data for the alpha channel
<BtbN>
Maybe that odd NV12 format is the only format they actually tested?
<JEEB>
would not be surprised
<BtbN>
FFmpeg has nothing alike it though
<BtbN>
so it'd need to be somehow emulated on the fly from yuva420p or something
<jamrial>
yeah, it probably expects something in alphaBuffer
<BtbN>
It expects another entire NV12 buffer there
<BtbN>
with Y containing A, and U/V being memset to 0x80
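Going by BtbN's description of the layout, building that second NV12-style buffer from a yuva420p alpha plane amounts to copying the alpha values into the luma plane and filling the interleaved chroma plane with neutral 0x80. A rough sketch under those assumptions; the helper is hypothetical, and it assumes a tightly packed alpha plane (no stride/linesize handling):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Build an NV12-style buffer whose luma plane carries the alpha values
 * and whose interleaved UV plane is memset to 0x80 (neutral chroma).
 * NV12 layout: w*h luma bytes followed by w*h/2 interleaved chroma
 * bytes. Caller frees the result. */
static uint8_t *alpha_to_nv12(const uint8_t *alpha, int w, int h)
{
    size_t luma   = (size_t)w * h;
    size_t chroma = luma / 2;
    uint8_t *buf  = malloc(luma + chroma);
    if (!buf)
        return NULL;
    memcpy(buf, alpha, luma);          /* Y  <- A              */
    memset(buf + luma, 0x80, chroma);  /* UV <- neutral chroma */
    return buf;
}
```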
witchymary has quit [Remote host closed the connection]
iive has joined #ffmpeg-devel
<Traneptora>
that's wild ngl
<BtbN>
I mean, I _could_ add that as a format. It'd be the only way to implement it. But no way that could get merged.
<BtbN>
Only other alternative would be to add a conversion kernel to nvenc itself, to convert yuv420p or something
<Traneptora>
if it only accepts it in this arcane format and nothing else uses this format for anything that seems more sensible than trying to add it to swscale
<Traneptora>
if it later turns out that it's used, you can always move the conversion to swscale
Marth64 has quit [Quit: Leaving]
Marth64 has joined #ffmpeg-devel
cone-516 has quit [Quit: transmission timeout]
System_Error has quit [Remote host closed the connection]
System_Error has joined #ffmpeg-devel