michaelni changed the topic of #ffmpeg-devel to: Welcome to the FFmpeg development channel | Questions about using FFmpeg or developing with libav* libs should be asked in #ffmpeg | This channel is publicly logged | FFmpeg 7.1 has been released! | Please read ffmpeg.org/developer.html#Code-of-conduct
Marth64 has quit [Quit: Leaving]
<fflogger>
[newticket] bastimeyer: Ticket #11425 ([avformat] 5813e5a (fix for hevc regression from a696b28) has not been backported to 7.1) created https://trac.ffmpeg.org/ticket/11425
Mirarora has joined #ffmpeg-devel
witchymary has joined #ffmpeg-devel
Mirarora has quit [Remote host closed the connection]
mkver has quit [Ping timeout: 248 seconds]
Mirarora has joined #ffmpeg-devel
cone-948 has joined #ffmpeg-devel
<cone-948>
ffmpeg James Almer release/7.1:aeb863104803: avformat/hevc: fix writing hvcC when no arrays are provided in hvcC-formatted input
<fflogger>
[editedticket] jamrial: Ticket #11425 ([avformat] 5813e5a (fix for hevc regression from a696b28) has not been backported to 7.1) updated https://trac.ffmpeg.org/ticket/11425#comment:1
witchymary has quit [Remote host closed the connection]
witchymary has joined #ffmpeg-devel
thilo has quit [Ping timeout: 252 seconds]
thilo has joined #ffmpeg-devel
aaabbb has quit [Changing host]
aaabbb has joined #ffmpeg-devel
<Compn>
BtbN, well there you go. set up Forgejo and let's goooo
<Compn>
then we test it and figure out if it works/sucks later
<kierank>
22:45:06 <steven-netint> Though, how do you guys feel about HW vendors which only fork ffmpeg but do not upstream their codecs/filters? I'm not too familiar, but I thought MA35D is on an FFmpeg-n4.0 fork?
<kierank>
Upstream
<kierank>
I've told this to Alex before but he's not at netint any more...
<Lynne>
steven-netint: does it use our standard hwcontext framework?
<Lynne>
oh, it does, hwcontext_ni_logan/quad
jamrial has quit []
<Lynne>
looks fine tbh, with some coding style changes I don't see why not
Marth64 has joined #ffmpeg-devel
^Neo has quit [Ping timeout: 245 seconds]
<steven-netint>
kierank: we are upstreaming it now. Tho there are a lot of styling things to clean up in the code first
cone-948 has quit [Quit: transmission timeout]
<steven-netint>
Lynne: thanks for taking an initial look! I'm still working on moving the custom code netint put into the common FFmpeg framework out into netint-specific files. Hoping to have the first patch draft submitted to the mailing list in a few months
<Lynne>
if there are a lot of netint files, consider moving them to libav<component>/netint
<Lynne>
we started doing this with codecs some time ago
<steven-netint>
ok, that's probably for the best. There truly are a lot of files built up from years of feature creep
<steven-netint>
*a lot of netint files
<llyyr>
I saw this on the ML: "having the ability to git send mail to any new thing would be nice." and just wanna add that it won't work with any of the git forge solutions, except sourcehut (which is mostly abandoned these days)
<Lynne>
steven-netint: what is p2p used for, and why does it need to be a filter?
<Lynne>
since the devices are attached via pcie, they should be able to p2p and dma with zero involvement
<steven-netint>
let me check that p2p filter. I know we have a separate P2P app for grabbing from a GPU display buffer. But need to check what the filter is for
<Lynne>
oof, vf_yuv420to444_ni and vf_yuv444to420_ni really should be merged into vf_scale_ni
<steven-netint>
one of the quirks with netint is that the HW interfaces via NVMe, the storage protocol. So it does not use direct PCIe access in the kernel
<Lynne>
correction; those 2 filters should not exist
<steven-netint>
Lynne: ahh, there's a lot of feature creep which I would not upstream, such as those 2 filters you just mentioned
<Lynne>
you can already do what those filters do, though
<Lynne>
-vf hwmap,scale,hwmap back to ni
<steven-netint>
some customer wanted to transport yuv444 but our HW only supports yuv420. Thus the two filters to help with turning 1xyuv444->2xyuv420 and back again
<Lynne>
they're software filters
<steven-netint>
it's jank. Not gonna upstream that
<Lynne>
yeah
<Lynne>
wow, that's jank
<steven-netint>
there's also a filter in there for viewing the decoded output in SDL2 at the filtergraph stage. Pretty jank feature request too
<steven-netint>
Lynne: I believe that netint 'p2pxfer' filter is probably to allow users to control shunting a HW frame from one HW device to another. Probably too nitty-gritty for most people to want to use
<steven-netint>
Most of the code in the netint github probably won't be upstreamed for being too niche
<steven-netint>
but, if you see anything you want, let me know. I will try to push small features before the big patch with all the codecs
<steven-netint>
stuff like 'making the fps display a moving average instead of a global average'
<steven-netint>
there's a text file in the netint github if you want to make a shopping list of non-netint-specific features to request haha
<steven-netint>
*text file at the root level
<Lynne>
preferably none of them
<Lynne>
ni_av1_tile_repack_bsf: can't the library do this?
<steven-netint>
I think it was made during the FFmpeg-n4.3 days. I'm not sure if libav had existing functionality for this back then
<steven-netint>
I think some advanced features such as p2p, multi-tile, and even HW frames do not have to be in the first patch for upstreaming. The initial upstream should just be the basic decode/filters/encode
<Lynne>
you should consider porting encode to hw_base_encode.c
<Lynne>
it will cut down on the code significantly
Martchus_ has joined #ffmpeg-devel
Martchus has quit [Ping timeout: 245 seconds]
<steven-netint>
Thanks for the tip, I'll look into it
<steven-netint>
the ni codecs started as something similar to a SW codec which does not use hwaccel or hwframes. Now, after 8 years, it's time for a reckoning :/
rvalue has quit [Ping timeout: 245 seconds]
System_Error has quit [Remote host closed the connection]
System_Error has joined #ffmpeg-devel
rvalue has joined #ffmpeg-devel
rvalue has quit [Ping timeout: 252 seconds]
System_Error has quit [Remote host closed the connection]
<BtbN>
Putting ultra-generic defines into the header
<BtbN>
if(ENABLE_ALPHA) set(ENABLE_ALPHA 1) endif() is also kinda wtf
^Neo has joined #ffmpeg-devel
^Neo has quit [Changing host]
^Neo has joined #ffmpeg-devel
ccawley2011 has joined #ffmpeg-devel
<ePirat>
BtbN, is it so they have a different solution just for the sake of it not being your code?
jamrial has joined #ffmpeg-devel
<Traneptora>
ePirat: no, it doesn't fix the problem, which is that x265.h (public header) defines ENABLE_ALPHA
<Traneptora>
they moved the define from x265.h to x265_config.h, but the former #includes the latter
<Traneptora>
so it doesn't really solve the issue of public namespace pollution
<ePirat>
yeah I meant more why they even tried to do it differently in the first place
<Traneptora>
x265 works in strange and mysterious ways
<BtbN>
That's not the problem
<BtbN>
the problem was that the public header did NOT define ENABLE_ALPHA, but depended on it
<BtbN>
so yes, they kinda fixed the problem, but in the worst possible way, while making weird decisions on the way
<Traneptora>
well, their fix involved defining it
<Traneptora>
so now they have another problem, which is that they just depend on it
<Traneptora>
and pollute the namespace
<BtbN>
No idea if they just missed my fix or what
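For readers skimming the log, a hedged illustration of the failure mode being discussed (hypothetical header and struct names, not the actual x265 code): a public struct whose layout hangs off a build-time macro, plus a "fix" that defines that very generic macro in a config header every consumer ends up including.

/* example_config.h -- generated at library build time (illustrative only) */
#define ENABLE_ALPHA 1              /* very generic name, now visible to every consumer */

/* example.h -- the public header, which includes the config header */
#include "example_config.h"

typedef struct example_picture {
    void *planes[3];
#if ENABLE_ALPHA                    /* struct layout depends on the macro: if the
                                       installed library and the application ever
                                       disagree on its value, the ABI silently breaks */
    void *alpha_plane;
#endif
} example_picture;

/* consumer.c -- an application that already used the same generic name */
#define ENABLE_ALPHA 0              /* its own feature toggle...              */
#include "example.h"                /* ...now collides with the library's
                                       definition: a redefinition warning plus
                                       namespace pollution                     */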
Marth64 has quit [Quit: Leaving]
ngaullier has joined #ffmpeg-devel
Marth64 has joined #ffmpeg-devel
ngaullier has quit [Ping timeout: 264 seconds]
ccawley2011 has quit [Ping timeout: 244 seconds]
ngaullier has joined #ffmpeg-devel
System_Error has quit [Ping timeout: 264 seconds]
System_Error has joined #ffmpeg-devel
ccawley2011 has joined #ffmpeg-devel
<IndecisiveTurtle>
Lynne: Heya, a small question: is it possible to get a 3-plane frame with each channel being 32-bit? I refactored the code entirely to match what ffv1enc does (so it uses av_frame_alloc and av_hwframe_get_buffer), which works nicely, but the image is grey because with the 32-bit format it doesn't have planes 1/2, which makes sense.
<IndecisiveTurtle>
I thought about using image2DArray as I can pass nb_layers, but the current API doesn't seem to make 2D_ARRAY views at all. Should I just allocate 3 separate frames, or is there a better way?
Marth64 has quit [Quit: Leaving]
<Lynne>
IndecisiveTurtle: yeah, just copy what get_supported_rgb_buffer_fmt does
<Lynne>
you'll probably want to use AV_PIX_FMT_RGBA128
<Lynne>
ah... I see what you mean
<Lynne>
you want GRAY32, right?
<IndecisiveTurtle>
Not just one plane, but 3 planes of 32 bits each
<IndecisiveTurtle>
GRAY32 seems to only create 1 plane with 32 bits
<haasn>
nice
<IndecisiveTurtle>
RGBA128 gives me rgba32ui which has the space, but it's a single image and I can't write to one channel without overwriting the rest
<haasn>
my new swscale code outperforms swscale's special converter by a factor of 3 for gbrp->rgba
<haasn>
without any special cases except a single macro invocation to generate a compiled function for this path
<Lynne>
IndecisiveTurtle: you probably want to use GRAY32
<haasn>
the special case consists of only three function calls
<Lynne>
just call get_frame 3 times, and you get 3 images
<haasn>
read_planar8(&tmp, in, 3); swizzle(&tmp, SWS_FROM_GBRA); write_packed8(&tmp, out, 4);
<haasn>
but clang inlines & optimizes this down to a single SIMD loop
<haasn>
because all functions are written in a way that enables efficient fusion by the compiler
<haasn>
overall I very much like this approach
<haasn>
1. write a library of efficient, fusable primitives, 2. use macros to generate specialized functions for the most important cases, 3. fall back to multiple function calls for obscure / slow paths
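A hedged, self-contained sketch of that primitives-plus-macros idea (the names echo the calls quoted above, but the signatures, per-pixel structure and SwsTmp layout are guesses for illustration, not the actual new swscale code, which presumably works on whole lines):

#include <stddef.h>
#include <stdint.h>

typedef struct SwsTmp { uint8_t px[4]; } SwsTmp;
typedef enum { SWS_FROM_GBRA } SwsSwizzleOp;

/* primitive 1: gather one sample from each input plane */
static inline void read_planar8(SwsTmp *t, const uint8_t *const in[],
                                size_t i, int nb_planes)
{
    for (int p = 0; p < nb_planes; p++)
        t->px[p] = in[p][i];
}

/* primitive 2: reorder channels (planar GBR(A) -> packed RGBA order) */
static inline void swizzle(SwsTmp *t, SwsSwizzleOp op)
{
    if (op == SWS_FROM_GBRA) {
        SwsTmp s = *t;
        t->px[0] = s.px[2];
        t->px[1] = s.px[0];
        t->px[2] = s.px[1];
        t->px[3] = s.px[3];
    }
}

/* primitive 3: scatter into the packed output */
static inline void write_packed8(const SwsTmp *t, uint8_t *out,
                                 size_t i, int nb_ch)
{
    for (int c = 0; c < nb_ch; c++)
        out[i * nb_ch + c] = t->px[c];
}

/* one macro invocation stamps out a specialized converter; because every
 * primitive is a small static inline function, the compiler fuses the
 * three calls into a single (vectorizable) loop body */
#define DEF_CONVERTER(name, nb_planes, nb_ch, op)                         \
static void name(const uint8_t *const in[], uint8_t *out, size_t width)  \
{                                                                         \
    for (size_t i = 0; i < width; i++) {                                  \
        SwsTmp t = { { 0, 0, 0, 0xFF } };                                 \
        read_planar8(&t, in, i, nb_planes);                               \
        swizzle(&t, op);                                                  \
        write_packed8(&t, out, i, nb_ch);                                 \
    }                                                                     \
}

DEF_CONVERTER(gbrp_to_rgba, 3, 4, SWS_FROM_GBRA)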
elenril has joined #ffmpeg-devel
elenril has left #ffmpeg-devel [WeeChat 3.8]
<haasn>
GCC optimizes it slightly worse than clang, so only 2x speedup there
<haasn>
I guess this is why we will still want hand-written assembly in the end
<haasn>
sadly the swscale rgb24 -> rgba special converter is still faster, let's see if I can fix that case
<haasn>
if anything, we can still preserve a dedicated hot path for cases where the new generic code ends up slower
<haasn>
but I want to try and make the generic code fast enough to not need those hot paths in the first place
<Lynne>
IndecisiveTurtle: give me a moment, I'll send a patch on the ml
cone-595 has joined #ffmpeg-devel
<cone-595>
ffmpeg Lynne master:5c59e6ce1911: vulkan: enable using .elems field for buffer content definitions
<cone-595>
ffmpeg Lynne master:7187eadf8c0f: ffv1dec: use dedicated pix_fmt field and call ff_get_format
<cone-595>
ffmpeg Lynne master:d987feae2a48: ffv1dec: move slice start finding into a function
<cone-595>
ffmpeg Lynne master:f75812e054c3: ffv1dec: move header parsing into a separate function
<cone-595>
ffmpeg Lynne master:5e4a510cce97: ffv1dec: move slice decoding into a separate function
<Lynne>
in general you want to use gray32, since when you have subsampled images, anything full-resolution would be quite wasteful at high resolutions
<Lynne>
(you'll want to use a single pool per plane in that case)
<Lynne>
you can use rgba128 too if you want to, you'll just need to load, modify, and store
<Lynne>
these days there's no texel cache; storage images, sampled images, and buffers all use the same cache path
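(Rough numbers for the waste being described, assuming 3840×2160 4:2:0 content: a single RGBA128 surface is 16 bytes per full-resolution pixel, about 133 MB per frame, while three GRAY32 planes at native subsampling come to roughly 33 MB + 2 × 8 MB, about 50 MB, which is why one pool per plane pays off.)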
<IndecisiveTurtle>
So create a pool per plane to not waste storage by allocating larger images for planes? Got it
<IndecisiveTurtle>
The only thing is, the images will have to be separate bindings now, so I will have to make a small switch statement to access them in the shader.
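A hedged sketch of what that pool-per-plane setup could look like (a hypothetical helper, not the actual ffv1 Vulkan decoder; AV_PIX_FMT_GRAY32 is assumed to be the 32-bit gray format discussed above, and freeing already-allocated planes on error is left to the caller):

#include <libavutil/buffer.h>
#include <libavutil/error.h>
#include <libavutil/frame.h>
#include <libavutil/hwcontext.h>
#include <libavutil/pixfmt.h>

/* Allocate one Vulkan hw frame per plane, each from its own frames context
 * sized to that plane's (possibly subsampled) dimensions. */
static int alloc_plane_frames(AVBufferRef *dev_ref, AVFrame *planes[3],
                              int width, int height, int chroma_shift)
{
    for (int i = 0; i < 3; i++) {
        int w = i ? width  >> chroma_shift : width;   /* simplified: same shift */
        int h = i ? height >> chroma_shift : height;  /* for both dimensions    */
        int ret;

        AVBufferRef *frames_ref = av_hwframe_ctx_alloc(dev_ref);
        if (!frames_ref)
            return AVERROR(ENOMEM);

        AVHWFramesContext *fc = (AVHWFramesContext *)frames_ref->data;
        fc->format    = AV_PIX_FMT_VULKAN;
        fc->sw_format = AV_PIX_FMT_GRAY32;  /* assumed name of the 32-bit gray format */
        fc->width     = w;
        fc->height    = h;

        ret = av_hwframe_ctx_init(frames_ref);
        if (ret < 0) {
            av_buffer_unref(&frames_ref);
            return ret;
        }

        planes[i] = av_frame_alloc();
        if (!planes[i]) {
            av_buffer_unref(&frames_ref);
            return AVERROR(ENOMEM);
        }

        ret = av_hwframe_get_buffer(frames_ref, planes[i], 0);
        av_buffer_unref(&frames_ref);   /* the frame keeps its own reference */
        if (ret < 0)
            return ret;
    }
    return 0;
}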
System_Error has quit [Remote host closed the connection]
Everything has joined #ffmpeg-devel
System_Error has joined #ffmpeg-devel
Marth64 has quit [Quit: Leaving]
<BtbN>
https://code.ffmpeg.org exists now, but I have not yet figured out how to properly do registration. Right now I think I have to manually approve every new user. Pretty sure if I don't do that, it'll get spammed hard
<BtbN>
there is zero synchronisation, I just pushed a mirror of the FFmpeg repo there, so feel free to break it
Marth64 has joined #ffmpeg-devel
cone-548 has joined #ffmpeg-devel
<cone-548>
ffmpeg Michael Niedermayer master:c0769e9213fc: libavutil/pixfmt: 16bit float support
<cone-548>
ffmpeg Michael Niedermayer master:0c237d6e8a35: avcodec/ffv1: simplify version checks with combined_version
<cone-548>
ffmpeg Michael Niedermayer master:665b0cf3bffa: swscale: 16bit planar float input support
<cone-548>
ffmpeg Michael Niedermayer master:497b205ad56d: avcodec/ffv1enc: dont reset version
<cone-548>
ffmpeg Michael Niedermayer master:62c98cdd549a: avcodec/ffv1enc: Fix RCT for GBR colorspace
<cone-548>
ffmpeg Michael Niedermayer master:54897da7ce8a: avformat/mpegts: Add standard extension so hls can check in extension_picky mode
<cone-548>
ffmpeg Michael Niedermayer master:c43dbecbdad1: avformat/vqf: Check avio_read() in add_metadata()
<cone-548>
ffmpeg Michael Niedermayer master:49fa3f6c5ba6: avformat/vqf: Propagate errors from add_metadata()
<cone-548>
ffmpeg Michael Niedermayer master:e81d410242ea: avcodec/cbs_vp9: Initialize VP9RawSuperframeIndex
<cone-548>
ffmpeg Michael Niedermayer master:17b019c517af: avformat/wtvdec: Initialize buf
<cone-548>
ffmpeg Michael Niedermayer master:788abe0d253b: avformat/ipmovie: Check signature_buffer read
<cone-548>
ffmpeg Michael Niedermayer master:aec2933344b2: avformat/iamf_reader: Initialize padding and check read in ff_iamf_read_packet()
<cone-548>
ffmpeg Michael Niedermayer master:ef71552cf970: avcodec/huffyuvdec: Initialize whole output for decode_gray_bitstream()
<cone-548>
ffmpeg Michael Niedermayer master:90ff3ae9769d: tools/target_swr_fuzzer: do not use negative numbers of samples
<cone-548>
ffmpeg Michael Niedermayer master:6ecc96f4d08d: avformat/mxfdec: Check avio_read() success in mxf_decrypt_triplet()