michaelni changed the topic of #ffmpeg-devel to: Welcome to the FFmpeg development channel | Questions about using FFmpeg or developing with libav* libs should be asked in #ffmpeg | This channel is publicly logged | FFmpeg 6.1 has been released! | Please read ffmpeg.org/developer.html#Code-of-conduct
mkver has quit [Ping timeout: 240 seconds]
iive has quit [Quit: They came for me...]
kurosu has quit [Quit: Connection closed for inactivity]
jamrial has joined #ffmpeg-devel
navi has quit [Quit: WeeChat 4.0.4]
feiw1 has quit [Ping timeout: 240 seconds]
feiw1 has joined #ffmpeg-devel
<cone-420> ffmpeg James Almer master:4fee63b241e0: x86/takdsp: add missing wrappers to AVX2 functions
thilo has quit [Ping timeout: 246 seconds]
thilo has joined #ffmpeg-devel
thilo has quit [Changing host]
thilo has joined #ffmpeg-devel
derpydoo has quit [Quit: derpydoo]
derpydoo has joined #ffmpeg-devel
lemourin has quit [Read error: Connection reset by peer]
lemourin has joined #ffmpeg-devel
jamrial has quit []
jarthur has joined #ffmpeg-devel
cone-420 has quit [Quit: transmission timeout]
AbleBacon has quit [Read error: Connection reset by peer]
\\Mr_C\\ has joined #ffmpeg-devel
epony has quit [Remote host closed the connection]
rvalue has quit [Read error: Connection reset by peer]
rvalue has joined #ffmpeg-devel
jarthur has quit [Quit: jarthur]
feiw1 has quit [Ping timeout: 246 seconds]
feiw1 has joined #ffmpeg-devel
dellas has joined #ffmpeg-devel
dellas has quit [Remote host closed the connection]
<Lynne> ^^we should just drop old yasm by now
<Lynne> it's been 2.5 years since the last time the topic was brought up
Krowl has joined #ffmpeg-devel
tmm1 has quit [Ping timeout: 264 seconds]
tmm1 has joined #ffmpeg-devel
derpydoo has quit [Ping timeout: 256 seconds]
feiw1 has quit [Ping timeout: 260 seconds]
feiw1 has joined #ffmpeg-devel
philipl has quit [Ping timeout: 260 seconds]
philipl has joined #ffmpeg-devel
rvalue has quit [Quit: ZNC - https://znc.in]
rvalue has joined #ffmpeg-devel
feiw1 has quit [Ping timeout: 255 seconds]
feiw2 has joined #ffmpeg-devel
Krowl has quit [Read error: Connection reset by peer]
kurosu has joined #ffmpeg-devel
<motherboard> A bit off-topic, but how is 3D visual data encoded, like the kind that would be needed for VR headsets?
Krowl has joined #ffmpeg-devel
Workl has joined #ffmpeg-devel
Krowl has quit [Ping timeout: 260 seconds]
averne has quit [Quit: quit]
averne has joined #ffmpeg-devel
<thardin> you mean point clouds or?
averne has quit [Quit: quit]
<thardin> also I just found I had an IMA APC encoder lying around. rebasing
averne has joined #ffmpeg-devel
mkver has joined #ffmpeg-devel
averne has quit [Quit: quit]
averne has joined #ffmpeg-devel
<thardin> what might cause configure not to find a muxer?
<thardin> ah I was using AVOutputFormat not FFOutputFormat
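A minimal sketch of the distinction, assuming the current libavformat convention where the ff_<name>_muxer symbol has type FFOutputFormat with the public AVOutputFormat embedded as its .p member; the muxer name and callbacks below are hypothetical:

    #include "libavutil/internal.h"   /* NULL_IF_CONFIG_SMALL */
    #include "mux.h"                  /* FFOutputFormat       */

    static int example_write_header(AVFormatContext *s)
    {
        return 0;
    }

    static int example_write_packet(AVFormatContext *s, AVPacket *pkt)
    {
        avio_write(s->pb, pkt->data, pkt->size);
        return 0;
    }

    /* The public fields go through .p; the private callbacks live in the
     * FFOutputFormat wrapper rather than in AVOutputFormat itself. */
    const FFOutputFormat ff_example_muxer = {
        .p.name        = "example",
        .p.long_name   = NULL_IF_CONFIG_SMALL("hypothetical example muxer"),
        .p.audio_codec = AV_CODEC_ID_PCM_S16LE,
        .write_header  = example_write_header,
        .write_packet  = example_write_packet,
    };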
<thardin> is fate supposed to work with --disable-everything?
<thardin> "make: *** Ingen regel för att skapa målet ”libavcodec/tests/mjpegenc_huffman”, som behövs av ”fate-libavcodec-huffman”. Stannar." I'm guessing no
<thardin> there we go. I even had tests
jamrial has joined #ffmpeg-devel
<Lynne> full fate requires even ffprobe
<Lynne> which I never enable, no way I'm waiting for another linking step
Kei_N_ has quit [Read error: Connection reset by peer]
Kei_N has joined #ffmpeg-devel
ccawley2011 has joined #ffmpeg-devel
Workl has quit [Read error: Connection reset by peer]
paulk has quit [Ping timeout: 260 seconds]
paulk has joined #ffmpeg-devel
navi has joined #ffmpeg-devel
Krowl has joined #ffmpeg-devel
<motherboard> thardin: yes point cloud
<thardin> depends on what type of point cloud I think
<thardin> some use ellipsoids rather than points
<Lynne> thardin: now that I think about it, couldn't you just skip some coeffs during decoding?
<Lynne> unless the quarter res option does that already, IIRC it only affected transforms
dellas has joined #ffmpeg-devel
<thardin> that would be ideal
<thardin> j2k unfortunately is *flexible*, so I'm not sure whether you can always do that
<thardin> pal: ?
<thardin> each pass can, if I'm not mistaken, encode either one or more level(s) of coeffs, or bit slices thereof, or both. all using CABAC
<thardin> htj2k changes this so you can only use CABAC for the MSBs
<Lynne> wavelet levels directly correspond to resolution
<Lynne> if you need quarter res, you can skip the last 2 levels
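For reference, each skipped decomposition level halves both dimensions (with ceiling rounding in JPEG 2000), so skipping 2 levels gives quarter resolution per axis. A self-contained sketch of just that arithmetic; the `reduce` parameter name is illustrative, not a specific decoder option:

    #include <stdio.h>

    /* ceil(full / 2^reduce): the size of a J2K image decoded with the last
     * `reduce` resolution levels skipped. */
    static unsigned reduced_dim(unsigned full, unsigned reduce)
    {
        unsigned div = 1u << reduce;
        return (full + div - 1) / div;
    }

    int main(void)
    {
        unsigned w = 4096, h = 2160;
        for (unsigned r = 0; r <= 2; r++)
            printf("reduce=%u -> %ux%u\n", r, reduced_dim(w, r), reduced_dim(h, r));
        /* reduce=2 -> 1024x540, i.e. quarter resolution in each dimension */
        return 0;
    }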
dellas has quit [Remote host closed the connection]
lemourin has quit [Quit: The Lounge - https://thelounge.chat]
lemourin has joined #ffmpeg-devel
<BBB> I believe most of us would at this point be supportive of dropping yasm support (giggle)
<BBB> I don't think dav1d builds with yasm anymore
derpydoo has joined #ffmpeg-devel
kurosu has quit [Quit: Connection closed for inactivity]
<Lynne> we wouldn't even be dropping yasm, just circa-2009 versions of it
<Lynne> which some distributions shipped
<Lynne> nevcairiel: I think it was you last time who pointed out where yasm support was needed, do you still remember?
<jamrial> unless some change in x86inc/util requires a modern yasm/nasm, i don't think dropping support for old versions is justified
<jamrial> if all it takes is a %if HAVE_WHATEVER_EXTERNAL check
<nevcairiel> I had some issues with linking nasm objects with msvc ages ago, but that was fixed either on our end or on theirs at some point
<thardin> Lynne: not so easy with CABAC I think
<Lynne> jamrial: it's not that big of a deal, but keeping compatibility with yasm requires using 3-arg instructions everywhere
<Lynne> and some other stuff
<thardin> you're supposed to be able to abort the coeff bitstream at any point though. I'd need to dig into the spec some more to find out the proper way to do it
<jamrial> on avx functions for some instructions, yeah
<thardin> openjpeg does that I think, but it has other problems
Kei_N_ has joined #ffmpeg-devel
<Lynne> jamrial: pretty much everywhere for avx, in fact
<Lynne> it's not a big deal, but still, it's a 15-ish year old yasm, we have to draw the line at some point and stop worrying about it
<Lynne> thardin: slices are independent of each other, right?
<Lynne> including AC context
Kei_N has quit [Ping timeout: 256 seconds]
<thardin> tiles are independent
<thardin> codeblocks are too to an extent
<thardin> tiles are typically quite large, on the order of 1024x1024. codeblocks are at most 4096 samples, usually 64x64 but not necessarily
<thardin> the present decoder can only parallelize the IDWT, not the CB decoding. I forget whether my coarse tile decoding is in master atm or not
<Lynne> yeah, with 1024x1024 tiles, I can see how you'd have issues with slice threading
<thardin> 1024x1024 is not granular enough to be useful
<thardin> I whipped up something that parallelizes tiles x components, so for a 4k image you can do 24 tile-components in parallel
<Lynne> err, 24 components?
<thardin> 3 components * 8 tiles
<Lynne> ah, ok
<thardin> but that's not enough to fully utilize say a 96 core machine. and often the tiles aren't equally sized. you typically have a large one in the center and three smaller ones. or just one big tile
<thardin> only by doing it at the CB level can you guarantee good speedup
<thardin> I forget whether each CB is strictly limited to one reslevel or not
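A rough sketch of the tile x component split described above, in the style of a slice-threading job callback; the context layout and names are hypothetical, not the actual jpeg2000dec code:

    /* Hypothetical decoder context: 8 tiles x 3 components = 24 jobs for a
     * 4K frame, matching the numbers above. */
    typedef struct DecodeContext {
        int ntiles;
        int ncomponents;
    } DecodeContext;

    static void decode_tile_component(DecodeContext *ctx, int tile, int comp)
    {
        /* codeblock decoding + IDWT for one tile-component would go here */
    }

    /* One job per (tile, component) pair, as an execute2()-style callback
     * would be handed; jobnr runs from 0 to ntiles * ncomponents - 1. */
    static int decode_job(DecodeContext *ctx, int jobnr)
    {
        decode_tile_component(ctx, jobnr / ctx->ncomponents,
                                   jobnr % ctx->ncomponents);
        return 0;
    }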
derpydoo has quit [Ping timeout: 256 seconds]
<Lynne> this could've been fixed by defining j2k levels, but a bit too late for that
<thardin> htj2k fixes it sort of, by being less stupid
<thardin> there is something in j2k similar to scan orders in jpeg. I forget the name
<thardin> for htj2k this is more constrained. and of course you can mix htj2k and regular j2k in the same file
<thardin> for example you can have a low quality htj2k with fewer reslevels and then a lossless j2k version of the same image in the same file
<Lynne> how is htj2k adoption coming along?
<thardin> there is interest from the usual suspects
<thardin> pal should know better than I
kurosu has joined #ffmpeg-devel
derpydoo has joined #ffmpeg-devel
<Lynne> non-existent, I guess :/
<thardin> disney is interested I think
<thardin> we got some lossless RGB48 samples that were non-trivial to decode quickly
<thardin> are audio packets always keyframes? I'm trying to clear AV_PKT_FLAG_KEY for every packet except the first one in the APC demuxer
lemourin8 has joined #ffmpeg-devel
lemourin is now known as Guest1785
<thardin> codec_props
<jamrial> thardin: no, mlp/truehd is an example of audio with non-keyframe packets
<jamrial> but i don't know if the generic logic takes that into account properly
<thardin> yeah I figured it out
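A minimal sketch of the approach being discussed, using a hypothetical private context with a first-packet flag (not the actual apc.c code); as jamrial notes, whether the generic demux layer preserves the cleared flag for audio is a separate question:

    #include "avformat.h"

    typedef struct APCDemuxContext {
        int first_packet_sent;   /* hypothetical field */
    } APCDemuxContext;

    static int apc_read_packet(AVFormatContext *s, AVPacket *pkt)
    {
        APCDemuxContext *c = s->priv_data;
        int ret = av_get_packet(s->pb, pkt, 4096);   /* arbitrary chunk size */
        if (ret < 0)
            return ret;

        /* only the first packet keeps AV_PKT_FLAG_KEY */
        if (c->first_packet_sent)
            pkt->flags &= ~AV_PKT_FLAG_KEY;
        else
            c->first_packet_sent = 1;

        return 0;
    }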
Krowl has quit [Read error: Connection reset by peer]
epony has joined #ffmpeg-devel
Krowl has joined #ffmpeg-devel
dellas has joined #ffmpeg-devel
dellas has quit [Remote host closed the connection]
tmm1_ has joined #ffmpeg-devel
tmm1 has quit [Ping timeout: 264 seconds]
noonien852 has joined #ffmpeg-devel
noonien85 has quit [Ping timeout: 256 seconds]
noonien852 is now known as noonien85
Krowl has quit [Read error: Connection reset by peer]
jamrial has quit [Read error: Connection reset by peer]
jamrial_ has joined #ffmpeg-devel
kurosu has quit [Quit: Connection closed for inactivity]
rvalue has quit [Ping timeout: 264 seconds]
<pal> > j2k unfortunately is *flexible*, so I'm not sure whether you can always do that < --> both a gift and a curse
<Lynne> from the PoV of someone familiar with vc-2, it's just far too flexible
<pal> for new file-based media applications the goal is to encourage folks to use one of the two constraint sets in Annex I of https://pub.smpte.org/doc/st2067-21/20221124-pub/
<pal> these constraint sets should work for any frame-based RGB(A)/YCbCr/XYZ media application (lossy or lossless)
<pal> for sub-frame latency, e.g. as used in streaming over RTP, the community is still honing the set of constraints
dellas has joined #ffmpeg-devel
<pal> thardin: let me know if you come across users that are looking at new applications of J2K and/or revamping old applications
rvalue has joined #ffmpeg-devel
<pal> ... it would be good to understand their requirements and either modify the constraint sets above or encourage them to adopt them as-is
<pal> ... the goal is to create a single reasonable set of constraints for media applications
<pal> (sorry for the delay over IRC... my workstation blew up before the holiday break... no amount of backups can recreate a honed machine :(
<JEEB> &34
<JEEB> whoops
<thardin> no worries
<thardin> and yeah sounds a bit like mxf. the format is so general
<pal> more like ffmpeg :)
<Lynne> it is a general format, it was designed to cover anything from photography to fingerprint images to satellite photos in different wavelengths
<Lynne> just not quite real-time video
<Lynne> nor running on any realistic hardware, current or future
<pal> there is activity around J2K in HEIF/ISOBMFF (https://github.com/strukturag/libheif/discussions/1054), as friendlier alternative to JPX and JP2
<pal> Lynne: HT is running on today's hardware
<Lynne> yeah, but it was standardized with a delay of 20 years
<pal> ... maybe it should have been called JPEG 2020
<Lynne> the jp2 header isn't that bad, though
<Lynne> much better than isobmff
<pal> s/jp2/mj2/
<pal> (mistype)
<thardin> too bad heif is a mess
<thardin> and yeah j2k is likely the only format that standardizes hyperspectral images. sort of.
<thardin> makes me wonder if NASA or any other space agency uses it for probes
<Lynne> nope
<pal> very much yes
<Lynne> they use tiff mostly afaik
<thardin> lol
<Lynne> or at least that's what they publish
<pal> widely used in geospatial applications
<pal> LOL
<thardin> Lynne: published and what's sent in space are two different things
<Lynne> I kinda doubt that they do jpeg2000 onboard satellites, maybe as mezzanine on the ground
<thardin> but, the CPUs used in probes are typically 20 years old or so. can't use too complicated formats
<thardin> for a space project I was involved with we purposefully picked an AVR running at 7 MHz because it has an affordable space qualified variant
<thardin> affordable here meaning 3,000€ or so
<thardin> I wonder how something like intra prediction would play with j2k
<pal> ... is one approach
<Lynne> it's more complex than this
<Lynne> wavelets don't play well with intraprediction
<Lynne> as unlike a dct, they have to be contiguous, even when done on a per-block basis
<thardin> could you have a "tilted" wavelet mimicking intra prediction?
<thardin> so that if a block has a general "angle" one picks a wavelet following the angle
<pal> what do you mean by "they have to be contiguous"
<thardin> thereby compressing variance into one dimension
<Lynne> what dirac did was use OBMC, transform it, and signal a residual to be added onto the prediction
<Lynne> similar to what daala ended up doing
<thardin> wavelets don't have to be contiguous. you can have lapped wavelets if you want
<Lynne> that's my point
<Lynne> you need data from the previous block to correctly apply an idwt
<thardin> sure
<Lynne> larger and more complex wavelets than haar need more taps
<thardin> j2k uses tiles to solve that issue. I'm sure it's possible to come up with a way to mix wavelets
<thardin> each coefficient has a corresponding "pattern" after all
<thardin> after all, the purpose of intra prediction is to exploit directional redundancy
<Lynne> "solve that issue"?
<pal> (ok I am slow this morning... I had read "inter"... ignore the link above, which is about inter)
<thardin> Lynne: tile borders are non-lapped. within each tile you get the inherent "lapping" the dwt provides. for better or worse
<thardin> if you have say 8 "angles", each with their own corresponding set of wavelets, you should be able to at least group "macroblocks" according to angle then perform a sparse (I)DWT for each set of blocks
<thardin> sadly that would likely not thread well
<Lynne> thardin: so when you do an idwt, you take each tile's data, and for any area outside of it, you assume the value is 0?
<thardin> nah that wouldn't work
<thardin> j2k extends values outside the boundaries of each tile. I forget the name of that
<Lynne> mirror?
<thardin> hm.. yeah it mirrors it
<Lynne> oof
<Lynne> no wonder tiles are huge
<thardin> dct by contrast repeats the data
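A small sketch contrasting the two boundary policies just mentioned: whole-sample symmetric ("mirror") extension, as the J2K DWT uses at tile edges, versus edge replication; the index helpers are illustrative and assume the offset stays within a single reflection:

    /* Map an out-of-range index into [0, n) by mirroring about the edge
     * samples (whole-sample symmetric extension):
     * indices ... -2 -1 | 0 1 ... n-1 | n n+1 ... map to ... 2 1 | 0 1 ... n-1 | n-2 n-3 ... */
    static int mirror_index(int i, int n)
    {
        if (i < 0)
            i = -i;
        if (i >= n)
            i = 2 * (n - 1) - i;
        return i;
    }

    /* Edge replication, for contrast: clamp to the nearest valid sample. */
    static int replicate_index(int i, int n)
    {
        return i < 0 ? 0 : i >= n ? n - 1 : i;
    }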
<motherboard> all of you seem to be quite knowledgeable, what are your backgrounds if you don't mind me asking
<thardin> compsci and broadcast
<Lynne> dirac/vc2 just avoids the problem
<Lynne> physics
<thardin> haven't looked at vc-2
<motherboard> nicee, i am a cs undergrad
<thardin> I feel like picking basis per block is a better idea than intra prediction, at least from a threading perspective
<Lynne> picking basis?
<thardin> dct and dwt are just two of many linear bases you could use
<thardin> non-linear stuff is also coming into vogue, under the umbrella of "AI"
<Lynne> it's been tried
<thardin> it's just regression
<Lynne> at the end of the day, you just want a predictable (literally) frequency output
<thardin> for people who don't want to pretend they're doing statistics
<Lynne> DCTs do this optimally
<Lynne> by giving you a laplacian distribution that you can take advantage of while quantizing & coding
<thardin> well, dct is generic
<thardin> it's good at compacting energy under a certain model of how the input data behaves IIRC
<thardin> it can't change depending on the input data. neither can the color transform vary across the image
<thardin> we can't go full KLT at present for computational reasons. but on the other hand tensor units are becoming more and more common
epony has quit [Remote host closed the connection]
<Lynne> you won't gain that much out of it
<Lynne> you'll get more with better quantization
<Lynne> and more importantly, DC prediction
<Lynne> with daala's lossless mode, predicting DC was a free 5% gain iirc
<Lynne> and this wasn't even a DCT, but a 5-level haar on 64x64 blocks (1x1 DC)
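For illustration, one level of an unnormalized 1-D Haar analysis step, the building block behind the hierarchical DC scheme being described (a simplified sketch, not daala's actual code):

    /* One Haar level over n samples (n even): lows gets the n/2 pairwise
     * sums (the next coarser level, ultimately the DC), highs the n/2
     * differences.  Recursing on lows log2(blocksize) times leaves a single
     * DC coefficient per block, which is what gets predicted. */
    static void haar_level(const int *in, int *lows, int *highs, int n)
    {
        for (int i = 0; i < n / 2; i++) {
            lows[i]  = in[2 * i] + in[2 * i + 1];
            highs[i] = in[2 * i] - in[2 * i + 1];
        }
    }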
<thardin> DC prediction just sounds like DWT or the hierarchical DCT in JPEG that no one implements
epony has joined #ffmpeg-devel
<Lynne> daala went a step further with it, by also predicting the DC of each subblock within a superblock by using a haar
<Lynne> that's mostly incompatible with the current trend of using rectangular blocks, but it made sense back then
<Lynne> oh, speaking of, rectangular blocks sort of reduce the gains you'd otherwise get from tunable transforms
<Lynne> they're cheaper to analyze for too
<Lynne> though you need a way to expressively signal partitions that isn't going to be too expensive
<thardin> nothing stops you from using a rectangular haar, no?
<Lynne> I was stuck on that while working on my own codec
<thardin> but yeah, subdividing blocks into rectangular subblocks is an old idea
<Lynne> it took years and was first implemented in av1, I think, unless VP9 or HEVC had it (don't remember)
<Lynne> but yeah, it was an old idea, vc-2 had it in a limited form by recommending 32x8 slices, due to mostly being used with interlacing
<thardin> I mean old as in 90's or even 80's
<thardin> interplay MVE has rectangular subdivisions I think
<thardin> it's not very fancy of course, just a way to do palette stuff a bit better. but still
<thardin> picking bases strikes me as being very VQ-ish come to think of it
derpydoo has quit [Ping timeout: 256 seconds]
<Lynne> it's just far too expensive and boring though
<Lynne> the ideal codec will have fully lapped transforms with frequency-domain intra and VQ
<thardin> there's a sliding scale between VQ, custom bases and using some fixed base
Krowl has joined #ffmpeg-devel
<Lynne> sort of, if you pick a good base, that takes care of needing clever coefficient coding
<Lynne> but it takes too long to search for a good base, tunable transforms are hard to implement in fixed hardware, and writing a clever coefficient coder is sort of its own reward
<thardin> yeah
dellas has quit [Read error: Connection reset by peer]
dellas has joined #ffmpeg-devel
kasper93 has quit [Ping timeout: 256 seconds]
<Lynne> there's also the issue of overflows in transforms
<Lynne> not a mistake we'd like to repeat after av1
kasper93 has joined #ffmpeg-devel
<Lynne> though now that I think back, the hardware folks told us that tuning the coefficients was not a problem for them
<Lynne> so that's room to play with, but not as much as fully tweakable transforms
dellas has quit [Ping timeout: 246 seconds]
dellas82 has joined #ffmpeg-devel
AbleBacon has joined #ffmpeg-devel
mkver has quit [Ping timeout: 240 seconds]
jamrial_ has quit []
jamrial has joined #ffmpeg-devel
dellas82 has quit [Ping timeout: 246 seconds]
dellas82 has joined #ffmpeg-devel
mkver has joined #ffmpeg-devel
dellas82 has quit [Ping timeout: 268 seconds]
dellas82 has joined #ffmpeg-devel
Krowl has quit [Read error: Connection reset by peer]
TheSashmo has quit [Quit: Leaving...]
TheSashmo has joined #ffmpeg-devel
TheSashmo has quit [Client Quit]
TheSashmo has joined #ffmpeg-devel
<ubitux> ok i found the fix for prores
<JEEB> 'grats
<ubitux> damn, what the hell is that drama thread
<ubitux> i'm going to have 2012 ptsd reading that thread
<JEEB> I saw it grew longer and decided that while I was trying to lower my stress levels I would not read into it unless absolutely necessary
qeed_ has quit [Remote host closed the connection]
qeed_ has joined #ffmpeg-devel
ccawley2011 has quit [Read error: Connection reset by peer]
dellas82 has quit [Remote host closed the connection]
<BBB> nice debugging effort
<BBB> please do continue merging the two encoders ;)
<BBB> having 1 is better than having 2, sometimes