michaelni changed the topic of #ffmpeg-devel to: Welcome to the FFmpeg development channel | Questions about using FFmpeg or developing with libav* libs should be asked in #ffmpeg | This channel is publicly logged | FFmpeg 6.1.1 has been released! | Please read ffmpeg.org/developer.html#Code-of-conduct
TheSashmo has quit [Quit: Leaving...]
<IndecisiveTurtle>
Sorry but I have some confusion about this, so gonna write down my thoughts a bit in case something can be cleared up. In all the research I did, the image is subdivided into four sections, processing is done on 2x2 blocks of pixels and each pixel is sent to the respective section. So in the end one ends up with a lower-res image in the top-left and the other "detail" sections. Then haar can be performed recursively
<IndecisiveTurtle>
ffmpeg seems to do the same, just in two steps: first horizontally and then vertically, into a different buffer than the input. This means that the average and detail pixels end up in the same block. Then the deinterleave step splits them into the sections and writes to the original buffer, which can be used with the haar transform again, as the comment in the function mentions.
<IndecisiveTurtle>
So for this task, the encoder shouldn't deinterleave, but since it also has to be able to run on the same image multiple times, it needs to find the correct pixels to compute. In the first level this is the first block, and in the second level it will be every other pixel, the way I'm thinking of implementing it.
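A minimal sketch of the interleaved two-pass Haar level being described, in plain C (hypothetical code, not the actual FFmpeg/Vulkan shader, and ignoring VC-2's exact rounding/normalization):

#include <stdint.h>

/* One Haar level done as a horizontal then a vertical pass, leaving averages
 * and details interleaved within each 2x2 block. Assumes stride == width and
 * even dimensions. */
static void haar_level_interleaved(int16_t *dst, const int16_t *src, int w, int h)
{
    /* Horizontal pass: each pair (a, b) becomes (average, detail). */
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x += 2) {
            int a = src[y * w + x], b = src[y * w + x + 1];
            dst[y * w + x]     = (a + b) >> 1; /* low-pass (average) */
            dst[y * w + x + 1] = a - b;        /* high-pass (detail) */
        }
    }
    /* Vertical pass on the result: afterwards each 2x2 block holds LL, HL,
     * LH, HH interleaved rather than gathered into quadrants. */
    for (int y = 0; y < h; y += 2) {
        for (int x = 0; x < w; x++) {
            int a = dst[y * w + x], b = dst[(y + 1) * w + x];
            dst[y * w + x]       = (a + b) >> 1;
            dst[(y + 1) * w + x] = a - b;
        }
    }
    /* A deinterleave step would gather the LL samples into the top-left
     * quadrant so the next level can run on it; without deinterleaving the
     * next level has to pick every other sample instead. */
}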
<michaelni>
Lynne, yes, i want, FFmpeg needs more humor
<kierank>
FFmpeg is sold to Fraunhofer?
<kierank>
FFraunhofer
TheSashmo has joined #ffmpeg-devel
<kierank>
royalties on each build
<michaelni>
kierank, possible
<kierank>
or FFbi has taken this website
<michaelni>
maybe ffmpeg needs to register a onlyfans
<michaelni>
but your first suggestion sounds best so far i think
<kierank>
would be quite funny but there will be a small but vocal minority that won't see the funny side
<kierank>
just like there are one or two people that really hate my tweets
<michaelni>
Its ok if someone doesnt like our joke as long as its not a lawyer
<kierank>
we could get cancelled
<kierank>
that's not a good thing
<kierank>
arguably worse than a lawyer
<kierank>
there was a similar joke at demuxed
<kierank>
and the person making it was cancelled and had to apologise on stage
<another|>
OFFv1
<another|>
We'll make our own OnlyFans! With ffmpeg and hookers!
<michaelni>
kierank, yes there is a risk i guess. I am happy to leave this to the community and the people active on social media to decide.
<kierank>
ffraunhofer is ok, though they have been trying to understand open source and contribute
<kierank>
maybe ffmpeg is disbanding
<kierank>
because of AI
<michaelni>
We could admit the project has been run only by AI for years
<kierank>
we could say that the ffmpeg command line will go straight to openai
<kierank>
that's a classic thing people say on twitter
<kierank>
"FFmpeg gives up maintaining command line and instead will send all commands to ChatGPT"
<michaelni>
FFmpeg will need a broadband connection and acceptance of EULA for ChatGPT in the future
IndecisiveTurtle has quit [Ping timeout: 268 seconds]
<Lynne>
AI is cheap humor these days, and I know all about cheap humor
<Lynne>
I'll think of something, it should be semi-believable at least for a paragraph
<iive>
Are you thinking about 1st April jokes?
tufei has quit [Quit: Leaving]
<iive>
VR is also hot topic. You can skip cli and gui and go directly to VR interface.
<iive>
you can say that the VR meta interface is dynamically created by AI linked to a blockchain.
<iive>
any other buzz words to add?
<michaelni>
maybe we should announce an NFT airdrop in the metaverse for everyone who contributes a patch in git before the end of 1st april
thilo has quit [Ping timeout: 260 seconds]
<iive>
or just follow reddit example and announce IPO.
thilo has joined #ffmpeg-devel
<kierank>
nft is old
<kierank>
AI is what everyone tweets about
<iive>
how about AI encoder?
<kierank>
FFGPT, a model specifically trained to explain the FFmpeg command line
<kierank>
that would actually be quite believable
<Lynne>
but heartbreaking for most who didn't take time to learn the command line, sadly
<iive>
i'm pretty sure people are generating ffmpeg command lines with gpt.
<kierank>
"After seeing that popular tools like ChatGPT sometimes made up imaginary command lines, the FFmpeg project decided to create FFGPT, a model trained specifically on FFmpeg command lines found on stackoverflow"
<iive>
that would require cooperation with openai, and ffmpeg would have to send every ffmpeg command line to them for training.
<another|>
<@kierank> maybe ffmpeg is disbanding
TheSashm_ has joined #ffmpeg-devel
TheSashm_ has quit [Remote host closed the connection]
TheSashmo has quit [Read error: Connection reset by peer]
<another|>
just fix it with deband filter
TheSashmo has joined #ffmpeg-devel
<Daemon404>
"ffmpeg renames to libav"
<iive>
again?
iive has quit [Quit: They came for me...]
<kierank>
Daemon404: wow
<kierank>
Or closes down as C is unsafe
<BtbN>
Rewrite in Fortran
<Lynne>
I take it back, I know nothing about cheap humor
<kierank>
Ffmpeg is closing down for 10 years for rust rewrite
thilo has quit [Ping timeout: 245 seconds]
thilo has joined #ffmpeg-devel
thilo has quit [Changing host]
thilo has joined #ffmpeg-devel
av500 has quit [Remote host closed the connection]
mkver has quit [Ping timeout: 252 seconds]
<BtbN>
People have already shown up in #ffmpeg with absolutely nonsensical commandlines they claimed have come from ChatGPT
<haasn>
why was AVDOVIMetadata defined as a list of size_t offsets instead of just pointers?
<haasn>
I honestly don't remember, there must have been some reason surely
<haasn>
ah it was mkver's suggestion
<michaelni>
j-b, 10886 is still marked as blocking
mkver has joined #ffmpeg-devel
<sdc>
kierank: I think the rust rewrite one would be pretty good, could say the White House paper they released convinced the maintainers, something like that
<cone-927>
ffmpeg Leo Izen master:83ed18a3ca05: avformat/jpegxl_anim_dec: set pos for generic index
IndecisiveTurtle has quit [Ping timeout: 256 seconds]
<j-b>
michaelni: ok
Kei_N has quit [Read error: Connection reset by peer]
<haasn>
oh right, a pointer doesn't survive memcpy into different side data block, nvm
<mkver>
Great, thanks to James' C17 patch Clang warns about ATOMIC_VAR_INIT being deprecated here.
<haasn>
Is there something like a reallocz that ensures all newly allocated bytes are zero-initialized?
<JEEB>
mkver: I like the comment on it being removed in C2x, but that it still remains in C++23
<mkver>
haasn: If the old contents are supposed to be kept: no.
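Such a helper is easy to write only because the caller has to supply the old size, which plain realloc() does not know; a sketch with a hypothetical reallocz() on top of av_realloc() (not an existing lavu API):

#include <string.h>
#include "libavutil/mem.h"

/* Hypothetical helper: grow a buffer and zero the newly allocated tail.
 * Requires the old size from the caller. */
static void *reallocz(void *ptr, size_t old_size, size_t new_size)
{
    char *p = av_realloc(ptr, new_size);
    if (!p)
        return NULL;
    if (new_size > old_size)
        memset(p + old_size, 0, new_size - old_size);
    return p;
}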
<haasn>
michaelni: is there a way to calculate a CRC on a non-byte-aligned bitstream without memcpy
<haasn>
because memcpy is murder
<michaelni>
haasn, it should be possible
<haasn>
maybe by initing the crc with whatever value exactly cancels out the "error" from shifting it by a few bits
<michaelni>
yes
<michaelni>
and that should go in av_crc_bitstream or something like that so it can be reused
<michaelni>
or av_crc_bits(uint64 data, int len_in_bits, .... to handle the first byte(s)
<haasn>
yeah that makes sense
<haasn>
av_crc(x[0..i]); av_crc(x[i..n]) is equal to av_crc(x[0..n]) right?
<Daemon404>
this sounds overly clever and hard to read to me
<michaelni>
haasn, yes
<michaelni>
assuming crc is passed through and nothing odd happens
<haasn>
av_crc_bits() can just call into av_crc() on the remaining bytes as an internal optimization
<haasn>
so the user just needs to do av_crc_bits(buffer, index_in_bits, size_in_bits)
<michaelni>
possible
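For the concatenation property asked about above, chaining av_crc() by feeding one call's result into the next as the crc parameter does give the same value as a single call over the whole buffer; a small usage sketch with the existing libavutil/crc.h API:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include "libavutil/crc.h"

/* av_crc() over x[0..i) chained into av_crc() over x[i..n) equals a single
 * av_crc() over x[0..n), as long as the intermediate crc is passed through. */
static void crc_chain_check(const uint8_t *buf, size_t n, size_t i)
{
    const AVCRC *tab   = av_crc_get_table(AV_CRC_32_IEEE_LE);
    uint32_t     whole = av_crc(tab, 0, buf, n);
    uint32_t     split = av_crc(tab, av_crc(tab, 0, buf, i), buf + i, n - i);
    assert(whole == split);
}

The proposed av_crc_bits() would then only need to handle the leading non-byte-aligned bits itself and could defer the aligned remainder to av_crc() in exactly this way.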
Kei_N has joined #ffmpeg-devel
Livio has joined #ffmpeg-devel
<cone-927>
ffmpeg Anton Khirnov master:e99594812c52: tests/fate/ffmpeg: evaluate thread count in fate-run.sh rather than make
elastic_dog has quit [Ping timeout: 245 seconds]
<elenril>
nice, I can cause an FPE in ff_convert_matrix() by supplying an all-1 chroma_intra_matrix
<mkver>
elenril: And of course you have no one to blame for that FPE but yourself.
elastic_dog has joined #ffmpeg-devel
Marth64 has joined #ffmpeg-devel
Krowl has joined #ffmpeg-devel
<Marth64>
hello
Marth64 has quit [Ping timeout: 245 seconds]
Livio has quit [Ping timeout: 252 seconds]
Marth64 has joined #ffmpeg-devel
<haasn>
michaelni: well, it can be done, but needs access to the initial crc definition (poly, le)
<haasn>
so we can't do it from an AVCRC *, at least not easily
<haasn>
because we need to be able to somehow "partially" apply the LFSR, for fewer than 8 bits
<haasn>
(basically changing the j < 8 condition in the table initialization to a lower number)
<haasn>
also I'm confused, in the check to use multibyte crc you test for if (!ctx[256])
<haasn>
but in av_crc_init, ctx[256] = 1; is set unconditionally
<haasn>
so isn't this code just dead?
<haasn>
ah, no, it gets overwritten, just confusingly written
lexano has joined #ffmpeg-devel
Guest52 has quit [Ping timeout: 250 seconds]
<cone-927>
ffmpeg Michael Niedermayer master:57f252b2d10c: avcodec/cbs_h266_syntax_template: Check tile_y
<cone-927>
ffmpeg Jun Zhao master:bfbf0f4e82ef: lavc/vvc_parser: small cleanup for style
Krowl has quit [Read error: Connection reset by peer]
IndecisiveTurtle has joined #ffmpeg-devel
<michaelni>
haasn, the poly can probably be read out from entry 1 of the table with some LE-dependent byte/bitswap
<michaelni>
not sure its worth it
<michaelni>
but can be done
<haasn>
probably not
<haasn>
I ended up doing the memcpy anyway for other reasons
<haasn>
having a byte-aligned payload really is useful
HarshK23 has quit [Quit: Connection closed for inactivity]
mkver has quit [Ping timeout: 245 seconds]
j45_ has joined #ffmpeg-devel
j45 has quit [Ping timeout: 260 seconds]
j45_ is now known as j45
j45 has quit [Changing host]
j45 has joined #ffmpeg-devel
<Traneptora>
haasn: how do you turn a non-byte-aligned bitstream into a byte-aligned one with memcpy?
<haasn>
Traneptora: for (i = 0; i < bytes; i++) buf[i] = get_bits(gb, 8);
<haasn>
poor man's memcpy
___nick___ has joined #ffmpeg-devel
<Traneptora>
would it make more sense to do AV_RL64(buf), left shift, AV_WL64()
<Traneptora>
would it actually be faster
<haasn>
feel free to test it and report back
<Traneptora>
such an engineer's answer
<haasn>
probably it would be good to add a get_bytes8(uint8_t *buf, int length) to get_bits.h
<haasn>
which does whatever internal optimizations are beneficial
<haasn>
to read a bunch of bytes from a GetBitContext
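A rough sketch of what such a helper could look like (a hypothetical get_bytes8(), not an existing get_bits.h function), with the obvious fast path for the already byte-aligned case:

#include <stdint.h>
#include <string.h>
#include "get_bits.h"

/* Hypothetical helper: copy 'length' bytes out of a possibly unaligned
 * GetBitContext into buf. */
static void get_bytes8(GetBitContext *gb, uint8_t *buf, int length)
{
    int pos = get_bits_count(gb);

    if (!(pos & 7)) {
        /* Byte-aligned: copy straight from the underlying buffer. */
        memcpy(buf, gb->buffer + (pos >> 3), length);
        skip_bits_long(gb, length * 8);
    } else {
        /* Unaligned: fall back to the byte-by-byte loop from above. */
        for (int i = 0; i < length; i++)
            buf[i] = get_bits(gb, 8);
    }
}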
<Traneptora>
the jxl parser does have a loop similar to that as well
<Traneptora>
extra channels can have a channel name, which has a length and then a utf-8 encoded string name. length usually zero but can be nonzero
j45_ has joined #ffmpeg-devel
j45 has quit [Ping timeout: 260 seconds]
j45_ is now known as j45
j45 has joined #ffmpeg-devel
Traneptora has quit [Quit: Quit]
kurosu has quit [Quit: Connection closed for inactivity]
mkver has joined #ffmpeg-devel
HarshK23 has joined #ffmpeg-devel
Kei_N_ has joined #ffmpeg-devel
Kei_N has quit [Ping timeout: 245 seconds]
Traneptora has joined #ffmpeg-devel
Livio has joined #ffmpeg-devel
___nick___ has quit [Ping timeout: 245 seconds]
___nick___ has joined #ffmpeg-devel
Guest52 has joined #ffmpeg-devel
<Lynne>
Traneptora: zero copy, you can do it via memfd_create + mmap
<IndecisiveTurtle>
Lynne: I believe the shader should be good now, the encoder can run on the same image multiple times while keeping the data interleaved. The detail pixels are also visible in places where the color shifted in the original image
<Marth64>
hmm... i should probably do the assert for both cases
<Marth64>
once at the start. will fix :facepalm:
<Lynne>
IndecisiveTurtle: nice
<Lynne>
could you abstract the code away into a function, and call it to complete an entire block, rather than always 2x2?
Livio has quit [Quit: leaving]
<Lynne>
the block size may be arbitrary, but commonly, 32x32 or 32x8
Livio has joined #ffmpeg-devel
snowworm has joined #ffmpeg-devel
<IndecisiveTurtle>
Lynne: Oki so split it into a separate func and make it work with a 32x32 workgroup size. I think 32x32 is the nvidia subgroup size, what use does 32x8 have in this case?
<Lynne>
IndecisiveTurtle: the encoder runs on blocks of arbitrary size
<Lynne>
they can be 1024x1024, all the way down to 8x8; the only requirement is that they're a power of two
jamrial has joined #ffmpeg-devel
<Lynne>
it's up to the user to configure that (smaller blocks == less latency, faster, but worse quality)
mkver has quit [Ping timeout: 252 seconds]
<IndecisiveTurtle>
Hm so if we set the block size to 8x8 it will process 8x8 blocks of pixels at a time and the result would be the same as applying 2x2 haar transform 3 times?
<Lynne>
right, so the number of transforms (the transform depth) is also a user setting
<Lynne>
vc-2 allows for at most 5 transform levels
<Lynne>
but that's limited by the transform size, since you cannot do 5 transforms of an 8x8 block (you'd do 8x8, 4x4, 2x2, and then run out of pixels to transform)
<Lynne>
the first level transform size is always equal to the block size
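Put differently, the block size bounds the depth: each level halves the transform size, so an NxN block allows at most log2(N) levels, and VC-2 caps that at 5. A small sketch of the constraint (hypothetical helper, not actual encoder code):

/* Each Haar level halves the transform size, so an NxN block (N a power of
 * two) supports at most log2(N) levels, capped at VC-2's maximum of 5. */
static int max_transform_levels(int block_size)
{
    int levels = 0;
    while (block_size > 1 && levels < 5) {
        block_size >>= 1;
        levels++;
    }
    return levels; /* e.g. 8 -> 3 levels (8x8, 4x4, 2x2); 32 or larger -> 5 */
}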
Marth64 has quit [Ping timeout: 264 seconds]
Marth64 has joined #ffmpeg-devel
<IndecisiveTurtle>
Why do smaller blocks in this case have lower quality?