<FlorianBad>
Did you ever come across some kind of stats that would describe the bitrates internet users can usually tolerate in most developed countries (or per country...) ?
<FlorianBad>
and I'm not talking about their internet connection, because their WiFi might slow down for example; I'm really talking about something measured at the browser level
<FlorianBad>
kepstin, I absolutely love the crf unit for picking the various bitrates available, it just makes so much sense. Thanks for suggesting that a few days ago :)
<FlorianBad>
It's really perfect for my use-case to offer evenly spaced bitrates to the client
<aaabbb>
the problem with crf is that you're going to have to adjust the range, otherwise you may have videos that aren't streamable at *any* crf you use
MrZeus has quit [Ping timeout: 255 seconds]
<FlorianBad>
aaabbb, because even 27 e.g. might be super heavy?
<aaabbb>
yes
<FlorianBad>
And you're talking about high-action motion typically?
<furq>
it's unlikely if these are just music videos
<FlorianBad>
like Tom Cruise running type of thing?
<furq>
but you should use vbv constraints
<aaabbb>
high-action motion, complex scenes, full color space, 10 bit, yuv444 or 422, large dimensions, high fps, etc
<aaabbb>
lots of factors
<FlorianBad>
well remember these will always come from my own final project exports, which will be the fps I chose, along with everything else
<aaabbb>
yeah vbv constraints will let you prevent it from being overwhelming but then you might also be streaming a video that could be better because you're underutilizing the bandwidth
<aaabbb>
eg i could make a video where even crf 18 barely uses any bitrate
<FlorianBad>
and I already have -pix_fmt yuv420p
<aaabbb>
then it would just be color range, dimensions, and complexity (both spatial and temporal)
<FlorianBad>
aaabbb, sure, but the quality of -crf 18 would be so amazing that 15 would make no visual difference, right?
<furq>
as usual it depends
<furq>
but i don't think i've ever encoded anything below 18
<aaabbb>
yeah but my point is just that if you are targeting a specific bitrate because you are streaming and want to make sure you are tailoring the video for each person's bandwidth, then you usually want abr with constraints, and 2pass abr is the best way to do it
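A minimal sketch of that 2-pass ABR approach with the ffmpeg CLI, assuming libx264; the bitrate, maxrate, bufsize, and file names here are purely illustrative, not recommendations:

  # pass 1: analysis only, video output discarded
  ffmpeg -y -i in.mp4 -c:v libx264 -b:v 4M -maxrate 4.5M -bufsize 9M -pass 1 -an -f null /dev/null
  # pass 2: actual encode using the stats file written by pass 1
  ffmpeg -i in.mp4 -c:v libx264 -b:v 4M -maxrate 4.5M -bufsize 9M -pass 2 -c:a aac -b:a 128k out.mp4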
<FlorianBad>
Ok, I think it will be fine, because I will have crf 17 19 21 23 25 27 for each of ~10 resolutions lol So my back-end will pick what fits best. I just need to write that part well and should be fine
<furq>
that seems like a uselessly high number of variants
<aaabbb>
that's a lot
<aaabbb>
it's much better for streaming to use abr, the use of crf is really for storage
<furq>
i don't know if i agree with that but it's at least no worse than using crf
<furq>
unless you're paying a lot for bandwidth
<furq>
for vod i would use crf+vbv
<aaabbb>
furq: it's most commonly used by companies that do streaming (not big ones like netflix, they have their own special tricks)
<furq>
yeah for live
<furq>
this is vod
<aaabbb>
crf+vbv is also good
<furq>
if you're encoding 60 variants per video then i guess at least cpu and disk space are cheap
<furq>
i would also be shocked if more than five of those ever get used
<aaabbb>
yeah that's a whole lot of resolutions, and crf 17 is super super overkill
<aaabbb>
for x264 at least
<FlorianBad>
furq, they will, because it will always pick the resolution that is >= to the resolution of the canvas element in the browser page. So then it will look at which crf based on current bandwidth stats
<FlorianBad>
since the first step is to pick the resolution, then you realize that's not that many options after all
<aaabbb>
how big is the buffer on the browser?
<FlorianBad>
but just enough ;)
<FlorianBad>
aaabbb, I will handle that manually in my code, haven't decided yet, but it will probably be dynamic based on what happened in the past
<aaabbb>
FlorianBad: the smaller the buffer the more strict you want to be with vbv
<FlorianBad>
I see, yeah. Well it will probably be small if no bw irregularities are detected
<FlorianBad>
I don't see why it would be greater than 5s if you have an amazing connection with little latency
<FlorianBad>
(at least as long as I'm sure of that)
<aaabbb>
greater means better quality because you can allow the bitrate to shoot up more for high complex scenes
<furq>
youtube only has ~30 variants for 4k and that's with two codecs
<furq>
and more than half of those are for 360p/240p/144p
<aaabbb>
youtube also has huge asics that makes ~30 variants practical
<FlorianBad>
But youtube cannot afford to make insane decisions, I can ;)
<furq>
and that also includes the premium locked streams
<aaabbb>
FlorianBad: insane decisions like -preset placebo? ;)
<FlorianBad>
lol
<aaabbb>
which actually isn't always insane for x264
lolok has joined #ffmpeg
<FlorianBad>
no like very small increments in resolutions. e.g. right now I'm running my script on a video and these are all the increments it will produce:
<furq>
that's 30 variants for 4k30 but i assume you don't have any 60p stuff anyway
<FlorianBad>
Now encoding each of 17 resolutions (448x252 512x288 576x324 640x360 768x432 896x504 1024x576 1152x648 1280x720 1536x864 1728x972 1920x1080 2304x1296 2560x1440 2944x1656 3328x1872 3840x2160) to each bitrate
<FlorianBad>
(that's the print of my script)
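For illustration, one rung of such a ladder might be produced roughly like this (a sketch only; the scale target, crf value, and file names are placeholders, and the real script presumably loops over every resolution/crf combination):

  ffmpeg -i master_3840x2160.mp4 -vf scale=1536:864 -c:v libx264 -crf 21 -an 1536x864_crf21.mp4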
<aaabbb>
that's a lot
<furq>
yeah that seems entirely unnecessary
<aaabbb>
you can use fewer and then let the browser do the downscaling
<furq>
especially with six variants for each
<furq>
in general i would rather have higher crf and lower resolution than the opposite
<aaabbb>
i prefer the opposite personally just because downscaling is less prone to artifacts than upscaling
<FlorianBad>
Yeah it's a lot but if your canvas in the page is 1477x831 then it will pick 1536x864 which will rescale beautifully in the page
<furq>
yeah but upscaling is less prone to artifacts than a low quality encode
<aaabbb>
true
<furq>
also i meant lower crf but i guess you figured that out
<FlorianBad>
Ok so now I need to research this VBV thing that I know nothing about :)
<aaabbb>
FlorianBad: what i usually do is i keep things in resolutions divisible by 8
<aaabbb>
vbv lets you do constraints
<aaabbb>
so you can say "crf 18 but never exceed 10mbps over this amount of time even if you otherwise would"
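In ffmpeg terms that kind of constraint might look roughly like this (a sketch; crf 18 and 10M maxrate come from the sentence above, while the 20M bufsize, i.e. about a 2-second buffer at that rate, is just an assumed example value):

  ffmpeg -i in.mp4 -c:v libx264 -crf 18 -maxrate 10M -bufsize 20M -c:a copy out.mp4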
<FlorianBad>
aaabbb, my program picks the best nice modulos (as you can tell from above, it decided all these resolutions just from the original 3840x2160)
<FlorianBad>
aaabbb, ok, but wouldn't that really just apply to Tom Cruise stuff?
<aaabbb>
not necessarily
<aaabbb>
things you might never expect, like gentle snowfall, can totally bitrate starve it
<furq>
static shot of a lake
<aaabbb>
exactly and then when you use crf, you're telling it "throw as many bits as you possibly can until you hit the desired quality"
<furq>
fast camera pans and stuff will generally not be that bad because it's blurry anyway
<aaabbb>
also fast camera pans are simple motion
<furq>
but god forbid there's some confetti
<aaabbb>
confetti, snow, anything with lots of particles moving in unpredictable directions
<aaabbb>
but fast tom cruise running won't actually use all that much because the majority of the shot is smooth motion in one predictable direction
<furq>
and it'll be shot on film so there'll naturally be a lot of motion blur
<FlorianBad>
snowfall :D ok I see
<aaabbb>
what makes sense intuitively as far as "complex" isn't always the same as what the encoder considers complex
<FlorianBad>
confetti hahaha
<furq>
have you never seen a sports team win the championship
<furq>
guaranteed to ruin an encoder's day
<aaabbb>
with snowfall or confetti, our brain just turns it into a simple "glitter falling" perception, which is even simpler for our minds to process than tom cruise running or a car chase scene, but the former is a nightmare for the encoder and the latter is relatively straightforward
<FlorianBad>
but wait a second, why would that be important since I will calculate the bitrate of each piece of stream from mpd anyway?
<furq>
because you're using crf
<aaabbb>
that's why i usually recommend 2pass abr (or crf+vbv as furq says)
<FlorianBad>
my program will know that this stream chunk corresponds to 2 seconds, e.g. So if that won't work for the connection this person has it will simply pick the same chunk from a lower crf
<furq>
"simply" doing a lot of work there
<aaabbb>
you don't really want to change it too fast
<furq>
you want to avoid switching as much as you can
<FlorianBad>
furq, well it will do it anyway, so it's not like I will code anything extra
<aaabbb>
even if it's struggling to keep up with it, let it run out of buffer before you switch
<aaabbb>
FlorianBad: but then it will look perceptively bad
<FlorianBad>
why would switching cause problems?
<furq>
viewers can often perceive the switch
<aaabbb>
because going between crf 24 and crf 26 every few seconds is more jarring than just staying with crf 26
<furq>
and also it takes time to detect whether you need to switch
<FlorianBad>
oh I see, because if there's no constraint I would have to switch so far from the previous crf that there would just be a huge difference in quality?
<aaabbb>
FlorianBad: not even so far, but just a little
<aaabbb>
because when the crf changes, the encoder has to use different encoding techniques
<FlorianBad>
hmm, really?
<aaabbb>
yeah
<FlorianBad>
oh I see, ok. So my program will have to be very hesitant in switching then. ok. Well then vbv makes sense
<aaabbb>
you only want to switch if you are stuck between the decision "give this person crap quality even tho his bandwidth is plenty" and "let the video get stuck buffering over and over"
<aaabbb>
vbv isn't to make switching less often, it's to stop massive spikes in bitrate
<FlorianBad>
so is VBV just -maxrate and -bufsize ?
<furq>
yeah
<aaabbb>
vbv basically encodes, then decodes in real-time to verify how much buffer is needed
<furq>
i don't know of any other streaming service that tries to do such granular switching
<furq>
so this is stuff you'd want to experiment with
<aaabbb>
and if you put minrate and maxrate at almost the same place, then that's what cbr is
<aaabbb>
(you don't want cbr ofc)
<furq>
e.g. youtube will just bump you down from 1080p60 to 480p30 and then never switch you back again
<furq>
which isn't great either
<aaabbb>
switching resolution is more jarring than switching crf, but you really don't want to switch crf often
<furq>
also minrate doesn't do anything
<FlorianBad>
ok, I will code with these in mind then
<FlorianBad>
I was wondering also, switching audio probably produces clicks, right?
<furq>
audio is such low bitrate that you should never need to switch it
<aaabbb>
depends on where it is switching
<FlorianBad>
or does the dash algorithm figure out a way to slice when the waveform is at zero?
<furq>
youtube uses 128k aac or 128k opus for almost all video streams
<aaabbb>
many codecs have like 200ms independent packets for example
<furq>
you would usually have video and audio separate
<furq>
so the audio just carries on using the same playlist/manifest when the video switches
<aaabbb>
FlorianBad: it's not real-time, it will buffer and then "splice it together" no matter where the waveform is
<aaabbb>
but usually you don't want to switch audio
<FlorianBad>
ok
<aaabbb>
especially if it's music video, also 128k aac will sound much worse than 128k opus
<furq>
it really won't
<FlorianBad>
I'll test just out of curiosity
<aaabbb>
furq: sure it will, with classical music
<aaabbb>
if aac-lc
<furq>
which encoder
<FlorianBad>
(test the change in kbps during play)
<aaabbb>
fdk_aac (and especially twoloop native aac)
<furq>
fdk compares very well to opus in the tests i've seen
<furq>
at 96k and up
<furq>
below that is where opus really shines
<aaabbb>
i've found that it's only at about 128k and up
<aaabbb>
from what i've read in abx testing
<furq>
on account of he-aac sounds absolutely awful
<aaabbb>
well, with certain music
<aaabbb>
like classical
<aaabbb>
classical can even cause 96k opus to fall into intensity stereo
<FlorianBad>
for audio I'm pretty confident I will hear a ton of difference, I mixed music my whole life and I use the Focal Twin6be right now, which are absurdly bright due to the technology of their tweeter
<aaabbb>
you gotta abx test before determining that you can hear a difference
<furq>
if you can abx 128k opus from 128k fdk then fair enough
<furq>
but they should both be transparent on most samples
<aaabbb>
yeah most samples they are
<aaabbb>
it's really classical music that stresses audio codecs
<aaabbb>
and killer samples ofc but no one listens to that
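If someone did want to run that comparison, a sketch of producing the two 128k files for an ABX test (assumes an ffmpeg build configured with --enable-libfdk-aac; the input and output names are placeholders):

  ffmpeg -i master.wav -c:a libfdk_aac -b:a 128k test_fdk.m4a
  ffmpeg -i master.wav -c:a libopus -b:a 128k test_opus.opus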
<aaabbb>
FlorianBad: these are music videos right?
<aaabbb>
will people be allowed to download and archive or is it just streaming?
<aaabbb>
if people will download it then you might want to have a higher bitrate option, like maybe even 160k
<furq>
or just make the source video available
Muimi has quit [Quit: Going offline, see ya! (www.adiirc.com)]
<aaabbb>
yeah that's a better idea lol, i'm dum
<aaabbb>
i have a dumb question but for x264/x265, is there ever a reason *not* to disable the deblock filter if the source is lossless or perceptually lossless (ie no macroblocking whatsoever)?
<furq>
deblocking is applied when decoding
<aaabbb>
wow, my question really was dumb. thanks
<furq>
also can you even turn it off
<furq>
--deblock 0:0 is just the default value
<aaabbb>
yeah you can, on both x264 and x265
<furq>
oh never mind there's --no-deblock
<aaabbb>
-x264opts no-deblock or -x264-params no-deblock=1
<FlorianBad>
aaabbb, in the vast majority of the cases it will be music videos yes, piano performances
<aaabbb>
oh x265, it's deblock=0:0:0 with the first being whether it's enabled or not
<aaabbb>
FlorianBad: oh ok, both aac and opus are very good at piano iirc
<aaabbb>
(deblock=1:0:0 being default on x265 iirc)
<aaabbb>
furq: does that mean that a video's deblocking value can be changed without reencoding, if it's just metadata specifying what to do at decoding time?
<FlorianBad>
aaabbb, this is 5-years old and the next step for me will involve getting my hand on amazing equipment so it will be much better filmed, but just for the idea of the style/mood: https://vimeo.com/574631289 (so not Tom Cruise lol)
<aaabbb>
just make sure you use either 2pass abr or crf+vbv
<aaabbb>
and bigger buffer size = better quality (because vbv will be less aggressive)
<FlorianBad>
I need to get some food, will read everything you guys wrote in 15-20min, thanks again for the help
<furq>
i have no idea if you can change it but it is also applied during encoding before prediction
<furq>
so if you can change it then it'll probably lead to bad results
<aaabbb>
furq: oh ok that makes sense
orthoplex64 has joined #ffmpeg
minimal has quit [Quit: Leaving]
<aaabbb>
one more dumb question i've been wondering for a while, but why is a flac file where mono was turned into stereo smaller than the same audio downmixed to mono?
<furq>
no idea why it would be smaller
<furq>
i would expect it to be exactly the same size
<aaabbb>
i have an input that is 2ch stereo where each channel is 100% identical, when i do -ac 1 it's more than 20% larger
<furq>
maybe swresample is doing something funky
<aaabbb>
sample rate doesn't change, sample type doesn't change, and i can losslessly convert it back to the original 2ch just by duplicating the mono
<furq>
or maybe not
<aaabbb>
hmm
<aaabbb>
could it be applying dither?
<furq>
might be a question for #xiph if people still talk in there
<aaabbb>
maybe i was wrong about it being losslessly convertible back, i'll test again
<aaabbb>
furq: if it's swresample then it would be an ffmpeg thing and not a libflac thing
<furq>
sure
<aaabbb>
i'll try selecting one stream instead of mixing them, my guess is that it's adding dither when mixing
<furq>
if you can recreate it losslessly then it's definitely not swr
<furq>
if you can't then it probably is
<furq>
you can use -af channelmap if you want to eliminate that possibility
<furq>
-af channelmap=FL:channel_layout=mono
Epakai_ is now known as Epakai
<furq>
also assuming you're using libflac and not lavc flac
<aaabbb>
just -c:a flac
<aaabbb>
which i think uses libflac doesn't it?
<aaabbb>
i used -af "pan=mono|c0=c1"
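For that mono-size experiment, a sketch of the two ways to get a mono file (file names illustrative; the second variant maps one channel directly instead of going through a mix):

  # downmix via the pan filter, as in the command above
  ffmpeg -i stereo.flac -af "pan=mono|c0=c1" -c:a flac mono_pan.flac
  # grab a single channel without mixing
  ffmpeg -i stereo.flac -af channelmap=FL:channel_layout=mono -c:a flac mono_map.flac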
blaze has quit [Ping timeout: 246 seconds]
blaze has joined #ffmpeg
<furq>
ffmpeg can't use libflac
<furq>
it's only got the internal encoder
<aaabbb>
i thought that was based on libflac
<furq>
it's loosely based on flake
<furq>
but they diverged years ago
<aaabbb>
is libflac superior?
<furq>
they're about the same i think
<furq>
last i checked flac performed very slightly better
<furq>
but not enough to bother switching
<furq>
also that was just comparing flac -8 and lavc -compression_level 10
<furq>
there's more extreme settings for both
<aaabbb>
i use -compression_level 12
<aaabbb>
i know it's non-subset but i only decode it with ffmpeg
<furq>
yeah idk how well flac does with -ep or --freeformat or whatever
<aaabbb>
and sometimes cholesky which occasionally has better compression than levinson
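For reference, the two higher-effort settings mentioned here would be invoked roughly like this (a sketch; output names are placeholders):

  # reference libflac encoder at its normal maximum preset
  flac -8 -o out_libflac.flac in.wav
  # ffmpeg's native flac encoder at its highest compression level
  ffmpeg -i in.wav -c:a flac -compression_level 12 out_lavc.flac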
<FlorianBad>
furq, I don't know about opus vs libfdk_aac, but earlier today I listened to libfdk_aac 320 vs 128 and the difference was huge. But again, I have the best monitors and trained my ears for years to hear these things
<aaabbb>
FlorianBad: did you do abx testing?
<furq>
like i said if you can abx it then more power to you
Muimi has joined #ffmpeg
<aaabbb>
because except for killer samples or certain types of samples, you won't notice a difference between 320 and 128 (and even with certain types of samples, not between 160 and 320)
<aaabbb>
if you can tell the difference between say 160 and 320 for a typical (non pathological) sample, then you are the only human being on earth to do so
<FlorianBad>
aaabbb, no download, in fact my player modifies the bytes of video data to make it very complicated to download the content. The player controls are also part of the canvas itself so it's not like you can grab the video element in the page and do things with it either. Still technically possible but extremely difficult, especially with the amount of code obfuscation I use
<aaabbb>
i meant downloading the audio since it's a music video
Marth64 has joined #ffmpeg
<FlorianBad>
aaabbb, in a way it was abx because I didn't realize which one I was playing until I noticed the degradation in sound quality, and then I looked and realized it was 128kbps versus 320 for the previous video I was looking at (I was really just looking at pictures, but then I noticed the sound difference)
<aaabbb>
that's a/b and not abx
<aaabbb>
but it has to be done multiple times to be valid
<FlorianBad>
ok, fair enough, but trust me it was too obvious
<aaabbb>
and it's possible that the 128k sample had something that played bad with libfdk_aac
<aaabbb>
it might be but if that's the case then something is wrong with libfdk_aac
<FlorianBad>
well I was using the main theme of Gladiator as audio input, so that would definitely fit in your "classical music" thing
<aaabbb>
ah
<aaabbb>
yep that would make sense. did it have symbols too?
<FlorianBad>
an no, no audio nor video download, if I want download I'll put that separate in the page, not part of the player files at all, but I probably won't
<aaabbb>
symbals*
<furq>
c
<aaabbb>
cymbals
MightyBOB has quit [Ping timeout: 260 seconds]
<FlorianBad>
yeah but it's mostly string ensembles that I developed a really good ear for over the years
MightyBOB has joined #ffmpeg
<aaabbb>
yeah that makes sense, 128k aac is probably not going to be transparent for that kind of thing
<aaabbb>
it's not about the segments, it's about the decoder's buffer
<FlorianBad>
I know but I will then use that mp4 file to put in dash
<aaabbb>
the only thing that matters about the segments is that they're independently decodable
<FlorianBad>
well, their size too, I don't want a 10MB segment
<FlorianBad>
I actually want them as small as possible because my program actually puts them in an object when sending to client, so it can have many in a single request
<FlorianBad>
so the smaller the better as long as it doesn't have any other downsides
<aaabbb>
it does have more downsides
<aaabbb>
worse compression efficiency
<aaabbb>
you want the segment to fit in whatever unit of response you give
<aaabbb>
so if you send 1mb chunk at a time, you want a segment ~1mb
waleee has quit [Ping timeout: 240 seconds]
<FlorianBad>
Ideally I'd like segments to correspond to an approximate length, probably about 1s
<aaabbb>
yeah that's fine but if you're sending it in chunks that have 5s worth of content, then you want the segments to be 5s
<aaabbb>
the longer it is, the more efficient the compression is and the less bandwidth you use for the same quality
<kepstin>
for a vod application, i'd probably look at something in the 5-10s length as an encoding efficiency compromise.
<FlorianBad>
how bad of a difference in compression % are we talking about (just guessing) between a 5s buffer and a 1s?
<aaabbb>
FlorianBad: i don't know about percentage and it depends on the video contents, but it's not insignificant
<aaabbb>
more s means you can have more consecutive bframes and a much lower i to b/p ratio
<kepstin>
for reference, note that the default gop size with libx264 is 240 frames, so 10s at 24fps
<FlorianBad>
kepstin, no because in the first few seconds my program will gather as much stats as possible about the bandwidth and fps of the client, so I don't want to be stuck for a whole 5s
<aaabbb>
FlorianBad: bandwidth especially with qos is an interesting thing, gathering the first few seconds might not give you anything accurate because of tcp sliding window
<furq>
shorter segments also makes the buffer more likely to run out
<FlorianBad>
aaabbb, not just that... the user presses F and goes full screen, now I have to change the resolution, but I'm stuck with the small one for another 5s
<kepstin>
and for buffering vbr content, you want the client to buffer several chunks ahead if possible to workaround issues with tcp scaling
<furq>
especially if you're trying this hard to match the bandwidth the client has
<aaabbb>
FlorianBad: you shouldn't be changing resolution that fast anyway, even 10s isn't excessive
<FlorianBad>
aaabbb, I can, because it's all in canvas so no one will really know, it's all handled in the way I draw in that same canvas
<kepstin>
people will know, because quality switch is a visual difference
<aaabbb>
FlorianBad: but will the decrease in quality from a small gop size be worth it?
<FlorianBad>
sure, but that's a good thing, so I can switch very quickly after they went fullscreen
<kepstin>
you want to avoid switching quality as much as possible, since it gives a worse experience
<FlorianBad>
(for example, among many other things)
<FlorianBad>
So the gop is what determines how small dash will slice?
<aaabbb>
the gop size will be a limiting factor
<kepstin>
you can create chunks with multiple gops, you can't create chunks smaller than a gop
<aaabbb>
because you can't predict across slices
<aaabbb>
yeah and generally you don't want a chunk to have multiple gops unless the chunk is really really giant
<FlorianBad>
kepstin, right, ok
<FlorianBad>
I see
<aaabbb>
bigger gop = more efficiency but slower seeking (and for realistic chunk size, the seeking won't be a problem)
<FlorianBad>
so then with my -crf options I should definitely use all 3 options: -maxrate -bufsize and -g ?
<aaabbb>
you might want to turn off scenecut detection too
lolok has quit [Remote host closed the connection]
<aaabbb>
but definitely use maxrate and bufsize with crf
<FlorianBad>
Also I assume -g should be a multiple of keyint ?
<aaabbb>
-g is literally keyint
<FlorianBad>
I did that, I use -x264-params "scenecut=0:keyint=xxx"
<aaabbb>
-g 300 is the same as -x264-params 'keyint=300'
<FlorianBad>
ah! well, then I already have -g :) lol
<FlorianBad>
(That term "group of pictures" seems misleading, but anyway)
<kepstin>
for reference, i _think_ google is using 10s gop for vod youtube content
<aaabbb>
it's even more misleading when you realize the difference between open gop and closed jop ;)
<aaabbb>
s/jop/gop
<aaabbb>
x264 defaults to closed tho. it's slightly less compression efficiency but you need it for dash
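Pulling the pieces of this thread together, a single variant encode might look roughly like the following (a sketch only: it assumes a 25 fps source and 5-second segments, so -g 125, and every number is illustrative rather than a recommendation):

  ffmpeg -i in.mp4 -c:v libx264 -preset slow -pix_fmt yuv420p -crf 21 -maxrate 6M -bufsize 12M -g 125 -x264-params scenecut=0 -an variant.mp4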
<kepstin>
(youtube's bandwidth estimation routinely underestimates my connection speed, but it still guesses a value high enough to play videos at max available quality)
<FlorianBad>
kepstin, when quality gets bad (like crf 25) I can clearly notice the keyint like a clock on the picture, when I set keyint to fps I can clearly see everything change every second... So for that reason I think I'll just use a fps/2 keyint (500ms) which looks a lot better
<aaabbb>
fps/2 is a very bad iea
<aaabbb>
idea
<aaabbb>
that's extremely short gop, plus switching often is not good anyway
<aaabbb>
FlorianBad: the sudden quality change at each keyint is caused by the qp ratio for i and p/b frames
<FlorianBad>
that doesn't mean I will switch between bitrates, it just means these I-frames won't be so obvious on the screen
<kepstin>
if you see keyint causing issues in crf mode, that might mean your vbv limits are too low causing it to run in bandwidth limited mode instead of crf mode :(
<aaabbb>
just set the qp to be higher for i frames and lower for p/b frames and that sudden "jump" as the i frame hits will go away
<kepstin>
the defaults combined with crf mode _should_ cause reasonably consistent quality over a video
<FlorianBad>
kepstin, I had no vbv option set thus far, and -crf 25 with keyint=fps looked really bad
<aaabbb>
FlorianBad: set it larger than fps then
<aaabbb>
like fps*5 or fps*10, and if you keep getting that "jump", change the qp for i frames
<FlorianBad>
But it probably depends on the type of video. I was testing with that ARRI Alexa footage of the woman that I posted a few days ago
<FlorianBad>
(so little subtle movements of the hair... these keyframes made that terrible)
<aaabbb>
reducing keyint is the wrong solution
<FlorianBad>
aaabbb, hmm, interested in knowing what that means: "qp to be higher for i frames and lower for p/b frames" :)
<aaabbb>
FlorianBad: so the reason it "jumps" is because an i frame is like a still picture, like a jpeg. p frames and b frames only hold differences. a lower i frame qp (ie higher quality i frame) means that you need less bits in p/b frames to keep quality, a lower quality (higher qp) i frame means that you need more bits in the p/b frames to "make up for that"
<aaabbb>
the reason you see a jump is because the quality of the p/b frames is kinda low, so you get errors that accumulate, until it hits an i frame and it suddenly jumps up in quality
<kepstin>
well, it's actually more common to get the other way around, where the i frame is kinda low quality, and the p/b frames 'repair' it over time - that's caused by bitrate limits.
<kepstin>
usually caused by*
<aaabbb>
kepstin: yeah but if he's getting those sudden jumps in quality, it's not that
<kepstin>
since i frames are _huge_. in most cases, the i frame is the majority of the gop, and the predicted frames are tiny in comparison.
<kepstin>
so if the i frame doesn't fit in the vbv buffer bandwidth budget, then it has to be encoded smaller (lower quality), but the predicted frames can still be encoded full quality.
<aaabbb>
that's also why having a very small gop like just 12 frames is a bad idea
<FlorianBad>
aaabbb, hmm so you're talking about adjusting the BALANCE in quality of these keyframes (I-frames) vs the ones in between? what parameters do that?
<kepstin>
yeah, the lower quality keyframes is more commonly an issue in _live_ video.
<furq>
ipratio
<aaabbb>
FlorianBad: yeah the balance, if you're having problems with the keyframe causing an unpleasant jump in quality
<aaabbb>
default ipratio is 1.4 for x264
<kepstin>
i.e. if a keyframe is qp=23, then the pframe will be qp=32.2
<kepstin>
(note that while qp and crf use the same number scale in libx264, they don't mean the same thing)
<aaabbb>
FlorianBad: intra refresh is also a possibility
<aaabbb>
that will do gradual i frame like updates instead of one sudden change, but that also slightly decreases compression efficiency
<kepstin>
hmm, intra refresh is more problematic than it's worth imo
<kepstin>
you don't get "keyframes" then, so you can't quality switch
<FlorianBad>
I don't see an ipratio option in `ffmpeg -h encoder=libx264`, so it's not from libx264?
<aaabbb>
kepstin: ahh good point
<furq>
it's not exposed in ffmpeg
<aaabbb>
FlorianBad: it is but you gotta use x264-params
<furq>
^
<FlorianBad>
ah I see, the other extra params, thanks
<aaabbb>
the ffmpeg options just make it easier to do it with flags to ffmpeg, like "-direct_pred 1" is just an alternative way of doing "-x264-params direct=spatial"
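So an ipratio experiment would be passed through the same way, something like this (the 1.6 value is just a number to test with, not a recommendation, and the file names are placeholders):

  ffmpeg -i in.mp4 -c:v libx264 -crf 21 -x264-params "ipratio=1.6" -c:a copy out_ipratio.mp4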
<kepstin>
i wanted to use intra refresh in some live stuff, but i had issues with some decoders/players not being able to start the stream when joining late because they never saw a frame marked as a keyframe, so they never started decoding.
<kepstin>
that was back in the days of flash tho, i dunno if it's gotten better
<furq>
seems like it would be totally useless if that was still the case
<aaabbb>
FlorianBad: just adjust the ratio until the subjective quality improves. for testing, i recommend 2pass abr so that you can do actual comparisons of quality at the same bitrate
<aaabbb>
even if you'll use crf+vbv in production
<furq>
you should still use vbv with 2pass
<aaabbb>
furq: even for just testing to compare ipratio?
<furq>
i meant for streaming but it's probably better to use your actual settings for testing
<FlorianBad>
furq, but 2pass means no crf ?
<aaabbb>
2pass is abr not crf
<aaabbb>
but i just meant for testing
<kepstin>
the quality and bitrate you get from crf depends on other options - it's not "fixed". so in order to do a fair comparison, you need to fix one of those things, and fixing bitrate is easier to do, and more relevant for streaming applications
<aaabbb>
^
<FlorianBad>
aaabbb, yeah but he doesn't mean for testing
<aaabbb>
i think he was replying to me saying 2pass abr
<FlorianBad>
ok
<aaabbb>
like crf 20 with ultrafast will look way way worse than crf 21 with veryslow
<aaabbb>
and changing ipratio counts as changing settings
<FlorianBad>
but at the same time, for some videos it's absurd to have a high bitrate that isn't necessary because nothing moves
<FlorianBad>
kepstin, so I should not set qp value, just crf and ipratio?
<kepstin>
using too small of a keyint means that you need high bitrate even if nothing moves, since it can't make use of the fact that nothing moves to improve compression.
<aaabbb>
FlorianBad: never ever set qp directly
<aaabbb>
that's cqp which is inferior to crf
<kepstin>
unless you're trying to use lossless mode :)
<aaabbb>
well -crf 0 also works for lossless (except 10 bit)
<aaabbb>
FlorianBad: that's not setting qp directly, that's just setting some biases
<FlorianBad>
kepstin, ok, will probably not need a long keyint once I figure out the ratio thing
<FlorianBad>
aaabbb, ok I wasn't planning to do it, just wanted to confirm
<kepstin>
FlorianBad: a keyframe (idr frame) is a point in the video at which it's not allowed to reference (reuse data from) earlier frames. so the more often you have an idr frame, the less the codec is able to take advantage of re-using data from parts of the frame that doesn't change.
<aaabbb>
^ and at the extreme end where you are i frame only, you're no better than mjpeg
<kepstin>
so having larger gop size is important for reducing the bitrate needed for a given quality of video, especially if there's limited motion.
<FlorianBad>
keyint=1 :)
<furq>
you are still quite a bit better than mjpeg
<kepstin>
i mean, x264 has better intra-only compression than jpeg, so it's not _as_ bad as mjpeg ;)
<furq>
see
<kepstin>
you'd have to go back to mpeg-1/2 for intra-only to be the same as mjpeg.
<FlorianBad>
kepstin, what I'm wondering is at what point the difference between a given keyint (gop) and another 50% greater one becomes something like 3% in savings...
<furq>
you'd have to test that
<furq>
there's not much benefit to using short gops for vod
<furq>
but i guess the thing you want to do is one of the times it would make some kind of sense
<aaabbb>
it totally depends on the amount of motion
LionEagle has quit [Read error: Connection reset by peer]
<kepstin>
using longer gops shouldn't ever make quality per bit _worse_, since modernish codecs like h264 have the ability to encode individual parts of frames as non-predicted blocks even in the middle of a gop on high motion stuff.
<kepstin>
(the purpose of scenecut detection is simply to make sure that the codec doesn't encode a bunch of non-predicted blocks near the end of a gop right before an idr frame, resulting in wasting bandwidth by immediately re-sending those blocks; instead it moves up the idr frame)
<FlorianBad>
well another reason I don't want too long keyint is because when the user moves the playback bar it will show some gray crap for a long while
<furq>
it should never do that
<FlorianBad>
so you might say, just force them to start playing at the nearest keyframe, but then it can be a little annoying if that's 10s accurate...
<furq>
you don't need to do that either
<FlorianBad>
hmm?
<FlorianBad>
ok, so a proper player will go back and grab the previous I-frame?
<furq>
the decoder needs to do that but you don't need to present the frames in between
<aaabbb>
if it's doing gray crap then there's a bug
<furq>
it would probably be green but yeah
<FlorianBad>
well, so far I just noticed that on ... VLC ;)
<kepstin>
seeking should normally only cause a pause for buffering, which means that the last shown video frame will be held until enough data is available to decode the frame being seeked to, at which point that frame will be shown and playback will resume.
LionEagle has joined #ffmpeg
<aaabbb>
if the seek isn't in the buffer ofc
<kepstin>
(youtube buffers up to 70s ahead in my experience)
<kepstin>
buffering ahead gives you better ability to dynamically quality switch, not worse, since you can try downloading higher quality replacements for upcoming already downloaded stuff and improve your bandwidth estimation - and then give up and keep playing the existing quality without a "bump up then down" if there's not enough bandwidth available :)
<aaabbb>
but yes it has to grab the previous i frame (specifically idr frame, aka an i frame in a closed gop), it's impossible to decode a p or b frame without referencing the i frame
<aaabbb>
kepstin: doesn't av1 have some kind of magic frame type that lets it send low and high quality at once? or am i confused with a dream i had?
<furq>
maybe you're thinking of lcevc
<aaabbb>
oh i was thinking of S frames in av1
deetwelve has quit [Quit: null]
deetwelve has joined #ffmpeg
<kepstin>
the weird alternate frame stuff in av1/vp9 are a workaround to avoid patents on re-ordered frames
<aaabbb>
and yet still some assholes created a patent pool to extort people
<FlorianBad>
Ok so I think I have the mp4 encoding pretty much figured out (to tweak later if my tests once in the player have issues). So now my next step is to slice that into dash with -c copy -f dash. Will start with that tomorrow (10pm here)
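A rough sketch of what that packaging step could look like with ffmpeg's dash muxer, assuming a reasonably recent ffmpeg and separate video/audio inputs; the segment length, file names, and adaptation-set split are placeholders:

  ffmpeg -i variant_video.mp4 -i audio.m4a -map 0:v -map 1:a -c copy -f dash -seg_duration 5 -adaptation_sets "id=0,streams=v id=1,streams=a" manifest.mpd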
<FlorianBad>
Thanks again aaabbb, furq, and kepstin for your very valuable help! :)
<aaabbb>
FlorianBad: out of curiosity, what ipratio did you find helpful in your case?
<FlorianBad>
I haven't tested yet, but will soon. For now I just put ipratio=1.0:pbratio=1.0 in my script with a TODO to test that later in details
<furq>
pbratio doesn't do anything unless you disable mbtree
<furq>
please don't disable mbtree
<FlorianBad>
ah, ok thanks
<aaabbb>
FlorianBad: pbratio just picks between p and b frame qp, but both p and b frames are inter frames so they aren't a problem. if you want to have more b frames, change b_bias
<aaabbb>
as far as your issue is concerned all that matters is the ratio between i frames and non i frames
<FlorianBad>
quite frankly I don't even understand the difference between b and p frames ;)
<FlorianBad>
right, I will test tomorrow to see
<furq>
p predicts from past frames, b predicts from both past and future frames
<furq>
b = bidirectional
<aaabbb>
and having lots of b frames consecutive usually improves compression efficiency (but not always). badapt will adaptively place b frames in the optimal places. just let x264 do its job that way
<furq>
yes
<furq>
you can get very deep into the weeds with x264 settings but it's mostly not worth it
<FlorianBad>
furq, AH! thanks, well then why use P frames at all instead of only B ?
<FlorianBad>
ah
<aaabbb>
because eventually it's less efficient
<FlorianBad>
I see, because B already gives info about next frames which means more data?
<furq>
it requires info from subsequent frames which means they already have to be in the decoded picture buffer
<furq>
which is potentially only a few frames in size
<FlorianBad>
and not the last ones, right?
<aaabbb>
the last of a gop is always a p frame (for closed gop)
<FlorianBad>
which also means I don't get their benefit if keyint is too small, right?
<aaabbb>
exactly
<FlorianBad>
ok
<furq>
well you get a bit less benefit
<FlorianBad>
keyint and bufsize I guess
<furq>
bufsize makes no difference
<furq>
that's not the same thing as the DPB
<FlorianBad>
ah ok
<aaabbb>
you can have up to 16 b frames consecutively, so it's kinda silly to have a gop of only 12 frames
<furq>
the DPB size is the level
<aaabbb>
ofc badapt=2 (the default on higher presets) will rarely achieve 16 consecutive because it's not usually ideal, but still
<furq>
very high bframe values will really slow down the encode
<furq>
often for little benefit
<furq>
although if you have a lot of static shots then it works out
<furq>
the x264 post-encode log will show how often it used n consecutive bframes so you can see if it was worth it
<JEEB>
x264 was basically the point where encoding was simplified. in most cases you'd tweak two values: preset for how much time you want to utilize for compression or analysis, CRF for the quantizer range (compression vs quality)
<JEEB>
man I still recall before x264 got presets *shrug*
<furq>
yeah 9 times out of 10 i just use preset, tune and crf
<aaabbb>
speaking of post-encode log, i noticed that it was using 100% spatial 0% temporal direct mvs, but when i do -direct_pred spatial, the bitrate actually goes *up* when i expect there would be no change (or even lower bitrate with less metadata), why is that?
<aaabbb>
but when there's 0% weightp, when i turn off weightp, there's naturally less overhead and bitrate goes down
<JEEB>
pick like 2100 frames, minute or two of content that more or less represents what you're encoding (-ss SECONDS and -t SECONDS can be utilized to limit the encode to not the full input with ffmpeg cli). figure out the slowest preset you're willing to take, then adjust CRF starting with 23 (default) and going down if it looks bad, and up if it looks good.
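A sketch of that kind of quick test encode; the seek offset, duration, preset, and crf are placeholders to adjust per the advice above:

  ffmpeg -ss 300 -i in.mp4 -t 90 -c:v libx264 -preset slower -crf 23 -an test_crf23.mp4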
<furq>
or just encode the whole thing with preset veryslow tune film crf 19
<furq>
and then if it's too big then decide you've already spent too much time and just leave it
<JEEB>
I don't believe in magical CRF values :D
<JEEB>
I might have used 19.5 for 720p24 animation at 10bit, but that's due to actually testing first with similar content.
<furq>
you should try 19
<furq>
it's pretty good
<JEEB>
I've used that for other stuff too :P bottom line, don't like magical numbers. at least it's not 18 that some people just parrot :D
<CounterPillow>
nah man 19.14 is the absolute peak
<furq>
well 18 is just silly
<JEEB>
also I've dealt with some dark SD content where I've had to go bonkers with CRF 14 because x264 would otherwise decide it can compress the dark spots a bit too much
<JEEB>
although I think that was with 8bit, should check that stuff with 10bit at some point
<furq>
you'll be using zones next
<furq>
slippery slope
<aaabbb>
furq: in testing out the absolute best i could get with lossless x264, i started using zones extensively
<aaabbb>
just for fun
<JEEB>
furq: I've done debanding with manual area definitions :P
<JEEB>
"oh this shotgun barrel has banding in it from the master"
<furq>
well that's just necessary
<JEEB>
not sure if area is the correct word, basically part of the frame from that range of frames :D
<furq>
spend days on your vapoursynth script if you have to
<furq>
and then guess what: it's crf 19
<JEEB>
depends on the content vOv
<JEEB>
also it's funny how my "how bad can it go until I can't take it" threshold has, I think since 2006, been around 0.5-6fps (I used to have a dual core AMD Turion laptop). and x264 by now can easily do a 1080p50 live stream at preset veryslow :D
<JEEB>
newer formats having more options/variables at least let me go slow enough that I dislike it :D
rv1sr has joined #ffmpeg
<aaabbb>
JEEB: you can always set merange to a super high value if x264 is not slow enough ;)
rish has joined #ffmpeg
<rish>
Does anybody know if I can install ffmpeg 6.1.1 on Debian 12 from some repo?
Blacker47 has joined #ffmpeg
sm1999 has joined #ffmpeg
<JEEB>
possibly but I'd probably just grab an automated linux 64bit build from BtbN 's github setup
<JEEB>
since BtbN is part of the community
<CounterPillow>
Yeah grab a static build, replacing your distro ffmpeg would involve replacing everything that links against its libraries, and not replacing your distro ffmpeg with a deb packaged build that ships the libraries may conflict with the distro ffmpeg depending on sonames and such
<LimeOn>
then you can make an alias so you can use that ffmpeg version easily from terminal
TuxJobs has joined #ffmpeg
<TuxJobs>
Problem: In mpv, I navigate to a black frame and press the "c" button. This calls a script I've made which uses FFMPEG to take the video file, cut out from the given timestamp to the end, and save this as a new video. Frustratingly, it sometimes doesn't work right. Even though the video frame in mpv was 100% black, the first frame(s) in the output video has the nagscreen which flicker by. Very annoying. It seems to work on MOST videos, but behaves like
<TuxJobs>
this for some. I've also noticed that there can be horrible audio/video desync issues caused by doing this. What could be the reason for this? Both the videos that work and those that don't are typically .mp4. Example command: `ffmpeg -loglevel quiet -y -ss '00:00:03.133' -i '/home/me/Desktop/test_in.mp4' -c copy '/home/me/Desktop/test_out.mp4'`
<CounterPillow>
<CounterPillow> you are doing a stream copy
<CounterPillow>
<CounterPillow> you cannot cut frame perfectly
<CounterPillow>
<CounterPillow> modern codecs do not work like this
<TuxJobs>
Why not?
<TuxJobs>
It would just "play forward" from the last keyframe or whatever, internally?
<TuxJobs>
What is the problem exactly?
<kepstin>
you can't "play forward" from the last keyframe unless you copy starting at the keyframe before the cut point into the video
<kepstin>
and then in most formats there's no way to say "actually don't show the start of the video, but hide it until time T"
<TuxJobs>
That's what I mean: FFMPEG would do this at the time when it's converting the video to the new, shorter video.
<furq>
that would work if you were reencoding
<TuxJobs>
Hmm.
<kepstin>
but you can't if you're copying, that needs to re-encode
<kepstin>
ffmpeg does do that by default when you re-encode, in fact.
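So a re-encoding version of that cut might look roughly like this (the crf and audio settings are placeholders; the paths are just the ones from the original command):

  ffmpeg -y -ss '00:00:03.133' -i '/home/me/Desktop/test_in.mp4' -c:v libx264 -crf 18 -c:a aac -b:a 192k '/home/me/Desktop/test_out.mp4'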
<TuxJobs>
Well, is there some way to know where a keyframe begins, then, in mpv? Would be nice to be able to instead of moving frame by frame (as I do right now), move "keyframe to keyframe" instead.
<furq>
you can disable hr-seek
<TuxJobs>
Well, re-encoding inevitably means quality loss, no?
WereSquirrel has quit [Ping timeout: 260 seconds]
<furq>
with hr-seek=absolute relative seeks will always be to a keyframe
<furq>
unless the key is bound to "seek 5 exact" etc
<MG2021>
How can I cut a .TS (AAC) file without losing quality?
* FlorianBad
testing various ipratio values to see if it fixes the visible keyint changes
<FlorianBad>
I couldn't see much difference between 1.4 and 1.0 so I went to 0.3 to see what happens and it's 10x worse, which tells me that maybe the problem was the reverse! (now testing higher than 1.4 to see)
<FlorianBad>
in other words maybe the I frames had a bad quality which was immediately fixed by the following P-frames, until the next crappy I frame
Blacker47 has quit [Quit: Life is short. Get a V.90 modem fast!]
<FlorianBad>
My guess is that it depends on the type of motion, if everything is very fixed then it might make sense to have nice P-frames to redraw these subtle changes in pixels, but if that's some Tom Cruise stuff then these I-frames might be more important because they change so much... ?
NaviTheFairy has joined #ffmpeg
<FlorianBad>
But in my videos things will almost always move very slowly, so I might need a high ipratio (now testing 1.7 and 2.5 now on that same ARRI Alexa footage I got from here in 3840x2160 : https://www.youtube.com/watch?v=ccc9-zhGPbo )
deus0ww has quit [Ping timeout: 268 seconds]
<FlorianBad>
(also I meant "B-frames" for the Tom Cruise stuff above)
<FlorianBad>
P
deus0ww has joined #ffmpeg
<FlorianBad>
Yea, so 1.7 is still noticeable (not far from 1.4 after all) but 2.5 becomes excellent, especially considering that I'm testing this with -crf 25
<FlorianBad>
(aaabbb, furq)
<FlorianBad>
So the conclusion (I think) is that if there's a lot of motion/action ipratio should drop to give more quality to the P-frames between keyframes (makes sense), but if it's very smooth camera movement, then these I-frames should be better quality or they will be crappy while the P-frames suddenly become nicer when they don't need to be that nice since very little changed
<FlorianBad>
Makes sense?
<FlorianBad>
Well, one thing I didn't realize is that increasing ipratio w/ crf increases bitrate significantly, so it's not just a "ratio"
<FlorianBad>
Ok so I guess I should have listened to aaabbb, kepstin, and furq ;) Indeed... increasing the keyint (gop) solves the problem and results in much better quality without raising the ipratio. After all that ipratio is directly dependent on keyint since there will be an "I" every keyint-1 "P"
<kepstin>
it's really only indirectly dependent, not directly dependent.
<kepstin>
ipratio allows you to increase (or decrease) the relative size of predicted frames compared to i frames, so it should actually make a bigger difference in terms of file size with longer gop.
<kepstin>
if you're using 2-pass mode with target bitrate, x264 will automatically adjust the overall quality of the video to compensate for predicted frames taking more space; with crf mode you just get a different video size.
<FlorianBad>
yeah, ok
gchound has quit [Quit: WeeChat 3.8]
HarshK23 has quit [Quit: Connection closed for inactivity]
<FlorianBad>
kepstin, so if my movie has a lot of confetti it could be good to set a *lower* ipratio so that these P-frames in between I-frames can get some extra quality?
lavaball has joined #ffmpeg
<kepstin>
hard to say; the default is "good on most content on most settings", and you need to test on specific content with specific settings when changing it.
<FlorianBad>
ok, let me find a confetti/slow video :D haha
five618480 has quit [Remote host closed the connection]
five618480 has joined #ffmpeg
<FlorianBad>
lol there's some 10-hour snow falling footage on Youtube... How to piss-off Google :)