BtbN changed the topic of #ffmpeg to: Welcome to the FFmpeg USER support channel | Development channel: #ffmpeg-devel | Bug reports: https://ffmpeg.org/bugreports.html | Wiki: https://trac.ffmpeg.org/ | This channel is publically logged | FFmpeg 7.0 is released
lemoniter has quit [Ping timeout: 246 seconds]
Suchiman has quit [Quit: Connection closed for inactivity]
<aaabbb>
when re-encoding video or audio to a lower bitrate, how can i determine the point where there is no additional generation loss (besides the damage caused by lowering the bitrate)?
<aaabbb>
so if i have 500kbps audio and i re-encode it down to 50kbps there is quality loss. but let's say instead i go 500->200->50kbps. i can imagine that the final 50kbps retains so little that 500->200->50 and 500->50 would be effectively identical, right?
<aaabbb>
having a hard time formulating my exact question in english, but i want to know how to find out at what point generation loss is "masked" by a lower-bitrate final encode
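One way to check this empirically for the video case is to produce both chains and compare each against the untouched source with the ssim filter; a rough sketch, where source.mp4, direct.mp4 (the 500->50 style encode) and chained.mp4 (the 500->200->50 style encode) are placeholder filenames:
  ffmpeg -i direct.mp4 -i source.mp4 -lavfi ssim -f null -
  ffmpeg -i chained.mp4 -i source.mp4 -lavfi ssim -f null -
If the two reported SSIM scores come out essentially equal, the extra generation is being masked by the low final bitrate.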
EmberCrest has joined #ffmpeg
<EmberCrest>
good eeeeeeeeevening, ff-friends:)
<EmberCrest>
my project is using libvpx-vp9 to stream files over RTSP using the -re flag. works like a charm, but *only* if I specify -compression_level=4 -cpu-used=6
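As a point of reference, a minimal sketch of that kind of invocation, assuming the realtime deadline is what lets libvpx keep up; the input, bitrate and RTSP target are placeholders (audio omitted):
  ffmpeg -re -i input.mp4 -c:v libvpx-vp9 -deadline realtime -cpu-used 6 -row-mt 1 -b:v 2M -an -f rtsp rtsp://example.org/stream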
<aaabbb>
well, libvpx is a very slow encoder
<EmberCrest>
ah, I did not realize that. yeah streaming only reaches 1x if I give it performance options like that.
<aaabbb>
for streaming, a lot of people use either software h264 (libx264) or a hardware encoder for hevc. you can also use av1 with libsvtav1; although av1 is slow, it has a lot of options that make it usable for real-time streaming
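A sketch of the kind of low-latency libx264 push being suggested here; the URL and rates are placeholders rather than recommended values:
  ffmpeg -re -i input.mp4 -c:v libx264 -preset veryfast -tune zerolatency -b:v 2500k -maxrate 2500k -bufsize 5000k -c:a aac -f rtsp rtsp://example.org/stream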
<EmberCrest>
I'm very used to using x264 for RTMP-- I've switched to RTSP ingest -> WebRTC egress. Libvpx has been the most cooperative.
<aaabbb>
the problem with libvpx is that it requires 2pass encoding to take advantage of many of its compression features, which obviously isn't possible doing real-time streaming
EmberCre1t has joined #ffmpeg
<EmberCre1t>
crap wifi connection, at a coffee shop. thx aaabbb!
EmberCrest has quit [Remote host closed the connection]
EmberCre1t has quit [Client Quit]
five6184803391 has quit [Remote host closed the connection]
five6184803391 has joined #ffmpeg
Icedream has quit [Quit: A lol made me boom.]
Icedream has joined #ffmpeg
yans has quit [Quit: Let us play... Hide and Slay!]
Mister_D has joined #ffmpeg
MisterMinister has quit [Ping timeout: 264 seconds]
xx has quit [Ping timeout: 260 seconds]
realies has quit [Quit: Ping timeout (120 seconds)]
realies has joined #ffmpeg
YuGiOhJCJ has joined #ffmpeg
lavaball has joined #ffmpeg
DHE has quit [Remote host closed the connection]
DHE has joined #ffmpeg
DHE has quit [Remote host closed the connection]
DHE has joined #ffmpeg
Renb has quit [Quit: $WITTY_QUIT_MESSAGE]
Rena has joined #ffmpeg
coldfeet has joined #ffmpeg
HarshK23 has joined #ffmpeg
realies has quit [Quit: ~]
realies has joined #ffmpeg
rvalue has quit [Read error: Connection reset by peer]
rvalue has joined #ffmpeg
<vlt>
aaabbb: I don’t see any benefit from going 500->200->50kbps. On the contrary. I think re-encoding with a lossy codec will *always* introduce generation loss. Or … I didn’t correctly follow your train of thought :D
<aaabbb>
vlt: there's no benefit, i know. but take a hypothetical example, an extreme one: a 50mbps video->40mbps->100kbps. the end result already has sooo little left to work with that it would be just as bad as 50mbps->100kbps
<vlt>
aaabbb: My gut says 50Mbps->40Mbps->100kbps will be slightly worse than 50Mbps->100kbps.
<aaabbb>
then s/100kbps/20kbps/ - it's just that i imagine there comes a point where the loss caused by the lower bitrate overshadows the generation loss
Rue has quit [Quit: WeeChat 4.3.3]
Rue has joined #ffmpeg
rv1sr has joined #ffmpeg
jemius has joined #ffmpeg
Kabouik has joined #ffmpeg
<Kabouik>
Hey #ffmpeg. I have an intro.mkv file where the audio is terrible, and a global.mp4 file where the audio is complete and good, but the first 30 seconds of the video should be replaced by that from intro.mkv (without adding the audio of intro.mkv). How would I do that?
<Kabouik>
I tried that (https://superuser.com/a/1625781) and it worked, except the overlaid intro.mkv now has a terrible framerate and even freezes, even though it plays smoothly on its own
rvalue- has joined #ffmpeg
<Kabouik>
In other words, I just want to replace the beginning of global.mp4 (video part only) with the video of intro.mkv, in full.
rvalue has quit [Ping timeout: 246 seconds]
<aaabbb>
Kabouik: unless you can cut at a keyframe and they were encoded at nearly the same settings, you can't losslessly do this. you'll have to re-encode
<Kabouik>
I don't mind re-encoding
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
<Kabouik>
I am trying after re-encoding intro.mkv using x264, then the setpts method
rvalue- is now known as rvalue
lavaball has quit [Remote host closed the connection]
<Kabouik>
Hum somehow `ffmpeg -i original.mp4 -i intro.mp4 -filter_complex "[1]setpts=PTS-STARTPTS+0/TB[fg];[0][fg]overlay=enable='between(t,0,30)'[v]" -map "[v]" -map 1:a -c:a copy final.mp4` does the opposite of what I wanted: it used the audio from intro.mp4 instead of original.mp4, but otherwise did what I wanted on the video part
<Kabouik>
Actually not, the video is still low framerate (and even frozen) on the beginning of final.mp4
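For what it's worth, the audio mix-up in the command above comes from -map 1:a (input 1 is intro.mp4); mapping 0:a keeps original.mp4's track. The choppiness may be a frame-rate mismatch between the two inputs, so normalising the intro's rate before the overlay is one guess worth trying; the 30 fps value below is an assumption, and the filenames are the same placeholders as above:
  ffmpeg -i original.mp4 -i intro.mp4 -filter_complex "[1:v]fps=30,setpts=PTS-STARTPTS[fg];[0:v][fg]overlay=enable='between(t,0,30)'[v]" -map "[v]" -map 0:a -c:a copy final.mp4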
Kei_N has joined #ffmpeg
<Kabouik>
Would you have an idea aaabbb? I need to send the video (conference video, deadline approaching due to timelag)
<aaabbb>
unfortunately i'm a bit drunk at the moment, so it's probably best someone else answers. just stick around and you'll get an answer with patience, i'm sure
<Kabouik>
Understood, good luck to you then! :]
<aaabbb>
thanks lol
<Kabouik>
I'll probably just have to use kdenlive for today, spent too much time trying to do it in ffmpeg (but I did many other useful steps for my video using ffmpeg so I'm happy still).
jemius has quit [Quit: Leaving]
blb has quit [Ping timeout: 252 seconds]
lavaball has joined #ffmpeg
blb has joined #ffmpeg
<Kabouik>
Damn, Kdenlive created a 600MB file where ffmpeg did fine with a 40MB one, the only difference being these 30 seconds I need to change at the beginning, but can't do with ffmpeg so far
<furq>
did you try just concat without setpts
<furq>
you really shouldn't need to reencode the whole thing
<Kabouik>
I tried concat, furq, but probably did it wrong; it resulted in a file more than an hour long when it should have been 15 minutes
<Kabouik>
I probably failed to re-encode the first part properly; I didn't know which exact parameters I needed to replicate, and I admit I'm far (far) from fluent with ffmpeg, I am only good enough to replicate posts I see on SO.
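One shape the concat route could take, assuming the 30-second mark falls on (or near) a keyframe in global.mp4 and the re-encoded intro is made to match its codec, resolution and framerate; the x264 settings here are guesses, not the parameters of the real file:
  # re-encode the intro's video and mux in the first 30 s of global.mp4's audio
  ffmpeg -i intro.mkv -i global.mp4 -map 0:v -map 1:a -t 30 -c:v libx264 -c:a copy part1.mp4
  # cut the remainder of global.mp4 without re-encoding (only frame-exact if 30 s lands on a keyframe)
  ffmpeg -ss 30 -i global.mp4 -c copy part2.mp4
  # list.txt contains two lines: file 'part1.mp4' and file 'part2.mp4'
  ffmpeg -f concat -safe 0 -i list.txt -c copy final.mp4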
Some_Person has joined #ffmpeg
Tinos has joined #ffmpeg
vlm has joined #ffmpeg
HerbY_NL has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
MrZeus has joined #ffmpeg
lemoniter has joined #ffmpeg
Suchiman has joined #ffmpeg
coldfeet has quit [Remote host closed the connection]
MrZeus has quit [Read error: Connection reset by peer]
beaver has joined #ffmpeg
Kaybi__2 has joined #ffmpeg
jagannatharjun has quit [Quit: Connection closed for inactivity]
kasper93 has quit [Ping timeout: 252 seconds]
<Kaybi__2>
Hello folks! I can't seem to find a way to achieve something, so I came here to ask for a little bit of help X)
<Kaybi__2>
I'm trying to create segments of a video file on demand to build an HLS stream.
<Kaybi__2>
I'm extracting keyframes from my file using ffprobe which gives me this sample output:
<Kaybi__2>
The segment generated is not the same as the one the command above generated (duration is 3s vs 0.1s), as if my keyframe was not being used...
<Kaybi__2>
So the question is: How can I create a single segment using the exact same keyframes that ffmpeg is using when generating the whole playlist?
<Kaybi__2>
Hope this is clear enough and that someone will be kind enough to help me out, thanks for reading! =D
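One possible approach, sketched with a placeholder filename and made-up timestamps: list the keyframe times with ffprobe, then cut a single segment between two consecutive keyframes with stream copy.
  # list keyframe timestamps (decodes only keyframes)
  ffprobe -v error -select_streams v:0 -skip_frame nokey -show_entries frame=pts_time -of csv=p=0 input.mp4
  # cut one segment starting at a keyframe time taken from that list (times are placeholders)
  ffmpeg -ss 12.512 -i input.mp4 -t 4.004 -c copy -f mpegts segment003.ts
If the cut points have to line up exactly with what the hls muxer produced, another option is to pin keyframes at fixed intervals when encoding, e.g. -force_key_frames "expr:gte(t,n_forced*4)", at the cost of a re-encode.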
kasper93 has joined #ffmpeg
lavaball has quit [Remote host closed the connection]
johnjaye has quit [Ping timeout: 252 seconds]
johnjaye has joined #ffmpeg
bitbinge has quit [Quit: bitbinge]
xx has joined #ffmpeg
Rena has quit [Ping timeout: 244 seconds]
rsx has joined #ffmpeg
GooseWing has joined #ffmpeg
<GooseWing>
Hello everyone, I'm currently digging through ffmpeg's docs on two-pass h264 encoding and came to a question: does specifying the -b:v parameter automatically mean two-pass encoding?
<GooseWing>
Because I can use it without specifying -pass (and hence without running ffmpeg twice) and it results in the same file
<furq>
no it doesn't
<intrac>
anyone with knowledge about libvidstab? I'm trying to see if there's a way to calculate the motion (first pass) on a low resolution copy of the video, then apply the calculated motion (transforms) back to the original HQ video
<intrac>
but it seems the resolution can't change between the first and second passes
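For context, the usual same-resolution two-pass vid.stab flow looks roughly like this (filenames and tuning values are placeholders); the transforms in the .trf file are expressed in pixels of whatever was analysed, which is presumably why the resolution can't simply change between the passes:
  ffmpeg -i input.mp4 -vf vidstabdetect=shakiness=5:result=transforms.trf -f null -
  ffmpeg -i input.mp4 -vf vidstabtransform=input=transforms.trf:smoothing=30 -c:v libx264 stabilized.mp4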
<GooseWing>
Then what's the point of using two-pass encoding, if I can still specify a target bitrate (and hence get the desired file size) with a single pass?
<GooseWing>
The docs and the internet say I need to use two-pass to be able to control the target file size
coldfeet has joined #ffmpeg
vlm has quit [Read error: Connection reset by peer]
<intrac>
GooseWing: you can control quality and filesize with any of: 1. two-pass encoding, 2. direct bitrate control (-b:v, one-pass), 3. constant quality (the -crf option, also one-pass)
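Sketches of the three, with placeholder rates and filenames; the two-pass form is the one GooseWing's docs are referring to for hitting a target file size accurately:
  # 1. two-pass
  ffmpeg -i input.mp4 -c:v libx264 -b:v 2M -pass 1 -an -f null /dev/null
  ffmpeg -i input.mp4 -c:v libx264 -b:v 2M -pass 2 -c:a aac output.mp4
  # 2. one-pass target bitrate
  ffmpeg -i input.mp4 -c:v libx264 -b:v 2M -c:a aac output.mp4
  # 3. constant quality
  ffmpeg -i input.mp4 -c:v libx264 -crf 20 -c:a aac output.mp4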
<GooseWing>
1. and 2. will be the same?
<intrac>
no, I don't think so.
<intrac>
two-pass should give better estimation/calculation of the filesize and bitrate, especially for content with a lot of variation in movement and complexity
<intrac>
by comparison, one-pass encoding (option 2. above) can sometimes struggle to stay close to the bitrate given by the user
<intrac>
it might over or undershoot the target given with -b:v
<GooseWing>
Well, I just figured out why I get two identical files with "direct bitrate control" and two-pass encoding - it's because of h264_nvenc
<intrac>
so if I understand these options correctly, two-pass should give more control and better allocate the bitrate to slow/fast sections of the video
<GooseWing>
I just tried libx264 and it gives two different files, plus the two-pass result is bigger
<intrac>
aah
<intrac>
I'm not really familiar with h264_nvenc other than using it to take the load off my CPU
<intrac>
"Multipass in nvenc is internal and not like x265, if you use p7 its already enabled, p7 is the highest quality, slow is depricated. "