devinheitmueller has quit [Quit: devinheitmueller]
Marth64 has joined #ffmpeg
jmcantrell has quit [Quit: WeeChat 4.2.1]
jmcantrell has joined #ffmpeg
Suchiman has quit [Quit: Connection closed for inactivity]
<Marth64>
Curious: what subtitle format do folks prefer for very simple positioning (e.g. top center, left, right, bottom center)? Use case is personal devices, so ideally I want a compatible and simple-ish format. Seems like ASS, or SRT with ASS positioning tags, is the way to go?
<Marth64>
I like WebVTT in concept, but it also seems bloated
JanC has quit [Ping timeout: 240 seconds]
<aaabbb>
furq: for x265 you can only do 2pass crf if you are setting vbv
realies has quit [Quit: ~]
JanC has joined #ffmpeg
realies has joined #ffmpeg
Mister_Magister has quit [Quit: bye]
<aaabbb>
noobaroo: you can improve x265 2pass performance a bit with multi-pass-opt-analysis=1. the second pass goes about twice as fast, without the quality hit that slow-firstpass=0 would cause
<aaabbb>
you can also improve 2pass quality slightly almost for free with multi-pass-opt-distortion=1
<furq>
Marth64: whatever ffmpeg supports that is easy to edit or script
<furq>
so probably ass
<aaabbb>
noobaroo: unlike x264, you have to put pass=1 and pass=2 in -x265-params; -pass 1 and -pass 2 do not work
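A minimal sketch of the two-pass libx265 run being described, assuming an input called in.mkv and a placeholder 3000k target; the multi-pass-opt options from above go into the same -x265-params string:
    ffmpeg -i in.mkv -c:v libx265 -b:v 3000k \
        -x265-params pass=1:multi-pass-opt-analysis=1:multi-pass-opt-distortion=1 \
        -an -f null /dev/null
    ffmpeg -i in.mkv -c:v libx265 -b:v 3000k \
        -x265-params pass=2:multi-pass-opt-analysis=1:multi-pass-opt-distortion=1 \
        -c:a copy out.mkv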
<Marth64>
furq: Yeah, I can't find a good alternative at this point. the players I use all use libass anyway
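For simple positioning like that, one common approach is ASS alignment override tags, which libass generally honours even inside SRT cues; the digit follows the numpad layout (8 = top center, 2 = bottom center, and so on). A made-up snippet:
    1
    00:00:01,000 --> 00:00:04,000
    {\an8}This cue renders top center

    2
    00:00:05,000 --> 00:00:08,000
    {\an6}This one middle right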
<aaabbb>
noobaroo: for higher quality, you can use a slower preset (if you have the time). in crf mode, presets can do unpredictable things to the bitrate (some parameters may actually increase it), but the quality will be higher for any given bitrate at slower presets
justache has quit [Ping timeout: 255 seconds]
justache has joined #ffmpeg
zenstoic has quit [Quit: Connection closed for inactivity]
<noobaroo>
aaabbb Thanks. So should I keep both or just use pass=1 and pass=2 in -x265-params?
<noobaroo>
Use all of -pass 1, -pass 2, and -x265-params pass=1/pass=2, or just the last two?
<aaabbb>
noobaroo: -pass does nothing for x265
<noobaroo>
Thanks.
Marth64 has quit [Ping timeout: 264 seconds]
<aaabbb>
noobaroo: how precisely do you want to hit the 4mbps limit?
evilscreww has joined #ffmpeg
Marth64 has joined #ffmpeg
evilscreww has quit [Remote host closed the connection]
<noobaroo>
I'd rather have less than 2mbps to be honest. And I'm very confused, because previously I was using CRF, not two-pass CBR, and getting less than 2mbps with "-crf 28". Then I started coming across videos that are actually giving higher x265 bitrates than the original x264 file (no idea why), unless I used a ridiculously high CRF
<noobaroo>
Now I'm experimenting with 2 pass CBR. I'd like bitrates less than 4mbps for 1080p video. The lower the better.
<aaabbb>
cbr is very very bad
<aaabbb>
only use 2pass abr, never cbr unless you have super precise requirements like you're a cable operator
minimal has quit [Quit: Leaving]
<noobaroo>
Do you have a link for me? I've never heard of 2 pass abr
<aaabbb>
2pass is 2pass abr by default
<aaabbb>
you'll see in the output something like: x265 [info]: Rate Control / qCompress : ABR-1244 kbps / 0.60
<noobaroo>
So... what should I change? Remove "-b:v 3000k" ?
<aaabbb>
no keep it there, it sounds like you're using abr, not cbr anyway
<noobaroo>
Oh yeah I see ABR-3000 kbps :)
<noobaroo>
I guess I don't know what CBR is
<aaabbb>
abr means the bitrate will hit what you put in -b:v on average. cbr means it will hit it precisely, so -b:v 1000k means every single second is exactly 1000k; complex scenes only get 1000k, which may be too little, and simple scenes still get 1000k, which may be too much. with abr, complex scenes might get 3000k and simple scenes might get 500k, but it always averages out to 1000k
<aaabbb>
if you just do -b:v and then pass=1 and pass=2 then it is abr which is good
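In libx265 terms, as far as I can tell, the difference looks like this on the command line (numbers and filenames are placeholders):
    # 2-pass ABR (second pass shown): the rate averages out to 3000k over the whole file
    ffmpeg -i in.mkv -c:v libx265 -b:v 3000k -x265-params pass=2 -c:a copy out.mkv
    # roughly CBR: the VBV is pinned to the target rate, only needed for hard streaming constraints
    ffmpeg -i in.mkv -c:v libx265 -b:v 3000k -x265-params vbv-maxrate=3000:vbv-bufsize=3000 -c:a copy out_cbr.mkv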
<noobaroo>
Thanks for your help
<aaabbb>
yw
<noobaroo>
I don't know what I changed that is causing this, but I just got an error again. I remember I got this error earlier today and don't know what caused it or how I fixed it
<noobaroo>
It finished and then at the end of the second pass it says Conversion failed!
<noobaroo>
But the video seems to work fine.
lavaball has quit [Remote host closed the connection]
justache has quit [Ping timeout: 255 seconds]
<aaabbb>
noobaroo: paste the whole thing to bpa.st
<aaabbb>
so something went wrong with the first pass
<aaabbb>
did you run out of storage space?
fling has quit [Remote host closed the connection]
<aaabbb>
noobaroo: a few small tips: move the -an to before the -i. that will prevent the audio from even being decoded during the first pass. the way it is now, it'll be decoded but the output just thrown away. also you don't need to say your conversion settings in the comment. that is already put in the hevc metadata
<noobaroo>
I have 50GB free
<aaabbb>
something went wrong with writing the stats file somehow. delete it and start from pass 1 again
fling has joined #ffmpeg
<noobaroo>
Okay. Move just the "-an" or move all "-an -f null /dev/null" to before the "-i" ?
<aaabbb>
just the -an
<aaabbb>
and you can do "-f null -" instead of "-f null /dev/null" because -f null means it will not write anything
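Putting both tips together, the first pass could look something like this (in.mkv and 3000k are placeholders; -an before -i tells ffmpeg to skip the input's audio streams entirely, and -f null - discards the encoded output):
    ffmpeg -an -i in.mkv -c:v libx265 -b:v 3000k \
        -x265-params pass=1:multi-pass-opt-analysis=1:multi-pass-opt-distortion=1 \
        -f null -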
<noobaroo>
Wow thank you
<aaabbb>
if it still gives an error about the cu stats the next time, try removing the multi-pass-opt- parameters
<noobaroo>
By delete the stats file, do you mean one of these 4 files? I have x265_analysis.dat which is 201MB, x265_2pass.log.cutree which is 27MB, x265_2pass.log which is 482KB, and ffmpeg2pass-0.log which is 0B.
<noobaroo>
It was giving me this error earlier today so I don't think it's the multi-pass-opt- parameters
<aaabbb>
yes, all of those. the ffmpeg2pass-0.log is there because you did -pass 1, which x265 doesn't support; you can delete it too
<aaabbb>
but something is wrong with the .cutree file... it gave that error the last time too?
<noobaroo>
It gave it like 12 hours ago
<noobaroo>
I get: At least one output file must be specified
<aaabbb>
try doing it with BtbN's ffmpeg build
<noobaroo>
I'm going to add back the /dev/null
<aaabbb>
noobaroo: add -
<aaabbb>
that means stdout, it's the typical way to do it. with -f null it will output nothing to stdout. same behavior, just simpler
<aaabbb>
since it says ffmpeg-git i'm guessing you compiled it from source?
<noobaroo>
Okay, so I have "-f null -" it's working
<noobaroo>
I can grab BtbN build and come back later or tomorrow if I fall asleep. Thanks for all the advice :)
<noobaroo>
Is it normal to have to delete the x265_analysis.dat and x265_2pass.log.cutree and x265_2pass.log ? I remember this morning I deleted them too, because they were taking tons of space and I wasn't even running ffmpeg
<aaabbb>
it looks to me like it's a bug because i don't see anything wrong in your commands. you can try adjusting parameters related to cutree. in a worst case you can disable cutree but that will be bad for compression efficiency
<aaabbb>
yeah it's normal to delete them
<aaabbb>
they are temporary files only needed for the second pass
<noobaroo>
The output file seems to work fine though
<aaabbb>
you mean even though it says conversion failed, you get a valid file?
<noobaroo>
Yes. And after I added the "multi-pass-opt" params it looks much clearer :) And even the timestamps are working perfectly.
<noobaroo>
Sometimes when I convert a 1min section, it will come out thinking it's the full length and unable to skip through. But I see 0 issues at all here
<aaabbb>
does it have that same error when you cut from a different spot?
<aaabbb>
e.g. -ss 50:00 -to 51:00
gvg has joined #ffmpeg
gvg_ has joined #ffmpeg
<noobaroo>
I don't know. I just started a conversion of the whole file so it can run while I'm asleep, and then tomorrow I will switch to BtbN's daily build
hightower4 has joined #ffmpeg
<noobaroo>
I'll also try to cut from different spots tomorrow. Again thanks for all of your help. I'm gonna retire for today
<aaabbb>
gl
<noobaroo>
The multi-pass-opt parameters make it look 2x clearer
System_Error has quit [Remote host closed the connection]
System_Error has joined #ffmpeg
<evilscreww>
aaabbb: yeah, so i managed to follow the steps you described before, and needless to say it worked successfully
<evilscreww>
however, i'm a little concerned about what these error messages that came up during the concat process mean https://i.ibb.co/D5T5vMW/image.png
Keshl has quit [Read error: Connection reset by peer]
Keshl_ has joined #ffmpeg
<aaabbb>
evilscreww: the final target should be a .mp4 not a .ts
<aaabbb>
22mbps for 720p h264 is pretty damn high
<evilscreww>
aaabbb: i'm puzzled, i thought you told me to convert it to ts
<evilscreww>
oh you mean the clips to be linked
<aaabbb>
evilscreww: each individual clip should be .ts, but when you merge it, turn it to .mp4
<aaabbb>
it's no harm, you can just turn the .ts into an .mp4
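For the merge step, one common form of this .ts-based workflow is the concat protocol with a stream copy into the final .mp4 (clip names are placeholders); the concat demuxer (-f concat -i list.txt -c copy merged.mp4) is an alternative when there are many clips:
    ffmpeg -i "concat:clip1.ts|clip2.ts|clip3.ts" -c copy merged.mp4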
theobjectivedad has quit [Remote host closed the connection]
jmcantrell has quit [Quit: WeeChat 4.2.1]
theobjectivedad has joined #ffmpeg
<evilscreww>
ill give it another try
noobaroo has quit [Quit: Konversation terminated!]
noobaroo has joined #ffmpeg
System_Error has quit [Remote host closed the connection]
<Marth64>
Then from there tune it to your needs/extraction/etc, if the data you need gets printed
<Marth64>
ffmpeg also has a readeia608 filter. it may or may not be helpful to you, i have no experience with it, but have found it interesting for situations like this
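A sketch of how that filter could be poked at, just to see whether any caption data shows up; it exports per-frame metadata, which the metadata filter can print (capture.ts is a placeholder input):
    ffmpeg -i capture.ts -vf "readeia608,metadata=mode=print" -an -f null -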
<infinity0>
Failed to inject frame into filter network: Cannot allocate memory
<infinity0>
Error while filtering: Cannot allocate memory
<infinity0>
this is about 10 minutes in, it was working fine up until that
<infinity0>
using amdgpu radeon rx 7800 xt on debian
<infinity0>
any advice?
<infinity0>
i'm not tied to vaapi, just was the first command line i tried that worked
<infinity0>
radeontop shows plenty of memory available at the time of the crash, 11GB / 16GB
<infinity0>
system dram also barely used, i have 64 GB in total
<infinity0>
so it's not a "out of memory" issue
<BtbN>
VRAM could somehow be so fragmented that it has no contiguous space for the rather big frames
<infinity0>
ffmpeg version 6.1.1-1 Copyright (c) 2000-2023 the FFmpeg developers built with gcc 13 (Debian 13.2.0-9)
<infinity0>
BtbN: but it was working for 10 minutes up to that
tykling has quit [Remote host closed the connection]
<infinity0>
in fact for the whole 10 minutes, vram usage was 11GB / 16GB
<BtbN>
None of that contradicts memory fragmentation
<infinity0>
er 10 minutes of video, realtime approx 2 minutes
<infinity0>
wouldn't all frames be the same size though
<infinity0>
how can the space become more fragmented as the encode progresses
<BtbN>
The more stuff gets freed and allocated, the more fragmented it gets
<infinity0>
it should just be freeing and reallocating same-sized blocks of memory, no?
<BtbN>
It's allocating more than just frames, and other applications are using the GPU as well
<BtbN>
In any case, it's a driver side issue, ffmpeg has little control over the driver reporting OOM
<BtbN>
On a loosely related note, hevc_vaapi with default parameters will produce very poor results. I don't know what parameters it takes, but you will want to investigate that.
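An untested sketch of the kind of knob worth trying with hevc_vaapi, e.g. moving it off the defaults to constant-QP; rc_mode and qp are options the VAAPI encoders expose, and the value is just a guess:
    ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -i in.mkv \
        -c:v hevc_vaapi -rc_mode CQP -qp 25 -c:a copy out.mkv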
devinheitmueller has joined #ffmpeg
<infinity0>
ok thanks, yeah i'm just a noob playing atm
<infinity0>
hmmmm actually the output has the same length as the input, so maybe the error is bogus, because it's basically at the end of the video
<infinity0>
strange
devinheitmueller has quit [Client Quit]
<frankplow>
If a sequence contains resolution changes, is there any way to get FFmpeg to output without scaling the video? The default behaviour seems to be to scale all frames to match the resolution of the first.
<infinity0>
ok yeah looks like the error is bogus, it happens at the end of any video even very short ones but the output in fact is totally fine
rv1sr has quit []
<JEEB>
frankplow: you can tell it to not insert any auto-scale filters
<infinity0>
hm missing like half a second maybe, weird
<infinity0>
probably bug, will go file
<infinity0>
very much doubt it's fragmentation related
<BtbN>
It's still the driver reporting OOM
<BtbN>
So you'll have to report this to Intel and AMD
<frankplow>
Thanks JEEB, was just missing the "auto" keyword to help grep the manual
<JEEB>
-noauto_conversion_filters
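For reference, that flag as it would sit on a command line; this is only a sketch, since the chosen output encoder and muxer still have to tolerate the changing frame sizes on their own (file names are placeholders):
    ffmpeg -noauto_conversion_filters -i multires.hevc -c:v ffv1 output.nut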
<infinity0>
can't tell if it's a driver bug or ffmpeg misinterpreting the driver, without looking at the bug. first stop is to have ffmpeg devs look at the bug
<infinity0>
looking at the code, rather
<BtbN>
The most likely cause of this is you not using an Intel card
<infinity0>
gpu drivers been fine so far on all other hwaccel apps i've been using
<BtbN>
Intel largely develops and tests the vaapi ffmpeg stuff. On Intel hardware.
<infinity0>
ah ok, so vdpau/vulkan would be better for amd
<BtbN>
AMD implements vdpau?
<infinity0>
but i think they are missing one of the things in the pipeline currently
<BtbN>
That's just old proprietary nvidia driver at this point. Everything else dropped it.
<BtbN>
And Vulkan-Encode is far from ready.
<BtbN>
The "Just work" thing on amd is AMF
<infinity0>
ah fair enough, yeah i guess amd does need more time to mature
<BtbN>
It's more like... VAAPI was designed to be a very low level interface around Intel GPUs.
<BtbN>
Something pretty much always breaks when adapting it to other stuff
<infinity0>
right, but the thing i wanted wasn't even available with other apis, so it seems like the best option currently
<BtbN>
AMF can do full transcode.
<BtbN>
And there's a scale filter for it on the ML
<infinity0>
ML = mailing list?
<BtbN>
yes
<infinity0>
right that's my point, the full pipeline of infrastructure is not yet available
iive has joined #ffmpeg
<BtbN>
AMF is just a de/encoder
<BtbN>
There will never be "every filter (AMF version)" for it
<infinity0>
does it use the gpu? i can't see it listed under -hwaccels
<infinity0>
that's the topic of my investigation atm
<BtbN>
hm? It's AMDs nvenc equivalent
<BtbN>
So, it's an ASIC on the Graphics Card
<BtbN>
vaapi uses the same hardware, just via a different api
<infinity0>
so to use amf i don't need to give a -hwaccel flag, but for other stuff i do? this is a bit confusing..
<BtbN>
amf is also using the hwaccel infra. But it's disconnected from vaapi
<BtbN>
not sure what it can be mapped to
<infinity0>
well, my debian ffmpeg 6.6 doesn't have hevc_amf or h264_amf yet, guess i'll just have to wait a bit more
<BtbN>
Yeah, AMF definitely exists and works on Linux
<infinity0>
6.1 rather
<BtbN>
You need to build ffmpeg with support for it
<infinity0>
i see ok
<BtbN>
it definitely exists in 6.1
Livio has quit [Ping timeout: 264 seconds]
<infinity0>
so you're saying regular non-gpu filters work for these _amf things but for sure they also run on the gpu?
<infinity0>
so presumably the filters are done in the cpu and only the encode is gpu?
<infinity0>
i was hoping to get a full gpu pipeline
<BtbN>
No, right now, no filters other than hwdownload/upload will work
devinheitmueller has joined #ffmpeg
flom84 has quit [Ping timeout: 260 seconds]
<infinity0>
in theory when everything is finished, how would i do a full gpu pipeline decode-scale-encode with _amf
devinheitmueller has quit [Client Quit]
devinheitmueller has joined #ffmpeg
<infinity0>
it sounds like maybe _amf is only good for decode/encode, but you can't actually process the raw frames in a more complex way on the gpu in between
<infinity0>
more custom rather, scale is not really "complex"
<BtbN>
AMF seems to also have a scaler. But that's the end of its abilities.
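A hedged sketch of the simplest way to drive the AMF encoder, feeding it system-memory frames so any filtering stays on the CPU; this assumes a build configured with --enable-amf and the AMF runtime installed, and the names and values are placeholders:
    ffmpeg -i in.mkv -vf scale=1920:-2 -c:v hevc_amf -b:v 4M -c:a copy out.mp4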
<JEEB>
the whole "only windows" thing probably comes from amfenc.c listing only D3D11 and DXVA2 frames in its AVCodecHWConfigInternal
<JEEB>
it might have support for vaapi surfaces
Ogobaga has quit [Read error: Connection reset by peer]
<BtbN>
it definitely does work on Linux
<JEEB>
it as in the AMF library (not the FFmpeg wrapper), I mean.
<BtbN>
I'm enabling it in my builds since forever
<BtbN>
on Linux
<JEEB>
yea the library and wrapper does probably work
<infinity0>
what about vdpau/vulkan, are those also hacks for amdgpus
<JEEB>
also I see official docs mentioning that you utilize d3d11va and dxva2 for hwdec with it
<BtbN>
like I said, vdpau is a thing of the past. Only proprietary nvidia ever really used it
<JEEB>
so AMF does not handle decoding, which is good
<JEEB>
(vaapi works JustFine on AMD for decoding)
<infinity0>
well fwiw my gpu using vaapi is about 4x as fast as my cpu, so i think in general it's worth it to pursue a fully-featured gpu video computation pipeline
Ogobaga has joined #ffmpeg
<BtbN>
well, what's it worth if it's 4x as fast but looks worse than what x264 could do on the CPU with a worse codec?
devinheitmueller has quit [Quit: devinheitmueller]
Ogobaga has quit [Quit: Konversation terminated!]
<infinity0>
i can't tell the difference how it looks
<infinity0>
in fact the vaapi one seems a bit sharper on this current local test
<infinity0>
very very slightly, only from pausing and staring, not from normal watching
devinheitmueller has joined #ffmpeg
tonofclay has joined #ffmpeg
tonofclay has quit [Remote host closed the connection]
echelon has quit [Ping timeout: 260 seconds]
Ogobaga has joined #ffmpeg
hamzah has joined #ffmpeg
devinheitmueller has quit [Quit: devinheitmueller]
devinheitmueller has joined #ffmpeg
vincejv has quit [Remote host closed the connection]
devinheitmueller has quit [Quit: devinheitmueller]