BtbN changed the topic of #ffmpeg to: Welcome to the FFmpeg USER support channel | Development channel: #ffmpeg-devel | Bug reports: https://ffmpeg.org/bugreports.html | Wiki: https://trac.ffmpeg.org/ | This channel is publicly logged | FFmpeg 7.0 is released
memset_ has joined #ffmpeg
memset has quit [Ping timeout: 260 seconds]
KombuchaKip has joined #ffmpeg
waleee has quit [Ping timeout: 276 seconds]
CarlFK has joined #ffmpeg
vincejv_ has joined #ffmpeg
vincejv has quit [Ping timeout: 265 seconds]
vincejv_ is now known as vincejv
MrZeus has joined #ffmpeg
Suchiman has quit [Quit: Connection closed for inactivity]
a0z has joined #ffmpeg
jarthur has quit [Quit: jarthur]
CarlFK has quit [Quit: Leaving.]
minimal has quit [Quit: Leaving]
memset has joined #ffmpeg
memset_ has quit [Ping timeout: 260 seconds]
a0z has quit [Quit: Leaving]
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
memset has quit [Remote host closed the connection]
memset has joined #ffmpeg
vincejv_ has joined #ffmpeg
vincejv has quit [Ping timeout: 276 seconds]
vincejv_ is now known as vincejv
Keshl has quit [Ping timeout: 248 seconds]
Keshl has joined #ffmpeg
lavaball has joined #ffmpeg
FH_thecat has joined #ffmpeg
Vonter has quit [Read error: Connection reset by peer]
Vonter_ has joined #ffmpeg
MrZeus has quit [Ping timeout: 252 seconds]
rv1sr has joined #ffmpeg
lavaball has quit [Remote host closed the connection]
Keshl_ has joined #ffmpeg
lavaball has joined #ffmpeg
Keshl has quit [Ping timeout: 252 seconds]
Keshl__ has joined #ffmpeg
Keshl_ has quit [Read error: Connection reset by peer]
Vonter has joined #ffmpeg
Vonter_ has quit [Read error: Connection reset by peer]
dreamon has joined #ffmpeg
System_Error has quit [Ping timeout: 260 seconds]
System_Error has joined #ffmpeg
vlm has joined #ffmpeg
HerbY_NL has joined #ffmpeg
Icedream has quit [Quit: A lol made me boom.]
Icedream has joined #ffmpeg
Blacker47 has joined #ffmpeg
RedShift has quit [Quit: Client closed]
Suchiman has joined #ffmpeg
HerbY_NL has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Perflosopher has quit [Ping timeout: 264 seconds]
Perflosopher has joined #ffmpeg
rex has quit [Ping timeout: 252 seconds]
rex has joined #ffmpeg
HerbY_NL has joined #ffmpeg
vampirefrog has quit [Ping timeout: 264 seconds]
vlm has quit [Quit: Leaving]
HerbY_NL has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
vlm has joined #ffmpeg
Tinos has joined #ffmpeg
pa has joined #ffmpeg
beaver has joined #ffmpeg
lavaball has quit [Quit: lavaball]
HerbY_NL has joined #ffmpeg
HerbY_NL has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
fedosgad has joined #ffmpeg
Keshl__ is now known as Keshl
HerbY_NL has joined #ffmpeg
SystemError has joined #ffmpeg
System_Error has quit [Ping timeout: 260 seconds]
fedosgad has quit [Remote host closed the connection]
fedosgad has joined #ffmpeg
HerbY_NL has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<fedosgad>
Hello everyone! I'm writing a piece of software which processes videos from online sources. At some point I need to extract audio track(s) from video, which boils down to "copy audio track as-is to a suitable container".
<fedosgad>
I wrote a working CLI utility using FFmpeg (library, not binary) which does what I want for a single video. Now I want to scale this code so many videos can be processed in pipeline-like fashion.
<fedosgad>
My question is: what structures should I reuse (as opposed to recreate/reallocate them each time) between video conversions (provided that all videos have the same format)?
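[The library-level answer depends on which API pieces are in play, but for reference, the per-file operation being scaled up corresponds to this CLI remux loop. The input/output paths and the .m4a container are illustrative assumptions; the container has to suit the copied codec.]

```shell
# Per-file audio remux: copy the audio track as-is into an audio container.
# input/ and out/ paths and the .m4a extension are placeholders; .m4a
# assumes the source audio is AAC (or another MP4-compatible codec).
for f in input/*.mp4; do
    ffmpeg -i "$f" -vn -c:a copy "out/$(basename "${f%.*}").m4a"
done
```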
HerbY_NL has joined #ffmpeg
HerbY_NL has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
vlm has quit [Ping timeout: 248 seconds]
alexherbo2 has joined #ffmpeg
Tinos has quit [Remote host closed the connection]
Tinos has joined #ffmpeg
vlm has joined #ffmpeg
rv1sr has quit []
gt7 has quit [Ping timeout: 244 seconds]
HerbY_NL has joined #ffmpeg
rv1sr has joined #ffmpeg
halvut has joined #ffmpeg
HerbY_NL has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
vincejv_ has joined #ffmpeg
vincejv has quit [Ping timeout: 252 seconds]
vincejv_ is now known as vincejv
alexherbo2 has quit [Remote host closed the connection]
Tinos has quit [Remote host closed the connection]
HerbY_NL has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
microchip__ is now known as microchip_
Tinos has joined #ffmpeg
rv1sr has quit []
lucasta has joined #ffmpeg
rv1sr has joined #ffmpeg
rvalue- has joined #ffmpeg
rvalue has quit [Ping timeout: 248 seconds]
rvalue- is now known as rvalue
<znf>
I've always wondered something
<znf>
If ffmpeg is able to take a text-based sub format and burn it over the video, why can't it also take the same text sub and... create an image-based subtitle track; seeing that it currently just bails out and calls you out on trying to do text > bitmap
<JEEB>
AVSubtitles vs AVFrames. if you look at how ffmpeg cli itself works, you can see that it can only make AVFrames out of subpicture AVSubtitles. for text based subs you need to go through a separate filter that both reads, decodes and renders them.
<JEEB>
in theory even with AVSubtitles nothing stops you from coding logic that takes text AVSubtitles and then makes subpictures out of them.
<JEEB>
that code just doesn't exist :P
<JEEB>
if/when subtitles go into AVFrames, they could be pushed through the same flow as video and audio frames, and then you could have a filter that only does the rendering part and that can then be fed into a subtitle encoder
<JEEB>
this is the image based subtitle AVSubtitle -> AVFrame logic
<JEEB>
which is why you can overlay image based subtitles with ffmpeg cli as if it's a video stream
<znf>
sad, guess nobody ever needed that functionality
<JEEB>
of course for your use case it would be AVSubtitle to AVSubtitle. or if you want to enable filtering AVSubtitle -> AVFrame (which gets pushed to filter chain) -> AVSubtitle
<JEEB>
but yea, the general gist being "it's definitely possible, but you need to write the glue code to make it possible"
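[The asymmetry JEEB describes is visible from the CLI side: text subtitles must go through the libass-backed subtitles filter, while bitmap subtitles can be fed into a filtergraph as if they were video. A sketch; in.mkv is a placeholder.]

```shell
# Text subs: decoded and rendered by the subtitles filter (libass).
ffmpeg -i in.mkv -vf "subtitles=in.mkv" burned_text.mp4

# Bitmap subs (dvdsub/PGS): the subtitle stream can be overlaid like video.
ffmpeg -i in.mkv -filter_complex "[0:v][0:s:0]overlay" burned_bitmap.mp4
```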
<znf>
How would I possibly do a "live" (hls/mpeg-ts) transmission from a video file (say mkv with ass/srt subtitle tracks), where the player can actually select between different languages?
<znf>
is WebVTT for hls the only way, basically?
<znf>
When I tried WebVTT, it adds them as a separate entry in the m3u8, which is not something I specifically wanted
<JEEB>
that's how HLS works
<JEEB>
the players then check what is listed as alternative representations
<JEEB>
same for DASH
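[A minimal sketch of the command that produces that separate playlist entry; in.mkv and the segment length are assumptions.]

```shell
# The hls muxer writes WebVTT cues into their own segment playlist and
# lists it in the master playlist as an alternative rendition.
ffmpeg -i in.mkv -c:v copy -c:a copy -c:s webvtt \
  -f hls -hls_time 6 out.m3u8
```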
<znf>
So what if I restrict myself to mpeg-ts? Are dvb_subtitles the only way?
<JEEB>
or DVB teletext, or ARIB captions
<JEEB>
technically there's a mapping for TTML but no-one has implemented that. and I've had enough fun with the MP4 TTML specification
<JEEB>
not sure if webvtt in MPEG-TS was specified
<znf>
but ffmpeg can't write DVB teletext, can it?
<JEEB>
there is no encoder for it, yes
<znf>
also, isn't ARIB just something japan uses?
<JEEB>
and various south american countries
<znf>
should I expect an european or american device (ie: tv) to be able to show them?
<JEEB>
probably not, although I've seen a sony TV show Japanese markings at a Finnish store which was quite funny.
<znf>
So with all limitations in place, my only technical alternative would be to, uhm... do dvbsub, I guess?
<JEEB>
Android TV also probably normalizes that stuff. but in any case, there is no encoder in FFmpeg for either DVB teletext or ARIB captions
<znf>
which I'd have to create externally with some other tools
<JEEB>
or you just write the required glue
<pal>
HLS supports TTML
<znf>
last time I checked, there was no encoder for TTML either in ffmpeg
<JEEB>
pal: yea but I just explained that HLS or DASH doesn't do subtitles within a single mux
<JEEB>
znf: there is
<znf>
if I recall correctly, that's some xml-based format, right?
<znf>
JEEB, uh, since when?
<znf>
I think I had a look at it over a year ago
<JEEB>
uhh, quite a while is when I merged that
<pal>
JEEB: +1
Tinos has quit [Remote host closed the connection]
<znf>
The reason I'm asking is because 99% of the end-devices only support mpeg-ts, so I'm stuck there :)
<znf>
git.videolan.org seems sluggish today
<JEEB>
it was better just a bit earlier
<JEEB>
now it is indeed sluggish
<JEEB>
znf: but as a spoiler, it was merged 2021-04-26 16:40:31
<znf>
I see, neat
<JEEB>
oh actually even earlier, that is when the regions and styles got added
<JEEB>
2021-03-05
<JEEB>
anyways the only reason I implemented that was because... a thing I was integrating against only supported TTML. and they implemented webvtt support only after this implementation was already done :P
<JEEB>
although most of my dislike towards TTML comes from how it's put into MP4
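[For reference, going through the TTML encoder JEEB mentions looks roughly like this; the input name is assumed.]

```shell
# Encode the first subtitle stream as TTML; the ttml muxer writes the
# encoded documents out as a standalone file.
ffmpeg -i in.mkv -map 0:s:0 -c:s ttml out.ttml
```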
<znf>
I just dislike everything that is XML
<JEEB>
"what is this thing called packet PTS? let's put the full timestamp into the document so if someone needs to copy this packet they need to modify the contents if there are timing adjustments"
<pal>
znf: XML was designed to mark-up text, which is nicely suited for subtitle/captions
<pal>
znf: just pointing out that applying XML to subtitle/captions is one of the least crazy applications of XML :)
<JEEB>
znf: anyways for your use case I'd say you could look into whether zvbi can do teletext encoding
<JEEB>
because then you could mux that stuff into MPEG-TS and at least in europe that is supported
<JEEB>
US does in-stream closed captions so that's a whole separate ball game
Forza has joined #ffmpeg
<znf>
headaches :)
<JEEB>
I think someone posted a simple teletext decoder without using libzvbi, so if one would """just""" do the reverse of that, you would get a very basic teletext encoder for subtitle use cases.
<znf>
That's way above my skillset
<znf>
But nice to know someone might actually write that code in a future far far away
<JEEB>
at least the spec is public vOv
<JEEB>
znf: that's a big IFF with regards to someone wanting to have that feature
memset has quit [Remote host closed the connection]
<znf>
yeah, obviously :)
memset has joined #ffmpeg
<znf>
doesn't seem there's any one-shot/"one-click" utility that can create an mpeg-ts stream with dvb_subs :(
Tano has joined #ffmpeg
dreamon has quit [Ping timeout: 264 seconds]
coldfeet has quit [Remote host closed the connection]
alexherbo2 has quit [Remote host closed the connection]
Tinos has quit [Remote host closed the connection]
Tinos has joined #ffmpeg
<znf>
JEEB, so about dvbsubs, how would I actually make them show up if I mux to ts?
<JEEB>
you need to have the devices that you plan on utilizing have them enabled. it really depends on the device whether it's in the settings or elsewhere.
<znf>
but they don't really show up in VLC -- I can select them, but not showing up
<JEEB>
time to verify whether they contain anything?
<znf>
the actual .sup does contain the images (it ends up being ~9MB), -c:s dvbsub sub.ts *does* also seem to work (a similarly sized .ts file gets written)
<JEEB>
I don't recall the correct incantation to make one of the split muxers force each packet into its own file
<JEEB>
I guess you can look at DVBInspector
<znf>
but muxing them makes the encoder complain about timestamps
<JEEB>
that should show a hex dump of each packet payload
<JEEB>
so then you can look at the hex dump and see whether it makes sense. together with the MPEG-TS level timestamps for that PID
<JEEB>
(which DVB Inspector should show)
<znf>
crap, it's java
<JEEB>
least bad and OSS implementation
<JEEB>
and a nice graphical interface to browse through the contents
<znf>
I don't think there's anything I hate more than Java in the software world :(
<JEEB>
I have no love towards Java, but if it works and does the job... :P I don't need to build it after all, just `java -jar ~/ownapps/DVBInspector/DVBInspector-x.y.z.jar &`
<znf>
I have terrible experiences with having to keep around 1.5, 1.6, 1.7 around
<znf>
plus Oracle being oracle
<znf>
:D
<JEEB>
which is why no-one installs oracle's java
<znf>
it's all fun and games until the IPMI implementation by $whoever doesn't work with openjdk because... "fuck you, that's why"
<JEEB>
so PTS is set. which you should be then able to compare to video packets for example - as well as the PCR/PTS/DTS view graphs it
<JEEB>
then the rest is basically figuring out whether the subtitle packet is correct. seems to start with 0x20 00, and then onwards (the red text is the PES header, and then the rest is data)
<znf>
oh wow, this is slow :D
<JEEB>
another thing to check is whether DVB Inspector can read the descriptors either in PMT or PAT, where the details regarding the DVB subtitle stream should be mentioned
<JEEB>
in theory another way is to utilize sub2video in ffmpeg cli to check whether the packets can be decoded
<JEEB>
that would take care of possibly some bits of "whether the contents of the packets are good"
<JEEB>
I don't think merging is the right word you're saying there :P unless you did the exact same codec copy and with other streams
<JEEB>
unless that's literally what's happening, of course
<znf>
-i file.mkv -i sub.ts ... -c:s copy
<JEEB>
in both? funky
<znf>
wait, actually you're right, I copy/paste -c:s dvbsub
<znf>
let me try again
<JEEB>
ok that then matches my expectation :P
<znf>
oh, ok, that works
<znf>
why does it seem to work?
<JEEB>
since the subtitle encoding API based on AVSubtitles is quite old, I'm pretty sure you just get a buffer back out of it and the API caller's (in this case ffmpeg cli)'s job is to then make an AVPacket out of it
<JEEB>
see fftools/ffmpeg_enc.c::do_subtitle_out. and also the results should get logged in `encode_frame` if you set `-debug_ts`
<JEEB>
it will spam a *lot*, but you only need to see enough that a couple of subtitle packets pass
<JEEB>
it should log the timestamps in the packet that is received from the input, any adjustments, then decoded result and how it gets thrown into encoder and received from it
<JEEB>
most likely the way to improve your situation is to verify that stuff comes out of `do_subtitle_out` correctly
<JEEB>
at least it looks like the logic there attempts to set pkt->dts
<JEEB>
also at least two^Wthree special cases for AV_CODEC_ID_DVB_SUBTITLE
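[The `-debug_ts` run JEEB suggests can be captured like this (file names assumed); picking the subtitle stream's lines out of the log is left to the reader.]

```shell
# Log every timestamp from demux through decode/encode to mux.
# The output is extremely verbose, so keep the run short and save it.
ffmpeg -debug_ts -i file.mkv -c:v copy -c:a copy -c:s dvbsub \
  -t 30 -f mpegts out.ts 2> ts.log
```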
<znf>
I don't actually see anything regarding subtitles until the end of the mux
<JEEB>
(also I would still check based on that sub2video test I linked how the result looks if you output those decoded DVB subtitles into PNG with image2 muxer or so
<JEEB>
just to verify that the stuff seems decode'able
<JEEB>
because otherwise you first fix timestamps just to learn that the content is barf
<JEEB>
:D
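[One way to run the decode check JEEB describes: render the decoded DVB bitmaps over a blank canvas and dump a few PNGs. The canvas size and frame count are assumptions.]

```shell
# If the PNGs show legible subtitle images, the packet contents decode
# fine and any remaining problem is on the timestamp/mux side.
ffmpeg -i sub.ts \
  -filter_complex "color=c=black:s=720x576[bg];[bg][0:s:0]overlay=shortest=1[v]" \
  -map "[v]" -frames:v 5 -f image2 check_%02d.png
```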
<znf>
I'm fine with the images being crap right now, I just did a quick convert with Subtitle Edit, so I don't expect them to be great
<znf>
guess I could always en.sup -> en.ts -> mux them to the final version *shrug*
<znf>
I've given up on it anyway, but I wanted to satisfy my curiosity
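[The en.sup -> en.ts -> final-mux flow znf sketches would look roughly like this; file names are taken from the discussion, the stream mappings are assumptions.]

```shell
# 1. Encode the bitmap subtitles (SUP) into DVB subtitles inside MPEG-TS.
ffmpeg -i en.sup -c:s dvbsub -f mpegts en.ts

# 2. Mux the subtitle TS next to the original audio/video, copying all streams.
ffmpeg -i file.mkv -i en.ts \
  -map 0:v -map 0:a -map 1:s -c copy -f mpegts final.ts
```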
System_Error has quit [Ping timeout: 260 seconds]
<JEEB>
anyways if you only receive the subtitle packets even from the input at the very end then it sounds like they're badly interleaved or for whatever reason buffered until the end...
<znf>
this could also *maybe* be fixed
<znf>
haven't even checked what ffmpeg version I run here
<znf>
ffmpeg version N-110496
<znf>
oh yeah, old
System_Error has joined #ffmpeg
kron has quit [Ping timeout: 260 seconds]
<znf>
thank you for your time, as always, JEEB :)
HerbY_NL has joined #ffmpeg
vlt has joined #ffmpeg
kron has joined #ffmpeg
HerbY_NL has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<vlt>
nemo_magneet: Can you clarify “not working”? What error message do you get?
<nemo_magneet>
Input link in0:v0 parameters (size 856x480, SAR 0:1) do not match the corresponding output link in0:v0 parameters (856x480, SAR 320:321)
<nemo_magneet>
[Parsed_concat_0 @ 0x5564c69ae0] Failed to configure output pad on Parsed_concat_0
<nemo_magneet>
Error reinitializing filters!
<nemo_magneet>
what i did was i converted some mp4's into flv's with the command
<nemo_magneet>
and in the end with a command i want to add other vids behind 1 video like chaining them into 1 big
<nemo_magneet>
normally it works
<nemo_magneet>
i made 1 big video with 15 in it,
<nemo_magneet>
now i started over with another project (another video-chained)
Sketch has quit [Ping timeout: 260 seconds]
<nemo_magneet>
am i using the wrong command?
<vlt>
nemo_magneet: Works for me with an SAR 1:1 input video I had for a quick test.
darkapex_ has quit [Remote host closed the connection]
<nemo_magneet>
i use that to make 10 in 1 video chains
<nemo_magneet>
Oh i said it wrong
<nemo_magneet>
i use the bla bla bla scale=856:480 to convert videos
<nemo_magneet>
and then i use -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0]concat=n=3:v=1:a=1[outv][outa]" -map "[outv]" -map "[outa]" 10in1.mp4
<nemo_magneet>
to chain them
<nemo_magneet>
and the -filter code is causing issues
<nemo_magneet>
(sorry, my fault)
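[The error at the top ("SAR 0:1 ... do not match ... SAR 320:321") means the inputs disagree on sample aspect ratio; inserting setsar before concat normalizes them. Input names are placeholders, the rest follows nemo_magneet's command.]

```shell
# Force identical (square-pixel) SAR on every video input before concat;
# otherwise the concat filter refuses to configure its output pad.
ffmpeg -i a.flv -i b.flv -i c.flv -filter_complex \
"[0:v:0]setsar=1[v0];[1:v:0]setsar=1[v1];[2:v:0]setsar=1[v2];\
[v0][0:a:0][v1][1:a:0][v2][2:a:0]concat=n=3:v=1:a=1[outv][outa]" \
  -map "[outv]" -map "[outa]" 10in1.mp4
```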
Sketch has joined #ffmpeg
fedosgad has quit [Ping timeout: 260 seconds]
beaver has quit [Quit: Demain est un autre jour o/]
yawkat has quit [Ping timeout: 248 seconds]
Blacker47 has quit [Quit: Life is short. Get a V.90 modem fast!]
kron has quit [Quit: kron]
yawkat has joined #ffmpeg
coldfeet has quit [Quit: leaving]
Haxxa has quit [Quit: Haxxa flies away.]
coldfeet has joined #ffmpeg
kron has joined #ffmpeg
rv1sr has quit []
Haxxa has joined #ffmpeg
jemius has quit [Remote host closed the connection]
alphalpha has joined #ffmpeg
drew has joined #ffmpeg
<drew>
I have 1080p video that is from a 4:3 analog source that is getting stretched to 16:9. When I do -vf scale=1440:1080 the video looks the same afterward. Windows details suggests that the first video is 1920:1080 and the second is 1440:1080, but mpv for both look exactly the same
<drew>
is there something more that I have to do than scale?
<vlt>
drew: SAR and DAR might be interesting. What does ffprobe say?
kron has quit [Quit: kron]
<nemo_magneet>
kn kron
Sketch has quit [Ping timeout: 272 seconds]
vlm has quit [Ping timeout: 248 seconds]
kron has joined #ffmpeg
Sketch has joined #ffmpeg
dreamon has quit [Ping timeout: 258 seconds]
Sketch has quit [Ping timeout: 272 seconds]
halvut has quit [Ping timeout: 264 seconds]
l4yer has quit [Ping timeout: 248 seconds]
Fischmiep has quit [Remote host closed the connection]
Sketch has joined #ffmpeg
Sketch has quit [Ping timeout: 248 seconds]
l4yer has joined #ffmpeg
FlorianBad has quit [Remote host closed the connection]
Fischmiep has joined #ffmpeg
FlorianBad has joined #ffmpeg
Sketch has joined #ffmpeg
coldfeet has quit [Remote host closed the connection]
Sketch has quit [Read error: Connection reset by peer]
<drew>
vlt: the first video is the source, and the others are test runs I tried: https://bpa.st/IX5A
<drew>
it looks like SAR is 1:1 for the source and the 1080p one, and SAR is 4:3 for the 1440:1080 one but 16:9 for DAR
celmor has joined #ffmpeg
<vlt>
drew: If the source is 4:3 you can just set that without having to re-encode anything: `ffmpeg -i src.mkv -aspect 4:3 -codec copy output.mkv` Re-encoding in most cases means quality loss (and lots of wasted cpu cycles).
Sketch has joined #ffmpeg
<drew>
vlt: that seemed to work! but when I ffprobe -i test2_1.mp4 I still see some references to DAR 16:9: https://bpa.st/2NUA
<drew>
After [SAR 4:3 DAR 16:9] I see SAR 1:1 DAR 4:3, so I'm kind of thrown as to what it all is referring to
<furq>
-aspect sets it in the container which normally overrides what's set in the stream
<furq>
depends on the player
<drew>
ok I see
<vlt>
drew: I’m not sure, either. Maybe the stuff in brackets is tied to the untouched actual video stream data while the part after it sits in some kind of metadata header (and has precedence).
<vlt>
furq: Thanks!
<drew>
I have to make more encodes at different points from the source, how do I encode the correct SAR this time? This is all I have so far: ffmpeg -ss 00:01:08 -i "2024-08-20 22-18-32.mkv" -vf scale=1920:1080 -t 00:00:49 test.mp4
<drew>
err scale should be 1440:1080, not 1920:1080
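[Putting the pieces together for the new encodes: drew's own command with the corrected scale and a square-pixel SAR appended. By default the scale filter adjusts SAR to preserve the old DAR, which is why the 1440x1080 encode came out SAR 4:3 / DAR 16:9; setsar=1 after scale yields DAR 4:3 without a separate -aspect override.]

```shell
# Scale to 1440x1080 and reset SAR to 1:1 so the DAR comes out as 4:3.
ffmpeg -ss 00:01:08 -i "2024-08-20 22-18-32.mkv" \
  -vf "scale=1440:1080,setsar=1" -t 00:00:49 test.mp4
```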