BtbN changed the topic of #ffmpeg to: Welcome to the FFmpeg USER support channel | Development channel: #ffmpeg-devel | Bug reports: https://ffmpeg.org/bugreports.html | Wiki: https://trac.ffmpeg.org/ | This channel is publicly logged | FFmpeg 7.0 is released
<rvalue>
can someone tell me how subtitles are encoded in a video? this is just for my understanding, not to run any ffmpeg command. if a video is an hour long, are the parts of the subtitle stored along with every frame of the video, or are the subtitles at some location in the video file all at once?
<JEEB>
the answer for that would be "it depends" since there's so many different possible ways of passing on captions or subtitles. so would be better to mention an example that then could be discussed
<JEEB>
in general though, streams' packets on container level are supposed to be interleaved
<rvalue>
JEEB: assume I know nothing about container formats. a few days back I was using ffmpeg to extract subtitles from an mkv file and save them to an srt file
<rvalue>
it felt like the whole file was being read from start to end to generate the subtitles, but then again I was accessing the file over a network, so I don't know if network speed was the issue
<rvalue>
but the process was time consuming
<DeHackEd>
the container is responsible for allowing audio and video to exist side by side, even though each is otherwise processed independently. you can add "subtitles" to that alongside audio and video. that's a container's job.
<rvalue>
my command was: ffmpeg -i file.mkv -map 0:3 sub.srt
<DeHackEd>
yeah, that'll have to scan the whole file, discarding the unwanted streams as it goes... partly because ffmpeg isn't optimized for selection like that, and partly because you'd have to read most of the file anyway to skip over the audio and video as you find them
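A minimal sketch of that extraction (hypothetical filenames; `-map 0:s:0` selects the first subtitle stream by type, which is often less fragile than a hard-coded absolute index like `0:3`):

```shell
# Extract the first subtitle stream from the MKV into an SRT file.
# ffmpeg still demuxes the whole file to collect every subtitle packet,
# so over a slow network link this is bound by read speed.
ffmpeg -i file.mkv -map 0:s:0 -c:s srt sub.srt
```

Copying the file locally first can be faster than extracting over the network, since the whole file gets read either way.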
<rvalue>
DeHackEd: are there container formats that keep subtitle information localized?
<rvalue>
meaning everything together
<another|>
that would be kind of inconvenient
<DeHackEd>
players would have to seek around a lot... that would suck
<DeHackEd>
I have heard of some doing that... like, old formats nobody uses any more.
<DeHackEd>
but it still has the issue that I don't think the ffmpeg library has stream selection like that...
<snoriman>
hey, I'm opening an mp4 file using an AVFormatContext and creating an AVCodecContext to decode the video stream. How/when can I determine the selected output pixel format of the decoder? After `avcodec_open2()` the pix_fmt member is still invalid.
<StephenLynx>
for me, it's available after avcodec_open2.
<StephenLynx>
maybe you are not setting something, snoriman ?
<StephenLynx>
did you use avcodec_alloc_context3 before that?
<snoriman>
StephenLynx: yes I am indeed
<BtbN>
not every decoder knows the pixel format ahead of time
<BtbN>
It at the very least needs to parse the extradata
<snoriman>
BtbN: ah thanks, so it's probably best to wait for the first decoded frame?
<BtbN>
I'd guess that's how you usually do it at least
<snoriman>
ok thanks
<BtbN>
If you need a specific one, throw in a format/scale filter
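On the command line the same idea is a `format` filter after the decoder; libswscale converts whatever the decoder emits into the requested format. A sketch with assumed filenames (yuv420p chosen only as an example):

```shell
# Convert decoded frames to yuv420p before re-encoding,
# no matter which pixel format the decoder produced.
ffmpeg -i input.mp4 -vf format=yuv420p -c:v mpeg4 output.mp4
```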
<Marth64>
maybe bump up your analyzeduration?
<Marth64>
or probesize
<Marth64>
hard to see without code snippet
<JEEB>
if the person is already wanting to decode, then just waiting for first decoded frame would be the most robust way
<snoriman>
yeah I'm going for that solution
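Outside the API, ffprobe can show the pixel format it derived for a stream (a sketch with an assumed filename; for hard-to-probe files this is governed by the same analyzeduration/probesize knobs mentioned above):

```shell
# Print the pixel format of the first video stream.
ffprobe -v error -select_streams v:0 \
        -show_entries stream=pix_fmt -of default=nw=1:nk=1 input.mp4
```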
<MisterMinister>
Greetings and salutations! Has anyone had any success integrating headless Chromium or WebKit (Safari) as an FFmpeg filter (to add overlays with transparency)? Would work great, especially with live camera feeds ))
<kepstin>
MisterMinister: that's something that OBS can do (they use CEF).
<MisterMinister>
kepstin: yeah, OBS is a bit too process-heavy even compared to GStreamer with wpesrc. Thought that FFmpeg with a headless HTML renderer would be lighter...
<BtbN>
"process heavy"?
<MisterMinister>
CPU consumption
<BtbN>
The thing spamming processes like crazy is Chromium, so if someone were to create a CEF-based filter for ffmpeg, it'd spawn the exact same army of processes
<MisterMinister>
WPE seems to be better in that regard
<MisterMinister>
WebKit Portable Edition
<BtbN>
And yes, CEF can't use hardware acceleration, and rendering websites at 60 FPS is quite intense, especially when the site is playing a video or has animations
<MisterMinister>
BtbN: it seems a simple logo animation with fade-out at 30 fps takes about 1 core on a dated Xeon. Which is reasonable. But it's GStreamer-only right now...
<BtbN>
Using up an entire CPU core for a trivial animation seems insane