BtbN changed the topic of #ffmpeg to: Welcome to the FFmpeg USER support channel | Development channel: #ffmpeg-devel | Bug reports: https://ffmpeg.org/bugreports.html | Wiki: https://trac.ffmpeg.org/ | This channel is publically logged | FFmpeg 7.0 is released
<echelon> is var_stream_map only used in conjunction with hls streams?
<echelon> how do i output to multiple rtmp streams at different resolutions?
<echelon> any ideas what would cause these broken pipes?
<JEEB> for each muxer you can list its options with `-h muxer=blah` (like mp4 or hls or flv)
<echelon> don't think it's an issue with the muxer options
<JEEB> well you asked about var_stream_map
<JEEB> so that would show you that it shows up in the private options for a single muxer or possibly two (dash maybe also has it?)
<echelon> oh sorry
<echelon> i've passed var_stream_map
<echelon> not using it
<JEEB> yes, but I still attempted to teach you how to answer that question yourself by telling you how to list the options for a specific module
<echelon> i ran ffmpeg -h muxer=flv, and it doesn't show all the required fields
<JEEB> it lists the options defined by that module
<JEEB> `-h protocol=rtmp` is then the protocol level stuff
<echelon> yeah, i don't think the module options are what's causing the issue
<echelon> oh
<JEEB> anyways no idea what sort of broken pipes you are getting :P I just told you how to check if an option is module specific or what
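The option-listing commands JEEB describes look like this in practice (output varies by build; muxer options and protocol options live in different modules, so both listings are worth checking):

```shell
# Private options of a specific muxer:
ffmpeg -h muxer=flv

# Protocol-level options (e.g. rtmp_live, rtmp_app, timeouts):
ffmpeg -h protocol=rtmp
```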
<echelon> this is what i get on the rtmp receiver side [19/Sep/2024:23:28:16 -0600] PUBLISH "stream" "main" "" - 329 409 "" "FMLE/3.0 (compatible; Lavf60.16" (0s)
<snoriman> I've got a rough implementation working that uses hwaccel with libav; I'm now looking into transferring the data from GPU -> CPU. Though when I compare the hw_decode.c and demux_decode.c examples, I see a difference in the way frames are allocated: hw_decode.c allocates/frees frames, where demux_decode.c uses refs and allocates one frame. Is this just a style thing, or is there an
<snoriman> important reason why I would use one or the other?
<BtbN> You likely want av_hwframe_transfer_data
<snoriman> Thanks BtbN! Yes, I've read up on that function based on the `hw_decode.c` example.
<snoriman> and I guess the different allocation methods are just a matter of "style"?
<BtbN> I'm not quite sure what you mean
<BtbN> the AVFrame struct itself is tiny, and the frame data inside will always be pooled
<BtbN> well, at least usually
<snoriman> When I look at the demux_decode.c example, e.g. this line https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/demux_decode.c#L123 and line 140, I see the author allocated the frame once, then used unref. The hw_decode.c example allocates a new frame each time.
<BtbN> But you can receive the decoded data into the same AVFrame struct repeatedly if you don't need to access the old data again
<snoriman> Ok, so the difference is not "important"
<BtbN> Well, one avoids pointless memory allocations and frees
<snoriman> yeah, that has my preference tbh.
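A minimal sketch of the demux_decode.c pattern being discussed, assuming an already-opened decoder context and with error handling trimmed: one AVFrame struct is allocated up front, and `av_frame_unref()` drops its references after each picture, so only the tiny struct is reused while the actual picture buffers still come from the decoder's pool:

```c
#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>

/* Drain all pending frames from ctx, reusing a single AVFrame struct
 * instead of allocating/freeing one per decoded picture. */
static int drain_frames(AVCodecContext *ctx)
{
    AVFrame *frame = av_frame_alloc(); /* one allocation, reused */
    int ret;

    if (!frame)
        return AVERROR(ENOMEM);

    while ((ret = avcodec_receive_frame(ctx, frame)) >= 0) {
        /* ... use frame->data / frame->linesize here ... */
        av_frame_unref(frame); /* release the refs, keep the struct */
    }

    av_frame_free(&frame);
    /* EAGAIN (need more input) and EOF are the normal exit paths */
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}
```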
<iconoclast_hero> what should I take from this error message?
<iconoclast_hero> $ ffplay -v error 03-11\ -\ Ruth\ Brown\ --\ One\ More\ Time.mp3
<iconoclast_hero> [mp3float @ 0x77a8d000f700] Header missing / Last message repeated 4 times
<snoriman> I'm using a `configure` that disables most of ffmpeg's / libav's features and just found out that I'm not using hwaccel at all ^.^
<snoriman> So I want to enable it, and if I'm correct each (or most) hwaccel also has a codec implementation; e.g. vdpau has vdpau_h264.
<snoriman> When I use: ./configure --list-decoders I see `h264, h264_cuvid` etc. but no `h264_vdpau`; what could cause this?
<snoriman> OH... it's under --list-hwaccels
<snoriman> why is `h264_cuvid` under `--list-decoders` and `h264_vdpau` under `--list-hwaccels`?
<galad> because one is a standalone decoder, the other is an hwaccel
<galad> meaning ffmpeg will use its own parsers and friends and use the hw decoder only to decode the final picture
<galad> and it will fall back automatically to a sw decoder if the hw decoder can't handle it
<snoriman> galad: ah awesome! thanks, that makes sense
<snoriman> so a hwaccel is a "full blown" pipeline that can e.g. parse h264, manage buffers, and also do the decoding?
<JEEB> snoriman: basically hwaccel utilizes the basic decoder infra (parsing etc), and then slices are sent out for decoding to the HW API
<JEEB> cuvid I think might have been there before hwaccels?
<snoriman> Ah ok that makes sense
<JEEB> so now you have the CUDA hwaccel as well as cuvid decoder. latter then probably lacks a bunch of functionality that the built-in decoder framework would provide
<JEEB> either due to the HW full decoder library not supporting it, or due to the non-hwaccels being utilized so rarely that nobody mapped the HW API stuff into FFmpeg stuff in the decoder wrapper
<BtbN> cuvid just uses nvidias parser
<snoriman> I'm also trying to wrap my head around how the codecs and hwaccels (and device types?) are tied together. Basically this loop: https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/hw_decode.c#L200-L212. From my (limited) understanding, an AVCodec has a member `hw_configs` that holds "hardware configurations supported by the codec". So I guess which hwaccel is supported is stored there.
<BtbN> since the native hwaccel can't deal with some features the cuvid decoder has, like deinterlacing (and in turn doubling the framerate) or scaling/cropping
<JEEB> yea, which sound like filtering in the context of FFmpeg
<snoriman> ah ok
<BtbN> Problem is, it's impossible to access the native deinterlacer of nvidia cards any other way
<BtbN> you can't turn it into a filter. Only tell the decoder to deinterlace
<JEEB> ebin design
<JEEB> which is why I guess we have the cuda deinterlacer (yadif etc)
<BtbN> Well, it makes sense to integrate the two from a hardware design point
<BtbN> The API is in theory prepared to "decode" raw yuv data, hence API wise, a standalone deinterlacer is possible
<JEEB> sure, having an interface doing N things makes sense API-wise
<BtbN> but that interface is defunct
<snoriman> How does an AVCodec know which hardware "device type" it supports?
<snoriman> To me it feels like the vdpau hwaccel, for example, updates the `hw_configs`
<snoriman> so that's how they're linked
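The loop snoriman links can be sketched like this (it mirrors doc/examples/hw_decode.c; `avcodec_get_hw_config()` is the public accessor for the decoder's `hw_configs` table):

```c
#include <libavcodec/avcodec.h>

/* Walk a decoder's hardware configs and return the pixel format to
 * expect when decoding via the given device type (e.g.
 * AV_HWDEVICE_TYPE_VDPAU), or AV_PIX_FMT_NONE if unsupported. */
static enum AVPixelFormat find_hw_pix_fmt(const AVCodec *decoder,
                                          enum AVHWDeviceType type)
{
    for (int i = 0;; i++) {
        const AVCodecHWConfig *config = avcodec_get_hw_config(decoder, i);
        if (!config)
            return AV_PIX_FMT_NONE; /* ran out of entries: no support */
        if (config->methods & AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX &&
            config->device_type == type)
            return config->pix_fmt; /* e.g. AV_PIX_FMT_VDPAU */
    }
}
```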
<echelon> i think i forgot to provide the paste of what i'm trying to do https://paste.ee/r/PUgaA
<echelon> what causes Error writing trailer: Broken pipe when sending to rtmp?
<BtbN> a network issue
<echelon> i can telnet to the port just fine.. unless 1935 isn't the default port?
<BtbN> Well, it also connected just fine, and then it broke
<BtbN> If this is to nginx-rtmp, it'll just close the connection on any issue it encounters, which then causes a broken pipe on the client
<BtbN> stuff like not being on the IP ACL, or trying to send to an app that doesn't exist
<echelon> oh
<echelon> BtbN: this is my nginx-rtmp config https://paste.ee/r/37CM0
<BtbN> iirc nginx-rtmp is incompatible with more than exactly one worker process
<echelon> really
<BtbN> Just check its logs for issues really
<echelon> facepalm.png the error log
<BtbN> nginx-rtmp is also quite dead
<BtbN> I forgot the name of the modern alternative people recommended here a couple times
<BtbN> mediamtx?
<echelon> oh
<echelon> will check it out, thanks
<echelon> so my other question was whether rtmp will work in browser without hls
<echelon> hls implies the stream gets broken down into multiple .ts files?
<echelon> i wanted to get the latency down as much as possible
<echelon> hence why i'm shifting from hls to rtmp
<BtbN> Browsers do not support rtmp
<BtbN> never did, only via flash
<echelon> oh -_-
<echelon> so how do i get my latency down
<echelon> it's like 10s delay
<BtbN> In a browser... not at all really, outside of writing custom JavaScript hacks that play HLS in "illegal" ways like Twitch and YT do
<BtbN> some people also pipe raw ts streams over websockets
<echelon> wow
<BtbN> But the media APIs browsers use want mp4, so you still need to cut it into fragments, and the length of a fragment plus some extra is the minimum latency
<BtbN> WebRTC is also an option, but it's a royal pain and quality is quite bad due to codec limits
<echelon> there's going to be a direct connection to the server, so setting up nat traversal and all that other stuff seems unnecessary
<echelon> for webrtc that is
<echelon> so if i were to use mediamtx for webrtc, how would i publish to it?
<echelon> i do have rtmp working now btw, thanks for telling me to look in the logs :}
<echelon> and you're absolutely sure there's no web/html5 players for rtmp? :/
<echelon> apparently it's possible for rtsp https://github.com/Streamedian/html5_rtsp_player
<echelon> alright, i'll try webrtc with mediamtx
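One way to publish to mediamtx is to push the stream with ffmpeg; a hedged sketch (the input file, stream path `mystream`, and ports 1935/8554 are assumptions based on mediamtx's documented defaults, so check the mediamtx config):

```shell
# Push an H.264/AAC encode to mediamtx over RTMP (default port 1935);
# mediamtx can then re-serve it over WebRTC/RTSP/HLS.
ffmpeg -re -i input.mp4 -c:v libx264 -preset veryfast -c:a aac \
       -f flv rtmp://localhost:1935/mystream
```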
<snoriman> When I use `av_hwframe_transfer_data()` to retrieve an accelerator/GPU frame to the CPU, and it's in NV12 format, is there a helper function that can give me pointers to the planes and strides?
<snoriman> oh... sorry
<snoriman> ChatGPT just explained that it's data[0], data[1] and linesize[{0,1}] that I want ^.^
<snoriman> Ok, got my first naive version working ... takes 18ms to download + upload a decoded frame :(
<snoriman> oh interesting, it's `av_hwframe_transfer_data` that takes so long! 10ms, where an upload of the same data takes ~3ms
<snoriman> JEEB: when you mentioned VAAPI + GL interop in MPV, did you also mean that VAAPI is now supported on Windows too? ... does that mean that I only have to implement interop between VAAPI + GL once? (this would be great)