<FlorianBad>
ah, or should I use libfdk_aac ? (for web)
<FlorianBad>
I'm writing a script that will re-encode my videos to many resolutions and quality levels, which I will then host on the servers, and my javascript will pick the source file based on bandwidth (and adapt in real time as it plays)
<FlorianBad>
So right now I'm trying to figure out the best ffmpeg encoding options (considering that GPU and encoding time won't be a factor at all, only final quality matters)
<FlorianBad>
So e.g. when it resizes to another resolution I'm really looking for the best rescale algorithm.
<FlorianBad>
So far I am starting with this: ffmpeg -i INPUT_FILE -vcodec libx264 -crf 25 -pix_fmt yuv420p -movflags faststart -acodec libfdk_aac -vbr 5 OUTPUT_FILE.mp4
<FlorianBad>
but I haven't tested anything yet. So if based on that there are some useful options I should know about please let me know so I can research them
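A sketch of that starting command with the advice that follows folded in (the preset and CRF value are examples, and the filenames are placeholders):

```sh
# Starting-point sketch: slowest preset for best quality-per-bit,
# CRF lower than 25 since 25 is on the high (lossy) side.
ffmpeg -i input.mov \
  -c:v libx264 -preset veryslow -crf 20 \
  -pix_fmt yuv420p -movflags +faststart \
  -c:a libfdk_aac -vbr 5 \
  output.mp4
```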
<furq>
you'll probably want -preset and maybe -tune for x264
<furq>
and -crf 25 is pretty high
<FlorianBad>
furq, ok thanks, well crf will be a variable that outputs something like 20 different files for each quality level (which is picked automatically as it plays)
<FlorianBad>
Of course that means I'll have to figure out how to seek at the right spot in these files (like P-frames or whatever that's called)
<FlorianBad>
ok so I guess that's going to be -preset veryslow :)
<FlorianBad>
I probably need to set -profile too, correct? To make sure I can transition from one file to another during the html5 video play? would that be baseline then?
<FlorianBad>
Does `-preset veryslow` affect the way the pixels are resampled for `-vf scale` ? Audio resampling too?
<furq>
profile will default to high which is what you want
<furq>
you might want to set -level 42 for hwdec compatibility
<furq>
preset only affects the encoder which gets already-scaled frames
<furq>
if you want to change the scaler then -vf scale=1920:1080:flags=lanczos
<furq>
or zscale but that's another rabbit hole
<furq>
also testing 20 crf levels is overkill, 16-24 should be enough to test
<FlorianBad>
It's not for testing; the bandwidth will be constantly measured as the video plays to see how much the connection can handle, and I will switch from one file to another as the user plays, so having 100 different levels of quality wouldn't hurt, although of course I won't go that far
<FlorianBad>
So there's not the slightest chance that -profile becomes something else from file to file, so that when I transition the data in the middle of the file it fails? (need to make sure all files are exactly the same so I can find the P-frames and read from one file, then another, then back to a lower quality file, as the video is played depending on the bandwidth)
<furq>
it will always be high if you use -preset veryslow
<furq>
there's no harm explicitly setting it though
<FlorianBad>
ok and you're talking about high, not high422 ?
<furq>
if you're using -pix_fmt yuv420p then yeah
<FlorianBad>
ok
<furq>
it'll use hi10p/422p/444p if you have that kind of input
<FlorianBad>
Ah! lanczos, I remember using that with ImageMagick, yeah! awesome thanks
<aaabbb>
scale=whatever:flags=lanczos+accurate_rnd+full_chroma_int+full_chroma_inp is even better, if you want to improve scaling quality at the slight expense of speed (which you probably do if you're using the veryslow preset anyway). lanczos and spline are both great for downscaling
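Those scaler flags in a full command, as a sketch (target size and filenames are placeholders):

```sh
# Higher-quality downscale: lanczos kernel plus accurate rounding and
# full-chroma interpolation, at some cost in scaling speed.
ffmpeg -i input.mov \
  -vf "scale=1280:720:flags=lanczos+accurate_rnd+full_chroma_int+full_chroma_inp" \
  -c:v libx264 -preset veryslow -crf 20 -pix_fmt yuv420p out_720p.mp4
```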
<FlorianBad>
ok, thanks aaabbb :) Yes it's the kind of thing I will run overnight on an excellent GPU, so time really doesn't matter
<aaabbb>
also the seek spots are I frames not P frames
<aaabbb>
more I frames means faster seeking, but larger file size (I frames are not as efficient as other frame types)
<aaabbb>
is your goal to just have best quality you can?
<aaabbb>
best quality supported by web browsers that is
<aaabbb>
could increase compression slightly by using an open gop. some h264 decoders won't support that but i think all web browsers do?
<aaabbb>
actually that's a bad idea if you're streaming, ignore me i'm dumb
<FlorianBad>
aaabbb, ok, I haven't figured out this part yet, it will be in a next step.
<FlorianBad>
Yes, goal is highest possible quality without ruining user BW, so I'm going to have to find a way to open that file on the server and find exactly where the bytes I can use for transition are (of course I'll store that just after encoding so I already know where it is).
<FlorianBad>
So then server will read file-quality1.mp4 up to a certain point, then realize there's more BW available so I continue reading at the exact same spot but in file-quality2.mp4
<aaabbb>
as for audio, i find that nearly all devices will play opus, so h264 + opus should be fine
<aaabbb>
FlorianBad: ah, for that you would cut at I frames
<FlorianBad>
Well it's not technically streaming, I will use the Javascript video API but feed the bytes manually via MediaSource API instead of using the video.src=url thing
<aaabbb>
IDR frames specifically (so I frames with a closed gop, which x264 does by default)
<aaabbb>
so if your video is 60fps and your gop is 120 frames then you can switch between qualities at 2 second intervals
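For the quality variants to share identical switch points, each file generally needs IDR frames at fixed positions; a sketch assuming the 60 fps / 120-frame GOP example above (input name is a placeholder):

```sh
# Force an IDR frame every 120 frames (2 s at 60 fps) and disable
# scene-cut keyframes, so every quality variant cuts at the same spots.
ffmpeg -i input.mov -c:v libx264 -crf 20 \
  -g 120 -keyint_min 120 -sc_threshold 0 \
  -pix_fmt yuv420p out.mp4
```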
<FlorianBad>
aaabbb, you mean ffmpeg can already cut that into multiple files ready to transition from/to ?
<aaabbb>
it can yeah
<FlorianBad>
hmm! What would be the options I should look into for that then?
<aaabbb>
it can only cut at I frames (unless you're transcoding, which you don't want to do)
<FlorianBad>
Of course it still has to have the usual headers at the beginning of the first file though, I assume? (e.g. duration, etc)
<aaabbb>
you'll want mpegts
<aaabbb>
have you considered using dash?
<aaabbb>
because you might be reinventing the wheel here
<FlorianBad>
aaabbb, I already wrote my own web server, my own js library, etc. so yeah I'll keep reinventing the wheel because it allows me to do all sorts of insane things :)
<FlorianBad>
(I just haven't written my own browser yet that people will switch to en masse lol... Don't plan to do that :D )
<aaabbb>
you could implement an existing standard like dash then
<aaabbb>
or take ideas from it
<FlorianBad>
aaabbb, mpegts is purely for streaming? Meaning that the html5 video object won't have a duration? (my videos all have a fixed duration, technically they are not streaming)
<aaabbb>
it won't have things like duration, that's what dash is for (it gives that information)
<FlorianBad>
ok, well but I will handle the bytes manually anyway (and all the progress bar stuff because I actually put that in canvas) so I guess that's not even important then after all? I can just handle the duration/position separately on my own
<aaabbb>
you can't just cut an existing mp4 file by cutting at bytes
<FlorianBad>
And in terms of quality it's still all the same stuff as mp4, just arranged differently?
<aaabbb>
the quality is whatever you want it to be yeah (the gop will be pretty small but that's unavoidable)
<FlorianBad>
really? So if I was having a file encoded with a certain -crf and another with the exact same parameters except for -crf, switching from one to the other at an I-frame would make the player fail?
<aaabbb>
if you were just switching in the middle of the file (and not doing any kind of remuxing), yes
<FlorianBad>
yes=fail?
<aaabbb>
yes it would fail
<FlorianBad>
ah, damn, ok
<FlorianBad>
well then I don't have choice but to use mpegts
<aaabbb>
that's pretty much exactly what mpegts is for
<aaabbb>
and then you just have to send metadata, which is what dash or hls can do
<aaabbb>
mp4 is a container that basically describes everything necessary to decode it in the moov atom, and it's fully self-contained. mpegts on the other hand splits the video into smaller self-contained packets that can each be independently decoded
<FlorianBad>
I see, so mpegts means no movflags faststart obviously?
<aaabbb>
so if you have high quality video A and the same video at lower quality B, then you can send mpegts packets A1 A2 A3 A4 B5 B6 A7 A8 for example, where it switches to B is when the connection quality is low and it switches back to A when connection quality is faster
<FlorianBad>
yeah, exactly what I want to do
<aaabbb>
yeah no faststart because there's no moov atom, because it's not an mp4 container
<FlorianBad>
now how do I instruct ffmpeg to split that into many output files at the smallest possible interval? (like 1s e.g.)
<aaabbb>
if you want to specify an average bitrate btw instead of just using crf, you can use 2pass abr
<FlorianBad>
hmm, if it's really -f segment then I can't specify mpegts in the -f option, might be something else then
<aaabbb>
i've never used hls or dash so it's new to me as well, i just know how it works, not exactly how to do it
<aaabbb>
but for hls for example, ffmpeg can create the m3u8 file on its own
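On the `-f segment` question: the segment muxer does accept mpegts via its own `-segment_format` option, and the hls muxer can write the playlist itself. A sketch (segment length and filenames are placeholders):

```sh
# Split an already-encoded file into ~2 s MPEG-TS segments without
# re-encoding; -segment_format selects the per-segment container.
ffmpeg -i out.mp4 -c copy -f segment -segment_time 2 \
  -segment_format mpegts 'seg_%04d.ts'

# Or let the hls muxer produce both the .ts segments and the .m3u8:
ffmpeg -i out.mp4 -c copy -f hls -hls_time 2 -hls_list_size 0 playlist.m3u8
```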
<FlorianBad>
Ah, you would use abr to prevent having a situation where 2 different qualities based on -crf would just be too close in bitrates?
<aaabbb>
no they won't be too close in bitrate, but it would prevent a problem where a complex video is 10x larger and requires 10x more bitrate at the same crf
<aaabbb>
sometimes it's just nicer to be able to say that a stream is targeted for 800kbit and another stream for 1400kbit for example
<FlorianBad>
I was planning on having something that generates e.g. 24 files per video, 3 resolutions (720, 1080, 1440) and each 8 -crf, e.g. maybe 15 17 19 21 23 25 27 29
<aaabbb>
you'll want to do tests
<FlorianBad>
ok I see
<aaabbb>
and a crf of 15 is way overkill
<aaabbb>
18 is already pretty much visually lossless
<FlorianBad>
I certainly will, just trying to gather all the info possible for now
<aaabbb>
most streaming sites use 2pass instead of crf
<aaabbb>
you'll get more consistent bitrate
<aaabbb>
because "crf 25" doesn't tell you much. is that 800kbit? 1400kbit? 100kbit? 9000kbit? it all depends on the input
<FlorianBad>
but it's still variable, right? Just variable around that average?
<FlorianBad>
yeah I see
<aaabbb>
right
<aaabbb>
you can set constraints
<aaabbb>
the tighter the constraints, the more "cbr-like" it is, you don't want hard cbr
<aaabbb>
but you could say (just a random example) that you want it to be 1400kbit average, and never exceeding 2500kbit at any time
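That example as a constrained-ABR sketch (all the numbers are the random examples from above, not recommendations):

```sh
# ~1400 kbit/s average, with maxrate capping the local bitrate at
# ~2500 kbit/s; bufsize controls how strictly the cap is enforced.
ffmpeg -i input.mov -c:v libx264 \
  -b:v 1400k -maxrate 2500k -bufsize 2800k \
  out.mp4
```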
<FlorianBad>
So far I have this: https://pastebin.com/2fd3QbAp (just for my notes, until I start making some tests and researching things)
<aaabbb>
pastebin blocks me, can't open it with my ip, try bpa.st
<FlorianBad>
BTW, I'm quite confused with the -foo:bar options format versus the -barfoo type... What am I supposed to use? It's not always clear :)
<FlorianBad>
so the data of the pass 1 goes.... where?
<aaabbb>
the actual video content isn't saved, but you'll get some log files created that contain stats that pass 2 will need
<aaabbb>
usually ffmpeg2pass-0.log and ffmpeg2pass-0.log.mbtree
<aaabbb>
ideally you want to use the exact same settings in the 1st pass as the 2nd, for best bitrate accuracy and quality
<aaabbb>
for your testing you can do just regular abr, all 2pass does is make the quality more consistent and causes the actual result to match your target bitrate more accurately
<FlorianBad>
ok I see. And I assume we can specify the path of these log files? (in the end I'm writing a program so I'm not going to rely on ./ or something :D )
<aaabbb>
-passlogfile for libx264
<aaabbb>
according to ffmpeg -h encoder=libx264
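The two passes put together, as a sketch (bitrate, log path and filenames are placeholders):

```sh
# Pass 1: analysis only, no real output; -f null discards the muxed result.
ffmpeg -y -i input.mov -c:v libx264 -preset veryslow -b:v 1400k \
  -pass 1 -passlogfile /tmp/stats -an -f null /dev/null

# Pass 2: same video settings, reads the stats written by pass 1.
ffmpeg -i input.mov -c:v libx264 -preset veryslow -b:v 1400k \
  -pass 2 -passlogfile /tmp/stats -c:a aac out.mp4
```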
<FlorianBad>
PS: yes I meant things like -vcodec vs -codec:v, it seems almost every option has different ways of being written :)
<FlorianBad>
even -c:v in this case lol
<FlorianBad>
so that's 3 different ways for the same thing. I'm just wondering what's the best way for clarity
<aaabbb>
doing -c:v is better, -vcodec is deprecated (same with -vbsf, which is deprecated vs -bsf:v)
<aaabbb>
btw if you're going to be doing a lot of transcoding into multiple bitrates, you might want to check out https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs which can improve performance a bit by removing redundant steps (like decoding the source, doing whatever pre processing etc)
<FlorianBad>
ok thanks, I want my scripts to be written correctly and clear + future proof
<aaabbb>
then you'll want -c:v instead of -vcodec
<FlorianBad>
maybe -ar is something else too then? -r:a? :)
<FlorianBad>
ah! indeed, but what about the multiple passes then? :-/
<aaabbb>
-ar is just something on its own
<aaabbb>
and you can probably do multiple passes with multiple outputs if you specify a different -passlogfile, i've never tried
<FlorianBad>
hmm, does -pass 1 care about the output format? or does it just log the input data into passlogfile so I can just run it once?
<aaabbb>
yeah it does care about the output format, -pass 1 is the syntax for libx264
<aaabbb>
it's a feature of the encoder not of ffmpeg itself
<FlorianBad>
ok
<aaabbb>
it doesn't care about the muxer i mean but it does care about the codec
<FlorianBad>
and that's why you said -f null ?
<aaabbb>
right that means it's not using a muxer and there's no container, but it's still h264
<FlorianBad>
ok
<aaabbb>
i meant that it cares about the codec not the -f format
<FlorianBad>
Well, I have a lot of work, research and testing to do now :D lol
<FlorianBad>
First I need to compile ffmpeg with libx264 because my Slackware doesn't have it in the bin so I couldn't even test anything yet
<aaabbb>
and yes just tested, it's easy to do 2 pass even with multiple outputs
<FlorianBad>
ah, I was going to ask you :) thanks a lot
<FlorianBad>
hmm, lavfi + split, taking good note of that
<aaabbb>
then you can do whatever scaling or different bitrates you want for each output, as well as improve performance by not doing redundant steps like if you want to put something before the split=2, such as denoising, deinterlacing, scaling or whatever
<aaabbb>
-lavfi is the same as -filter_complex btw
<aaabbb>
you should learn how filtergraphs work, so you can do [0:v]split=2[v1][v2] or you could do [0:v]split=3[v1][v2][v3] etc, or if you want to denoise before, you would do [0:v]hqdn3d,split=2[v1][v2] just for example. it really really helps if you are doing intensive and slow preprocessing, like if you want to use the nlmeans denoiser
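That filtergraph idea as a complete multiple-output sketch, decoding the source once and sharing the split (sizes, CRF values and names are placeholders):

```sh
# One decode, two encodes: split the decoded frames, scale each branch,
# and map each labelled output to its own file.
ffmpeg -i input.mov -filter_complex \
  "[0:v]split=2[v1][v2];\
   [v1]scale=1920:1080:flags=lanczos[v1080];\
   [v2]scale=1280:720:flags=lanczos[v720]" \
  -map "[v1080]" -c:v libx264 -crf 20 out_1080p.mp4 \
  -map "[v720]"  -c:v libx264 -crf 22 out_720p.mp4
```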
<FlorianBad>
ok, probably nothing will be needed because all I will use it for is for my script that I will call every time I upload a video. And the input will be a final export from a blender project e.g. in insane quality, so it generates everything automatically in different folders that then my web server will use
<FlorianBad>
yeah but then it's really video editing, no longer just encoding :)
<aaabbb>
btw if you *really* don't care about time and just want quality, -preset placebo helps slightly
<aaabbb>
but as the name implies, it takes a lot longer to process for only a slight increase in compression efficiency
<aaabbb>
if you have access to user agent before you stream, you can use better codecs than h264
<aaabbb>
h265 (hevc) and av1 both will have much better quality at the same bitrate compared to h264 so it's a bit of a waste to give everyone h264 if you know that their browser will play something else
<FlorianBad>
yeah, I was considering vp9 + opus for everyone, then h264 for all mobile devices
<aaabbb>
vp9's quality is similar to h264
<FlorianBad>
but one thing to consider is that I won't really know the power of the machine itself, only the network BW, so if a codec is heavier to decode by the hardware it's an issue
<aaabbb>
heavier codecs have a fast decode tune setting
<FlorianBad>
I might end up sending a massive file with amazing quality, only to have the client struggle to keep the fps because of cpu/gpu limitations
<FlorianBad>
actually I will have access to that final fps... hmmm I could actually use that info as well
<aaabbb>
but even mobile phones that support hevc will use hardware decoding, so unless you're doing 10 bit encodes, it probably won't struggle. hevc with -tune fastdecode will still be much better quality for the same bitrate
<aaabbb>
so for 4k encodes or 10 bit encodes you might have weaker hardware struggling to decode it, and for that -tune fastdecode is good, but for 1080p or 720p hevc isn't going to struggle to decode on any hardware that supports it
<FlorianBad>
ok, taking note of these. So you would use h265 in an av1 container with opus or aac?
<aaabbb>
av1 isn't a container, av1 is another codec like h265
<FlorianBad>
ah
<aaabbb>
you'd still use mpegts
<FlorianBad>
ah, indeed ok
<aaabbb>
i don't think av1 works in mpegts though
<FlorianBad>
All right, so ideally, based on user-agent, who should get h265, h264, and av1, and then aac vs opus?
<FlorianBad>
ok so that'd be h265 vs h264 then?
<aaabbb>
yeah, so someone might support h264 but not h265, but they may support opus so you can do h264+opus
<aaabbb>
opus is just better than aac lc in every way, you can get better quality opus at 96kbit than aac at 160kbit
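Audio-only encode sketches for the two codecs discussed, at the ballpark bitrates mentioned above (filenames are placeholders):

```sh
# Opus around 96 kbit/s for browsers/devices that support it...
ffmpeg -i input.mov -vn -c:a libopus -b:a 96k audio.opus

# ...and an AAC fallback around 160 kbit/s for the rest.
ffmpeg -i input.mov -vn -c:a aac -b:a 160k audio.m4a
```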
<FlorianBad>
Honestly at this point I don't really care about how many combinations of formats and bitrates I will have because 1- I will have few videos, I'm not Youtube. 2- I will probably encode overnight on an excellent GPU+CPU 3- All I will have to do really is add a few lines in my scripts
<aaabbb>
gpu encoding is often lower quality than cpu encoding just fyi
<FlorianBad>
So if I end up with 4 combinations of format x 3 resolutions x 8 bitrates = 96 files per video I actually don't care lol
<aaabbb>
it's much faster but you'll always get worse quality for the same bitrate when doing gpu encoding
<aaabbb>
you don't need to redo format combinations, you don't have to re-encode h264 twice if you want h264+aac and h264+opus
<FlorianBad>
aaabbb, yeah but I still need to store the files for each combination
<FlorianBad>
(I was referring mostly to storage size)
<aaabbb>
you don't have to store them separately
<FlorianBad>
hmm?
<aaabbb>
there's no reason you can't remux on the fly
<aaabbb>
transcoding is slow but remuxing is (usually) pretty much instant
<FlorianBad>
video and audios "blocks" (or whatever that's called) can be concatenated just like that?
<aaabbb>
not concatenated but remuxed
<FlorianBad>
ah, but still need some kind of ffmpeg command?
<aaabbb>
there are other programs that can do it but ffmpeg does it too
<FlorianBad>
I'd rather have it all stored already so it's all fast and predictable. Don't want to worry about forgetting to install ffmpeg with the right codecs and so on too :0
<aaabbb>
there's no generation loss from doing a stream copy so you can do it as many times as you want
<aaabbb>
and you don't need to know the codecs either, ffmpeg can remux without having the encoders installed
<aaabbb>
if it's an hevc (h265) stream but you don't have libx265 available, you can still stream copy it. same if the audio is opus and you don't have libopus
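A remux sketch: stream copy from mp4 into mpegts, with no encoders involved (recent ffmpeg inserts the bitstream filter automatically, but it is shown explicitly here):

```sh
# Remux without re-encoding; h264_mp4toannexb converts the H.264
# bitstream from mp4's length-prefixed form to the Annex B form TS uses.
ffmpeg -i out.mp4 -c copy -bsf:v h264_mp4toannexb out.ts
```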
<FlorianBad>
ok, but still don't really want to call any command. I'd rather just read the bytes straight
<FlorianBad>
So 96 files is a little too much :) But I don't mind if I end up with 40 versions of the video, each in a folder holding the blocks of mpegts
<aaabbb>
FlorianBad: you don't need to do that either, mpeg-dash will do separate audio
<FlorianBad>
but I would still need to call a command?
<aaabbb>
no
<aaabbb>
only if you want to have a full mp4 or mkv or whatever
<FlorianBad>
oh so now you're talking about sending audio and video separately to the client? Then some Javascript handles the remuxing?
<aaabbb>
well it depends on if you want to use mpeg-dash or if you want to reinvent the wheel
<FlorianBad>
I'd rather reinvent ;)
<aaabbb>
then you don't have to remux it yourself, web browsers' media source extensions should be able to do all of that (but i'm not very familiar with it)
<aaabbb>
any browser that supports mse will support dash
<FlorianBad>
I wonder how they're synced...
<aaabbb>
mpeg-dash takes care of all of that
<FlorianBad>
Yeah but it might mux everything on the back-end, not letting the browser do that
<aaabbb>
it doesn't have to remux it into a container
<aaabbb>
because you aren't putting it in a single file, you're just streaming it
<FlorianBad>
Honestly I'd rather have everything already muxed in A LOT of files, that's fine. It's a little easier to handle afterwards, and faster for everybody at the moment of the request
<FlorianBad>
storage is cheap
<aaabbb>
you won't be able to do that if you want to be able to adaptively change bitrate
<FlorianBad>
of course I can, except that both video and audio will follow up/down
<FlorianBad>
so for each video bitrate average I have an equivalent audio bitrate
<aaabbb>
you really don't need to do that, audio requires so little compared to video
<FlorianBad>
then it becomes a mpegts segment of a particular quality
<aaabbb>
is this like high quality music?
<FlorianBad>
Well, that actually allows me to tune based on the type of content. e.g. if it's a music video I keep the audio quality at its max almost all the time, but if it's speech then I can allow it to drop (relative to video quality). And all that can be done at encoding
<FlorianBad>
I will have both actually yes
<aaabbb>
ah ok
<aaabbb>
you'll certainly not want to keep a bunch of mp4 files
<FlorianBad>
What's great is that I know it at the moment I run the script to encode the video for the server, so that's actually pretty handy to make my encoding-script pick that video/audio quality ratio based on what I specify
<aaabbb>
just have the audio and video separate
<aaabbb>
i don't know the inner workings of mpeg-dash but if you reinvent it you'll have to learn it
<FlorianBad>
ok, it may not be that complicated after all
<aaabbb>
you can use open source dash javascript libraries
<aaabbb>
since it's designed exactly for what you want to do
<aaabbb>
adaptive content streaming
<FlorianBad>
well, then it will be a mess to put my video in canvas and do everything I want; I already wrote plenty of things for that. Also the data will not come via normal HTTP requests, it's part of a format I designed to encode objects. So I'll really end up with the bytes in a variable, and then the video object. So I'll just need to figure out what's in between
<FlorianBad>
but yeah there's probably all the answers in their code
<aaabbb>
you can read the open standard
<FlorianBad>
I think it might be a little complicated for almost no gain. The only thing it saves is disk space on the server, which I don't really care too much about
<FlorianBad>
I'd rather just have all combinations already muxed on the server ready to just read in segments of mpegts
<FlorianBad>
Even if that's somewhat redundant, that's ok
<aaabbb>
well you'll have to use dash or something similar for mpegts to play in the first place
<aaabbb>
the adaptive streaming part is going to be a lot more complex than having separate video and audio
<aaabbb>
if you're reinventing it manually
<FlorianBad>
you mean mpegts is not supported by the video API of browsers?
<aaabbb>
it is but it's not like some file you throw at a browser
<FlorianBad>
yeah but I will already use the MediaSource to feed the bytes of video data, instead of the classic .src parameter
<FlorianBad>
so I'm assuming that dumping these blocks of mpegts one after the other should work?
<aaabbb>
i don't know, i've never done it manually like that
<FlorianBad>
I guess one thing I should try as a quick test is to use .src with a path where the file is a concat of all mpegts blocks
<FlorianBad>
just to see what happens
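That quick test, sketched out, assuming segments produced by the segment muxer as discussed earlier (names are placeholders):

```sh
# MPEG-TS is designed so segments can be byte-concatenated; join a few
# and probe the result to see whether a sensible duration comes back.
cat seg_0000.ts seg_0001.ts seg_0002.ts > joined.ts
ffprobe -v error -show_entries format=duration joined.ts
```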
<aaabbb>
this is already pretty far beyond the stuff i'm familiar with, since i've never done it that way
<aaabbb>
if you are reinventing it, you really gotta test everything a lot
<FlorianBad>
yeah. I already have taken plenty of notes, so I'll research more of all that tomorrow :) I'm done for today
<FlorianBad>
thanks a lot aaabbb for your help :D
<aaabbb>
just take the stuff i say about mpeg-dash with a grain of salt because i rarely use it outside of watching videos in my browser :D
<FlorianBad>
sure, I'll just do my homework of researching everything
<FlorianBad>
I'm not going to be able to make this work without having an excellent full understanding of how it works
<aaabbb>
learning is a goal on its own sometimes :)
<FlorianBad>
yeah, but I have to be careful, I'm the type to get stuck in details and forget the big picture, that's why reinventing the wheel is so dangerous
<FlorianBad>
but I never regretted coding my own stuff from scratch, I do that all the time and it's amazing because everything is absurdly optimized and works extremely well. When you reinvent the wheel you can make it more round ;)
<aaabbb>
since you said you use slackware, i totally understand
<aaabbb>
it's a great way to learn and optimize things
<FlorianBad>
haha, exactly. Slackware users have to be somewhat masochistic :)
<aaabbb>
as a gentoo user, i totally understand haha
<applecuckoo>
Hi! I've got a file with a few different language tracks and a backing track muxed in.
<applecuckoo>
I'm trying to remux and transcode them to a Matroska file with the backing track pre-mixed, but the problem is that a couple of the languages have mono tracks and not stereo tracks.
<applecuckoo>
Those tracks are being mixed in by FFmpeg like they're separate left and right tracks, not a 'center' track.
<brianp>
Not sure this is the right place but figured I'd ask anyway. I'm using ffmpeg-kit with flutter, and have noticed ffprobe gets different results for the same file on MacOS vs iOS. I'm not really sure where to go from here for debugging etc. Specifically the difference is the tag `com.apple.quicktime.creationdate` is missing on iOS. And oddly only on
<brianp>
physical devices. The simulator doesn't have a problem
<brianp>
I've sent the file from my physical device to my laptop and then to the ios simulator. ffprobe on the laptop returns everything as expected. Not surprisingly ios simulator does as well. So the original file on the physical device definitely has the tag.
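A probe sketch for comparing the two platforms' output directly (filename is a placeholder):

```sh
# Dump the container-level tags as JSON; compare the macOS and iOS runs
# to see whether com.apple.quicktime.creationdate is present in each.
ffprobe -v quiet -show_entries format_tags -of json input.mov
```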
<galad>
Is the similator running on an intel cpu or arm?
<brianp>
arm
<brianp>
Laptop is an M1, physical phone is an XR
<galad>
I guess run ffprobe in a debugger and set some breakpoints in mov.c
<brianp>
`mov.c` check and check. Cheers. I'll see what I can do with that
<galad>
Any admin of ffmpeg-devel irc channel around here? I think my ip was banned months ago when trying to contain you-know-who from spamming on ffmpeg-devel, and my irc bouncer rebooted some days ago and I can't join anymore.
<another|>
galad: should be fixed now
<galad>
thanks
<applecuckoo>
Hello! Didn't get a response yesterday, so I'm trying again now. I have a file with multiple different language tracks and a background track that's mixed in with said language tracks.
<applecuckoo>
Most of the language tracks are stereo, but some are mono
<applecuckoo>
The mono language tracks are acting in a weird way where the background and language tracks are acting like left and right.
<applecuckoo>
thanks again furq - was just about to shut down my client
<applecuckoo>
furq: I was thinking of using something like amerge+pan, how would I do that?
<furq>
[0:1][0:2]amerge=inputs=2,pan=...
<furq>
etc
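A fuller version of furq's amerge+pan sketch might look like this. The stream indices (0:1 for the background, 0:2 for a stereo language track) and the channel routing are assumptions about the file's layout, not something stated in the channel:

```shell
# Merge audio streams 0:1 and 0:2 into one 4-channel stream, then pan
# it back down to stereo (c0/c1 from the first input, c2/c3 from the second).
# Verify the real stream indices first with:  ffmpeg -i input.mkv
ffmpeg -i input.mkv \
  -filter_complex "[0:1][0:2]amerge=inputs=2,pan=stereo|c0=c0+c2|c1=c1+c3[a]" \
  -map 0:v -map "[a]" -c:v copy -c:a aac output.mkv
```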
<BtbN>
What are you actually trying to do?
<BtbN>
mix each language track with the music track respectively?
<BtbN>
As long as you use amerge, you will always end up with one of them on the left, and the other on the right. Cause that's what amerge does. It merges multiple inputs into one, by putting stuff into different channels.
<applecuckoo>
BtbN: yeah, that's what I'm doing
<BtbN>
amerge is the wrong filter then indeed
<applecuckoo>
it works with all the stereo language tracks, just not the mono ones.
<BtbN>
It won't work with them either. The language channel will just end up in some random third channel
<BtbN>
the issue is more audible with it being mono
<BtbN>
If you want to mix two audio sources, use amix
<BtbN>
amerge you use if you want to combine multiple sources into a single source with multiple channels
<applecuckoo>
basically I want a bunch of pre-mixed language tracks in a single Matroska file. I'll give amix a shot.
<BtbN>
mixing stereo and mono does not work in general
<BtbN>
if the music source is stereo, you'll want to upmix the mono ones to stereo
<BtbN>
and then mix
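Putting BtbN's advice together, one possible sketch (again, the stream indices are guesses about this particular file): spread the mono language track across both channels with pan, then amix it with the stereo music track:

```shell
# Assumed layout: stream 0:1 = stereo music, stream 0:2 = mono language track.
# pan duplicates the single mono channel (c0) into both stereo channels,
# then amix mixes the two stereo sources together.
ffmpeg -i input.mkv \
  -filter_complex "[0:2]pan=stereo|c0=c0|c1=c0[voice];\
[0:1][voice]amix=inputs=2:normalize=0[a]" \
  -map 0:v -map "[a]" -c:v copy -c:a aac output.mkv
```

`normalize=0` keeps amix from halving the volume of each input; drop it if clipping is a concern.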
<FlorianBad>
Are there any ./configure options that would get me EVERYTHING ffmpeg has to offer? Some kind of --enable-everything :) I'm compiling a bin that 1- will only stay on my new high-end laptop (so it can have native flags) and 2- I don't care about the size; I just never want to be limited by something it lacks.
<applecuckoo>
yep, amix works. Thanks y'all!
<FlorianBad>
Also, I had these flags a while ago that I used when compiling native, would these be a good idea for ffmpeg?
<FlorianBad>
My main use will be mostly a lot of experimentation with various codecs, and then some serious encoding overnight for web use. Anything obvious I could be missing based on that?
<furq>
theora and xvid are ancient and poorly supported nowadays
<furq>
and you're missing av1
<furq>
and libopus
<FlorianBad>
ah! Thanks, indeed
<furq>
also you probably don't want enable-shared
<furq>
there's no harm building with march=native and lto or whatever but remember the external encoders are doing 99% of the work
<furq>
so you'd need to build those with your custom cflags as well
<furq>
most hot paths are asm and cpu feature detection for those is done at runtime, so you won't see big gains by doing that
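For the record, there is no literal --enable-everything switch; a kitchen-sink build is just a long list of --enable-lib* flags (each requiring the library to be installed first), plus the license gates. A partial sketch, with the flag list trimmed:

```shell
# A subset of common external-encoder flags; run
#   ./configure --help | grep enable-lib
# for the full list available in your ffmpeg version.
./configure \
  --enable-gpl --enable-nonfree \
  --enable-libx264 --enable-libx265 \
  --enable-libvpx --enable-libaom --enable-libsvtav1 \
  --enable-libopus --enable-libfdk-aac \
  --extra-cflags="-march=native -O3"
```

Note --enable-nonfree (needed for libfdk_aac) makes the resulting binary undistributable.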
<FlorianBad>
ah, --cpu will not affect the building of the shared libs? Which means in that case they will use CFLAGS ?
<furq>
well that command doesn't build any external libs
<FlorianBad>
what about --enable-shared ?
<furq>
that just creates libavcodec.so etc
<furq>
and i think that also makes ffmpeg prefer to link with external shared libs instead of static libs
<furq>
prefer but not force
<FlorianBad>
which doesn't affect performance of course?
<furq>
no
<furq>
but it might break your build if you link against system libs that you then update
<FlorianBad>
ok. However I probably had some good reason to build them shared in the past, but since I don't remember why, maybe I don't need that anymore :) (these are inspired by very old options I used)
<furq>
shared builds are just higher maintenance in general
<furq>
it doesn't really make sense unless you're a distro maintainer
<FlorianBad>
ok
<FlorianBad>
I use Slackware (this will be a high-end laptop). So I should probably get most of the codecs as packages, but the ones I'm absolutely certain to use a lot (libx264, e.g.) I should then compile manually with CFLAGS='-mtune=native -march=native -O3' ?
<furq>
if you want it to make any difference at all then yeah
<furq>
idk if i'd say should, last time i bothered doing that i got <1% speedup
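For completeness, building x264 itself with native flags would look roughly like this (x264's configure takes --extra-cflags; the URL and prefix are the usual defaults, adjust to taste). As furq notes, expect little measurable gain since the hot paths are hand-written asm with runtime CPU detection:

```shell
# Build libx264 with native optimization flags before building ffmpeg against it.
git clone https://code.videolan.org/videolan/x264.git
cd x264
./configure --enable-static --prefix=/usr/local \
  --extra-cflags='-mtune=native -march=native -O3'
make -j"$(nproc)"
sudo make install
```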
<FlorianBad>
ok :)
<FlorianBad>
BTW which av1 should I use? There's like 3 or 4...
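(The question went unanswered here, but the "3 or 4" are the three mainstream software encoders ffmpeg can wrap: libaom-av1 (the reference encoder), libsvtav1 (SVT-AV1), and librav1e (rav1e), plus hardware ones like av1_nvenc. You can check which ones a given build actually has:)

```shell
# List the AV1 encoders compiled into this ffmpeg binary.
ffmpeg -hide_banner -encoders | grep -i av1
```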