BtbN changed the topic of #ffmpeg to: Welcome to the FFmpeg USER support channel | Development channel: #ffmpeg-devel | Bug reports: https://ffmpeg.org/bugreports.html | Wiki: https://trac.ffmpeg.org/ | This channel is publicly logged | FFmpeg 7.0 is released
Tinos has quit [Remote host closed the connection]
emmanuelux has quit [Quit: au revoir]
iconoclasthero has joined #ffmpeg
<iconoclasthero>
so my bug seems to have been patched...
<DHE>
if you look at the Summary view, there's a URL shown in the information which you can use `git clone` with
<DHE>
... oh he's gone..
dostoyevsky2 has quit [Quit: leaving]
dostoyevsky2 has joined #ffmpeg
finsternis has quit [Read error: Connection reset by peer]
Suchiman has quit [Quit: Connection closed for inactivity]
Tinos has joined #ffmpeg
lusciouslover has quit [Ping timeout: 260 seconds]
lusciouslover has joined #ffmpeg
MrZeus_ has joined #ffmpeg
yans has quit [Ping timeout: 248 seconds]
MrZeus__ has quit [Ping timeout: 255 seconds]
<iconoclasthero>
i'm here
<iconoclasthero>
I built it with, among other things: configuration: --bindir=/usr/local/bin --prefix=/cache/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/cache/ffmpeg_build/include --extra-ldflags=-L/cache/ffmpeg_build/lib --extra-libs='-lpthread -lm' --ld=g++ --enable-libzmq
<iconoclasthero>
but does that --static flag mean that i can move the ffplay binary to a computer with the same OS, kernel, etc., and it'll run, or do i need libzmq on both machines?
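(an aside on checking this: `ldd` reports "not a dynamic executable" for a fully static binary, which is safe to copy; otherwise it lists the shared libraries the binary needs at runtime. The binary path here is illustrative.)
    # fully static build: prints "not a dynamic executable"
    ldd ./ffplay
    # dynamic build: check whether libzmq is among the runtime dependencies
    ldd ./ffplay | grep zmq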
walee_ has quit [Ping timeout: 252 seconds]
<iconoclasthero>
well, if i can't, it's working anyway! it's a shame that this was broken when I went to use it. i probably wouldn't've ended up finding mediamtx though.
Tinos has quit [Remote host closed the connection]
rsx has joined #ffmpeg
jagannatharjun has joined #ffmpeg
Tinos has joined #ffmpeg
HarshK23 has joined #ffmpeg
CarlFK has joined #ffmpeg
dreamon has joined #ffmpeg
Tinos has quit [Remote host closed the connection]
microchip_ has quit [Quit: There is no spoon!]
microchip_ has joined #ffmpeg
tp_ has quit [Quit: WeeChat 4.1.1]
<iconoclasthero>
so with the zmq protocol, it's working great if i use a named pipe from mpd: `screen ffmpeg -y -nostdin -f s16le -ar 48000 -ac 2 -i /tmp/mpd.fifo -f mpegts zmq:tcp://127.0.0.1:5555`
<iconoclasthero>
but if i try to switch to an unnamed pipe and call the command directly from mpd as an audio output (which I do w/o problem with rtsp://), I get an error, "Address family not supported by protocol (src/ip_resolver.cpp:542)"
<iconoclasthero>
/var/log/mpd/mpd.log has less to offer: Sep 03 02:03 : output: Failed to play on "ffmpeg zmq" (pipe): Write error on pipe: Broken pipe
<iconoclasthero>
i guess the mpd output breaks on the address family not supported by protocol... but the ffmpeg command works just fine when it's using the named pipe on the cli instead of an unnamed pipe called by mpd.
<iconoclasthero>
I'm using `cat /tmp/mpd.fifo | ffmpeg -y -f s16le -ar 48000 -ac 2 -i - -f mpegts zmq:tcp://127.0.0.1:5555` right now, so using stdin is not inherently the problem.
coldfeet has joined #ffmpeg
<iconoclasthero>
if i change localhost to 127.0.0.1, same errors, no output.
MrZeus_ has quit [Ping timeout: 246 seconds]
dreamon has quit [Ping timeout: 252 seconds]
mven979 has joined #ffmpeg
mven97 has quit [Ping timeout: 255 seconds]
mven979 is now known as mven97
<iconoclasthero>
same with `output "ffmpeg -hide_banner -f s16le -ar 48000 -ac 2 -i - -f mpegts zmq:tcp://0.0.0.0:5555 2>>/var/log/mpd/ffmpeg.log 1>&2"`
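(for reference, a minimal mpd.conf sketch of the pipe output being described, untested here; mpd's pipe plugin puts the command string in its `command` setting, and the `format` line matches the s16le/48000/2ch arguments above:)
    audio_output {
        type    "pipe"
        name    "ffmpeg zmq"
        command "ffmpeg -hide_banner -f s16le -ar 48000 -ac 2 -i - -f mpegts zmq:tcp://127.0.0.1:5555"
        format  "48000:16:2"
    }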
Magissia has quit [Quit: On the hunt]
rv1sr has joined #ffmpeg
microchip__ has joined #ffmpeg
microchip_ has quit [Ping timeout: 260 seconds]
MrZeus_ has joined #ffmpeg
Suchiman has joined #ffmpeg
coldfeet has quit [Remote host closed the connection]
lavaball has joined #ffmpeg
dreamon has joined #ffmpeg
markh has quit [Ping timeout: 260 seconds]
Tinos has joined #ffmpeg
cryptic has quit [Ping timeout: 255 seconds]
lavaball has quit [Remote host closed the connection]
buzel has quit [Quit: bye]
microchip__ is now known as microchip_
Kei_N has quit [Read error: Connection reset by peer]
Kei_N has joined #ffmpeg
Tinos has quit [Ping timeout: 256 seconds]
acovrig6012 has quit [Ping timeout: 252 seconds]
buzel has joined #ffmpeg
acovrig6012 has joined #ffmpeg
acovrig6012 has quit [Ping timeout: 246 seconds]
acovrig6012 has joined #ffmpeg
rvalue has quit [Read error: Connection reset by peer]
rvalue has joined #ffmpeg
coldfeet has joined #ffmpeg
xx has joined #ffmpeg
Tinos has joined #ffmpeg
mrelcee has quit [Quit: I want Waffles!]
mrelcee has joined #ffmpeg
Sketch has quit [Read error: Connection reset by peer]
Sketch has joined #ffmpeg
jagannatharjun has quit [Quit: Connection closed for inactivity]
lavaball has joined #ffmpeg
luva88 has joined #ffmpeg
markh has joined #ffmpeg
mccobsta has quit [Ping timeout: 252 seconds]
Blacker47 has joined #ffmpeg
<tykling>
hello :) I could use someone to point me in the right direction. I have a camera on my microscope and I want to overlay a crosshair so I can find the center. For some reason I thought this would be simple, but my lack of knowledge about video formats and streaming is making it very difficult
<tykling>
the camera is a https://www.theimagingsource.com/en-us/product/industrial/33u/dfk33ur0521/ and by default it outputs a resolution of 2592×1944, how "big" of a job is it to re-encode live video at this size with a static picture overlay? can it be done easily on a rpi? desktop cpu? does it need hw accel with a gpu?
<tykling>
I might be doing it wrong, or I might be using wildly insufficient hardware, because I keep getting very long delays (so video is far from live anymore) and very low framerates
<tykling>
"Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 2592x1944, 2982998 kb/s, 37 fps, 37 tbr, 1000k tbn, 1000k tbc" this is the video format the camera outputs by default, but it supports many other formats, I can do with monochrome and maybe 5-10 fps, if that makes the job easier. But I can also throw a big gpu at it if that helps. I just need a bit of an idea of what kind of hardware this should
<tykling>
be possible on.. thanks!
rsx has quit [Quit: rsx]
lavaball has quit [Quit: lavaball]
WereSquirrel has quit [Ping timeout: 260 seconds]
arbitercoin has joined #ffmpeg
NaviTheFairy has joined #ffmpeg
<intrac>
is there a way to have ffmpeg encode a file that's being appended to, so that ffmpeg doesn't quit when the end of the file is reached?
<intrac>
so you have to manually quit at the terminal
<intrac>
or perhaps with a timeout? ie, no appends to the file in 60 seconds, etc
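(one commonly suggested approach, not raised in this discussion and untested here, is to follow the growing file with tail and feed ffmpeg's stdin; ffmpeg then runs until you quit it manually:)
    # -c +1 reads from the start of the file, -f keeps following new appends
    tail -c +1 -f growing.ts | ffmpeg -i - -c:v libx264 -c:a aac out.mp4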
MrZeus__ has joined #ffmpeg
rvalue- has joined #ffmpeg
rvalue has quit [Ping timeout: 264 seconds]
MrZeus_ has quit [Ping timeout: 246 seconds]
rvalue- is now known as rvalue
jagannatharjun has joined #ffmpeg
<intrac>
tykling: several things for you to check:
<intrac>
- that the camera itself can output at 2592x1944 in rawvideo/YUY2 at the frame rate you want (as there might be limits from the USB max data rate)
<intrac>
if this is the case, then you might want to switch to MJPEG or H264 from the camera (if it supports those) as this would probably bypass any limitations with the USB
<tykling>
intrac: the camera works very smooth when just playing with ffplay
<intrac>
ah, then that would rule out that issue
<tykling>
"8-Bit Bayer (GR), 10-Bit Bayer Packed (GR), 16-Bit Bayer (GR), 8-Bit Monochrome, RGB24, YUV 4:2:2, YUV 4:1:1" these are the output formats the camera supports
<intrac>
I don't think an rpi is going to be able to encode at the full res at a reasonable frame rate
<intrac>
depending on the Pi version, you might be able to use hardware encoding (iirc)
<intrac>
the Pi4 has HW h264, iirc (I think I've used this in the past)
<intrac>
but again, iirc (as this is all from memory) it might be limited to around HD resolutions, eg roughly 1920x1080 or a square format of a similar number of pixels, eg 1600x1200
rv1sr has quit []
<intrac>
the most recent Pi unfortunately has removed several of the HW encoding options
<tykling>
I am not at all limited to a PI, a modern desktop computer with a gpu is probably a better fit
<intrac>
yes, that's my feeling. especially for any overlays
<intrac>
is full resolution important, or could you reduce to 1600x1200 or 1280x960, etc?
<intrac>
(at least for the cross-hairs overlay)
xx has quit [Remote host closed the connection]
<intrac>
you could probably dump the full resolution and also separately downscale it for the overlay and preview
xx has joined #ffmpeg
<tykling>
it is not hugely important, framerate and latency are more critical.. imagine trying to solder under a microscope with 2 fps and 5 secs delay, unplayable
<intrac>
right
<tykling>
I am half tempted to just give up and draw a cross on the monitor where the center is haha
<intrac>
that's what I was actually going to suggest, heh. but I thought you might not appreciate that
<intrac>
there are some options and ways to get ffmpeg latency down
<tykling>
but I see people on blogs etc giving "-filter_complex overlay" examples when they are streaming and such, where they seem to have no issues overlaying some text or whatever to their stream
<intrac>
but there's probably still going to be an awkward delay there
<tykling>
I really appreciate the input
<intrac>
tykling: but you're probably not aware of the delay with their streams
<tykling>
no, true!
<tykling>
2 or 5 secs delay on a twitch or whatever stream doesn't matter as much
<intrac>
depending on where you download ffmpeg, it can support hardware gpu encoding
<tykling>
yeah. say I want to give it a final go before I give in and find the magic marker and draw on my tv
<intrac>
but unfortunately some versions (eg from Linux repos) don't always have that supported
<tykling>
I will find the biggest gpu I have around and make hwaccel work with it
<tykling>
ok, so do I build it or can I get it prebuilt from a reputable place?
<intrac>
I think the syntax has changed a bit over the last few years, so if something doesn't appear to work it might be because of that
<intrac>
but the last time I checked, ffmpeg v4 had HW encoding support, but the later versions there didn't (so I downgraded back to v4)
<tykling>
I see
<intrac>
you can check if a version of ffmpeg has support with something like: ffmpeg -codecs | grep -i nvenc
<tykling>
is there something about the format the overlay picture is in? I think I read someone talking about them converting their overlay image to the right.. pixel format? color format? or something, so ffmpeg didn't have to do it
<intrac>
importantly, you should see something like 'DEV.LS h264 ... H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders: h264 h264_v4l2m2m h264_cuvid ) (encoders: libx264 libx264rgb h264_nvenc h264_v4l2m2m h264_vaapi nvenc nvenc_h264 )'
<intrac>
the 'DEV' means Decode, Encode
<intrac>
if you only get 'D.V' or similar, then it doesn't support encode
<intrac>
tykling: I'm not sure about that. yuv420p is the most common for video itself
<tykling>
I might try to convince the camera to switch to 8 bit monochrome and see what that does, that should be a lot fewer bytes to transport
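(for scale: 2592 x 1944 px x 2 bytes/px for yuyv422 x 37 fps ≈ 373 MB/s, i.e. exactly the 2982998 kb/s in the stream line quoted earlier; 8-bit monochrome halves the bytes per pixel, so roughly 1.5 Gbit/s.)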
<intrac>
you can change the pixel format from YUV to RGB and back again
<tykling>
in the camera or ffmpeg or?
<intrac>
8 bit should be more than enough for this
<tykling>
yes
<intrac>
well, you can possibly set the camera to output RGB
<intrac>
but imo, I wouldn't expect any issue with using YUV and letting ffmpeg convert internally as needed (eg if you overlay an RGB PNG crosshairs image)
<tykling>
got it
<intrac>
otherwise you're creating more complexity to manage
<tykling>
so you would expect pretty live video and decent fps with a modern computer for this task?
<intrac>
I would consider dropping the resolution to something like 1280x960 early on in the filters
<intrac>
which would reduce the processing requirements a lot
<tykling>
right
<intrac>
in theory, it should be possible. although I never like to say yes unless I've tried it myself :)
<intrac>
I would think you should be able to get around 0.5 seconds latency with some care
<tykling>
that would be fine
<intrac>
or just put a small white sticky label on the monitor :)
<tykling>
:D yes
<intrac>
also, the filter_complex possibly isn't needed
<intrac>
it makes the syntax a bit more complicated to understand
<tykling>
I'll say
<intrac>
you can probably just use the regular -vf option
<tykling>
I see
<intrac>
to resize down to 1280x960, load a PNG image and overlay
<intrac>
which is really only the two needed filters
<tykling>
great
intrac has quit [Quit: Konversation terminated!]
<tykling>
bbl
intrac has joined #ffmpeg
<intrac>
sorry, my IRC client froze
<intrac>
tykling: but if you just want a realtime overlay, then why would you be encoding?
<tykling>
I... don't know? I thought that was the way to do it
<tykling>
maybe that is what I am doing wrong!
<tykling>
damn I am getting pulled into a meeting at $dayjob, I have to run for a few hours, will be on later
<intrac>
well you can probably use ffplay to take the source video from the USB camera, resize, apply overlay
<intrac>
but that would all be on the same PC
<intrac>
if you need to send the video to a remote computer over a network, then you would need to route it differently
<tykling>
no I just have to show it via hdmi on the same computer
<intrac>
ok, I need to sleep for a bit. I should be around later
<intrac>
ok, then ffplay might work for you and no need to encode at all
<tykling>
oh really
<tykling>
I will try that later, thanks so much for taking the time to explain all this, I really appreciate it
<intrac>
no probs
<intrac>
:)
<tykling>
:) I might ping you later to let you know how it went, or maybe I have more questions :)
<intrac>
ok, no problem
<intrac>
and you might actually need to use filter_complex (to load the image as a second source)
<intrac>
I'll see if I can figure it out for later
alexherbo2 has joined #ffmpeg
lemoniter has joined #ffmpeg
mccobsta has joined #ffmpeg
SystemError has quit [Remote host closed the connection]
<intrac>
the first set of parameters configures the webcam itself (you might want to set your camera to a lower resolution, although it might only allow a set of predefined resolutions; there are v4l commands that can list supported resolutions/formats of a camera)
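(presumably something like this, from the v4l-utils package; the device node is a guess:)
    v4l2-ctl --device /dev/video0 --list-formats-ext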
<intrac>
I deliberately created a video filter chain to adjust the resolution of the usb camera down further (to match the overlay resolution) just to show what can be done
<intrac>
but if you create the overlay image exactly the same size as the video from the camera, then you can avoid having to resize, which would help reduce processing
<intrac>
I realised that ffplay can't open more than one source (a known limitation of it), so using ffmpeg is needed, but you can then pipe the output directly into ffplay (see from -f yuv4mpegpipe onwards)
<intrac>
latency on my computer is actually much better (less) than 0.5 seconds. it seems to be more like 1/10 second or so.
<intrac>
CPU usage seems low, even on my 2012 PC
<intrac>
if you can make do with something like 1024x768 resolution and avoid any scaling, then you may well be able to get it to run on a Pi
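(a plausible reconstruction of the kind of pipeline being described above, untested; the device node, sizes and crosshair.png are placeholders:)
    ffmpeg -f v4l2 -input_format yuyv422 -video_size 1280x960 -i /dev/video0 \
        -i crosshair.png \
        -filter_complex "[0:v]scale=1024:768[cam];[cam][1:v]overlay=(W-w)/2:(H-h)/2,format=yuv420p" \
        -f yuv4mpegpipe - | ffplay -f yuv4mpegpipe -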
intrac_ has joined #ffmpeg
evilscreww has joined #ffmpeg
yans has joined #ffmpeg
Tinos has quit [Ping timeout: 256 seconds]
thomas_D88 has quit [Ping timeout: 252 seconds]
alexherbo2 has quit [Remote host closed the connection]
EmleyMoor has quit [Read error: Connection reset by peer]
alexherbo2 has joined #ffmpeg
EmleyMoor has joined #ffmpeg
alexherbo2 has quit [Remote host closed the connection]
alexherbo2 has joined #ffmpeg
alexherbo2 has quit [Remote host closed the connection]
lucasta has joined #ffmpeg
thomas_D88 has joined #ffmpeg
cryptic has joined #ffmpeg
SystemError has quit [Remote host closed the connection]
<EmleyMoor>
I want to "splice" some video files from my dashcam together to make a continuous video - how can I do this? Also, is there a good way to mute out little bits of the sound (a few seconds here and there)?
travisghansen has joined #ffmpeg
<DHE>
you probably want to use the "concat" feature... check the ffmpeg wiki for more information...
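(from the wiki, the concat demuxer approach looks roughly like this; list.txt and the clip names are placeholders:)
    $ cat list.txt
    file 'clip_001.mp4'
    file 'clip_002.mp4'
    $ ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4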
<EmleyMoor>
Working from that now - will let you know how it pans out
<vlt>
EmleyMoor: And for the audio part: you can use the volume filter with something like volume=0:enable='between(t,10,20)'.
<EmleyMoor>
Instead of -c copy, -c:v copy -c:a alac works
<EmleyMoor>
(just checking the joined videos play as required now; will then deal with the audio silences and trim the ends... would also like to overlay the rear camera video in a corner of the front)
<EmleyMoor>
vlt: I could do with some kind of worked example....
<vlt>
EmleyMoor: Not tested but should be close: -i front.mp4 -i back.mp4 -filter_complex "[1:v]scale=iw/4:-1[1_small];[0:v][1_small]overlay=x=iw/2:y=ih/2"
<EmleyMoor>
vlt: That looks like the video bit..,. the audio bit still puzzles
<EmleyMoor>
furq: It doesn't seem to understand those betweens either
<EmleyMoor>
It may need me to do them one portion at a time until I find the format to combine
vlm has joined #ffmpeg
<EmleyMoor>
(i.e. it accepts those volume betweens each on their own, but not together)
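(for what it's worth, multiple mute ranges can be combined by adding the between() terms inside a single enable expression, something along these lines:)
    volume=0:enable='between(t,10,20)+between(t,45,50)'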
Traneptora has quit [Quit: Quit]
<vlt>
EmleyMoor: The variable names for input width and height don't seem consistent throughout the filters. It's iw, iW, main_w and others all over the place.
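(indeed: scale understands iw/ih, but overlay wants main_w/main_h, or W/H for short, so the earlier sketch would need something along these lines:)
    -filter_complex "[1:v]scale=iw/4:-1[pip];[0:v][pip]overlay=x=W/2:y=H/2"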
alexherbo2 has quit [Remote host closed the connection]
evilscreww has quit [Ping timeout: 240 seconds]
MrZeus has joined #ffmpeg
MrZeus_ has joined #ffmpeg
MrZeus has quit [Ping timeout: 246 seconds]
five6184803391 has quit [Quit: Ping timeout (120 seconds)]
five6184803391 has joined #ffmpeg
emanuele6 has quit [Ping timeout: 272 seconds]
fannagoganna has joined #ffmpeg
MrZeus__ has joined #ffmpeg
emanuele6 has joined #ffmpeg
MrZeus_ has quit [Ping timeout: 244 seconds]
mccobsta9 has joined #ffmpeg
mccobsta9 is now known as mccobsta
mccobsta has quit [Ping timeout: 246 seconds]
Traneptora has joined #ffmpeg
HerbY_NL has joined #ffmpeg
phantomics_ has quit [Quit: Leaving]
HerbY_NL has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
rvalue has quit [Ping timeout: 252 seconds]
rvalue has joined #ffmpeg
HerbY_NL has joined #ffmpeg
HerbY_NL has quit [Client Quit]
vampirefrog has quit [Quit: Leaving]
lexano has quit [Remote host closed the connection]
jemius has joined #ffmpeg
lexano has joined #ffmpeg
xoip has quit [Quit: WeeChat 4.3.4]
xoip has joined #ffmpeg
jagannatharjun has quit [Quit: Connection closed for inactivity]
coldfeet has joined #ffmpeg
<EmleyMoor>
With a bit of luck I may be at the "in earnest" stage with that soon.
jemius has quit [Quit: Leaving]
MrZeus_ has joined #ffmpeg
fannagoganna has quit [Quit: Connection closed for inactivity]
MrZeus__ has quit [Ping timeout: 276 seconds]
MrZeus__ has joined #ffmpeg
MrZeus has joined #ffmpeg
MrZeus_ has quit [Ping timeout: 260 seconds]
MrZeus_ has joined #ffmpeg
MrZeus__ has quit [Ping timeout: 260 seconds]
MrZeus has quit [Ping timeout: 260 seconds]
coldfeet has quit [Remote host closed the connection]
MrZeus__ has joined #ffmpeg
MrZeus_ has quit [Ping timeout: 246 seconds]
MrZeus__ has quit [Ping timeout: 276 seconds]
Warcop has joined #ffmpeg
Blacker47 has quit [Quit: Life is short. Get a V.90 modem fast!]
sugoi has joined #ffmpeg
iconoclast_hero has joined #ffmpeg
<EmleyMoor>
Trying to work out how to overlay and how much if any to scale right now - overlay filter being tried at its most basic
<EmleyMoor>
I do already have a small use of overlay - to put a white box over my license plate caption - tuned for both front and rear views
Haxxa has quit [Quit: Haxxa flies away.]
<iconoclasthero>
I'm trying to figure out why ffmpeg is failing with `Address family not supported by protocol (src/ip_resolver.cpp:542)` when it is called by mpd to use an unnamed pipe for stdin input:
<iconoclasthero>
when called from the cli as `cat /tmp/mpd.fifo | ffmpeg -hide_banner -f s16le -ar 48000 -ac 2 -i - -f mpegts zmq:tcp://localhost:5555` it works with localhost, 127.0.0.1, and 0.0.0.0
Haxxa has joined #ffmpeg
<iconoclasthero>
i'm able to call ffmpeg to rtsp from mpd without issue, e.g., ` command "ffmpeg -loglevel error -hide_banner -y -f s16le -ar 48000 -ac 2 -vn -i - -c libopus -b:a 64k -f rtsp
<iconoclasthero>
so the unnamed pipe via stdin isn't an issue; calling ffmpeg from mpd via command in the pipe output works for rtsp:// but i can't get zmq to work.
Akosmo has joined #ffmpeg
<Akosmo>
Hi, I have a question regarding the LGPL FFmpeg uses and how FFmpeg could be used in an app I'm developing. Is this the right channel or should I move to the dev channel?
MrZeus has joined #ffmpeg
<DHE>
this would be preferable. but in general, you might just want to see legal information about the LGPL specifically. and watch how you build ffmpeg, the license can vary.
<iconoclasthero>
"Address family not supported by protocol (src/ip_resolver.cpp:542)" follows... I thought that was a good indication it was getting the stream but 1536 IS the bitrate for s16le/48 kHz.
<iconoclasthero>
not sure if the fact that 'fd:' is blank is relevant though.
<EmleyMoor>
Had to do some calculations myself but should now be generating the video I require
vlm has quit [Quit: Leaving]
<Akosmo>
I read the mini-FAQ, the LGPL license and its FAQ as well. But I'm still lost on some points. I've also searched around and couldn't find many answers that help with the way I plan to use FFmpeg.
<Akosmo>
Basically, I have this app that exports video as AVI (built-in method from the game engine I'm using for this). I'm not even going to try to change to exporting as MP4 right away since that's beyond my skill level.
<Akosmo>
What I have in mind is to convert the AVI video to MP4. The way I was thinking of doing that is having a code that executes FFmpeg and passes some hard-coded or user-defined arguments.
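(the conversion itself would be one invocation along these lines; filenames and quality settings are placeholders:)
    ffmpeg -i recording.avi -c:v libx264 -crf 20 -c:a aac output.mp4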
<Akosmo>
So that's all I'd do. I wouldn't use any of the source code within my app. But I don't know if having the commands in my code counts as using the library, or makes it a work based on the library instead of just a work that uses the library.
<Akosmo>
App would be free, and open-source, if that matters. Haven't decided on the license yet.
<iconoclasthero>
are you going to package ffmpeg with the app?
<iconoclasthero>
if you make it a dependency that someone else has to go get then I would think that you wash your hands of the problem... IANAL though
* iconoclasthero
just learned that means "i am not a lawyer" this morning.
<furq>
it doesn't sound like you need to worry about licensing at all
<Akosmo>
I've heard this kind of thing is not the best idea to go for as a developer, but I was just gonna have like a button or link that sends the user to a site to download FFmpeg. I thought of distributing the executable along with the app, but I assume that makes it more complicated for me.
<furq>
that would just mean you need to make the ffmpeg source available for any version you distribute
<furq>
there's no hard and fast rule for what constitutes a derivative work but the thing you described doesn't sound like one to me
<furq>
but then i don't earn a living arguing about this stuff
<iconoclasthero>
i don't see how it's not much harder for you to package it with your app, from a compatibility, version control, and potential licence issue standpoint.
dreamon has quit [Ping timeout: 246 seconds]
<iconoclasthero>
link to static builds that you know do what you want and call it a day.
<Akosmo>
Mhm, I'm aware that it's risky to follow advice about legal stuff from people online, but it'd be very unfortunate for me to be stuck with huge AVI files. So if I'm gonna take any risk, I'd prefer to get info from a place like this.
<Akosmo>
Also, why wouldn't I need to worry about the licensing at all, furq?
<iconoclasthero>
his operative words were "derivative work"
<furq>
if you're just running an ffmpeg binary from an application whose principal function has nothing to do with ffmpeg then you wouldn't even need to release your application source code
<furq>
and you wouldn't need to release any source code at all if you don't ship ffmpeg
<EmleyMoor>
Is it possible to set the position for an overlay to "top right" easily? e.g. something like "larger width - smaller width" for the x?
<furq>
but also even for derived works it's pretty rare for someone to care if your part is mit licensed or whatever
<furq>
unless you're modifying the ffmpeg source at which point people will care a lot
l4yer has quit [Ping timeout: 246 seconds]
l4yer has joined #ffmpeg
<furq>
EmleyMoor: main_w-overlay_w
<furq>
or more confusingly W-w
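(putting that together for a top-right picture-in-picture with a small margin; filenames and the scale factor are placeholders:)
    ffmpeg -i front.mp4 -i rear.mp4 \
        -filter_complex "[1:v]scale=iw/2:-1[pip];[0:v][pip]overlay=W-w-16:16" \
        -c:a copy out.mp4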
<Akosmo>
I see. Thank you, furq and iconoclasthero. I'm new to programming so that makes things harder too haha.
<EmleyMoor>
Ah, handy. It will make it easier since, by default, rear videos are half width compared to front, but sometimes scaling them further (to quarter width, half their original) makes sense. Of course, if I needed "rear main, front in corner" I'd have to start from scaled down to quarter width in the first place.
<Akosmo>
I'd still release the source code because yea I think that'd be neat.
<Akosmo>
I have no idea how I'd ship it with the source code if I distribute my app with FFmpeg, but I wanna hope that information isn't hard to find around.
<Akosmo>
But yea, all this makes me feel a little more relieved and confident in implementing my video converting idea into my app.
<EmleyMoor>
Which corner I need to use depends on where the action is happening
lavaball has quit [Remote host closed the connection]
<iconoclasthero>
Akosmo: the other thing to consider is that if you distribute ffmpeg with your app then you're going to be on the hook for version control and support and all that good stuff.
<iconoclasthero>
make it a dependency and your responsibility diminishes at that point.
<furq>
idk if i agree with that but other successful oss things take that approach
<furq>
e.g. audacity expects you to supply your own ffmpeg
<Akosmo>
Sorry for the newbie question, but what do you mean about "making it a dependency"? And also, what is "binary" in this context?
<furq>
the binary is ffmpeg.exe or whatever extension your os uses
<furq>
making it a dependency means making the user supply their own binary
iive has joined #ffmpeg
<furq>
if you're mostly targeting windows and osx then it might make more sense to ship ffmpeg to avoid people asking you what a $PATH is
rv1sr has quit []
<furq>
it doesn't make sense on linux
Dagger has joined #ffmpeg
<Akosmo>
I see I see. So, so far, if I were to just send FFmpeg args with my code, then it seems I don't have anything to worry about. But if I ship my app with the FFmpeg binary (which I guess is the recommended option, esp. if targeting Windows), I need to ship it with the source code, as well as setting FFmpeg up with my app. That right?
<furq>
you would also need to provide sources for any dependencies your ffmpeg build uses
<furq>
so presumably libx264
<Akosmo>
Is there an easy way to check what dependencies I'd be using?
<furq>
if you build it yourself it's very easy
<furq>
it's the ones you --enable
<furq>
and if you use an ffmpeg someone else built then they have to provide that source code bundle
<furq>
so you can just copy that
<Akosmo>
Yea I was thinking of doing the latter, idk how I'd do the "build" stuff, not familiar with that
yans has quit [Ping timeout: 264 seconds]
yans has joined #ffmpeg
SuicideShow has quit [Ping timeout: 276 seconds]
SuicideShow has joined #ffmpeg
MrZeus has quit [Read error: Connection reset by peer]
SystemError has quit [Remote host closed the connection]
chandash has joined #ffmpeg
chandash has quit [Read error: Connection reset by peer]
SystemError has joined #ffmpeg
Traneptora has quit [Quit: Quit]
SystemError has quit [Remote host closed the connection]