dellas has quit [Remote host closed the connection]
jamrial has joined #ffmpeg-devel
b50d has joined #ffmpeg-devel
SystemError has quit [Ping timeout: 256 seconds]
SystemError has joined #ffmpeg-devel
<thardin>
yes M is number of channels
<thardin>
I don't know where you're getting 40*128 from. I guess number of samples per channel per frame
<thardin>
2*40*128 for a stereo frame?
<durandal_1707>
yes, for 44100 Hz sample rate
<durandal_1707>
that is max possible one
<thardin>
but basically if there's no correlation at all PCA should give you the identity matrix. if they're completely correlated (L == R) then it should give you [1,1;1,-1] or so
<thardin>
maybe scaled by sqrt(1/2)
<thardin>
in between these two extremes there are many possibilities, but they will all concentrate as much variance as possible into the first component
<thardin>
that is, some frames may have lots of correlation, others perhaps only 25% correlation
<thardin>
the eigenvalues tell you that
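A minimal NumPy sketch of the two extremes described above (illustrative only, not the decoder's code; frame size taken from the 40*128 figure discussed earlier):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40 * 128  # samples per channel per frame, per the discussion above

# Extreme 1: no correlation at all -> covariance is near-diagonal and
# both eigenvalues are similar in size
uncorr = rng.standard_normal((n, 2))

# Extreme 2: fully correlated (L == R) -> one eigenvalue carries all the
# variance, the other is 0, and the eigenvectors are [1,1]/sqrt(2) and
# [1,-1]/sqrt(2), i.e. every element has magnitude 0.707107
mono = rng.standard_normal(n)
corr = np.column_stack([mono, mono])

for name, frame in [("uncorrelated", uncorr), ("mono in stereo", corr)]:
    cov = np.cov(frame, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    print(name, "eigenvalues:", evals)
    print(evecs)
```

The spread between the eigenvalues is what quantifies "25% correlation vs lots of correlation" per frame.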
noonien has joined #ffmpeg-devel
<durandal_1707>
thardin: getting 0 for eigenvalue, and all eigenvector elements are 0.707107 for mono in stereo
<durandal_1707>
for out of phase i get -nan
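For what it's worth, the out-of-phase case itself shouldn't produce NaNs at the covariance stage; a quick NumPy check (a sketch, not a diagnosis of the actual code — the NaN guess in the comment is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.standard_normal(40 * 128)
R = -L  # perfectly out of phase

# The covariance of an exactly out-of-phase pair is finite: [[v,-v],[-v,v]].
# A NaN therefore suggests a later step dividing by something that is
# exactly 0 here, e.g. the zero eigenvalue or a zero component variance.
cov = np.cov(np.column_stack([L, R]), rowvar=False)
evals = np.linalg.eigvalsh(cov)
print(cov)
print(evals)  # one eigenvalue is exactly 0; dividing by it yields inf/NaN
```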
<thardin>
hmm
signalhunter has joined #ffmpeg-devel
<thardin>
what PCA implementation? because octave gives what I'd expect
<thardin>
flipping the right column of ones to -1 gives V=[-0.7071 0.7071; 0.7071 0.7071]
<thardin>
note how the first column changes sign in the second row
<thardin>
and if I do A=[ones(n,1),-ones(n,1)+rand(n,1)]
<thardin>
V=[-0.8882 -0.4594; 0.4594 -0.8882] diagonally dominant but the off-diagonals reveal there is correlation
<thardin>
the singular values are 3.4518 and 0.8377, which is even more telling
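The same Octave experiment in NumPy form (a sketch; the seed and n are arbitrary, so the exact numbers differ from the session above, but V stays diagonally dominant and the first singular value dominates):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
# Same construction as the Octave A = [ones(n,1), -ones(n,1)+rand(n,1)]
A = np.column_stack([np.ones(n), -np.ones(n) + rng.random(n)])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
V = Vt.T
print("singular values:", s)  # large s[0]/s[1] ratio => strong correlation
print("V:\n", V)              # diagonally dominant, nonzero off-diagonals
```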
<durandal_1707>
i even have nans in covariance matrix....
<BBB>
mkver: for the vp9 change, I thought the show-existing-frame case was protected by a frame_wait_progress(UINT_MAX)?
<BBB>
or is your point that it's not tested by that test, as opposed to it not being handled by this new implementation?
<mkver>
Where is this frame_wait_progress?
<BBB>
lemmesee
<BBB>
I'm looking, may take me a few minutes, the code has changed a lot since back in the day so it's a bit of a search for me
<BBB>
hm...
<BBB>
you're making a good point
<BBB>
where is it...
<BBB>
:)
<BBB>
so ... strictly speaking, it's probably broken. however, in practice, this happens to work for the simple reason that there's usually another frame in between the invisible frame and the show-existing-frame that makes it visible
<BBB>
if this frame has any inter coding (which is normal), then it would have called wait_progress() already
<BBB>
so we're probably relying on that
<BBB>
but strictly speaking this is broken, yes
<mkver>
No, it works because the generic frame-threading introduces a delay so that the frame is actually finished decoding when it reaches the user.
<BBB>
that's only true if there's other frames in between the invisible frame and the show-existing-frame
<mkver>
But it does not work when the decoding thread adds side-data late.
<BBB>
if they follow right after, this breaks
<BBB>
the question is whether that's an issue, since that was already the case before anyway
<BBB>
it's probably fine
<mkver>
It's fine? It's not fine. There is a race in the vp9-encparams test.
<mkver>
Adding the side data is racy with the av_frame_ref from the thread that actually outputs the frame.
<BBB>
I'm not talking about the side-data
<BBB>
I'm talking about the decoding itself
<BBB>
the actual pixel data
<mkver>
This is fine, because the frame from thread N is only ever returned to the user after earlier threads have finished decoding.
<BBB>
hm... I guess you're right
<BBB>
ok then
<BBB>
tnx
<BBB>
will we continue to require that delay? with send/receive api, it's conceivable we could return frames earlier
<BBB>
or dontcare notmyproblem?
<BBB>
I guess not relevant here
<mkver>
Right now we do. And I think this is even documented.
dellas has quit [Remote host closed the connection]
dellas has joined #ffmpeg-devel
dellas has quit [Remote host closed the connection]
<mkver>
And anyway: Given that we have to return frames in the correct order (I don't think anyone wants users to reorder them manually by looking at the pts field), we would always have to wait for earlier threads to finish decoding even if we did not enforce the delay.
<elenril>
in principle we could reorder in the generic code
<nevcairiel>
if you want to re-use the thread early, store the frame in a smart way in the order the decoder would output it; "re-ordering" is not something you should engage in, just don't forget the order it came in, even if you retrieve them early