On a test adding subs from an MKV to an existing DCP this reduces
the processing time from ~2h to ~1m because it doesn't resample the
audio from the whole of the MKV, only to discard it.
This reverts a change made in
8ca6fd6d97e6d42492afddb655fa85130946853c
"Fix doubled subtitles if subtitle stop times are specified."
That change breaks the case where a subtitle _does_ have a stop time,
but it's wrong (30s from the start time) and we want the next subtitle
to clear the previous one.
I can't now see how reverting this could cause doubled subtitles,
so maybe that problem will come back. At least now there's a test
for #2792.
This means we can fix the case of a VF having no known size in a nice way,
in turn fixing problems caused by the fix to #2775.
Previously we would scale the bitmap size/position to a proportion
of the original video frame, then scale it back up again to the DCP
container. This didn't take into account some cropped cases where
the picture would end up the same shape but the subtitles would be
stretched.
I'm not 100% sure about this but they seem to end up giving audio
packets with no channels and no frames. Here we handle such packets
better.
Since e29ce33a36c2e20444d57196defc86d5072bce81 `channels` is the
number of channels in the frame, and also the number in `data`,
so we don't need to check this any more.
On switching to the new FFmpeg send/receive API in
e29ce33a36c2e20444d57196defc86d5072bce81
the channels variable in deinterleave_audio() was switched from
stream channels to frame channels.
I'm not sure if this is right, but it does mean that audio has
`channels` channels, so calling make_silent() up to the stream
channel count is clearly wrong if the stream has more channels
than the frame.
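
The point above can be sketched with stand-in types (FFmpeg's AVFrame/AVStream are not assumed here, and `silent_buffers` is a hypothetical name): size and silence buffers by the number of channels in the *frame*, because when the stream claims more channels than the frame carries, a stream-based bound touches channels that the decoded data does not have.

```cpp
#include <cassert>
#include <vector>

// Stand-in for the decoded frame; the real AVFrame is not assumed here.
struct FakeFrame {
    int channels;  // channels actually present in this frame's data
};

// Hypothetical make_silent() analogue: allocate zeroed buffers for exactly
// the channels the frame carries, not for the stream's claimed channel count.
std::vector<std::vector<float>> silent_buffers(FakeFrame const& frame, int frames_per_channel)
{
    return std::vector<std::vector<float>>(
        frame.channels, std::vector<float>(frames_per_channel, 0.0f)
    );
}
```

With a stream that claims 6 channels but frames that carry 2, only 2 buffers are made.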
Previously a call to flush() could result in a lot of audio being
emitted from the decoder (if there is a big gap between the end
of the audio and the video). This would end up being emitted in
one chunk from the player, crashing the audio analyser with an OOM
in some cases.
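
A minimal sketch of the idea, with illustrative names and chunk size (the real player/analyser classes are not assumed here): break a possibly huge block of decoded audio into bounded chunks before handing it downstream, so no consumer ever sees one enormous allocation.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Hypothetical helper: emit `audio` in chunks of at most `max_frames`
// samples each, calling `emit` once per chunk.
void emit_in_chunks(
    std::vector<float> const& audio,
    std::size_t max_frames,
    std::function<void(std::vector<float> const&)> const& emit)
{
    for (std::size_t i = 0; i < audio.size(); i += max_frames) {
        auto const end = std::min(audio.size(), i + max_frames);
        emit(std::vector<float>(audio.begin() + i, audio.begin() + end));
    }
}
```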
The comment said that we were handling differences between channel
counts in the frame and the stream, but the code wasn't doing that.
for fixes to \c tags in SSA files.
be emitted, instead of the time that the last thing was (#2268).
This is to avoid problems with the example shown in the test, where
just because a subtitle in source A comes before a subtitle in source B,
source A is pass()ed next and may then emit a subtitle which should
be after the next one in B.
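
The scheduling rule described above can be modeled with a small stand-in (the player's real decoder classes are not assumed here): pass() the source whose *next* emission would come earliest, rather than deciding by what was emitted last.

```cpp
#include <cassert>
#include <cstddef>
#include <limits>
#include <vector>

// Stand-in for a source/decoder: it knows the times of the things it will
// emit, in order.  An exhausted source reports infinity.
struct Decoder {
    std::vector<double> times;
    std::size_t next = 0;
    double next_time() const {
        return next < times.size() ? times[next]
                                   : std::numeric_limits<double>::infinity();
    }
};

// Choose the source to pass() next: the one with the earliest next emission.
std::size_t choose(std::vector<Decoder> const& decoders)
{
    std::size_t best = 0;
    for (std::size_t i = 1; i < decoders.size(); ++i) {
        if (decoders[i].next_time() < decoders[best].next_time()) {
            best = i;
        }
    }
    return best;
}
```

In the problem case above, source A emitted first but source B's next subtitle is earlier, so B is chosen.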
Previously if there were two images at the same time we would start
them both, then the stop time would be set in the second one but
not the first. This meant that the first one would hang around
forever.
The docs say on EAGAIN we should call avcodec_receive_frame()
and then re-send the same packet again. This should do that.
This is a fix for errors triggered by the accompanying test.
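
The loop described above, sketched against a simulated codec (FFmpeg itself is not assumed here; `FakeCodec` mimics the send/receive API, returning EAGAIN from send() when its internal queue is full): on EAGAIN, drain one frame with receive, then re-send the *same* packet.

```cpp
#include <cassert>
#include <deque>
#include <vector>

constexpr int EAGAIN_ERR = -11;  // stand-in for AVERROR(EAGAIN)

// Simulated decoder with a bounded internal queue, so send() can
// legitimately refuse a packet until receive() has drained a frame.
struct FakeCodec {
    std::deque<int> queue;
    std::size_t capacity = 2;
    int send(int packet) {
        if (queue.size() >= capacity) return EAGAIN_ERR;
        queue.push_back(packet);
        return 0;
    }
    int receive(int& frame) {
        if (queue.empty()) return EAGAIN_ERR;
        frame = queue.front();
        queue.pop_front();
        return 0;
    }
};

// On EAGAIN from send: receive a frame, then re-send the same packet.
std::vector<int> decode(FakeCodec& codec, std::vector<int> const& packets)
{
    std::vector<int> frames;
    for (auto packet : packets) {
        while (codec.send(packet) == EAGAIN_ERR) {
            int frame;
            if (codec.receive(frame) == 0) frames.push_back(frame);
        }
    }
    int frame;
    while (codec.receive(frame) == 0) frames.push_back(frame);
    return frames;
}
```

No packet is ever dropped: each one is retried until the decoder accepts it.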
This seems to be what ffplay does and it feels like it makes sense
as frames may be built from multiple packets AFAICS.
After seeking it appears that we often get irrelevant errors from this
method. ffplay.c seems to ignore them, and this commit means that
we do too (just logging them).
I think these errors during a non-seeking "encoding" run could be
cause for concern; perhaps we should take more note of them in that
case.
Since the FFmpeg 4.4 update it seems that AVSubtitle::pts is no longer
set (it's AV_NOPTS_VALUE, I think).
Instead we apparently need to get the PTS from the packet, which in
turn requires the stream's timebase.
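
The conversion amounts to scaling the packet's pts (counted in stream time-base units) by the stream's time base; a minimal sketch with a stand-in for AVRational, since the FFmpeg headers are not assumed here:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for FFmpeg's AVRational time base.
struct Rational {
    int num;
    int den;
};

// Packet pts is in units of the stream's time base; multiply by
// num/den to get seconds (what av_q2d() would give for the time base).
double pts_seconds(std::int64_t packet_pts, Rational time_base)
{
    return packet_pts * static_cast<double>(time_base.num) / time_base.den;
}
```

For a typical 1/90000 MPEG time base, a packet pts of 90000 is one second.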
The comments discuss this in a bit more depth, but basically we see
errors from avcodec_send_packet after seek. ffplay etc. seem basically
to ignore all errors from avcodec_send_packet, and I can't find a
"proper" fix, so here's a half-way house hack: ignore some errors
after seek. Nasty.
The test fails if we don't do this; it doesn't really seem 100%
convincing but we are already doing this for audio.
This seems necessary with the multi-threaded decoding; it looks
like we were doing it quite wrong before but getting away with it.