|
|
|
|
|
It turns out that FFmpeg decoders (e.g. flv, see FFmpeg
25faaa311a74efdfdc4fed56996d7338ed807488) check stream IDs and sometimes
create new streams if they see one that they didn't see before. If we
change stream IDs we break this.
Here we try to use stream indices in cases where the IDs are duplicated.
We also account for the case where a new stream appears during
examination. This wasn't covered by tests until the FFmpeg commit
mentioned above, where the flv decoder creates a new stream during
examination of boon_telly.mkv.
|
|
sed -i "/include.*compose.hpp/d;" src/lib/*.cc src/wx/*.cc src/wx/*.h src/tools/*.cc src/lib/*.h test/*.cc
|
|
sed -i "/Plural-Forms/n;/%100/n;/scanf/n;s/%[123456789]/{}/g" src/lib/*.cc src/lib/*.h src/wx/*.cc src/tools/*.cc src/lib/po/*.po src/wx/po/*.po src/tools/po/*.po test/*.cc
sed -i "s/String::compose */fmt::format/g" src/lib/*.cc src/lib/*.h src/wx/*.cc src/tools/*.cc test/*.cc
|
|
|
|
Otherwise we do the wrong thing at the end of a file on the second
run-through.
|
|
Suddenly we have 8 commas, not 9, perhaps because of
29412821241050c846dbceaad4b9752857659977
in ffmpeg (although that's strange, because it was a long time ago).
|
|
This was re-introduced when
94618a724124cbf5fe9f0b47a3fdce601fcd5581
reverted a previous attempt at a fix.
At the time I couldn't understand the doubled-subtitles problem,
but it's apparent in the test introduced in the next commit.
This is another attempt to fix it by only sending a "stop" for
a subtitle if we didn't already stop the subtitle because the
next one arrived.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
On a test adding subs from an MKV to an existing DCP this reduces
the processing time from ~2h to ~1m because it doesn't resample the
audio from the whole of the MKV, only to discard it.
|
|
|
|
This reverts a change made in
8ca6fd6d97e6d42492afddb655fa85130946853c
"Fix doubled subtitles if subtitle stop times are specified."
That change breaks the case where a subtitle _does_ have a stop time,
but it's wrong (30s from the start time) and we want the next subtitle
to clear the previous one.
I can't now see how reverting this could cause doubled subtitles,
so maybe that problem will come back. At least now there's a test
for #2792.
|
|
|
|
This means we can fix the case of a VF having no known size in a nice way,
in turn fixing problems caused by the fix to #2775.
|
|
This commit changes the approach to video timing. Previously,
we would (more-or-less) try to use every video frame from the content
in the output, hoping that they come at a constant frame rate.
This is not always the case, however. Here we preserve the PTS
of video frames, and then when one arrives we output whatever
DCP video frames we can (at the regular DCP frame rate).
Hopefully this will solve a range of sync problems, but it
could also introduce new ones.
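A minimal sketch of the arithmetic (names invented here): given the PTS of an arriving content frame, we can emit every DCP-rate output frame whose time falls at or before that PTS.

```cpp
#include <cmath>

// Sketch only: given the PTS of a newly-arrived content frame, in seconds,
// and the DCP frame rate, return how many DCP frames (at times 0, 1/rate,
// 2/rate, ...) fall at or before that PTS and so can now be emitted.
int dcp_frames_up_to(double pts_seconds, int dcp_rate)
{
	return static_cast<int>(std::floor(pts_seconds * dcp_rate)) + 1;
}
```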
|
|
|
|
|
|
Previously we would scale the bitmap size/position to a proportion
of the original video frame, then scale it back up again to the DCP
container. This didn't take into account some cropped cases where
the picture would end up the same shape but the subtitles would be
stretched.
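A hedged sketch of the fixed mapping (types and names invented): express the subtitle rectangle as a proportion of the image and scale it straight into DCP container pixels, with no round trip through the original frame size.

```cpp
// Sketch only: map a subtitle rectangle, expressed as a proportion of the
// video image, directly into DCP container pixels, so that cropping the
// picture cannot stretch the subtitle.
struct Rect {
	double x;
	double y;
	double w;
	double h;
};

Rect map_to_container(Rect const& proportion, int container_width, int container_height)
{
	return {
		proportion.x * container_width,
		proportion.y * container_height,
		proportion.w * container_width,
		proportion.h * container_height
	};
}
```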
|
|
|
|
I'm not 100% sure about this but they seem to end up giving audio
packets with no channels and no frames. Here we handle such packets
better.
|
|
|
|
Since e29ce33a36c2e20444d57196defc86d5072bce81 channels is the
number of channels in the frame, and also the number in data,
so we don't need to check this any more.
|
|
On switching to the new FFmpeg send/receive API in
e29ce33a36c2e20444d57196defc86d5072bce81
the channels variable in deinterleave_audio() was switched from
stream channels to frame channels.
I'm not sure if this is right, but it does mean that audio has
`channels` channels, so calling make_silent() up to the stream
channel count is clearly wrong if the stream has more channels
than the frame.
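To illustrate the point (this is a simplified sketch, not the real deinterleave_audio()): once `channels` is taken from the frame rather than the stream, the deinterleave loop can never index past the data the frame actually contains.

```cpp
#include <vector>

// Sketch only: deinterleave `channels`-channel audio where `channels` comes
// from the *frame*, not the stream, so we never read past the samples the
// frame actually holds.
std::vector<std::vector<float>> deinterleave(std::vector<float> const& data, int channels)
{
	int const frames = static_cast<int>(data.size()) / channels;
	std::vector<std::vector<float>> out(channels, std::vector<float>(frames));
	for (int i = 0; i < frames; ++i) {
		for (int c = 0; c < channels; ++c) {
			out[c][i] = data[i * channels + c];
		}
	}
	return out;
}
```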
|
|
Previously a call to flush() could result in a lot of audio being
emitted from the decoder (if there is a big gap between the end
of the audio and the video). This would end up being emitted in
one chunk from the player, crashing the audio analyser with an OOM
in some cases.
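The fix amounts to bounding the emission size; a sketch with invented names:

```cpp
#include <algorithm>
#include <vector>

// Sketch only: instead of emitting one huge block of audio frames, split it
// into chunks of at most `max_frames` so downstream consumers (like the
// audio analyser) never see an arbitrarily large allocation.
std::vector<int> chunk_sizes(int total_frames, int max_frames)
{
	std::vector<int> sizes;
	while (total_frames > 0) {
		int const this_time = std::min(total_frames, max_frames);
		sizes.push_back(this_time);
		total_frames -= this_time;
	}
	return sizes;
}
```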
|
|
|
|
|
|
|
|
The comment says that we're handling differences between channel
counts in the frame and stream but the code wasn't doing that.
|
|
|
|
|
|
for fixes to \c tags in SSA files.
|
|
be emitted, instead of the time that the last thing was (#2268).
This is to avoid problems with the example shown in the test, where
just because a subtitle in source A comes before a subtitle in source B,
source A is pass()ed next and may then emit a subtitle which should
be after the next one in B.
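The selection rule can be sketched like this (a minimal illustration with invented names, not the player's actual code): pick the source whose *next* emission time is earliest, rather than the one whose *last* emission was earliest.

```cpp
#include <cstddef>
#include <vector>

// Sketch only: choose which decoder to pass() next by the time of the next
// thing it *will* emit, not the time of the last thing it emitted.
std::size_t next_to_pass(std::vector<double> const& next_emission_times)
{
	std::size_t best = 0;
	for (std::size_t i = 1; i < next_emission_times.size(); ++i) {
		if (next_emission_times[i] < next_emission_times[best]) {
			best = i;
		}
	}
	return best;
}
```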
|
|
Previously if there were two images at the same time we would start
them both, then the stop time would be set in the second one but
not the first. This meant that the first one would hang around
forever.
|
|
|
|
|
|
The docs say on EAGAIN we should call avcodec_receive_frame()
and then re-send the same packet again. This should do that.
This is a fix for errors triggered by the accompanying test.
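The contract can be shown with a self-contained mock (MockCodec and its methods are invented for this sketch; the real code uses avcodec_send_packet() and avcodec_receive_frame()): on EAGAIN, drain a frame and then re-send the *same* packet.

```cpp
#include <deque>
#include <optional>
#include <vector>

// Mock of the FFmpeg send/receive contract: send_packet() returns EAGAIN_
// when the internal queue is full, and the caller must receive_frame() and
// then re-send the same packet.
struct MockCodec {
	static constexpr int EAGAIN_ = -11;
	std::deque<int> buffered;

	int send_packet(int packet) {
		if (buffered.size() >= 2) {
			return EAGAIN_;  // internal queue is full
		}
		buffered.push_back(packet);
		return 0;
	}

	std::optional<int> receive_frame() {
		if (buffered.empty()) {
			return {};
		}
		int const frame = buffered.front();
		buffered.pop_front();
		return frame;
	}
};

std::vector<int> decode_all(MockCodec& codec, std::vector<int> const& packets)
{
	std::vector<int> frames;
	for (auto packet: packets) {
		while (codec.send_packet(packet) == MockCodec::EAGAIN_) {
			// Drain a frame, then re-send the same packet.
			if (auto frame = codec.receive_frame()) {
				frames.push_back(*frame);
			}
		}
	}
	while (auto frame = codec.receive_frame()) {
		frames.push_back(*frame);
	}
	return frames;
}
```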
|
|
|
|
|
|
This seems to be what ffplay does and it feels like it makes sense
as frames may be built from multiple packets AFAICS.
|
|
After seeking it appears that we often get irrelevant errors from this
method. ffplay.c seems to ignore them, and this commit means that
we do too (just logging them).
I think these errors during a non-seeking "encoding" run could be
cause for concern; perhaps we should take more note of them in that
case.
|
|
|
|
|
|
Since the FFmpeg 4.4 update it seems that AVSubtitle::pts is no longer
set (it's AV_NOPTS_VALUE, I think).
Instead we apparently need to get the PTS from the packet, which in
turn requires the stream's timebase.
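The conversion itself is just a rescale; a sketch with invented names (the real code would use the stream's time_base with av_q2d() or av_rescale_q()):

```cpp
#include <cstdint>

// Sketch only: convert a packet PTS, expressed in units of the stream's
// timebase (num/den, mimicking AVRational), into seconds.
double pts_to_seconds(int64_t pts, int timebase_num, int timebase_den)
{
	return static_cast<double>(pts) * timebase_num / timebase_den;
}
```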
|
|
|