\documentclass{article}
\title{Decoder structures}
\author{}
\date{}
\begin{document}
\maketitle

At the time of writing we have a get-stuff-at-this-time API which
hides a decode-some-and-see-what-comes-out approach.
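
The contrast can be sketched as follows.  All names and types here are
invented for illustration and are not the real classes; the point is
that a get-at-time call can be layered on top of a
decode-and-see-what-comes-out decoder by passing data until the wanted
time emerges.

```cpp
#include <functional>

// Invented types for this sketch; not DCP-o-matic's actual API.
using Time = int;  // content time in frames, for simplicity

// The decode-and-see style: seek near t, then pass() repeatedly and
// handle whatever frames come out via the callback.
class PassDecoder {
public:
	explicit PassDecoder(int length) : _length(length) {}
	void seek(Time t) { _position = t; }
	// Decode a little more; returns true at end of content.
	bool pass(std::function<void(Time)> emit) {
		if (_position >= _length) return true;
		emit(_position++);
		return false;
	}
private:
	int _length;
	Time _position = 0;
};

// The get-stuff-at-this-time style, layered on top: keep passing until
// the wanted time appears.
class AtTimeAdapter {
public:
	explicit AtTimeAdapter(PassDecoder& d) : _decoder(d) {}
	bool get(Time t, Time& out) {
		_decoder.seek(t);
		bool got = false;
		while (!got) {
			bool end = _decoder.pass([&](Time emitted) {
				if (emitted >= t) { out = emitted; got = true; }
			});
			if (end) return false;
		}
		return true;
	}
private:
	PassDecoder& _decoder;
};
```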

\section{Easy and hard extraction of particular pieces of content}

With most decoders it is quick, easy and reliable to get a particular
piece of content from a particular timecode.  This applies to the DCP,
DCP subtitle, Sndfile and Image decoders.  With FFmpeg, however, this is not easy.

This suggests that it would make more sense to keep the
decode-and-see-what-comes-out code within the FFmpeg decoder and not
use it anywhere else.

However, resampling screws this up, as it means all audio requires
decode-and-see.  I don't think you can resample in neat blocks, as
there are fractional samples and other complications.  You can't
postpone resampling to the end of the player, since different audio
may be coming in at different rates.
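
To make the fractional-sample point concrete: converting a block of
$N$ input samples from rate $r_{in}$ to rate $r_{out}$ should produce
$N \cdot r_{out} / r_{in}$ output samples, which is generally not an
integer, so a resampler has to carry a fractional remainder across
blocks.  A minimal sketch of the bookkeeping (invented for this note,
counting samples only, no actual filtering):

```cpp
#include <cstdint>

// Invented illustration: tracks how many whole output samples a
// block-based resampler can emit per input block, carrying the
// fractional remainder (in units of 1/in_rate of an output sample)
// from one block to the next.
struct BlockResampleCounter {
	int64_t in_rate;
	int64_t out_rate;
	int64_t error = 0;  // accumulated fractional output samples

	// Whole output samples produced for a block of `in` input samples.
	int64_t output_samples(int64_t in) {
		int64_t total = in * out_rate + error;
		error = total % in_rate;
		return total / in_rate;
	}
};
```

For example, converting 1024-sample blocks from 48 kHz to 44.1 kHz
should yield 940.8 output samples per block, so successive blocks emit
940 and 941 samples as the remainder accumulates.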

This suggests that decode-and-see is a better match, even if it feels
a bit ridiculous when most of the decoders have slightly clunky seek
and pass methods.


\section{Multiple streams}

Another thing unique to FFmpeg is multiple audio streams, possibly at
different sample rates.

There seem to be two approaches to handling this:

\begin{enumerate}
\item Every audio decoder has one or more `streams'.  The player loops
  content and streams within content, and the audio decoder resamples
  each stream individually.
\item Every audio decoder just returns audio data, and the FFmpeg
  decoder returns all its streams' data in one block.
\end{enumerate}

The second approach has the disadvantage that the FFmpeg decoder must
resample and merge its audio streams into one block.  This is in
addition to the resampling that must be done for the other decoders,
and the merging of all audio content inside the player.

These disadvantages suggest that the first approach is better.
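
A rough sketch of the first approach (all names invented for
illustration): each decoder exposes one or more streams, possibly at
different native rates; the player loops over content and over streams
within content, with each stream resampled individually to the film's
rate.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Invented types for this sketch.
struct AudioStream {
	int frame_rate;     // native sample rate of this stream
	int64_t frames;     // number of audio frames it contains
};

struct Content {
	// FFmpeg content may have several streams; others have one.
	std::vector<AudioStream> streams;
};

// Frames a stream contributes after per-stream resampling to film_rate.
int64_t resampled_length(AudioStream const& s, int film_rate) {
	return s.frames * static_cast<int64_t>(film_rate) / s.frame_rate;
}

// The player's loop: content, then streams within content; the mixed
// output is as long as the longest resampled stream.
int64_t total_film_frames(std::vector<Content> const& playlist, int film_rate) {
	int64_t longest = 0;
	for (auto const& c : playlist)
		for (auto const& s : c.streams)
			longest = std::max(longest, resampled_length(s, film_rate));
	return longest;
}
```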

One might think that the logical conclusion is to take streams all the
way back to the player and resample them there, but the resampling
must occur on the other side of the get-stuff-at-this-time API.

\end{document}