Network Working Group                                        A. Fuldseth
Internet-Draft                                            G. Bjontegaard
Intended status: Standards Track                           S. Midtskogen
Expires: September 19, 2016                                    T. Davies
                                                               M. Zanaty
                                                                   Cisco
                                                          March 18, 2016
Thor Video Codec
draft-fuldseth-netvc-thor-02
This document provides a high-level description of the Thor video codec. Thor is designed to achieve high compression efficiency with moderate complexity, using the well-known hybrid video coding approach of motion-compensated prediction and transform coding.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on September 19, 2016.
Copyright (c) 2016 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
The Thor video codec is a block-based hybrid video codec similar in structure to widespread standards. The high level encoder and decoder structures are illustrated in Figure 1 and Figure 2 respectively.
              +---+   +-----------+   +-----------+   +--------+
   Input--+-->| + |-->| Transform |-->| Quantizer |-->| Entropy|
   Video  |   +---+   +-----------+   +-----------+   | Coding |
          |     ^ -                         |         +--------+
          |     |                           v              |
          |     |                     +-----------+        v
          |     |                     |  Inverse  |      Output
          |     |                     | Transform |    Bitstream
          |     |                     +-----------+
          |     |                           |
          |     |                           v
          |     |                         +---+
          |     +------------------------>| + |
          |    +-------------+            +---+
          |  __| Intra Frame |              |
          | /  | Prediction  |<-----+       |
          |/   +-------------+      |       |
          |\                        |       v
          | \  +-------------+      |  +---------+
          |  \_| Inter Frame |      |  |  Loop   |
          |    | Prediction  |      |  | Filters |
          |    +-------------+      |  +---------+
          |       ^     |           |       |
          |       |     v           |       v
          |    +------------+   +---------------+
          |    |   Motion   |   | Reconstructed |
          +--->| Estimation |<--| Frame Memory  |
               +------------+   +---------------+
Figure 1: Encoder Structure
              +----------+      +-----------+
   Input ---->| Entropy  |----->|  Inverse  |
   Bitstream  | Decoding |      | Transform |
              +----------+      +-----------+
                                      |
                                      v
                                    +---+
      +---------------------------->| + |
      |   +-------------+           +---+
      |  _| Intra Frame |             |
      | / | Prediction  |<-----+      |
      |/  +-------------+      |      v
      |\                       |  +---------+
      | \ +-------------+      |  |  Loop   |
      |  \| Inter Frame |      |  | Filters |
      |   | Prediction  |      |  +---------+
      |   +-------------+      |      |
      |         ^              |      |-----> Output
      |         |              |      v       Video
      |   +--------------+  +---------------+
      |   |    Motion    |  | Reconstructed |
      |   | Compensation |<-| Frame Memory  |
      |   +--------------+  +---------------+
Figure 2: Decoder Structure
The remainder of this document is organized as follows. First, some requirements language and terms are defined. Block structures are described in detail, followed by intra-frame prediction techniques, inter-frame prediction techniques, transforms, quantization, loop filters, entropy coding, and finally high level syntax.
An open source reference implementation is available at github.com/cisco/thor.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].
This document frequently uses the following terms.
Each frame is divided into 64x64 or 128x128 Super Blocks (SB), which are processed in raster-scan order. The SB size is signaled in the sequence header. Each SB can be divided into Coding Blocks (CB) using a quad-tree structure. The smallest allowed CB size is 8x8 luma pixels. The four CBs of a larger block are coded/signaled in the following order: upleft, downleft, upright, and downright.
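As an illustration, the pixel offsets of the four child CBs in this coding order can be computed as follows. This is a sketch only; the function name and layout are not taken from the reference implementation.

```c
#include <stddef.h>

/* Pixel offsets (x, y) of the four child CBs of a size x size block,
   in the signaled coding order: upleft, downleft, upright, downright.
   Illustrative sketch only. */
void child_offsets(int size, int off[4][2])
{
    int h = size / 2;
    /* (column, row) index of each child in coding order */
    static const int order[4][2] = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
    for (int i = 0; i < 4; i++) {
        off[i][0] = order[i][0] * h; /* x offset */
        off[i][1] = order[i][1] * h; /* y offset */
    }
}
```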
The following modes are signaled at the CB level:
At frame boundaries some square blocks might not be complete. For example, at 1920x1080 resolution, the bottom row would consist of rectangular blocks of size 64x56. Rectangular blocks at frame boundaries are handled as follows. For each rectangular block, send one bit to choose between:
For the bottom part of a 1920x1080 frame, this implies the following:
Two examples of handling 64x56 blocks at the bottom row of a 1920x1080 frame are shown in Figure 3 and Figure 4 respectively.
                    64
      +-------------------------------+
      |                               |
      |                               |
      |                               |
      |                               |
      |                               |
      |                               |
   64 |             64x56             | 56
      |             SKIP              |
      |                               |
      |                               |
      |                               |
      |                               |
    - + - - - - - - - - - - - - - - - + - - -  Frame boundary
      |                               | 8
      +-------------------------------+
Figure 3: Super block at frame boundary
                    64
      +---------------+---------------+
      |               |               |
      |               |               |
      |               |               |
      |               |               |
      |               |               |
      |               |               |
   64 +---------------+-------+-------+
      |               |       |       |
      |               | 32x24 |       |
      |               | SKIP  +---+---+-------+
      |               |       |   |   | 16x8  |
    - + - - - - - - - + - - - +---+---+ - - - + - -  Frame boundary
      |               |   8   |   |   | SKIP  |
      +---------------+---+---+-------+
Figure 4: Coding block at frame boundary
A coding block (CB) can be divided into four smaller transform blocks (TBs).
A coding block (CB) can also be divided into smaller prediction blocks (PBs) for the purpose of motion-compensated prediction. Horizontal, vertical and quad split are used.
Eight intra prediction modes are used:

The definitions of the DC, vertical, and horizontal modes are straightforward.
The upleft direction is exactly 45 degrees.
The upupright, upupleft, and upleftleft directions are at an angle of arctan(1/2) from the vertical or horizontal direction, since they are defined by going one pixel horizontally and two pixels vertically (or vice versa).
For the 5 angular intra modes (i.e. those whose angle is not a multiple of 90 degrees), the pixels of the neighbor blocks are filtered before they are used for prediction:
y(n) = (x(n-1) + 2*x(n) + x(n+1) + 2)/4
For the angular intra modes that are not 45 degrees, the prediction sometimes requires sample values at a half-pixel position. These sample values are determined by an additional filter:
z(n + 1/2) = (y(n) + y(n+1))/2
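The two filters above can be sketched in C as follows. This is illustrative only; the function names and the unfiltered endpoint handling are assumptions, not taken from the reference implementation.

```c
#include <stdint.h>

/* y(n) = (x(n-1) + 2*x(n) + x(n+1) + 2) / 4: smoothing applied to the
   n neighbor samples before angular prediction.  Endpoints are copied
   unfiltered here (an assumption of this sketch). */
void filter_neighbors(const uint8_t *x, uint8_t *y, int n)
{
    y[0] = x[0];
    for (int i = 1; i < n - 1; i++)
        y[i] = (uint8_t)((x[i - 1] + 2 * x[i] + x[i + 1] + 2) / 4);
    y[n - 1] = x[n - 1];
}

/* z(n + 1/2) = (y(n) + y(n+1)) / 2: half-pixel position between two
   filtered neighbor samples. */
uint8_t half_sample(const uint8_t *y, int n)
{
    return (uint8_t)((y[n] + y[n + 1]) / 2);
}
```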
Multiple reference frames are currently implemented as follows.
Combined with re-ordering, this allows for MPEG-1 style B frames.
A desirable future extension is to allow long-term reference frames in addition to the short-term reference frames defined by the sliding-window process.
In case of bi-prediction, two reference indices and two motion vectors are signaled per CB. In the current version, PB-split is not allowed in bi-prediction mode. Sub-pixel interpolation is performed for each motion vector/reference index separately before doing an average between the two predicted blocks:
p(x,y) = (p0(x,y) + p1(x,y))/2
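As a sketch, the averaging step might look as follows; the buffer layout and function name are assumptions, not taken from the reference code.

```c
#include <stdint.h>

/* p(x,y) = (p0(x,y) + p1(x,y)) / 2 over a w x h block, where p0 and
   p1 are the two motion-compensated predictions.  Illustrative
   sketch. */
void bipred_average(const uint8_t *p0, const uint8_t *p1, uint8_t *p,
                    int w, int h, int stride)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            p[y * stride + x] =
                (uint8_t)((p0[y * stride + x] + p1[y * stride + x]) / 2);
}
```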
Frames may be transmitted out of order. Reference frames are selected from the sliding window buffer as normal.
A flag is sent in the sequence header indicating that interpolated reference frames may be used.
If a frame is using an interpolated reference frame, it will be the first reference in the reference list, and will be interpolated from the second and third reference in the list. It is indicated by a reference index of -1 and has a frame number equal to that of the current frame.
The interpolated reference is created by a deterministic process common to the encoder and decoder, and described in the separate IRFVC draft [I-D.davies-netvc-irfvc].
Inter prediction uses traditional block-based motion-compensated prediction with quarter-pixel resolution. A separable 6-tap poly-phase filter is the basic method for performing motion compensation with sub-pixel accuracy. The luma filter coefficients are as follows:
When bi-prediction is enabled in the sequence header:
1/4 phase: [2,-10,59,17,-5,1]/64
2/4 phase: [1,-8,39,39,-8,1]/64
3/4 phase: [1,-5,17,59,-10,2]/64
When bi-prediction is disabled in the sequence header:
1/4 phase: [1,-7,55,19,-5,1]/64
2/4 phase: [1,-7,38,38,-7,1]/64
3/4 phase: [1,-5,19,55,-7,1]/64
With reference to Figure 5, a fractional sample value, e.g. i0,0, which has a phase of 1/4 in the horizontal dimension and a phase of 1/2 in the vertical dimension, is calculated as follows:

a0,j = 2*A-2,j - 10*A-1,j + 59*A0,j + 17*A1,j - 5*A2,j + 1*A3,j

where j = -2,...,3
i0,0 = (1*a0,-2 - 8*a0,-1 + 39*a0,0 + 39*a0,1 - 8*a0,2 + 1*a0,3 + 2048)/4096
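The two filtering passes above can be sketched as follows. This is an illustrative sketch, not the reference implementation; it assumes 8-bit samples and keeps the horizontal pass at full precision, as implied by the single final division by 4096.

```c
#include <stdint.h>

static const int f14[6] = { 2, -10, 59, 17, -5, 1 }; /* 1/4 phase */
static const int f24[6] = { 1,  -8, 39, 39, -8, 1 }; /* 2/4 phase */

/* Compute i0,0 for the sample A0,0 pointed to by src, in a frame with
   the given row stride.  Horizontal 1/4-phase filtering produces the
   intermediate values a0,j (j = -2..3) without intermediate rounding;
   the vertical 2/4-phase pass then divides once by 64*64 = 4096. */
int interp_i00(const uint8_t *src, int stride)
{
    int a[6], s;
    for (int j = -2; j <= 3; j++) {
        const uint8_t *row = src + j * stride;
        s = 0;
        for (int k = -2; k <= 3; k++)
            s += f14[k + 2] * row[k];
        a[j + 2] = s;
    }
    s = 2048; /* rounding offset before the final /4096 */
    for (int j = 0; j < 6; j++)
        s += f24[j] * a[j];
    return s / 4096;
}
```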
The minimum sub-block size is 8x8.
   +-----+-----+-----+-----+-----+-----+-----+-----+-----+
   |A    |     |     |     |A    |a    |b    |c    |A    |
   |-1,-1|     |     |     | 0,-1| 0,-1| 0,-1| 0,-1| 1,-1|
   +-----+-----+-----+-----+-----+-----+-----+-----+-----+
   |     |     |     |     |     |     |     |     |     |
   |     |     |     |     |     |     |     |     |     |
   +-----+-----+-----+-----+-----+-----+-----+-----+-----+
   |     |     |     |     |     |     |     |     |     |
   |     |     |     |     |     |     |     |     |     |
   +-----+-----+-----+-----+-----+-----+-----+-----+-----+
   |     |     |     |     |     |     |     |     |     |
   |     |     |     |     |     |     |     |     |     |
   +-----+-----+-----+-----+-----+-----+-----+-----+-----+
   |A    |     |     |     |A    |a    |b    |c    |A    |
   |-1,0 |     |     |     | 0,0 | 0,0 | 0,0 | 0,0 | 1,0 |
   +-----+-----+-----+-----+-----+-----+-----+-----+-----+
   |d    |     |     |     |d    |e    |f    |g    |d    |
   |-1,0 |     |     |     | 0,0 | 0,0 | 0,0 | 0,0 | 1,0 |
   +-----+-----+-----+-----+-----+-----+-----+-----+-----+
   |h    |     |     |     |h    |i    |j    |k    |h    |
   |-1,0 |     |     |     | 0,0 | 0,0 | 0,0 | 0,0 | 1,0 |
   +-----+-----+-----+-----+-----+-----+-----+-----+-----+
   |l    |     |     |     |l    |m    |n    |o    |l    |
   |-1,0 |     |     |     | 0,0 | 0,0 | 0,0 | 0,0 | 1,0 |
   +-----+-----+-----+-----+-----+-----+-----+-----+-----+
   |A    |     |     |     |A    |a    |b    |c    |A    |
   |-1,1 |     |     |     | 0,1 | 0,1 | 0,1 | 0,1 | 1,1 |
   +-----+-----+-----+-----+-----+-----+-----+-----+-----+
Figure 5: Sub-pixel positions
For the fractional pixel position having exactly 2 quarter pixel offsets in each dimension, a non-separable filter is used to calculate the interpolated value. With reference to Figure 5, the center position j0,0 is calculated as follows:
j0,0 =
[0*A-1,-1 + 1*A0,-1 + 1*A1,-1 + 0*A2,-1 +
1*A-1,0 + 2*A0,0 + 2*A1,0 + 1*A2,0 +
1*A-1,1 + 2*A0,1 + 2*A1,1 + 1*A2,1 +
0*A-1,2 + 1*A0,2 + 1*A1,2 + 0*A2,2 + 8]/16
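The same calculation as code, as a sketch with illustrative names:

```c
#include <stdint.h>

/* Compute the center position j0,0 with the non-separable 4x4 filter;
   src points at A0,0.  The weights sum to 16, and 8 is the rounding
   offset before the division by 16.  Illustrative sketch. */
int interp_j00(const uint8_t *src, int stride)
{
    static const int w[4][4] = {
        { 0, 1, 1, 0 },
        { 1, 2, 2, 1 },
        { 1, 2, 2, 1 },
        { 0, 1, 1, 0 },
    };
    int s = 8;
    for (int dy = -1; dy <= 2; dy++)
        for (int dx = -1; dx <= 2; dx++)
            s += w[dy + 1][dx + 1] * src[dy * stride + dx];
    return s / 16;
}
```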
Chroma interpolation is performed with 1/8 pixel resolution using the following poly-phase filter.
1/8 phase: [-2, 58, 10, -2]/64
2/8 phase: [-4, 54, 16, -2]/64
3/8 phase: [-4, 44, 28, -4]/64
4/8 phase: [-4, 36, 36, -4]/64
5/8 phase: [-4, 28, 44, -4]/64
6/8 phase: [-2, 16, 54, -4]/64
7/8 phase: [-2, 10, 58, -2]/64
Inter0 and inter1 modes imply signaling of a motion vector index to choose a motion vector from a list of candidate motion vectors with associated reference frame indices. A list of motion vector candidates is derived from at most two different neighbor blocks, each having a unique motion vector/reference frame index. Signaling of the motion vector index uses 0 or 1 bit, depending on the number of unique motion vector candidates. If the chosen neighbor block is coded in bi-prediction mode, the inter0 or inter1 block inherits both motion vectors, both reference indices, and the bi-prediction property of the neighbor block.
For block sizes less than 64x64, inter0 has only one motion vector candidate, and its value is always zero.
Which neighbor blocks to use for motion vector candidates depends on the availability of the neighbor blocks (i.e. whether the neighbor blocks have already been coded, belong to the same slice and are not outside the frame boundaries). Four different availabilities, U, UR, L, and LL, are defined as illustrated in Figure 6. If the neighbor block is intra it is considered to be available but with a zero motion vector.
                 |           |
                 |     U     |    UR
      -----------+-----------+-----------
                 |           |
                 |  current  |
           L     |   block   |
                 |           |
                 |           |
      -----------+-----------+
                 |
           LL    |
                 |
Figure 6: Availability of neighbor blocks
Based on the four availabilities defined above, each of the motion vector candidates is derived from one of the possible neighbor blocks defined in Figure 7.
   +----+----+      +----+    +----+----+
   | UL | U0 |      | U1 |    | U2 | UR |
   +----+----+------+----+----+----+----+
   | L0 |                          |
   +----+                          |
   |    |                          |
   |    |                          |
   +----+         current          |
   | L1 |          block           |
   +----+                          |
   |    |                          |
   +----+                          |
   | L2 |                          |
   +----+--------------------------+
   | LL |
   +----+
Figure 7: Motion vector candidates
The choice of motion vector candidates depends on the availability of neighbor blocks as shown in Table 1.
    U | UR | L | LL | Motion vector candidates |
   ---|----|---|----|--------------------------|
    0 |  0 | 0 |  0 | zero vector              |
    1 |  0 | 0 |  0 | U2, zero vector          |
    0 |  1 | 0 |  0 | NA                       |
    1 |  1 | 0 |  0 | U2, zero vector          |
    0 |  0 | 1 |  0 | L2, zero vector          |
    1 |  0 | 1 |  0 | U2, L2                   |
    0 |  1 | 1 |  0 | NA                       |
    1 |  1 | 1 |  0 | U2, L2                   |
    0 |  0 | 0 |  1 | NA                       |
    1 |  0 | 0 |  1 | NA                       |
    0 |  1 | 0 |  1 | NA                       |
    1 |  1 | 0 |  1 | NA                       |
    0 |  0 | 1 |  1 | L2, zero vector          |
    1 |  0 | 1 |  1 | U2, L2                   |
    0 |  1 | 1 |  1 | NA                       |
    1 |  1 | 1 |  1 | U2, L2                   |

              Table 1: Motion vector candidates
Motion vectors are coded using motion vector prediction. The motion vector predictor is defined as the median of the motion vectors from three neighbor blocks. Definition of the motion vector predictor uses the same definitions of availability and neighbors as in Figure 6 and Figure 7 respectively. The three vectors used for median filtering depend on the availability of neighbor blocks as shown in Table 2. If the neighbor block is coded in bi-prediction mode, only the first motion vector (in transmission order), MV0, is used as input to the median operator.
    U | UR | L | LL | Motion vectors for median filtering |
   ---|----|---|----|-------------------------------------|
    0 |  0 | 0 |  0 | 3 x zero vector                     |
    1 |  0 | 0 |  0 | U0, U1, U2                          |
    0 |  1 | 0 |  0 | NA                                  |
    1 |  1 | 0 |  0 | U0, U2, UR                          |
    0 |  0 | 1 |  0 | L0, L1, L2                          |
    1 |  0 | 1 |  0 | UL, U2, L2                          |
    0 |  1 | 1 |  0 | NA                                  |
    1 |  1 | 1 |  0 | U0, UR, L2, L0                      |
    0 |  0 | 0 |  1 | NA                                  |
    1 |  0 | 0 |  1 | NA                                  |
    0 |  1 | 0 |  1 | NA                                  |
    1 |  1 | 0 |  1 | NA                                  |
    0 |  0 | 1 |  1 | L0, L2, LL                          |
    1 |  0 | 1 |  1 | U2, L0, LL                          |
    0 |  1 | 1 |  1 | NA                                  |
    1 |  1 | 1 |  1 | U0, UR, L0                          |

         Table 2: Motion vectors for median filtering
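The median operator itself is applied per vector component. A minimal sketch (not the reference code):

```c
/* Component-wise median of three motion vector components:
   median = a + b + c - min - max. */
int median3(int a, int b, int c)
{
    int mn = a < b ? (a < c ? a : c) : (b < c ? b : c);
    int mx = a > b ? (a > c ? a : c) : (b > c ? b : c);
    return a + b + c - mn - mx;
}
```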
Motion vectors referring to reference frames later in time than the current frame are stored with their sign reversed, and these reversed values are used for coding and motion vector prediction.
Transforms are applied at the TB or CB level, implying that transform sizes range from 4x4 to 128x128. The transforms form an embedded structure meaning the transform matrix elements of the smaller transforms can be extracted from the larger transforms.
For the 32x32, 64x64 and 128x128 transform sizes, only the 16x16 low frequency coefficients are quantized and transmitted.
The 64x64 inverse transform is defined as a 32x32 transform followed by duplicating each output sample into a 2x2 block. The 128x128 inverse transform is defined as a 32x32 transform followed by duplicating each output sample into a 4x4 block.
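A sketch of the final duplication stage for the 64x64 case; the buffer layout and function name are assumptions.

```c
#include <stdint.h>

/* Expand the 32x32 inverse-transform output to 64x64 by duplicating
   each sample into a 2x2 block.  For 128x128, the same idea applies
   with a 4x4 duplication factor. */
void duplicate_2x2(const int16_t *in32, int16_t *out64)
{
    for (int y = 0; y < 64; y++)
        for (int x = 0; x < 64; x++)
            out64[y * 64 + x] = in32[(y / 2) * 32 + (x / 2)];
}
```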
A flag is transmitted in the sequence header to indicate whether quantization matrices are used. If this flag is true, a 6-bit value qmtx_offset is transmitted in the sequence header to indicate the matrix strength.
If quantization matrices are used, dequantization applies a separate scaling factor to each coefficient, so that the dequantized value of a coefficient ci at position i is:
(ci * d(q) * IW(i,c,s,t,q) + 2^(k + 5)) >> (k + 6)
Figure 8: Equation 1
where IW is the scale factor for coefficient position i with size s, frame type (intra/inter) t, component (Y, Cb or Cr) c, and quantizer q; and k=k(s,q) is the dequantization shift. IW has scale 64; that is, a weight value of 64 is no different from unweighted dequantization.
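Equation 1 translates directly to integer code. The following sketch treats d(q), IW, and k as precomputed inputs; the function name is illustrative.

```c
/* Weighted dequantization of one coefficient (Equation 1):
   (ci * d * iw + 2^(k+5)) >> (k+6).
   A weight iw of 64 (the IW scale) gives unweighted dequantization. */
int dequant_coeff(int ci, int d, int iw, int k)
{
    return (ci * d * iw + (1 << (k + 5))) >> (k + 6);
}
```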
The current luma qp value qpY and the offset value qmtx_offset determine a quantization matrix set by the formula:
qmlevel = max(0,min(11,((qpY + qmtx_offset) * 12) / 44))
Figure 9: Equation 2
This selects one of 12 different sets of default quantization matrices, with increasing qmlevel indicating increasing flatness.
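Equation 2 as integer code; the clamping follows the max/min in the formula:

```c
/* qmlevel = max(0, min(11, ((qpY + qmtx_offset) * 12) / 44)) */
int qmlevel(int qpY, int qmtx_offset)
{
    int v = ((qpY + qmtx_offset) * 12) / 44;
    if (v < 0)
        v = 0;
    if (v > 11)
        v = 11;
    return v;
}
```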
For a given value of qmlevel, different weighting matrices are provided for all combinations of transform block size, type (intra/inter), and component (Y, Cb, Cr). Matrices at low qmlevel are flat (constant value 64). Matrices for inter frames have unity DC gain (i.e. value 64 at position 0), whereas those for intra frames are designed such that the inverse weighting matrix has unity energy gain (i.e. normalized sum-squared of the scaling factors is 1).
Further details on the quantization matrix and implementation can be found in the separate QMTX draft [I-D.davies-netvc-qmtx].
Luma deblocking is performed on an 8x8 grid as follows:

The relative positions of the samples, a, b, c, d, and the motion vectors, MV, are illustrated in Figure 10.
                 |
                 | block edge
                 |
     +---+---+---+---+
     | a | b | c | d |
     +---+---+---+---+
                 |
       mv        |     mv
         x,left  |       x,right
                 |
       mv              mv
         y,left          y,right
Figure 10: Deblocking filter pixel positions
Chroma deblocking is performed on a 4x4 grid as follows:
A low-pass filter is applied after the deblocking filter if signaled in the sequence header. It can still be switched off for individual frames in the frame header. The frame header also signals whether to apply the filter to all qualifying 128x128 blocks or to transmit a flag for each such block. A super block does not qualify if it contains only inter0 (skip) coding blocks, and no flag is transmitted for such blocks.
The filter is described in the separate CLPF draft [I-D.midtskogen-netvc-clpf].
The following information is signaled at the sequence level:
The following information is signaled at the frame level:
The following information is signaled at the CB level:
The following information is signaled at the TB level:
The following information is signaled at the PB level:
   super-mode (inter0/split/inter1/inter2-ref0/intra/inter2-ref1/
               inter2-ref2/inter2-ref3,..)
   if (mode == inter0 || mode == inter1)
       mv_idx (one of up to 2 motion vector candidates)
   else if (mode == INTRA)
       intra_mode (one of up to 8 intra modes)
       tb_split (NONE or QUAD, coded jointly with CBP for tb_split=NONE)
   else if (mode == INTER)
       pb_split (NONE,VER,HOR,QUAD)
       tb_split_and_cbp (NONE or QUAD and CBP)
   else if (mode == BIPRED)
       mvd_x0, mvd_y0 (motion vector difference for first vector)
       mvd_x1, mvd_y1 (motion vector difference for second vector)
       ref_idx0, ref_idx1 (two reference indices)
   if (mode == INTER2 || mode == BIPRED)
       mvd_x, mvd_y (motion vector differences)

   if (mode != INTER0 and tb_split == 1)
       cbp (8 possibilities for CBPY/CBPU/CBPV)
   if (mode != INTER0)
       transform coefficients
For each block of size NxN (64 >= N > 8), the following mutually exclusive events are jointly encoded using a single VLC code as follows (example using 4 reference frames):
If there is no interpolated reference frame:

   INTER0        1
   SPLIT         01
   INTER1        001
   INTER2-REF0   0001
   BIPRED        00001
   INTRA         000001
   INTER2-REF1   0000001
   INTER2-REF2   00000001
   INTER2-REF3   00000000

If there is an interpolated reference frame:

   INTER0        1
   SPLIT         01
   INTER1        001
   BIPRED        0001
   INTRA         00001
   INTER2-REF1   000001
   INTER2-REF2   0000001
   INTER2-REF3   00000001
   INTER2-REF0   00000000
If fewer than 4 reference frames are used, a shorter VLC table is used. If bi-prediction is not possible, or split is not possible, those entries are omitted from the table and shorter codes are used for the subsequent elements.
Additionally, depending on information from the blocks to the left and above (meta data and CBP), a different sorting of the events can be used, e.g.:

   SPLIT         1
   INTER1        01
   INTER2-REF0   001
   INTER0        0001
   INTRA         00001
   INTER2-REF1   000001
   INTER2-REF2   0000001
   INTER2-REF3   00000001
   BIPRED        00000000
   if (tb_split == 0)
       N = 4*CBPV + 2*CBPU + CBPY
   else
       N = 8
Calculate code as follows:
Map the value of N to code through a table lookup:
code = table[N]
where the purpose of the table lookup is to sort the different values of code according to decreasing probability (typically CBPY=1, CBPU=0, CBPV=0 has the highest probability).
Use a different table depending on the values of CBPY in neighbor blocks (left and above).
Encode the value of code using a systematic VLC code.
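The steps above can be sketched as follows. The probability-sorting table here is purely illustrative; as described, the actual tables depend on the CBPY values of the neighbor blocks.

```c
/* Pack CBPY/CBPU/CBPV into the index N described above. */
int cbp_index(int tb_split, int cbpy, int cbpu, int cbpv)
{
    return tb_split ? 8 : 4 * cbpv + 2 * cbpu + cbpy;
}

/* Map N to a code via a table lookup that sorts the events by
   decreasing probability.  This table is a made-up example putting
   CBPY=1, CBPU=0, CBPV=0 (N == 1) first. */
int cbp_code(int N)
{
    static const int table[9] = { 1, 0, 2, 3, 4, 5, 6, 7, 8 };
    return table[N];
}
```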
Transform coefficient coding uses a traditional zig-zag scan pattern to convert a 2D array of quantized transform coefficients, coeff, to a 1D array of samples. VLC coding of quantized transform coefficients starts from the low-frequency end of the 1D array and uses two different modes, level-mode and run-mode, starting in level-mode:
Example
Figure 11 illustrates an example where 16 quantized transform coefficients are encoded.
            4
            |
      2     |                    3
      |     |                    |  2
      |  1  |  1        1        |  |        1
      |  |  |  |        |        |  |        |
      |  |  |  |  0  0  |  0  0  |  |  0  0  |  0  0
     _|__|__|__|________|________|__|________|_______
Figure 11: Coefficients to encode
Table 3 shows the mode, VLC number and symbols to be coded for each coefficient.
   Index | abs(coeff) | Mode       | Encoded symbols                |
   ------|------------|------------|--------------------------------|
     0   |     2      | level-mode | level=2, sign                  |
     1   |     1      | level-mode | level=1, sign                  |
     2   |     4      | level-mode | level=4, sign                  |
     3   |     1      | level-mode | level=1, sign                  |
     4   |     0      | level-mode | level=0                        |
     5   |     0      | run-mode   |                                |
     6   |     1      | run-mode   | (run=1, level=1)               |
     7   |     0      | run-mode   |                                |
     8   |     0      | run-mode   |                                |
     9   |     3      | run-mode   | (run=2, level>1), 2*(3-2)+sign |
    10   |     2      | level-mode | level=2, sign                  |
    11   |     0      | level-mode | level=0                        |
    12   |     0      | run-mode   |                                |
    13   |     1      | run-mode   | (run=1, level=1)               |
    14   |     0      | run-mode   | EOB                            |
    15   |     0      | run-mode   |                                |

          Table 3: Coding of the example coefficients
High level syntax is currently very simple and rudimentary as the primary focus so far has been on compression performance. It is expected to evolve as functionality is added.
This document has no IANA considerations yet. TBD
This document has no security considerations yet. TBD
[I-D.davies-netvc-irfvc]
           Davies, T., "Interpolated reference frames for video
           coding", Internet-Draft draft-davies-netvc-irfvc-00,
           October 2015.

[I-D.davies-netvc-qmtx]
           Davies, T., "Quantisation matrices for Thor video coding",
           Internet-Draft draft-davies-netvc-qmtx-00, March 2016.

[I-D.midtskogen-netvc-clpf]
           Midtskogen, S., Fuldseth, A., and M. Zanaty, "Constrained
           Low Pass Filter", Internet-Draft
           draft-midtskogen-netvc-clpf-01, March 2016.

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119,
           DOI 10.17487/RFC2119, March 1997.