Network Working Group T. Daede
Internet-Draft Mozilla
Intended status: Informational A. Norkin
Expires: August 3, 2020 Netflix
I. Brailovskiy
Amazon Lab126
January 31, 2020
Video Codec Testing and Quality Measurement
draft-ietf-netvc-testing-09
Abstract
This document describes guidelines and procedures for evaluating a
video codec. This covers subjective and objective tests, test
conditions, and materials used for the test.
Status of This Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on August 3, 2020.
Copyright Notice
Copyright (c) 2020 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(https://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
   1.  Introduction
   2.  Subjective quality tests
       2.1.  Still Image Pair Comparison
       2.2.  Video Pair Comparison
       2.3.  Mean Opinion Score
   3.  Objective Metrics
       3.1.  Overall PSNR
       3.2.  Frame-averaged PSNR
       3.3.  PSNR-HVS-M
       3.4.  SSIM
       3.5.  Multi-Scale SSIM
       3.6.  CIEDE2000
       3.7.  VMAF
   4.  Comparing and Interpreting Results
       4.1.  Graphing
       4.2.  BD-Rate
       4.3.  Ranges
   5.  Test Sequences
       5.1.  Sources
       5.2.  Test Sets
             5.2.1.  regression-1
             5.2.2.  objective-2-slow
             5.2.3.  objective-2-fast
             5.2.4.  objective-1.1
             5.2.5.  objective-1-fast
       5.3.  Operating Points
             5.3.1.  Common settings
             5.3.2.  High Latency CQP
             5.3.3.  Low Latency CQP
             5.3.4.  Unconstrained High Latency
             5.3.5.  Unconstrained Low Latency
   6.  Automation
       6.1.  Regression tests
       6.2.  Objective performance tests
       6.3.  Periodic tests
   7.  IANA Considerations
   8.  Security Considerations
   9.  Informative References
   Authors' Addresses
1. Introduction
When developing a video codec, changes and additions to the codec
need to be decided based on their performance tradeoffs. In
addition, measurements are needed to determine when the codec has met
its performance goals. This document specifies how the tests are to
be carried out to ensure valid comparisons when evaluating changes
under consideration. Authors of features or changes should provide
the results of the appropriate test when proposing codec
modifications.
2. Subjective quality tests
Subjective testing uses human viewers to rate and compare the quality
of videos. It is the preferred method of testing video codecs.
Subjective testing results take priority over objective testing
results, when available. Subjective testing is recommended
especially when taking advantage of psychovisual effects that may not
be well represented by objective metrics, or when different objective
metrics disagree.
Selection of a testing methodology depends on the feature being
tested and the resources available. Test methodologies are presented
in order of increasing accuracy and cost.
Testing relies on the resources of participants. If a participant
requires a subjective test for a particular feature or improvement,
they are responsible for ensuring that resources are available. This
ensures that only important tests are done, in particular those that
matter to participants.
Subjective tests should use the same operating points as the
objective tests.
2.1. Still Image Pair Comparison
A simple way to determine superiority of one compressed image is to
visually compare two compressed images, and have the viewer judge
which one has a higher quality. For example, this test may be
suitable for an intra de-ringing filter, but not for a new inter
prediction mode. For this test, the two compressed images should
have similar compressed file sizes, with one image being no more than
5% larger than the other. In addition, at least 5 different images
should be compared.
Once testing is complete, a p-value can be computed using the
binomial test. A significant result should have a resulting p-value
less than or equal to 0.05. For example:
p_value = binom_test(a,a+b)
where a is the number of votes for one video, b is the number of
votes for the second video, and binom_test(x,y) returns the p-value
of a two-sided binomial test with x observed successes out of y
total trials and an expected probability of 0.5.
If ties are allowed to be reported, then the equation is modified:
p_value = binom_test(a+floor(t/2),a+b+t)
where t is the number of tie votes.
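As an illustrative sketch only (not part of the specification), the
computation can be written in Python, assuming binom_test denotes a
two-sided binomial test such as scipy.stats.binomtest; the helper
name pair_comparison_p_value is hypothetical:

   from scipy.stats import binomtest

   def pair_comparison_p_value(a, b, t=0):
       # a, b: votes for each video; t: number of tie votes.
       # Half of the ties (rounded down) are credited to the first
       # video, and the test is two-sided with an expected
       # probability of 0.5.
       return binomtest(a + t // 2, a + b + t, p=0.5).pvalue

For instance, 14 votes to 6 with no ties yields a p-value of about
0.115, which would not reach significance.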
Still image pair comparison is used for rapid comparisons during
development - the viewer may be either a developer or user, for
example. As the results are only relative, it is effective even with
an inconsistent viewing environment. Because this test only uses
still images (keyframes), this is only suitable for changes with
similar or no effect on inter frames.
2.2. Video Pair Comparison
The still image pair comparison method can be modified to also
compare videos. This is necessary when making changes with temporal
effects, such as changes to inter-frame prediction. Video pair
comparisons follow the same procedure as still images. Videos used
for testing should be limited to 10 seconds in length, and can be
rewatched an unlimited number of times.
2.3. Mean Opinion Score
A Mean Opinion Score (MOS) viewing test is the preferred method of
evaluating quality. The subjective test should be performed either
by showing the video sequences consecutively on one screen or by
showing them simultaneously on two screens located side by side.
The testing procedure should
normally follow rules described in [BT500] and be performed with non-
expert test subjects. The result of the test will be (depending on
the test procedure) mean opinion scores (MOS) or differential mean
opinion scores (DMOS). Confidence intervals are also calculated to
judge whether the difference between two encodings is statistically
significant. In certain cases, a viewing test with expert test
subjects can be performed, for example if a test should evaluate
technologies with similar performance with respect to a particular
artifact (e.g. loop filters or motion prediction). Unlike pair
comparisons, a MOS test requires a consistent testing environment.
This means that for large scale or distributed tests, pair
comparisons are preferred.
3. Objective Metrics
Objective metrics are used in place of subjective tests for easy
and repeatable experiments. Most objective metrics have been
designed to correlate with subjective scores.
The following descriptions give an overview of the operation of each
of the metrics. Because implementation details can sometimes vary,
the exact implementation is specified in C in the Daala tools
repository [DAALA-GIT]. Implementations of metrics must directly
support the input's resolution, bit depth, and sampling format.
Unless otherwise specified, all of the metrics described below only
apply to the luma plane, individually by frame. When applied to the
video, the scores of each frame are averaged to create the final
score.
Codecs must output the same resolution, bit depth, and sampling
format as the input.
3.1. Overall PSNR
PSNR is a traditional signal quality metric, measured in decibels.
It is directly derived from mean square error (MSE), or its square
root (RMSE). The formula used is:
20 * log10 ( MAX / RMSE )
or, equivalently:
10 * log10 ( MAX^2 / MSE )
where MAX is the maximum possible sample value (e.g., 255 for 8-bit
video) and the error is computed over all the pixels in the video,
which is the method used in the dump_psnr.c reference
implementation.
This metric may be applied to both the luma and chroma planes, with
all planes reported separately.
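For illustration, a minimal NumPy sketch of overall PSNR, assuming
8-bit input so that MAX is 255; the hypothetical overall_psnr helper
is not a substitute for the dump_psnr.c reference implementation:

   import numpy as np

   def overall_psnr(ref, dist, max_val=255.0):
       # ref and dist contain every sample of the plane across all
       # frames; the error is pooled over the whole video at once.
       err = ref.astype(np.float64) - dist.astype(np.float64)
       mse = np.mean(err ** 2)
       return 10.0 * np.log10(max_val ** 2 / mse)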
3.2. Frame-averaged PSNR
PSNR can also be calculated per-frame, and then the values averaged
together. This is reported in the same way as overall PSNR.
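Continuing the sketch above under the same assumptions, the frame-
averaged variant computes one PSNR per frame and then averages the
per-frame scores:

   def frame_averaged_psnr(ref_frames, dist_frames, max_val=255.0):
       # One PSNR value per frame, then a plain average over frames.
       scores = [overall_psnr(r, d, max_val)
                 for r, d in zip(ref_frames, dist_frames)]
       return sum(scores) / len(scores)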
3.3. PSNR-HVS-M
The PSNR-HVS [PSNRHVS] metric performs a DCT transform of 8x8 blocks
of the image, weights the coefficients, and then calculates the PSNR
of those coefficients. Several different sets of weights have been
considered. The weights used by the dump_psnrhvs.c tool in the Daala
repository have been found to be the best match to real MOS scores.
3.4. SSIM
SSIM (Structural Similarity Image Metric) is a still image quality
metric introduced in 2004 [SSIM]. It computes a score for each
individual pixel, using a window of neighboring pixels. These scores
can then be averaged to produce a global score for the entire image.
The original paper produces scores ranging between 0 and 1.
For BD-rate computation, the score is converted to a decibel scale:
-10 * log10 (1 - SSIM)
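As a short sketch of this conversion (valid only for scores strictly
below 1, since a perfect score would map to infinity; the helper
name is hypothetical):

   import math

   def ssim_to_db(ssim):
       # Maps SSIM in [0, 1) to decibels; callers may wish to clamp
       # scores of exactly 1.0 before converting.
       return -10.0 * math.log10(1.0 - ssim)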
3.5. Multi-Scale SSIM
Multi-Scale SSIM is SSIM extended to multiple window sizes [MSSSIM].
The metric score is converted to decibels in the same way as SSIM.
3.6. CIEDE2000
CIEDE2000 is a metric based on CIEDE color distances [CIEDE2000]. It
generates a single score taking into account all three chroma planes.
It does not take into consideration any structural similarity or
other psychovisual effects.
3.7. VMAF
Video Multi-method Assessment Fusion (VMAF) is a full-reference
perceptual video quality metric that aims to approximate human
perception of video quality [VMAF]. This metric is focused on
quality degradation due to compression and rescaling. VMAF estimates
the perceived quality score by computing scores from multiple quality
assessment algorithms, and fusing them using a support vector machine
(SVM). Currently, three image fidelity metrics and one temporal
signal have been chosen as features to the SVM, namely Anti-noise SNR
(ANSNR), Detail Loss Measure (DLM), Visual Information Fidelity
(VIF), and the mean co-located pixel difference of a frame with
respect to the previous frame.
The quality score from VMAF is used directly to calculate BD-Rate,
without any conversions.
4. Comparing and Interpreting Results
4.1. Graphing
When displayed on a graph, bitrate is shown on the X axis, and the
quality metric is on the Y axis. For publication, the X axis should
be linear. The Y axis metric should be plotted in decibels. If the
quality metric does not natively report quality in decibels, it
should be converted as described in the previous section.
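As a non-normative example, the following sketch plots a rate/
quality curve following these conventions, using matplotlib and
hypothetical data:

   import matplotlib.pyplot as plt

   # Hypothetical points: bitrate in kbps on a linear X axis and
   # metric scores already converted to decibels on the Y axis.
   rates = [250, 500, 1000, 2000]
   quality_db = [32.1, 35.0, 37.8, 40.2]

   plt.plot(rates, quality_db, marker="o", label="test codec")
   plt.xlabel("Bitrate (kbps)")
   plt.ylabel("Quality (dB)")
   plt.legend()
   plt.savefig("rd_curve.png")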
4.2. BD-Rate
The Bjontegaard rate difference, also known as BD-rate, allows the
measurement of the bitrate reduction offered by a codec or codec
feature, while maintaining the same quality as measured by objective
metrics. The rate change is computed as the average percent
difference in rate over a range of qualities. Metric score ranges
are not static - they are calculated either from a range of bitrates
of the reference codec, or from quantizers of a third, anchor codec.
Given a reference codec and a test codec, BD-rate values are
calculated as follows (a sketch follows the list):
o Rate/distortion points are calculated for the reference and test
codec.
* At least four points must be computed. These points should use
the same quantizers when comparing two versions of the same
codec.
* Additional points outside of the range should be discarded.
o The rates are converted into log-rates.
o A piecewise cubic Hermite interpolating polynomial is fit to the
points for each codec to produce functions of log-rate in terms of
distortion.
o Metric score ranges are computed:
* If comparing two versions of the same codec, the overlap is the
intersection of the two curves, bounded by the chosen quantizer
points.
* If comparing dissimilar codecs, a third anchor codec's metric
scores at fixed quantizers are used directly as the bounds.
o The log-rate is numerically integrated over the metric range for
each curve, using at least 1000 samples and trapezoidal
integration.
o The resulting integrated log-rates are converted back into linear
rate, and then the percent difference is calculated from the
reference to the test codec.
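The following Python sketch implements the overlap method above for
two versions of the same codec. It is illustrative only, not the
reference implementation: the bd_rate helper and its input layout
(lists of (rate, metric-in-dB) pairs at matching quantizers) are
hypothetical, and SciPy's PchipInterpolator is assumed for the
piecewise cubic Hermite fit.

   import numpy as np
   from scipy.interpolate import PchipInterpolator
   from scipy.integrate import trapezoid

   def bd_rate(ref_points, test_points, samples=1000):
       # Each input holds at least four (rate, metric) points taken
       # at the same quantizers.
       curves = []
       for pts in (ref_points, test_points):
           pts = sorted(pts, key=lambda p: p[1])
           dist = np.array([p[1] for p in pts], dtype=float)
           log_rate = np.log([float(p[0]) for p in pts])
           # Piecewise cubic Hermite fit: log-rate as a function of
           # the metric score.
           curves.append((PchipInterpolator(dist, log_rate),
                          dist[0], dist[-1]))
       # Overlap method: integrate over the intersection of the two
       # metric ranges.
       lo = max(curves[0][1], curves[1][1])
       hi = min(curves[0][2], curves[1][2])
       grid = np.linspace(lo, hi, samples)
       avg = [trapezoid(f(grid), grid) / (hi - lo)
              for f, _, _ in curves]
       # Convert average log-rates back to linear rate and report
       # the percent difference from reference to test.
       return (np.exp(avg[1] - avg[0]) - 1.0) * 100.0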
4.3. Ranges
For individual feature changes in libaom or libvpx, the overlap BD-
Rate method with quantizers 20, 32, 43, and 55 must be used.
For the final evaluation described in [I-D.ietf-netvc-requirements],
the quantizers used are 20, 24, 28, 32, 36, 39, 43, 47, 51, and 55.
5. Test Sequences
5.1. Sources
Lossless test clips are preferred for most tests, because the
structure of compression artifacts in already-compressed clips may
introduce extra noise in the test results. However, a large amount
of content on the internet needs to be recompressed at least once, so
some sources of this nature are useful. The encoder should run at
the same bit depth as the original source. In addition, metrics need
to support operation at high bit depth. If one or more codecs in a
comparison do not support high bit depth, sources need to be
converted once before entering the encoder.
5.2. Test Sets
Sources are divided into several categories to test different
scenarios the codec will be required to operate in. For easier
comparison, all videos in each set should have the same color
subsampling, same resolution, and same number of frames. In
addition, all test videos must be publicly available for testing use,
to allow for reproducibility of results. All current test sets are
available for download [TESTSEQUENCES].
Test sequences should be downloaded in their entirety. They should
recreated from the original sources.
Each clip is labeled with its resolution, bit depth, color
subsampling, and length.
5.2.1. regression-1
This test set is used for basic regression testing. It contains a
very small number of clips.
o kirlandvga (640x360, 8bit, 4:2:0, 300 frames)
o FourPeople (1280x720, 8bit, 4:2:0, 60 frames)
o Narrarator (4096x2160, 10bit, 4:2:0, 15 frames)
o CSGO (1920x1080, 8bit, 4:4:4, 60 frames)
5.2.2. objective-2-slow
This test set is a comprehensive test set, grouped by resolution.
These test clips were created from originals at [TESTSEQUENCES].
They have been scaled and cropped to match the resolution of their
category. This test set requires a codec that supports both 8 and 10
bit video.
4096x2160, 4:2:0, 60 frames:
o Netflix_BarScene_4096x2160_60fps_10bit_420_60f
o Netflix_BoxingPractice_4096x2160_60fps_10bit_420_60f
o Netflix_Dancers_4096x2160_60fps_10bit_420_60f
o Netflix_Narrator_4096x2160_60fps_10bit_420_60f
o Netflix_RitualDance_4096x2160_60fps_10bit_420_60f
o Netflix_ToddlerFountain_4096x2160_60fps_10bit_420_60f
o Netflix_WindAndNature_4096x2160_60fps_10bit_420_60f
o street_hdr_amazon_2160p
1920x1080, 4:2:0, 60 frames:
o aspen_1080p_60f
o crowd_run_1080p50_60f
o ducks_take_off_1080p50_60f
o guitar_hdr_amazon_1080p
o life_1080p30_60f
o Netflix_Aerial_1920x1080_60fps_8bit_420_60f
o Netflix_Boat_1920x1080_60fps_8bit_420_60f
o Netflix_Crosswalk_1920x1080_60fps_8bit_420_60f
o Netflix_FoodMarket_1920x1080_60fps_8bit_420_60f
o Netflix_PierSeaside_1920x1080_60fps_8bit_420_60f
o Netflix_SquareAndTimelapse_1920x1080_60fps_8bit_420_60f
o Netflix_TunnelFlag_1920x1080_60fps_8bit_420_60f
o old_town_cross_1080p50_60f
o pan_hdr_amazon_1080p
o park_joy_1080p50_60f
o pedestrian_area_1080p25_60f
o rush_field_cuts_1080p_60f
o rush_hour_1080p25_60f
o seaplane_hdr_amazon_1080p
o station2_1080p25_60f
o touchdown_pass_1080p_60f
1280x720, 4:2:0, 120 frames:
o boat_hdr_amazon_720p
o dark720p_120f
o FourPeople_1280x720_60_120f
o gipsrestat720p_120f
o Johnny_1280x720_60_120f
o KristenAndSara_1280x720_60_120f
o Netflix_DinnerScene_1280x720_60fps_8bit_420_120f
o Netflix_DrivingPOV_1280x720_60fps_8bit_420_120f
o Netflix_FoodMarket2_1280x720_60fps_8bit_420_120f
o Netflix_RollerCoaster_1280x720_60fps_8bit_420_120f
o Netflix_Tango_1280x720_60fps_8bit_420_120f
o rain_hdr_amazon_720p
o vidyo1_720p_60fps_120f
o vidyo3_720p_60fps_120f
o vidyo4_720p_60fps_120f
640x360, 4:2:0, 120 frames:
o blue_sky_360p_120f
o controlled_burn_640x360_120f
o desktop2360p_120f
o kirland360p_120f
o mmstationary360p_120f
o niklas360p_120f
o rain2_hdr_amazon_360p
o red_kayak_360p_120f
o riverbed_360p25_120f
o shields2_640x360_120f
o snow_mnt_640x360_120f
o speed_bag_640x360_120f
o stockholm_640x360_120f
o tacomanarrows360p_120f
o thaloundeskmtg360p_120f
o water_hdr_amazon_360p
426x240, 4:2:0, 120 frames:
o bqfree_240p_120f
o bqhighway_240p_120f
o bqzoom_240p_120f
o chairlift_240p_120f
o dirtbike_240p_120f
o mozzoom_240p_120f
1920x1080, 4:4:4 or 4:2:0, 60 frames:
o CSGO_60f.y4m
o DOTA2_60f_420.y4m
o MINECRAFT_60f_420.y4m
o STARCRAFT_60f_420.y4m
o EuroTruckSimulator2_60f.y4m
o Hearthstone_60f.y4m
o wikipedia_420.y4m
o pvq_slideshow.y4m
5.2.3. objective-2-fast
This test set is a strict subset of objective-2-slow. It is designed
for faster runtime. This test set requires compiling with high bit
depth support.
1920x1080, 4:2:0, 60 frames:
o aspen_1080p_60f
o ducks_take_off_1080p50_60f
o life_1080p30_60f
o Netflix_Aerial_1920x1080_60fps_8bit_420_60f
o Netflix_Boat_1920x1080_60fps_8bit_420_60f
o Netflix_FoodMarket_1920x1080_60fps_8bit_420_60f
o Netflix_PierSeaside_1920x1080_60fps_8bit_420_60f
o Netflix_SquareAndTimelapse_1920x1080_60fps_8bit_420_60f
o Netflix_TunnelFlag_1920x1080_60fps_8bit_420_60f
o rush_hour_1080p25_60f
o seaplane_hdr_amazon_1080p
o touchdown_pass_1080p_60f
1280x720, 4:2:0, 120 frames:
o boat_hdr_amazon_720p
o dark720p_120f
o gipsrestat720p_120f
o KristenAndSara_1280x720_60_120f
o Netflix_DrivingPOV_1280x720_60fps_8bit_420_120f
o Netflix_RollerCoaster_1280x720_60fps_8bit_420_120f
o vidyo1_720p_60fps_120f
o vidyo4_720p_60fps_120f
640x360, 4:2:0, 120 frames:
o blue_sky_360p_120f
o controlled_burn_640x360_120f
o kirland360p_120f
o niklas360p_120f
o rain2_hdr_amazon_360p
o red_kayak_360p_120f
o riverbed_360p25_120f
o shields2_640x360_120f
o speed_bag_640x360_120f
o thaloundeskmtg360p_120f
426x240, 4:2:0, 120 frames:
o bqfree_240p_120f
o bqzoom_240p_120f
o dirtbike_240p_120f
1920x1080, 4:2:0, 60 frames:
o DOTA2_60f_420.y4m
o MINECRAFT_60f_420.y4m
o STARCRAFT_60f_420.y4m
o wikipedia_420.y4m
5.2.4. objective-1.1
This test set is an old version of objective-2-slow.
4096x2160, 10bit, 4:2:0, 60 frames:
o Aerial (start frame 600)
o BarScene (start frame 120)
o Boat (start frame 0)
o BoxingPractice (start frame 0)
o Crosswalk (start frame 0)
o Dancers (start frame 120)
o FoodMarket
o Narrator
o PierSeaside
o RitualDance
o SquareAndTimelapse
o ToddlerFountain (start frame 120)
o TunnelFlag
o WindAndNature (start frame 120)
1920x1080, 8bit, 4:4:4, 60 frames:
o CSGO
o DOTA2
o EuroTruckSimulator2
o Hearthstone
o MINECRAFT
o STARCRAFT
o wikipedia
o pvq_slideshow
1920x1080, 8bit, 4:2:0, 60 frames:
o ducks_take_off
o life
o aspen
o crowd_run
o old_town_cross
o park_joy
o pedestrian_area
o rush_field_cuts
o rush_hour
o station2
o touchdown_pass
1280x720, 8bit, 4:2:0, 60 frames:
o Netflix_FoodMarket2
o Netflix_Tango
o DrivingPOV (start frame 120)
o DinnerScene (start frame 120)
o RollerCoaster (start frame 600)
o FourPeople
o Johnny
o KristenAndSara
o vidyo1
o vidyo3
o vidyo4
o dark720p
o gipsrecmotion720p
o gipsrestat720p
o controlled_burn
o stockholm
o speed_bag
o snow_mnt
o shields
640x360, 8bit, 4:2:0, 60 frames:
o red_kayak
o blue_sky
o riverbed
o thaloundeskmtgvga
o kirlandvga
o tacomanarrowsvga
o tacomascmvvga
o desktop2360p
o mmmovingvga
o mmstationaryvga
o niklasvga
5.2.5. objective-1-fast
This is an old version of objective-2-fast.
1920x1080, 8bit, 4:2:0, 60 frames:
o Aerial (start frame 600)
o Boat (start frame 0)
o Crosswalk (start frame 0)
o FoodMarket
o PierSeaside
o SquareAndTimelapse
o TunnelFlag
1920x1080, 8bit, 4:2:0, 60 frames:
o CSGO
o EuroTruckSimulator2
o MINECRAFT
o wikipedia
1920x1080, 8bit, 4:2:0, 60 frames:
o ducks_take_off
o aspen
o old_town_cross
o pedestrian_area
o rush_hour
o touchdown_pass
1280x720, 8bit, 4:2:0, 60 frames:
o Netflix_FoodMarket2
o DrivingPOV (start frame 120)
o RollerCoaster (start frame 600)
o Johnny
o vidyo1
o vidyo4
o gipsrecmotion720p
o speed_bag
o shields
640x360, 8bit, 4:2:0, 60 frames:
o red_kayak
o riverbed
o kirlandvga
o tacomascmvvga
o mmmovingvga
o niklasvga
5.3. Operating Points
Four operating modes are defined. High latency is intended for on-
demand streaming, one-to-many live streaming, and stored video. Low
latency is intended for videoconferencing and remote access. Both of
these modes come in CQP (constant quantizer parameter) and
unconstrained variants. When testing still image sets, such as
subset1, high latency CQP mode should be used.
5.3.1. Common settings
Encoders should be configured to their best settings when being
compared against each other:
o av1: --codec=av1 --ivf --frame-parallel=0 --tile-columns=0
--cpu-used=0 --threads=1
5.3.2. High Latency CQP
High Latency CQP is used for evaluating incremental changes to a
codec. This method is well suited to compare codecs with similar
coding tools. It allows codec features with intrinsic frame delay.
o daala: -v=x -b 2
o vp9: --end-usage=q --cq-level=x --lag-in-frames=25
--auto-alt-ref=2
o av1: --end-usage=q --cq-level=x --auto-alt-ref=2
5.3.3. Low Latency CQP
Low Latency CQP is used for evaluating incremental changes to a
codec. This method is well suited to compare codecs with similar
coding tools. It requires the codec to be set for zero intrinsic
frame delay.
o daala: -v=x
o av1: --end-usage=q --cq-level=x --lag-in-frames=0
5.3.4. Unconstrained High Latency
The encoder should be run at the best quality mode available, using
the mode that will provide the best quality per bitrate (VBR or
constant quality mode). Lookahead and/or two-pass are allowed, if
supported. One parameter is provided to adjust bitrate, but the
units are arbitrary. Example configurations follow:
o x264: --crf=x
o x265: --crf=x
o daala: -v=x -b 2
o av1: --end-usage=q --cq-level=x --lag-in-frames=25
--auto-alt-ref=2
5.3.5. Unconstrained Low Latency
The encoder should be run at the best quality mode available, using
the mode that will provide the best quality per bitrate (VBR or
constant quality mode), but no frame delay, buffering, or lookahead
is allowed. One parameter is provided to adjust bitrate, but the
units are arbitrary. Example configurations follow:
o x264: --crf=x --tune zerolatency
o x265: --crf=x --tune zerolatency
o daala: -v=x
o av1: --end-usage=q --cq-level=x --lag-in-frames=0
6. Automation
Frequent objective comparisons are extremely beneficial while
developing a new codec. Several tools exist in order to automate the
process of objective comparisons. The Compare-Codecs tool allows BD-
rate curves to be generated for a wide variety of codecs
[COMPARECODECS]. The Daala source repository contains a set of
scripts that can be used to automate the various metrics used. In
addition, these scripts can be run automatically utilizing
distributed computers for fast results, with rd_tool [RD_TOOL]. This
tool can be run via a web interface called AreWeCompressedYet [AWCY],
or locally.
Because of computational constraints, several levels of testing are
specified.
6.1. Regression tests
Regression tests run on a small number of short sequences, the
regression-1 test set. The regression tests should cover a number
of different test conditions. The purpose of regression tests is to
ensure that bug fixes (and similar patches) do not negatively affect
performance. The anchor in regression tests is the previous
revision of the codec in source control. Regression tests are run
in both high and low latency CQP modes.
6.2. Objective performance tests
Changes that are expected to affect the quality of the encoding or
the bitstream should be evaluated with an objective performance
test. The performance tests should be run on a larger set of
sequences. The following data should be reported (a summary sketch
follows the list):
o Identifying information for the encoder used, such as the git
commit hash.
o Command line options to the encoder, configure script, and
anything else necessary to replicate the experiment.
o The name of the test set run (e.g., objective-1-fast).
o For both high and low latency CQP modes, and for each objective
metric:
* The BD-Rate score, in percent, for each clip.
* The average of all BD-Rate scores, equally weighted, for each
resolution category in the test set.
* The average of all BD-Rate scores for all videos in all
categories.
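As a sketch of the reporting step (the summarize helper and its
input layout are hypothetical), note that the overall average
weights every clip equally rather than averaging the per-category
means:

   def summarize(bd_scores):
       # bd_scores maps a resolution category name to the list of
       # per-clip BD-rate percentages in that category.
       per_category = {cat: sum(v) / len(v)
                       for cat, v in bd_scores.items()}
       # Overall average: every clip weighted equally, across all
       # categories.
       clips = [s for v in bd_scores.values() for s in v]
       return per_category, sum(clips) / len(clips)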
Normally, the encoder should always be run at the slowest, highest
quality speed setting (cpu-used=0 in the case of AV1 and VP9).
However, if computation time is constrained, both the reference and
the changed encoder can be built with some options disabled. For
AV1, --disable-ext_partition and --disable-ext_partition_types can
be passed to the configure script to substantially speed up
encoding, but the usage of these options must be reported in the
test results.
6.3. Periodic tests
Periodic tests are run on a wide range of bitrates in order to gauge
progress over time, as well as detect potential regressions missed by
other tests.
7. IANA Considerations
This document does not require any IANA actions.
8. Security Considerations
This document describes methodologies and procedures for qualitative
testing and therefore does not itself have implications for network
or decoder security.
9. Informative References
[AWCY] Xiph.Org, "Are We Compressed Yet?", 2016,
<https://arewecompressedyet.com/>.
[BT500] ITU-R, "Recommendation ITU-R BT.500-13", 2012,
<https://www.itu.int/dms_pubrec/itu-r/rec/bt/R-REC-
BT.500-13-201201-I!!PDF-E.pdf>.
[CIEDE2000]
Yang, Y., Ming, J., and N. Yu, "Color Image Quality
Assessment Based on CIEDE2000", 2012,
<http://dx.doi.org/10.1155/2012/273723>.
[COMPARECODECS]
Alvestrand, H., "Compare Codecs", 2015,
<http://compare-codecs.appspot.com/>.
[DAALA-GIT]
Xiph.Org, "Daala Git Repository", 2015,
<http://git.xiph.org/?p=daala.git;a=summary>.
[I-D.ietf-netvc-requirements]
Filippov, A., Norkin, A., and J. Alvarez, "Video Codec
Requirements and Evaluation Methodology", draft-ietf-netvc-
requirements-10 (work in progress), November 2019.
[MSSSIM] Wang, Z., Simoncelli, E., and A. Bovik, "Multi-Scale
Structural Similarity for Image Quality Assessment", n.d.,
<http://www.cns.nyu.edu/~zwang/files/papers/msssim.pdf>.
[PSNRHVS] Egiazarian, K., Astola, J., Ponomarenko, N., Lukin, V.,
Battisti, F., and M. Carli, "A New Full-Reference Quality
Metrics Based on HVS", 2002.
[RD_TOOL] Xiph.Org, "rd_tool", 2016,
<https://github.com/tdaede/rd_tool>.
[SSIM] Wang, Z., Bovik, A., Sheikh, H., and E. Simoncelli, "Image
Quality Assessment: From Error Visibility to Structural
Similarity", 2004,
<http://www.cns.nyu.edu/pub/eero/wang03-reprint.pdf>.
[TESTSEQUENCES]
Daede, T., "Test Sets", n.d.,
<https://people.xiph.org/~tdaede/sets/>.
[VMAF] Aaron, A., Li, Z., Manohara, M., Lin, J., Wu, E., and C.
Kuo, "VMAF - Video Multi-Method Assessment Fusion", 2015,
<https://github.com/Netflix/vmaf>.
Authors' Addresses
Thomas Daede
Mozilla
Email: tdaede@mozilla.com
Andrey Norkin
Netflix
Email: anorkin@netflix.com
Ilya Brailovskiy
Amazon Lab126
Email: brailovs@lab126.com