Internet Engineering Task Force                               Jan Novak
Internet-Draft                                      Cisco Systems, Inc.
Intended status: Informational                            23 April 2012
Expires: 23 October 2012
IP Flow Information Accounting and Export Benchmarking
Methodology
draft-ietf-bmwg-ipflow-meth-10.txt
Abstract
This document provides a methodology and framework for quantifying
the performance impact of monitoring of IP flows on a network device
and export of this information to a collector. It identifies the rate
at which the IP flows are created, expired, and successfully exported
as a new performance metric in combination with traditional
throughput. The metric is only applicable to the devices compliant
with the Architecture for IP Flow Information Export [RFC5470]. The
methodology quantifies the impact of the IP flow monitoring process
on the network equipment.
Status of this Memo
This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as Internet-
Drafts.
Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other
documents at any time. It is inappropriate to use Internet-Drafts
as reference material or to cite them other than as "work in
progress."
The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.
The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.
This Internet-Draft will expire on 23 October, 2012.
Copyright Notice
Copyright (c) 2012 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Conventions used in this document
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described
in RFC 2119 [RFC2119].
Table of Contents
1. Introduction. . . . . . . . . . . . . . . . . . . . . . . . . 3
2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.1 Existing Terminology. . . . . . . . . . . . . . . . . . . 4
2.2 New Terminology . . . . . . . . . . . . . . . . . . . . . 4
3. Flow Monitoring Performance Benchmark . . . . . . . . . . . . 6
3.1 Definition. . . . . . . . . . . . . . . . . . . . . . . . 6
3.2 Device Applicability. . . . . . . . . . . . . . . . . . . 6
3.3 Measurement Concept . . . . . . . . . . . . . . . . . . . 7
3.4 The Measurement Procedure Overview. . . . . . . . . . . . 8
4. Measurement Set-Up. . . . . . . . . . . . . . . . . . . . . . 9
4.1 Measurement Topology. . . . . . . . . . . . . . . . . . . 9
4.2 Baseline DUT Set Up. . . . . . . . . . . . . . . . . . . 11
4.3 Flow Monitoring Configuration. . . . . . . . . . . . . . 11
4.4 Collector. . . . . . . . . . . . . . . . . . . . . . . . 16
4.5 Sampling . . . . . . . . . . . . . . . . . . . . . . . . 16
4.6 Frame Formats. . . . . . . . . . . . . . . . . . . . . . 16
4.7 Frame Sizes. . . . . . . . . . . . . . . . . . . . . . . 17
4.8 Flow Export Data Packet Sizes. . . . . . . . . . . . . . 17
4.9 Illustrative Test Set-up Examples. . . . . . . . . . . . 17
5. Flow Monitoring Throughput Measurement Methodology . . . . . 19
5.1 Flow Monitoring Configuration. . . . . . . . . . . . . . 19
5.2 Traffic Configuration. . . . . . . . . . . . . . . . . . 20
5.3 Cache Population . . . . . . . . . . . . . . . . . . . . 21
5.4 Measurement Time Interval. . . . . . . . . . . . . . . . 21
5.5 Flow Export Rate Measurement . . . . . . . . . . . . . . 22
5.6 The Measurement Procedure. . . . . . . . . . . . . . . . 23
6. RFC2544 Measurements . . . . . . . . . . . . . . . . . . . . 24
6.1 Flow Monitoring Configuration. . . . . . . . . . . . . . 24
6.2 Measurements With the Flow Monitoring Throughput Set-up. 25
6.3 Measurements With Fixed Flow Export Rate . . . . . . . . 25
7. Flow Monitoring Accuracy . . . . . . . . . . . . . . . . . . 26
8. Evaluating Flow Monitoring Applicability . . . . . . . . . . 27
9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 27
10. Security Considerations . . . . . . . . . . . . . . . . . . 27
11. IANA Considerations . . . . . . . . . . . . . . . . . . . . 28
12. References. . . . . . . . . . . . . . . . . . . . . . . . . 28
12.1 Normative References. . . . . . . . . . . . . . . . . . 28
12.2 Informative References. . . . . . . . . . . . . . . . . 28
Appendix A: Recommended Report Format . . . . . . . . . . . . . 30
Appendix B: Miscellaneous Tests . . . . . . . . . . . . . . . . 31
B.1 DUT Under Traffic Load . . . . . . . . . . . . . . . . . 31
B.2 In-band Flow Export. . . . . . . . . . . . . . . . . . . 31
B.3 Variable Packet Rate . . . . . . . . . . . . . . . . . . 32
B.4 Bursty Traffic . . . . . . . . . . . . . . . . . . . . . 32
B.5 Various Flow Monitoring Configurations . . . . . . . . . 32
B.6 Tests With Bidirectional Traffic . . . . . . . . . . . . 33
B.7 Instantaneous Flow Export Rate . . . . . . . . . . . . . 33
1. Introduction
Monitoring of IP flows (Flow monitoring) is defined in the
Architecture for IP Flow Information Export [RFC5470] and related
IPFIX documents specified in section 1.2 of [RFC5470]. It
analyses the traffic using predefined fields from the packet
header as keys and stores the traffic information and other internal
information in the DUT (Device Under Test) memory. This cached
flow information is then formatted into records (see section 2.1
for term definitions) and exported from the DUT to an external
data collector for analysis. More details on the measurement
architecture are provided in section 3.3.
Flow monitoring on network devices is widely deployed and has
numerous uses in both service provider and enterprise segments as
detailed in the Requirements for IP Flow Information Export
[RFC3917]. This document provides a methodology for measuring Flow
monitoring performance so that network operators have a framework
for measuring its impact on the network and network equipment.
This document's goal is a series of methodology specifications for
the measurement of Flow monitoring performance, in a way that is
comparable amongst various implementations, platforms, and
vendors' devices.
Flow monitoring is in most cases run on network devices that also
forward packets. This document therefore also provides the
methodology for [RFC2544] measurements in the presence of Flow
monitoring. It is applicable to IPv6 and MPLS traffic with their
specifics defined in [RFC5180] and [RFC5695] respectively.
This document specifies a methodology to measure the maximum IP
flow export rate that a network device can sustain without
impacting the forwarding plane, without losing any IP flow
information, and without compromising the IP flow accuracy (see
section 7 for details).
[RFC2544], [RFC5180] and [RFC5695] specify benchmarking of network
devices forwarding IPv4, IPv6 and MPLS [RFC3031] traffic,
respectively. The methodology specified in this document stays the
same for any traffic type. The only restriction may be the DUT's
lack of support for Flow monitoring of the particular traffic type.
A variety of different DUT architectures exist that are capable of
Flow monitoring and export. As such, this document does not attempt
to list the various white box variables (CPU load, memory
utilization, hardware resource utilization, etc.) that could be
gathered, as they always help in comparison evaluations. A more
complete understanding of the stress points of a particular device
can be attained using this internal information and the tester MAY
choose to gather this information during the measurement iterations.
2. Terminology
The terminology used in this document is based on [RFC5470],
[RFC2285] and [RFC1242] as summarized in section 2.1. The only new
terms needed for this methodology are defined in section 2.2.
2.1 Existing Terminology
Device Under Test (DUT) [RFC2285, section 3.1.1]
Flow [RFC5101, section 2]
Flow Key [RFC5101, section 2]
Flow Record [RFC5101, section 2]
Template Record [RFC5101, section 2]
Observation Point [RFC5470, section 2]
Metering Process [RFC5470, section 2]
Exporting Process [RFC5470, section 2]
Exporter [RFC5470, section 2]
Collector [RFC5470, section 2]
Control Information [RFC5470, section 2]
Data Stream [RFC5470, section 2]
Flow Expiration [RFC5470, section 5.1.1]
Flow Export [RFC5470, section 5.1.2]
Throughput [RFC1242, section 3.17]
2.2 New Terminology
2.2.1 Cache
Definition:
Memory area held and dedicated by the DUT to store Flow
information prior to the Flow Expiration.
2.2.2 Cache Size
Definition:
The size of the Cache in terms of how many entries the Cache can
hold.
Discussion:
This term is typically represented as a configurable option in
the particular Flow monitoring implementation. Its highest value
will depend on the memory available in the network device.
Measurement units:
Number of Cache entries
2.2.3 Active Timeout
Definition:
For long-running Flows, the time interval after which the Metering
Process expires a Cache entry to ensure Flow data is regularly
updated.
Discussion:
This term is typically presented as a configurable option in the
particular Flow monitoring implementation. See section 5.1.1 of
[RFC5470] for more detailed discussion.
Flows are considered long-running when they last longer than
several multiples of the Active Timeout. If the Active Timeout is
zero, then Flows are considered long-running if they contain many
more packets (tens of packets) than usually observed in a single
transaction.
Measurement units:
Seconds
2.2.4 Idle Timeout
Definition:
The time interval used by the Metering Process to expire an entry
from the Cache, when no more packets belonging to that specific
Cache entry have been observed during the interval.
Discussion:
This term is typically represented as a configurable option in the
particular Flow monitoring implementation. See section 5.1.1 of
[RFC5470] for more detailed discussion. Note that some documents
in the industry refer to this "Idle Timeout" as the
"inactive timeout".
Measurement units:
Seconds
2.2.5 Flow Export Rate
Definition:
The number of Cache entries that expire from the Cache (as defined
by the Flow Expiration term) and are exported to the Collector
within a measurement time interval. There SHOULD NOT be any export
filtering, so that all the expired Cache entries are exported. If
there is export filtering and it cannot be disabled, this MUST be
indicated in the measurement report.
The measured Flow Export Rate MUST include both the Data Stream
and the Control Information, as defined in section 2 of [RFC5470].
Discussion:
The Flow Export Rate is measured using Flow Export data observed
at the Collector by counting the exported Flow Records during the
measurement time interval (see section 5.4). The value obtained is
an average of the instantaneous export rates observed during the
measurement time interval. The smallest possible measurement
interval (if attempting to measure nearly instantaneous export
rate rather than average export rate on the DUT) is limited by the
export capabilities of the particular Flow monitoring
implementation (when possible physical layer issues between the
DUT and the Collector are excluded).
Measurement units:
Number of Flow Records per second
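As an illustration only (not part of the definition), the following
non-normative Python sketch shows one way the average Flow Export Rate
could be derived from a Collector capture. The record list, the
"captured_at" field and the interval boundaries are hypothetical
assumptions of the sketch, not requirements of this document.

   # Non-normative sketch: derive an average Flow Export Rate from a
   # list of decoded Flow Records captured at the Collector.  Each
   # record is assumed to carry its capture timestamp in seconds.
   def flow_export_rate(records, start_time, stop_time):
       # Count only the Flow Records captured within the measurement
       # time interval (see section 5.4).
       in_interval = [r for r in records
                      if start_time <= r["captured_at"] <= stop_time]
       # Average rate in Flow Records per second; the instantaneous
       # rate may differ, as discussed above.
       return len(in_interval) / (stop_time - start_time)

   # Example: 45000 Flow Records captured over a 30 second interval
   # yield a Flow Export Rate of 1500 Flow Records per second.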
3. Flow Monitoring Performance Benchmark
3.1 Definition
Flow Monitoring Throughput
Definition:
The maximum Flow Export Rate the DUT can sustain without losing a
single Cache entry. Additionally, for packet forwarding devices,
the maximum Flow Export Rate the DUT can sustain without dropping
packets in the Forwarding Plane (see figure 1).
Measurement units:
Number of Flow Records per second
Discussion:
The losses of Cache entries or forwarded packets in this
definition are assumed to happen due to the lack of DUT resources
to process any additional traffic information or lack of resources
to process Flow Export data. Physical layer issues, such as
insufficient bandwidth from the DUT to the Collector or lack of
Collector resources, MUST be excluded as detailed in section 4.
3.2 Device Applicability
The Flow monitoring performance metric is applicable to network
devices that deploy [RFC5470] architecture. These devices can be
network packet forwarding devices or appliances which analyze the
traffic but do not forward traffic (probes, sniffers, replicators).
This document does not intend to measure Collector performance; it
only requires sufficient Collector resources (as specified in section
4.4) in order to measure the DUT characteristics.
3.3 Measurement Concept
Figure 1 below presents the functional block diagram of the DUT. The
traffic in the figure represents the test traffic sent to the
DUT and forwarded by the DUT, if possible. When testing devices which
do not act as network packet forwarding devices (such as probes,
sniffers and replicators) the forwarding plane is simply an
Observation Point as defined in section 2 of [RFC5470]. The [RFC2544]
Throughput of such devices will always be zero and the only
applicable performance metric is the Flow Monitoring Throughput.
NetFlow is specified by [RFC3954].
              +--------------------------+
              | IPFIX | NetFlow | Others |
              +--------------------------+
              |            ^             |
              |       Flow Export        |
              |            ^             |
              |     +-------------+      |
              |     | Monitoring  |      |
              |     |    Plane    |      |
              |     +-------------+      |
              |            ^             |
              |   traffic information    |
              |            ^             |
              |     +-------------+      |
              |     |             |      |
   traffic ---|---->| Forwarding  |------|---->
              |     |    Plane    |      |
              |     +-------------+      |
              |                          |
              |           DUT            |
              +--------------------------+

              Figure 1. The functional block diagram of the DUT
Flow monitoring is represented in Figure 1 by the Monitoring
Plane. It is enabled as specified in section 4.3. It uses the
traffic information provided by the Forwarding Plane and configured
Flow Keys to create Cache entries representing the traffic
forwarded (or observed) by the DUT in the DUT Cache. The Cache
entries are expired from the Cache depending on the Cache
configuration (the Active and Idle Timeouts, the Cache Size),
number of Cache entries and the traffic pattern. The Cache
entries are used by the Exporting Process to format the Flow Records
which are then exported from the DUT to the Collector (see figure 2
in section 4).
The Forwarding Plane and Monitoring Plane represent two separate
functional blocks, each with its own performance capability. The
Forwarding Plane handles user data packets and is fully characterized
by the metrics defined by [RFC2544].
The Monitoring Plane handles Flows which reflect the analyzed
traffic. The metric for Monitoring Plane performance is Flow Export
Rate, and the benchmark is the Flow Monitoring Throughput.
3.4 The Measurement Procedure Overview
The measurement procedure is fully specified in sections 4, 5 and 6.
This section provides an overview of principles for the measurements.
The basic measurement procedure of performance characteristics of a
DUT with Flow monitoring enabled is a conventional Throughput
measurement using a search algorithm to determine the maximum packet
rate at which none of the offered packets and corresponding Flow
Records are dropped by the DUT as described in [RFC1242] and section
26.1 of [RFC2544].
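A non-normative sketch of such a search is shown below. It assumes a
test-bed specific helper run_trial(), not defined by this document,
which offers traffic with one packet per Flow at the given rate and
returns True only when no packets and no Flow Records were lost.

   # Non-normative sketch of an RFC 2544 style binary search for the
   # Flow Monitoring Throughput (see sections 3.4.1 and 5).
   def find_flow_monitoring_throughput(run_trial, max_rate, precision):
       lo, hi = 0.0, float(max_rate)    # packets (= Flows) per second
       best = 0.0
       while hi - lo > precision:
           rate = (lo + hi) / 2.0
           if run_trial(rate):
               best, lo = rate, rate     # no loss: try a higher rate
           else:
               hi = rate                 # loss seen: try a lower rate
       # With one packet per Flow, the highest loss-free packet rate
       # equals the Flow Monitoring Throughput in Flow Records/second.
       return best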
The DUT with Flow monitoring enabled contains two functional blocks
which need to be measured using characteristics applicable to one or
both blocks (see figure 1). See sections 3.4.1 and 3.4.2 for further
discussion.
On one hand the Monitoring Plane and Forwarding Plane (see
figure 1) need to be looked at as two independent blocks, and the
performance of each of them measured independently. But on the other
hand when measuring the performance of one of them, the status and
performance of the other MUST be known and benchmarked when both are
present.
3.4.1 Monitoring Plane Performance Measurement
The Flow Monitoring Throughput MUST be (and can only be) measured
with one packet per Flow as specified in section 5. This traffic
type represents the most demanding traffic from the Flow monitoring
point of view and will exercise the Monitoring Plane (see figure 1)
of the DUT most. In this scenario every packet seen by the DUT creates
a new Cache entry and forces the DUT to fill the Cache instead of just
updating packet and byte counters of an already existing Cache entry.
The exit criteria for the Flow Monitoring Throughput measurement are
one of the following (i.e. whichever condition is reached first):
a. The Flow Export Rate at which the DUT starts to lose Flow
information or the Flow information gets corrupted
b. The Flow Export Rate at which the Forwarding Plane starts to drop
or corrupt packets (if the Forwarding Plane is present)
A corrupted packet here means packet header corruption (resulting
in a cyclic redundancy check failure at the transmission level and
a consequent packet drop) or packet payload corruption leading to
lost application level data.
3.4.2 Forwarding Plane Performance Measurement
The Forwarding Plane (see figure 1) performance metrics are fully
specified by [RFC2544] and MUST be measured accordingly. A detailed
traffic analysis (see below) in relation to Flow monitoring MUST be
performed prior to any [RFC2544] measurements. Most importantly, the
Flow Export Rate caused by the test traffic during an [RFC2544]
measurement MUST be known and reported.
The required test traffic analysis mainly involves the following:
a. Which packet header parameters are incremented or changed during
traffic generation
b. Which Flow Keys the Flow monitoring configuration uses to generate
Flow Records
The RFC2544 performance metrics can be measured in one of the three
modes:
a. As a baseline of forwarding performance without Flow monitoring
b. At a certain level of Flow monitoring activity specified by a Flow
Export Rate lower than the Flow Monitoring Throughput
c. At the maximum level of Flow monitoring performance, i.e. using
traffic conditions representing a measurement of Flow Monitoring
Throughput
The above mentioned measurement mode in point a. represents an
ordinary Throughput measurement specified in RFC2544. The details of
how to setup the measurements in points b. and c. are given in
section 6.
4. Measurement Set-Up
This section concentrates on the set-up of all components necessary
to perform Flow monitoring performance measurement. The recommended
reporting format can be found in Appendix A.
4.1 Measurement Topology
The measurement topology described in this section is applicable only
to the measurements with packet forwarding network devices. The
possible architectures and implementations of traffic monitoring
appliances (see section 3.2) are too varied to be covered in this
document. Instead of the Forwarding Plane, these appliances generally
have some kind of feed (an optical splitter, an interface sniffing
traffic on a shared media or an internal channel on the DUT providing
a copy of the traffic) providing the information about the traffic
necessary for Flow monitoring analysis. The measurement topology then
needs to be adjusted to the appliance architecture, and MUST be part
of the measurement report.
The measurement set-up is identical to that used by [RFC2544], with
the addition of a Collector to analyze the Flow Export (see figure 2).
In the measurement topology with unidirectional traffic, the traffic
is transmitted from the sender to the receiver through the DUT. The
received traffic is analyzed to check it is identical to the
generated traffic.
The ideal way to implement the measurement is by using a single
device to provide the sender and receiver capabilities with one
sending port and one receiving port. This allows for an easy check
whether all the traffic sent by the sender was re-transmitted by the
DUT and received at the receiver.
                       +-----------+
                       |           |
                       | Collector |
                       |           |
                       |Flow Record|
                       | analysis  |
                       |           |
                       +-----------+
                             ^
                             | Flow Export
                             |
                             | Export Interface
   +--------+         +-------------+          +----------+
   |        |         |             |          | traffic  |
   | traffic|      (*)|             |          | receiver |
   | sender |-------->|     DUT     |--------->|          |
   |        |         |             |          | traffic  |
   |        |         |             |          | analysis |
   +--------+         +-------------+          +----------+

        Figure 2  Measurement topology with unidirectional traffic
The DUT's export interface (connecting the Collector) MUST NOT be
used for forwarding the test traffic but only for the Flow Export
data containing the Flow Records. In all measurements, the export
interface MUST have enough bandwidth to transmit Flow Export data
without congestion. In other words, the export interface MUST NOT be
a bottleneck during the measurement.
The traffic receiver MUST have sufficient resources to measure all
test traffic transferred successfully by the DUT. This may be
checked through measurements with and without the DUT.
Note that more complex topologies might be required. For example, if
the effects of enabling Flow monitoring on several interfaces are of
concern or the media maximum speed is less than the DUT throughput,
the topology can be expanded with several input and output ports.
However, the topology MUST be clearly described in the measurement
report.
4.2 Baseline DUT Set Up
The baseline DUT set-up and the way the set-up is reported in the
measurement results is fully specified in section 7 of [RFC2544].
The baseline DUT configuration might include other features like
packet filters or quality of service on the input and/or output
interfaces if there is the need to study Flow monitoring in the
presence of those features. The Flow monitoring measurement
procedures do not change in this case. When evaluating measurement
results, consideration needs to be given to the possible change in
packet rates offered to the DUT and to Flow monitoring after the
features are applied to the configuration.
Any such feature configuration MUST be part of the measurement
report.
The DUT export interface (see figure 2) SHOULD be configured with
sufficient output buffers to avoid dropping the Flow Export data due
to a simple lack of resources in the interface hardware. The applied
configuration MUST be part of the measurement report.
The test designer has the freedom to run tests in multiple
configurations. It is therefore possible to run both non-production
and real deployment configurations in the laboratory, according to
the needs of the tester. All configurations MUST be part of the
measurement report.
4.3 Flow Monitoring Configuration
This section covers all the aspects of the Flow monitoring
configuration necessary on the DUT in order to perform the Flow
monitoring performance measurement. The necessary configuration has
a number of components (see [RFC5470]), namely Observation Points,
Metering Process and Exporting Process as detailed below.
The DUT MUST support the Flow monitoring architecture as specified by
[RFC5470]. The DUT SHOULD support IPFIX [RFC5101] to allow meaningful
results comparison due to the standardized export protocol.
The DUT configuration and any existing Cache and Cache entries MUST
be erased before application of any new configuration for the
currently executed measurement.
4.3.1 Observation Points
The Observation Points specify the interfaces and direction where the
Flow monitoring traffic analysis is to be performed.
The (*) in Figure 2 designates the Observation Points in the default
configuration. Other DUT Observation Points might be configured
depending on the specific measurement needs as follows:
a. ingress port/ports only
b. egress port/ports only
c. both ingress and egress
This test topology corresponds to unidirectional traffic only with
traffic analysis performed on the input and/or output interface.
Testing with Bidirectional traffic is discussed in Appendix B.
Generally, the placement of Observation Points depends upon the
position of the DUT in the deployed network and the purpose of Flow
monitoring. See [RFC3917] for detailed discussion. The measurement
procedures are otherwise the same for all these possible
configurations.
When both ingress and egress Flow monitoring is enabled on one DUT,
the results analysis needs to take into account that each Flow will
be represented in the DUT Cache by two Flow Records (one for each
direction). The Flow Export will therefore also contain those two
Flow Records.
If more than one Observation Point for one direction is defined on
the DUT the traffic passing through each of the Observation Points
MUST be configured in such a way that it creates Flows and Flow
Records which do not overlap. Each packet (or set of packets if
measuring with more than one packet per Flow - see section 6.3.1)
sent to the DUT on different ports still creates one unique Flow
Record.
The specific Observation Points and associated monitoring direction
MUST be included as part of the measurement report.
4.3.2 Metering Process
The Metering Process MUST be enabled in order to create the Cache in
the DUT and configure the Cache related parameters.
The Cache Size available to the DUT MUST be known and taken into
account when designing the measurement as specified in section 5.
Typically Cache Size will be present in the "show" commands of the
Flow monitoring process, in the actual configuration or in the
product documentation from the DUT vendor. The Cache Size MUST have
a fixed value for the entire duration of the measurement. This
method is not applicable to benchmarking any Flow monitoring
applications which dynamically change their Cache Size.
The configuration of the Metering Process MUST be included as part
of the measurement report. For example, when a Flow monitoring
implementation uses timeouts to expire entries from the Cache, the
Cache's Idle and Active Timeouts MUST be known and taken into
account when designing the measurement as specified in section 5.
If the Flow monitoring implementation allows only timeouts equal to
zero (i.e. immediate timeout or a non-existent Cache) then the
measurement conditions in section 5 are fulfilled inherently
without any additional configuration. The DUT simply exports
information about every packet immediately, subject to the Flow
Export Rate definition in section 2.2.5.
If the Flow monitoring implementation allows configuration of
multiple Metering Processes on a single DUT, the exact configuration
of each process MUST be included in the measurement report. Only
measurements with the same number of Metering Processes can be
compared.
The Cache Size, the Idle and Active Timeouts MUST be included in
the measurement report.
4.3.3 Exporting Process
The Exporting Process MUST be configured in order to export the Flow
Record data to the Collector.
The Exporting Process MUST be configured in such a way that all Flow
Records from all configured Observation Points are exported towards
the Collector, following the expiration policy defined by the Idle
and Active Timeouts and the Cache Size.
The Exporting Process SHOULD be configured with IPFIX [RFC5101] as
the protocol to use to format the Flow Export data. If the Flow
monitoring implementation does not support IPFIX, proprietary
protocols MAY be used. Only measurements with the same export
protocol SHOULD be compared since the protocols may differ in their
export efficiency. The export efficiency might also be influenced by
the Template Record used and the ordering of the individual export
fields within the template. The Template Records used by the tested
implementations SHOULD be analyzed and documented as part of the
measurement report. Ideally only tests with the same Template Records
should be compared.
Various Flow monitoring implementations might use different default
values regarding the export of Control Information [RFC5470] and
therefore Flow Export corresponding to Control Information SHOULD
be analyzed and reported as a separate item on the measurement
report. The export of Control Information SHOULD always be
configured consistently across all testing and configured to the
minimal possible value. Ideally just one set of Control Information
should be exported during each measurement. Note that Control
Information includes options and Template Records [RFC5470].
Section 10 of [RFC5101] and section 8.1 of [RFC5470] discuss the
possibility of deploying various transport layer protocols to deliver
Flow Export data from the DUT to the Collector. The selected protocol
MUST be included in the measurement report. Only benchmarks with the
same transport layer protocol SHOULD be compared. If the Flow
monitoring implementation allows the use of multiple transport
layer protocols, each of the protocols SHOULD be measured in a
separate measurement run and the results reported independently in
the measurement report.
If a reliable transport protocol is used for the transmission of
the Flow Export data from the DUT, the configuration of the
transport session MUST allow for non-blocking data transmission.
Examples of parameters to look at are the TCP window size and
maximum segment size (MSS). The most significant transport layer
parameters should be included in the measurement report.
4.3.4 Flow Records
A Flow Record contains information about a specific Flow that was
observed at an Observation Point. A Flow Record contains measured
properties of the Flow (e.g., the total number of bytes for all the
Flow packets) and usually characteristic properties of the Flow
(e.g., source IP address).
The Flow Record definition is implementation specific. A Flow
monitoring implementation might allow for only a fixed Flow Record
definition, based on the most common IP parameters in the IPv4 or
IPv6 headers - for example source and destination IP addresses, IP
protocol numbers or transport level port numbers. Another
implementation might allow the user to define their own arbitrary
Flow Record to monitor the traffic. The only requirement for the
measurements defined in this document is the ability to create a
large number of Cache entries in the Cache. The Flow Keys needed to
achieve that will typically be source and destination IP addresses
and transport level port numbers.
The recommended full IPv4, IPv6 or MPLS Flow Record is shown
below. Where IP address is indicated, it means either IPv4 or IPv6
depending on the traffic type being tested. The Flow Record
configuration is Flow monitoring implementation specific and the
examples below therefore cannot provide an exact specification
of individual entries in each Flow Record. The best key/field set
to use is left to the test designer using the capabilities of the
specific Flow monitoring implementation.
Flow Keys:
Source IP address
Destination IP address
MPLS label (for MPLS traffic type only)
Transport layer source port
Transport layer destination port
IP protocol number (IPv6 next header)
IP type of service (IPv6 traffic class)
Other fields:
Packet counter
Byte counter
Table 1: Recommended Configuration
If the Flow monitoring implementation allows for user-defined Flow
Records, minimal Flow Record configurations allowing large numbers of
Cache entries are, for example:
Flow Keys:
Source IP address
Destination IP address
Other fields:
Packet counter
or:
Flow Keys:
Transport layer source port
Transport layer destination port
Other fields:
Packet counter
Table 2: User-defined Configuration
The Flow Record configuration MUST be clearly noted in the
measurement report. The Flow Monitoring Throughput measurements on
different DUTs or different Flow monitoring implementations MUST be
compared only for exactly the same Flow Record configuration.
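The following non-normative Python fragment illustrates how the
recommended Flow Keys of Table 1 map a packet onto a Cache entry; the
packet representation and its field names are hypothetical.

   # Non-normative sketch: build the Cache lookup key from the
   # recommended Flow Keys of Table 1.  'pkt' is a hypothetical
   # dictionary holding the parsed packet header fields.
   def flow_key(pkt, mpls=False):
       key = (pkt["src_ip"], pkt["dst_ip"],
              pkt["src_port"], pkt["dst_port"],
              pkt["protocol"], pkt["tos"])
       if mpls:
           key += (pkt["mpls_label"],)   # for MPLS traffic type only
       return key

   def account(cache, pkt):
       # One Cache entry per unique Flow Key set; the counters are the
       # non-key ("Other") fields of the Flow Record.
       entry = cache.setdefault(flow_key(pkt), {"packets": 0, "bytes": 0})
       entry["packets"] += 1
       entry["bytes"] += pkt["length"]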
4.3.5 Flow Monitoring With Multiple Configurations
The Flow monitoring architecture as specified in [RFC5470] allows for
more complicated configurations with multiple Metering and Exporting
Processes on a single DUT. Depending on the particular Flow
monitoring implementation, this might affect the measured DUT
performance. The measurement report should therefore contain
information about how many Metering and Exporting processes were
configured on the DUT for the selected Observation Points.
The examples of such possible configurations are:
a. Several Observation Points with a single Metering Process and a
single Exporting Process
b. Several Observation Points, each with one Metering Process but
all using just one instance of Exporting Process
c. Several Observation Points with per Observation Point Metering
Process and Exporting Process
4.3.6 MPLS Measurement Specifics
The Flow Record configuration for measurements with MPLS encapsulated
traffic SHOULD contain the MPLS label. For this document's purposes,
"MPLS Label" is the entire 4 byte MPLS header. Typically the label of
the interest will be at the top of the label stack, but this depends
on the details of the MPLS test set-up.
The tester SHOULD ensure that the data received by the Collector
contains the expected MPLS labels.
The MPLS forwarding performance document [RFC5695] specifies a number
of possible MPLS label operations to test. The Observation Points
MUST be placed on all the DUT test interfaces where the particular
MPLS label operation takes place. The performance measurements SHOULD
be performed with only one MPLS label operation at a time.
The DUT MUST be configured in such a way that all the traffic is
subject to the measured MPLS label operation.
4.4 Collector
The Collector is needed in order to capture the Flow Export data
which allows the Flow Monitoring Throughput to be measured.
The Collector can be used exclusively as a capture device providing
just a hexadecimal dump of the Flow Export data. In such a case it
does not need to have any additional Flow Export decoding
capabilities, and all the decoding is done offline.
However if the Collector is also used to decode the Flow Export data
then it SHOULD support IPFIX [RFC5101] for meaningful results
analysis. If a proprietary Flow Export protocol is deployed, the
Collector MUST support it; otherwise the Flow Export data analysis is
not possible.
The Collector MUST be capable of capturing the export packets sent
from the DUT at the full rate without losing any of them. In the
case of the use of reliable transport protocols (see also section
4.3.3) to transmit Flow Export data, the Collector MUST have
sufficient resources to guarantee non-blocking data transmission on
the transport layer session.
During the analysis, the Flow Export data needs to be decoded and the
received Flow Records counted.
The capture buffer MUST be cleared at the beginning of each
measurement.
4.5 Sampling
Packet sampling and flow sampling are out of scope of this document.
This document applies to situations without packet, flow, or export
sampling.
4.6 Frame Formats
Flow monitoring itself is not dependent in any way on the media used
on the input and output ports. Any media can be used as supported by
the DUT and the test equipment. This applies both to data forwarding
interfaces and to the export interface (see Figure 2).
At the time of writing the most common transmission media and
corresponding frame formats (Ethernet, Packet over SONET) for IPv4,
IPv6 and MPLS traffic are specified within [RFC2544], [RFC5180] and
[RFC5695].
The frame formats used MUST be recorded in the measurement
report.
4.7 Frame Sizes
Frame sizes of the traffic to be analyzed by the DUT are specified in
[RFC2544] section 9 for Ethernet type interfaces (64, 128, 256, 1024,
1280, 1518 bytes) and in [RFC5180] section 5 for Packet over SONET
interfaces (47, 64, 128, 256, 1024, 1280, 1518, 2048, 4096 bytes).
When measuring with large frame sizes, care needs to be taken to
avoid any packet fragmentation on the DUT interfaces which could
negatively affect measured performance values.
The frame sizes used MUST be recorded in the measurement report.
4.8 Flow Export Data Packet Sizes
The Flow monitoring performance will be affected by the packet size
the particular implementation uses to transmit Flow Export data to
the Collector. The packet size used MUST be part of the measurement
report, and only measurements with the same packet sizes SHOULD be
compared.
The DUT export interface (see figure 2) maximum transmission unit
(MTU) SHOULD be configured to the largest available value for the
media. The Flow Export MTU MUST be recorded in the measurement
report.
4.9 Illustrative Test Set-up Examples
The examples below represent a hypothetical test set-up to clarify
the use of Flow monitoring parameters and configuration, together
with traffic parameters to test Flow monitoring. The actual
benchmarking specifications are in sections 5 and 6.
4.9.1 Example 1 - Idle Timeout Flow Expiration
The traffic generator sends 1000 packets per second in 10000 defined
streams, each stream identified by a unique destination IP address.
Therefore each stream has a packet rate of 0.1 packets per second.
The packets are sent in a round robin fashion (stream 1 to 10000)
while incrementing the destination IP address for each sent packet.
After a packet for stream 10000 is sent, the next packet destination
IP address corresponds to stream 1's address again.
The configured Cache Size is 20000 Flow Records. The configured
Active Timeout is 100 seconds, the Idle Timeout is 5 seconds.
Flow monitoring on the DUT uses the destination IP address as the
Flow Key.
A packet with destination IP address equal to A is sent every 10
seconds, so the Cache entry would be refreshed in the Cache every 10
seconds. However, the Idle Timeout is 5 seconds, so the Cache
entries will expire from the Cache due to the Idle Timeout and
when a new packet is sent with the same IP address A it will create a
new entry in the Cache. This behavior depends upon the design and
efficiency of the cache ager, and incidences of multi-packet flows
observed during this test should be noted.
The measured Flow Export Rate in this case will be 1000 Flow
Records per second since every single sent packet will always
create a new Cache entry and 1000 packets per second are sent.
The expected number of Cache entries in the Cache during the whole
measurement is around 5000. This corresponds to the Idle Timeout
being 5 seconds: during those five seconds, 5000 entries are
created. This expectation might change in real measurement set-ups
with large Cache Sizes and high packet rate where the DUT's actual
export rate might be limited and lower than the Flow Expiration
activity caused by the traffic offered to the DUT. This behavior is
entirely implementation specific.
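The expectations of this example follow from a simple back-of-the-
envelope calculation, reproduced here as a non-normative Python sketch
using the numbers above.

   # Example 1 arithmetic (non-normative).
   packet_rate  = 1000                 # packets per second
   streams      = 10000                # unique destination IP addresses
   idle_timeout = 5                    # seconds
   per_stream_rate = packet_rate / streams       # 0.1 packets per second
   # Each stream repeats only every 10 s, longer than the Idle Timeout,
   # so every packet creates a new Cache entry which later expires on
   # the Idle Timeout.
   flow_export_rate = packet_rate                # 1000 Flow Records/s
   expected_cache_entries = packet_rate * idle_timeout   # about 5000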
4.9.2 Example 2 - Active Timeout Flow Expiration
The traffic generator sends 1000 packets per second in 100 defined
streams, each stream identified by a unique destination IP address.
Each stream has a packet rate of 10 packets per second. The packets
are sent in a round robin fashion (stream 1 to 100) while
incrementing the destination IP address for each sent packet. After
a packet for stream 100 is sent, the next packet destination IP
address corresponds to stream 1's address again.
The configured Cache Size is 1000 Flow Records. The configured
Active Timeout is 100 seconds. The Idle Timeout is 10 seconds.
Flow monitoring on the DUT uses the destination IP address as the
Flow Key.
After the first 100 packets are sent, 100 Cache entries will have
been created in the Flow monitoring Cache. The subsequent packets
will be counted against the already created Cache entries since the
destination IP address (Flow Key) has already been seen by the DUT
(provided the Cache entries did not expire yet as described below).
A packet with destination IP address equal to A is sent every 0.1
second, so the Cache entry is refreshed in the Cache every 0.1
second, while the Idle Timeout is 10 seconds. In this case the
Cache entries will not expire until the Active Timeout, i.e. they
will expire every 100 seconds and then the Cache entries will be
created again.
If the test measurement time is 50 seconds from the start of the
traffic generator then the measured Flow Export Rate is 0 since
during this period nothing expired from the Cache.
If the test measurement time is 100 seconds from the start of the
traffic generator then the measured Flow Export Rate is 1 Flow Record
per second.
If the test measurement time is 290 seconds from the start of the
traffic generator then the measured Flow Export Rate is 2/3 of a
Flow Record per second, since during the 290 second period the Cache
expired the same number of Flows (100) twice.
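The three cases above can be reproduced with the following
non-normative calculation, using the figures of this example.

   # Example 2 arithmetic (non-normative).
   streams        = 100                # unique destination IP addresses
   active_timeout = 100                # seconds
   # Cache entries are refreshed every 0.1 s, so only the Active
   # Timeout expires them: 100 Flow Records every 100 seconds.
   def measured_rate(measurement_time):
       expirations = measurement_time // active_timeout
       return streams * expirations / measurement_time

   print(measured_rate(50))    # 0.0   - nothing has expired yet
   print(measured_rate(100))   # 1.0   - 100 Flow Records in 100 seconds
   print(measured_rate(290))   # ~0.69 - roughly 2/3 Flow Record/second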
5. Flow Monitoring Throughput Measurement Methodology
Objective:
To measure the Flow monitoring performance in a manner comparable
between different Flow monitoring implementations.
Metric definition:
Flow Monitoring Throughput - see section 3.
Discussion:
Different Flow monitoring implementations might choose to handle
Flow Export from a partially empty Cache differently than in the
case when the Cache is fully occupied. Similarly, software-based
and hardware-based DUTs can handle the same situation differently.
The purpose of the benchmark measurement in this section is to
abstract from all the possible behaviors and define one measurement
procedure covering all the possibilities. The only criterion is to
measure as defined here until Flow Record or packet
losses are seen. The decision whether to dive deeper into the
conditions under which the packet losses happen is left to the
tester.
5.1 Flow Monitoring Configuration
Cache Size
Cache Size configuration is dictated by the expected position of
the DUT in the network and by the chosen Flow Keys of the Flow
Record. The number of unique Flow Key sets that the traffic
generator (sender) provides should be multiple times larger than
the Cache Size. This ensures that the existing Cache entries are
never updated by a packet from the sender before the particular
Flow Expiration and Flow Export. This condition is simple to
fulfill with linearly incremented Flow Keys (for example IP
addresses or transport layer ports) where the range of values
must be larger than the Cache Size. When randomized traffic
generation is in use, the generator must ensure that the same Flow
Keys are not repeated within the range of randomly generated values.
The Cache Size MUST be known in order to define the measurement
circumstances properly. Typically Cache Size will be found using
the "show" commands of the Flow monitoring implementation, in the
actual configuration, or in the product documentation from the
vendor.
Idle Timeout
Idle Timeout is set (if configurable) to the minimum possible
value on the DUT. This ensures that the Cache entries are expired
as soon as possible and exported out of the DUT Cache. It MUST be
known in order to define the measurement circumstances completely
and equally across implementations.
Active Timeout
Active Timeout is set (if configurable) to a value equal to or
higher than the Idle Timeout. It MUST be known in order to
define the measurement circumstances completely and equally
across implementations.
Flow Keys Definition:
The test needs large numbers of unique Cache entries to be created
by incrementing values of one or several Flow Keys. The number of
unique combinations of Flow Keys values SHOULD be several times
larger than the DUT Cache Size. This makes sure that any incoming
packet will never refresh any already existing Cache entry.
The availability of Cache Size, Idle Timeout, Active Timeout as
configuration parameters is implementation specific. If the Flow
monitoring implementation does not support these parameters, the test
possibilities as specified by this document are restricted. Some
testing might be viable if the implementation follows the
[IPFIX-CONFIG] document; this needs to be considered on a
case-by-case basis.
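As a non-normative aid, the configuration conditions of this section
can be expressed in a few lines of Python; the parameter names and the
factor of 4 (for "multiple times larger") are illustrative assumptions.

   # Non-normative check of the section 5.1 conditions.
   def check_configuration(unique_flow_key_sets, cache_size,
                           idle_timeout, active_timeout, factor=4):
       assert unique_flow_key_sets >= factor * cache_size, \
           "offer several times more unique Flow Key sets than Cache Size"
       assert active_timeout >= idle_timeout, \
           "Active Timeout must be equal to or higher than Idle Timeout"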
5.2 Traffic Configuration
Traffic Generation
The traffic generator needs to increment the Flow Key values with
each sent packet. This way each packet represents one Cache entry
in the DUT Cache.
A particular Flow monitoring implementation might choose to deploy
a hashing mechanism to match incoming data packets to a certain Flow.
In such a case the combination of how the traffic is constructed
and the hashing might influence the DUT Flow monitoring
performance. For example, if IP addresses are used as Flow Keys,
there could be a performance difference for linearly
incremented addresses (in ascending or descending order) as opposed
to IP addresses randomized within a certain range. If randomized IP
address sequences are used, then the traffic generator needs to be
able to reproduce the randomization (i.e. the same set of IP addresses
sent in the same order in different test runs) in order to compare
various DUTs and Flow monitoring implementations.
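A minimal, non-normative sketch of a reproducible randomized Flow Key
sequence is shown below. The fixed seed is the point of the example;
the address range and the seed value are arbitrary assumptions.

   import ipaddress
   import random

   # Non-normative sketch: generate a reproducible, randomized
   # destination address sequence so that different test runs (and
   # different DUTs) see exactly the same packets in the same order.
   def destination_addresses(count, base="10.0.0.0", seed=42):
       rng = random.Random(seed)       # fixed seed => reproducible
       base_int = int(ipaddress.IPv4Address(base))
       offsets = list(range(count))    # one unique address per Flow
       rng.shuffle(offsets)            # randomized, never repeated
       return [str(ipaddress.IPv4Address(base_int + o)) for o in offsets]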
If the test traffic rate is below the maximum media rate for
the particular packet size, the traffic generator MUST send the
packets at equidistant time intervals. Traffic generators which do
not fulfill this condition MUST NOT and cannot be used for the
Flow Monitoring Throughput measurement. An example of this behavior
is a test traffic rate of one half of the media rate which the
traffic generator achieves by sending at the full media rate for
the first half of each second and then sending nothing for the
second half of the second. In such conditions it would be impossible
to distinguish whether the DUT failed to handle the Flows due to an
input buffer shortage during the burst or due to the limits of the
Flow Monitoring performance.
Measurement Duration
The measurement duration (i.e. how long the test traffic is sent
to the DUT) MUST be at least two times longer than the Idle
Timeout; otherwise no Flow Export would be seen. The measurement
duration SHOULD guarantee that the number of Cache entries created
during the measurement exceeds the available Cache Size.
5.3 Cache Population
The product of the Idle Timeout and the packet rate offered to the
DUT during one measurement determines the total number of Cache
entries in the DUT Cache during the measurement (the cache
population), allowing for some margin for dynamic behavior during
high DUT loads when processing the Flows.
The Flow monitoring implementation might behave differently depending
on the relation of cache population to the available Cache Size
during the measurement. This behavior is fully implementation
specific and will also be influenced by whether the DUT has a
software-based or hardware-based architecture.
The cache population (whether it is lower or higher than the
available Cache Size) during a particular benchmark measurement
SHOULD be noted, and only measurements with the same cache population
SHOULD be compared.
5.4 Measurement Time Interval
The measurement time interval is the time value which is used to
calculate the measured Flow Export Rate from the captured Flow Export
data. It is obtained as specified below.
[RFC2544] specifies, to the precision of the packet beginning and
end, the time intervals to be used to measure the DUT time
characteristics. In the case of a Flow Monitoring Throughput
measurement the start and stop time needs to be clearly defined but
the granularity of this definition can be limited to just marking the
start and stop time with the start and stop of the traffic generator.
This assumes that the traffic generator and DUT are collocated and
the variance in transmission delay from the generator to the DUT is
negligible as compared to the total time of traffic generation.
The measurement start time: the time when the traffic generator is
started
The measurement stop time: the time when the traffic generator is
stopped
The measurement time interval is then calculated as the difference
(stop time) - (start time) - (Idle Timeout).
This supposes that the Cache Size is large enough so that the time to
fill it up with Cache entries is longer than Idle Timeout.
Otherwise the time to fill up the Cache needs to be used in place
of the Idle Timeout when calculating the measurement time interval.
Instead of measuring the absolute values of stop and start time it is
possible to setup the traffic generator to send traffic for a certain
pre-defined time interval which is then used in the above definition
instead of the difference (stop time) - (start time).
The Collector MUST stop collecting the Flow Export data at the
measurement stop time.
The Idle Timeout (or the time needed to fill up the Cache) causes
the Flow Export data to lag behind the test traffic which is
analyzed by the DUT. For example, if the traffic starts at time X,
Flow Export will start only at time X + Idle Timeout (or X +
time to fill up the Cache). Since Flow Export capture needs to stop
with the traffic (because that is when the DUT stops processing the
Flows at the given rate), the time interval during which the DUT kept
exporting data is shorter by the Idle Timeout than the time
interval when the test traffic was sent from the traffic generator to
the DUT.
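The calculation above can be summarized by the following non-normative
helper (all time values in seconds; the time needed to fill the Cache
is a tester-supplied estimate and not defined by this document).

   # Non-normative sketch of the section 5.4 calculation.
   def measurement_time_interval(start_time, stop_time,
                                 idle_timeout, time_to_fill_cache=None):
       # Use the time needed to fill the Cache instead of the Idle
       # Timeout when filling the Cache takes longer (see above).
       offset = idle_timeout
       if time_to_fill_cache is not None and time_to_fill_cache > idle_timeout:
           offset = time_to_fill_cache
       return (stop_time - start_time) - offset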
5.5 Flow Export Rate Measurement
The Flow Export Rate needs to be measured in two consecutive steps.
The purpose of the first step (point a. below) is to gain the actual
value for the rate, the second step (point b. below) needs to be done
in order to verify Flow Record drops during the measurement:
a. In the first step the captured Flow Export data MUST be analyzed
only for the capturing interval (measurement time interval) as
specified in section 5.4. During this period the DUT is forced to
process Cache entries at the rate the packets are sent. When
traffic generation finishes, the behavior when emptying the Cache
is completely implementation specific and the Flow Export data
from this period cannot therefore be used for the benchmarking.
b. In the second step all the Flow Export data from the DUT MUST be
captured in order to be able to determine the Flow Record
losses. It needs to be taken into account that, especially when
large Cache Sizes (on the order of hundreds of thousands
of entries and higher) are in use, the Flow Export can take many
multiples of the Idle Timeout to empty the Cache after the
measurement. This behavior is completely implementation specific.
If the Collector has the capability to redirect the Flow Export data
after the measurement time interval into a different capture buffer
(or to timestamp the Flow Export data received after it), this can be
done in one step. Otherwise each Flow Monitoring Throughput
measurement at a certain packet rate needs to be executed twice: once
to capture the Flow Export data just for the measurement time
interval (to determine the actual Flow Export Rate) and a second time
to capture all Flow Export data in order to determine Flow Record
losses at that packet rate.
At the end of the measurement time interval the DUT might still be
processing Cache entries which belong to the Flows expired from the
Cache before the end of the interval. These Flow Records might
appear in an export packet sent only after the end of the
measurement interval. This imprecision can be mitigated by using
large numbers of Flow Records during the measurement (so that the
few Flow Records in one export packet can be ignored) or by use of
timestamps exported with the Flow Records.
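A compact, non-normative illustration of the two steps is given below.
The record fields and helper name are hypothetical, and the loss check
assumes the tester knows how many Flows the traffic generator offered.

   # Non-normative sketch of the two-step procedure of section 5.5.
   def flow_export_rate_and_losses(all_records, start_time, stop_time,
                                   offered_flows):
       # Step a: rate derived from the Flow Records captured within the
       # measurement time interval only (see section 5.4).
       in_interval = [r for r in all_records
                      if start_time <= r["captured_at"] <= stop_time]
       rate = len(in_interval) / (stop_time - start_time)
       # Step b: losses derived from the complete capture, which may
       # extend many multiples of the Idle Timeout beyond the interval.
       lost = offered_flows - len(all_records)
       return rate, lost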
5.6 The Measurement Procedure
The measurement procedure is the same as the Throughput measurement in
section 26.1 of [RFC2544] for the traffic sending side. The DUT
output analysis is done on the traffic generator receiving side for
the test traffic the same way as for RFC2544 measurements.
An additional analysis is performed using data captured by the
Collector. The purpose of this analysis is to establish the value of
the Flow Export Rate during the current measurement step and to verify
that no Flow Records were dropped during the measurement. The
procedure to measure Flow Export Rate is described in section 5.5.
The Flow Export performance can be significantly affected by the way
the Flow monitoring implementation formats the Flow Records into the
Flow Export packets. The ordering and frequency of Control Information
export, and mainly the number of Flow Records in one Flow Export
packet, are of interest. The worst case scenario here is just one Flow
Record
in every Flow Export packet.
Flow Export data should be sanity checked during the benchmark
measurement (a non-normative sketch follows this list) for:
a. the number of Flow Records per packet, by simply calculating the
ratio of exported Flow Records to the number of Flow Export
packets captured during the measurement (which should be available
as a counter on the Collector capture buffer)
b. the number of Flow Records corresponding to the export of Control
Information per Flow Export packet (calculated as the ratio of the
total number of such Flow Records in the Flow Export data to the
number of Flow Export packets).
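Both sanity checks are simple ratios; a non-normative sketch follows,
with hypothetical counter names.

   # Non-normative sketch of the sanity checks in section 5.6.
   def sanity_ratios(flow_records, control_records, export_packets):
       # a. average number of Flow Records per Flow Export packet
       records_per_packet = flow_records / export_packets
       # b. average number of Control Information records per packet
       control_per_packet = control_records / export_packets
       return records_per_packet, control_per_packet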
6. RFC2544 Measurements
RFC2544 measurements can be performed under two Flow Monitoring set-
ups (see also section 3.4.2). This section details both of them and
specifies ways to construct the test traffic so that RFC2544
measurements can be performed in a controlled environment from the
Flow monitoring point of view. A controlled Flow monitoring
environment means that the tester always knows what Flow monitoring
activity (Flow Export Rate) the traffic offered to the DUT causes.
This section is applicable mainly for the RFC2544 throughput (RFC2544
section 26.1) and latency (RFC2544 section 26.2) measurements. It
could also be used to measure frame loss rate (RFC2544 section 26.3)
and back-to-back frames (RFC2544 section 26.4). It is not relevant
for the rest of RFC2544 network interconnect devices characteristics.
Objective:
Provide RFC2544 network device characteristics in the presence of
Flow monitoring on the DUT. RFC2544 studies numerous
characteristics of network devices. The DUT forwarding and time
characteristics measured without Flow monitoring can vary
significantly once Flow monitoring is deployed on the network
device.
Metric definition:
Metric as specified in [RFC2544].
The measured RFC2544 Throughput MUST NOT include the packet rate
corresponding to the Flow Export data, because it is control type
traffic. It is generated by the DUT as a result of enabling Flow
monitoring and does not contribute to the test traffic which the DUT
can handle. Flow Export requires DUT resources to be generated and
transmitted and therefore the RFC2544 Throughput in most cases will
be much lower when Flow monitoring is enabled on the DUT than without
it.
6.1 Flow Monitoring Configuration
Flow monitoring configuration (as detailed in section 4.3) needs
to be applied the same way as discussed in section 5 with the
exception of the Active Timeout configuration.
The Active Timeout SHOULD be configured to exceed the measurement
time interval (see section 5.4) several times over. This ensures
that, if measurements with two traffic components are performed (see
section 6.3.2), there is no Flow monitoring activity related to the
second traffic component.
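A simple way to derive such an Active Timeout is sketched below; the
safety factor of four is an arbitrary illustrative choice, not a
recommendation of this document:
   # Derive an Active Timeout several times longer than the
   # measurement interval (factor of 4 is illustrative only).
   def active_timeout_seconds(measurement_interval_s, factor=4):
       return factor * measurement_interval_s
   # Example: a 120 s measurement interval gives a 480 s Active
   # Timeout, so no active expiry occurs during the trial.
   print(active_timeout_seconds(120))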
The Flow monitoring configuration does not change in any other way
for the measurements performed in this section. What changes, and
makes the difference, is the traffic configuration as specified in
the sections below.
6.2 Measurements with the Flow Monitoring Throughput Set-up
The major requirement for a measurement with the Flow Monitoring
Throughput set-up is that the traffic and Flow monitoring are
configured in such a way that each sent packet creates one entry in
the DUT Cache. This restricts the possible set-ups to the
measurement with two traffic components as specified in section
6.3.2.
6.3 Measurements With Fixed Flow Export Rate
This section covers the measurements where the RFC2544 metrics need
to be measured with Flow monitoring enabled, but at a certain Flow
Export Rate lower than the Flow Monitoring Throughput.
The tester has both of the options specified in sections 6.3.1 and
6.3.2.
6.3.1 Measurements With Single Traffic Component
Section 12 of [RFC2544] discusses the use of protocol source and
destination addresses for the defined measurements. To perform all the
RFC2544-type measurements with Flow monitoring enabled, the defined
Flow Keys SHOULD contain the IP source and destination addresses. The
RFC2544-type measurements with Flow monitoring enabled can then be
executed under these additional conditions:
a. the test traffic is not limited to a single unique pair of source
and destination addresses
b. the traffic generator defines the test traffic as follows:
allow for a parameter to send N (where N is an integer number
starting at 1 and incremented in small steps) packets with source
IP address A and destination IP address B before changing both IP
addresses to the next value (see the sketch at the end of this
section)
This test traffic definition allows execution of the Flow monitoring
measurements with a fixed Flow Export Rate while measuring the DUT
RFC2544 characteristics. This set-up is the better option since it
best simulates the live network traffic scenario with Flows
containing more than just one packet.
The initial packet rate at N equal to 1 defines the Flow Export Rate
for the whole measurement procedure. Subsequent increases of N will
not change the Flow Export Rate as the time and Cache
characteristics of the test traffic stay the same. This set-up is
suitable for measurements with Flow Export Rates below the Flow
Monitoring Throughput.
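The traffic definition in item b. above can be sketched as the
following generator loop; the parameter names are hypothetical and
real traffic generators expose this capability in different ways:
   # Test traffic for section 6.3.1: send N packets per unique
   # source/destination address pair before moving to the next pair.
   # With Flow Keys containing the IP source and destination address,
   # every address pair creates exactly one Cache entry regardless of N.
   from ipaddress import IPv4Address
   def traffic_stream(base_src, base_dst, pairs, n_packets_per_pair):
       for i in range(pairs):
           src = IPv4Address(base_src) + i
           dst = IPv4Address(base_dst) + i
           for _ in range(n_packets_per_pair):
               yield (str(src), str(dst))
   # With N = 1 the packet rate equals the Flow creation rate and thus
   # defines the Flow Export Rate; a larger N raises the packet rate
   # but leaves the Flow Export Rate unchanged.
   stream = traffic_stream("10.0.0.1", "192.0.2.1", pairs=1000,
                           n_packets_per_pair=3)
   print(sum(1 for _ in stream))   # 3000 packets, 1000 Flows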
6.3.2 Measurements With Two Traffic Components
The test traffic set-up in section 6.3.1 might be difficult to
achieve with commercial traffic generators, or the granularity of the
traffic rates as defined by the initial packet rate at N equal to 1
might not be suitable for the required measurement. An alternative
mechanism is to define two traffic components in the test traffic:
one to populate the Flow monitoring Cache and a second one to execute
the RFC2544 measurements.
a. Flow monitoring test traffic component - the exact traffic
definition as specified in section 5.2.
b. RFC2544 Test Traffic Component - test traffic as specified by
RFC2544, which MUST create just one entry in the DUT Cache. In the
particular set-up discussed here this means a traffic stream
with just one pair of unique source and destination IP addresses
(this restriction could be avoided if the Flow Keys were, for
example, the UDP/TCP source and destination ports and did not
contain the IP addresses).
The Flow monitoring traffic component will exercise the DUT in terms
of Flow activity while the second traffic component will measure the
RFC2544 characteristics.
The measured RFC2544 Throughput is the sum of the packet rates of
both traffic components. The definition of other RFC2544 metrics
remains unchanged.
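Assuming the packet rates of both components are known, the reported
Throughput figure combines them as follows (illustrative only):
   # Section 6.3.2: the measured RFC2544 Throughput is the sum of the
   # packet rates of the Flow monitoring component and the RFC2544
   # test component (both in packets per second).
   def rfc2544_throughput_pps(flow_component_pps, rfc2544_component_pps):
       return flow_component_pps + rfc2544_component_pps
   # Example: 18 000 pps creating Cache entries plus 500 000 pps of
   # single-Flow RFC2544 test traffic.
   print(rfc2544_throughput_pps(18_000, 500_000))   # 518 000 pps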
7. Flow Monitoring Accuracy
The pure Flow Monitoring Throughput measurement in section 5 provides
the capability to verify the Flow monitoring accuracy in terms of the
exported Flow Record data. Since every Cache entry is populated by
just one packet, the full set of data captured on the Collector can
be parsed (e.g. providing the values of all Flow Keys and other Flow
Record fields, not only the overall Flow Record count in the exported
data), and each set of parameters from each Flow Record can be checked
against the parameters configured on the traffic generator and set in
the packets sent to the DUT. The exported Flow Record is considered
accurate if:
a. all the Flow Record fields are present in each exported Flow
Record
b. all the Flow Record field values match the value ranges set by
the traffic generator (for example, an IP address falls within the
range of the IP address increments on the traffic generator)
c. all the possible Flow Record field values as defined at the
traffic generator have been found in the captured export data on
the Collector. This check needs to be offset against detected
packet losses at the DUT during the measurement
For a DUT with packet forwarding, the Flow monitoring accuracy also
involves data checks on the received traffic, as already discussed
in section 4.
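A minimal sketch of checks a. to c. is shown below. The decoded Flow
Records and the generator's configured value ranges are assumed to be
supplied by the test harness; the field names used in the example are
IPFIX Information Element names chosen purely for illustration:
   # Accuracy checks on the exported Flow Records (section 7).
   # 'records' is a list of decoded Flow Records (field name -> value);
   # 'expected' maps each field to the set of values configured on the
   # traffic generator.
   def check_accuracy(records, expected, lost_packets=0):
       seen = {field: set() for field in expected}
       for rec in records:
           # a. every Flow Record field must be present
           missing = set(expected) - set(rec)
           if missing:
               return False, "missing fields: %s" % sorted(missing)
           # b. every value must fall within the configured ranges
           for field, allowed in expected.items():
               value = rec[field]
               if value not in allowed:
                   return False, "unexpected %s = %r" % (field, value)
               seen[field].add(value)
       # c. all configured values should appear, offset against packet
       #    losses detected at the DUT during the measurement
       for field, allowed in expected.items():
           if len(allowed) - len(seen[field]) > lost_packets:
               return False, "values of %s never exported" % field
       return True, "accurate"
   # Example with two checked fields.
   expected = {"sourceIPv4Address": {"10.0.0.%d" % i for i in range(1, 4)},
               "protocolIdentifier": {17}}
   records = [{"sourceIPv4Address": "10.0.0.%d" % i,
               "protocolIdentifier": 17} for i in range(1, 4)]
   print(check_accuracy(records, expected))   # (True, 'accurate')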
8. Evaluating Flow Monitoring Applicability
The measurement results as discussed in this document and obtained
for certain DUTs allow for a preliminary analysis of a Flow
monitoring deployment based on traffic analysis data from the
provider's network.
An example of such traffic analysis in the Internet is provided by
[CAIDA] and the way it can be used is discussed below. The data
needed to estimate whether a certain network device can manage a
particular amount of live traffic with Flow monitoring enabled are:
Average packet size: 350 bytes
Number of packets per IP Flow: 20
Expected data rate on the network device: 1 Gbit/s
The value that needs to be derived is the average number of Flows
created per second in the network device:
                          Expected packet rate
     Flows per second  =  --------------------
                            Packets per Flow
Using the example values given above, the network device would be
required to process approximately 18 000 Flows per second. By
executing the benchmarking as specified in this document, a platform
capable of this processing can be selected for deployment in that
particular part of the user network.
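The arithmetic behind the figure, using the illustrative values
listed above, is:
   # Rough Flow activity estimate from average traffic figures.
   avg_packet_size_bytes = 350
   packets_per_flow = 20
   data_rate_bps = 1_000_000_000          # 1 Gbit/s
   packet_rate_pps = data_rate_bps / (avg_packet_size_bytes * 8)
   flows_per_second = packet_rate_pps / packets_per_flow
   print(round(packet_rate_pps))    # ~357 143 packets per second
   print(round(flows_per_second))   # ~17 857, i.e. roughly 18 000 Flows/s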
It needs to be kept in mind that the above is a very rough, averaged
Flow activity estimate which cannot account for traffic anomalies -
for example, a large number of DNS request packets, which are
typically small packets coming from many different sources and mostly
represent just one packet per Flow.
9. Acknowledgements
This work was made possible thanks to the patience and support of
the Cisco Systems NetFlow development team, namely Paul Aitken, Paul
Atkins and Andrew Johnson. Thanks go to Benoit Claise for numerous
detailed reviews and presentations of the document and to Aamer
Akhter for initiating this work. A special acknowledgment goes to
the whole working group, especially to the chair Al Morton for the
support and work on this draft, and to Paul Aitken for a very
detailed technical review.
10. Security Considerations
Documents of this type do not directly affect the security of
the Internet or corporate networks as long as benchmarking
is not performed on devices or systems connected to operating
networks.
Benchmarking activities as described in this memo are limited to
technology characterization using controlled stimuli in a laboratory
environment, with dedicated address space and the constraints
specified in sections above.
The benchmarking network topology will be an independent test setup
and MUST NOT be connected to devices that may forward the test
traffic into a production network, or misroute traffic to the test
management network.
Further, benchmarking is performed on a "black-box" basis, relying
solely on measurements observable external to the DUT.
Special capabilities SHOULD NOT exist in the DUT specifically for
benchmarking purposes. Any implications for network security arising
from the DUT SHOULD be identical in the lab and in production
networks.
11. IANA Considerations
This memo makes no requests of IANA.
12. References
12.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
          Requirement Levels", BCP 14, RFC 2119, March 1997
[RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
          Network Interconnect Devices", RFC 2544, March 1999
12.2. Informative References
[RFC1242] Bradner, S., "Benchmarking Terminology for Network
          Interconnection Devices", RFC 1242, July 1991
[RFC2285] Mandeville, R., "Benchmarking Terminology for LAN Switching
          Devices", RFC 2285, February 1998
[RFC3031] Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol
          Label Switching Architecture", RFC 3031, January 2001
[RFC3917] Quittek, J., "Requirements for IP Flow Information Export
          (IPFIX)", RFC 3917, October 2004
[RFC3954] Claise, B., "Cisco Systems NetFlow Services Export
          Version 9", RFC 3954, October 2004
[RFC5101] Claise, B., "Specification of the IP Flow Information
          Export (IPFIX) Protocol for the Exchange of IP Traffic
          Flow Information", RFC 5101, January 2008
[RFC5180] Popoviciu, C., Hamza, A., Dugatkin, D., and G. Van de
          Velde, "IPv6 Benchmarking Methodology for Network
          Interconnect Devices", RFC 5180, May 2008
[RFC5470] Sadasivan, G., Brownlee, N., Claise, B., and J. Quittek,
          "Architecture for IP Flow Information Export",
          RFC 5470, March 2009
[RFC5695] Akhter, A., "MPLS Forwarding Benchmarking Methodology",
          RFC 5695, November 2009
[CAIDA]   Claffy, K., "The nature of the beast: recent traffic
          measurements from an Internet backbone",
          http://www.caida.org/publications/papers/1998/Inet98/
          Inet98.html
[IPFIX-CONFIG] Muenz, G., et al., "Configuration Data Model for IPFIX
          and PSAMP", draft-ietf-ipfix-configuration-model-10
          (work in progress)
[PSAMP-MIB] Dietz, T., Claise, B., and J. Quittek, "Definitions of
          Managed Objects for Packet Sampling",
          draft-ietf-ipfix-psamp-mib-04 (work in progress),
          October 2011
[IPFIX-MIB] Dietz, T., Kobayashi, A., Claise, B., and G. Muenz,
          "Definitions of Managed Objects for IP Flow Information
          Export", draft-ietf-ipfix-rfc5815bis-03 (work in
          progress), April 2012
Author's Addresses
Jan Novak (editor)
Cisco Systems
Edinburgh,
United Kingdom
Email: janovak@cisco.com
Appendix A: (Informative) Recommended Report Format
Parameter Units
----------------------------------- ------------------------------------
Test Case test case name (section 5 and 6)
Test Topology Figure 2, other
Traffic Type IPv4, IPv6, MPLS, other
Test Results
Flow Monitoring Throughput Flow Records per second or Not
Applicable
Flow Export Rate Flow Records per second or Not
Applicable
Control Information Export Rate Flow Records per second
RFC2544 Throughput packets per second
(Other RFC2544 Metrics) (as appropriate)
General Parameters
DUT Interface Type Ethernet, POS, ATM, other
DUT Interface Bandwidth MegaBits per second
Traffic Specifications
Number of Traffic Components (see section 6.3.1 and 6.3.2)
For each traffic component:
Packet Size bytes
Traffic Packet Rate packets per second
Traffic Bit Rate MegaBits per second
Number of Packets Sent number of entries
Incremented Packet Header Fields list of fields
Number of Unique Header Values number of entries
Number of Packets per Flow number of entries
Traffic Generation linearly incremented or
randomized
Flow monitoring Specifications
Direction ingress, egress, both
Observation Points DUT interface names
Cache Size number of entries
Active Timeout seconds
Idle Timeout seconds
Flow Keys list of fields
Flow Record Fields total number of fields
Number of Flows Created number of entries
Flow Export Transport Protocol UDP, TCP, SCTP, other
Flow Export Protocol IPFIX, NetFlow, other
Flow Export data packet size bytes
Flow Export MTU bytes
MPLS Specifications (for traffic type MPLS only)
Tested Label Operation imposition, swap, disposition
The format of the report documented in this appendix is informative,
but its entries are required as specified in the corresponding
sections of this document.
Many of the configuration parameters required by the measurement
report can be retrieved from the [IPFIX-MIB] and [PSAMP-MIB] MIB
modules, from the [IPFIX-CONFIG] YANG module, or from other general
MIB modules. Querying those modules on the DUT would therefore be
beneficial: first, to help populate the required measurement report
entries, and also to document all the other configuration parameters
of the DUT.
Appendix B: (Informative) Miscellaneous Tests
This appendix lists tests which could be useful to assess proper
Flow monitoring operation under various operational or stress
conditions. These tests are not deemed suitable for benchmarking
for various reasons.
B.1 DUT Under Traffic Load
The Flow Monitoring Throughput should be measured under different
levels of static traffic load through the DUT. This can be achieved
only by using two traffic components as discussed in section 6.3.2.
One traffic component exercises the Flow Monitoring Plane. The second
traffic component loads only the Forwarding Plane without affecting
Flow monitoring (e.g. it creates just a certain amount of permanent
Cache entries).
The variance in Flow Monitoring Throughput as a function of the
traffic load should be noted for comparison between DUTs of similar
architecture and capability.
B.2 In-band Flow Export
The test topology in section 4.1 mandates the use of a separate Flow
Export interface to prevent the Flow Export data generated by the DUT
from mixing with the test traffic from the traffic generator. This is
necessary in order to create clear and reproducible test conditions
for the benchmark measurement.
A real network deployment of Flow monitoring might not allow for
such a luxury - for example in a geographically large network.
In such a case, Flow Export will use an ordinary traffic forwarding
interface, i.e. in-band Flow Export.
The Flow monitoring operation should be verified with in-band Flow
Export configuration while following these test steps:
a. Perform benchmark test as specified in section 5
b. One of the results will be how much bandwidth Flow Export used
on the dedicated Flow Export interface
c. Change Flow Export configuration to use the test interface
d. Repeat the benchmark test while the receiver filters out the
Flow Export data from analysis
The expected result is that the RFC2544 Throughput achieved in step
a. is the same as the Throughput achieved in step d., provided that
the bandwidth of the output DUT interface is not the bottleneck (in
other words, it must have enough capacity to forward both the test
and the Flow Export traffic).
B.3 Variable Packet Size
It would be interesting to repeat the Flow monitoring measurements
specified in this document with variable packet sizes within one
particular test (i.e. test traffic containing a mix of packet sizes).
The packet forwarding tests specified mainly in [RFC2544] neither
recommend nor perform such tests. Flow monitoring is not dependent
on packet sizes, so such a test could be performed during the Flow
Monitoring Throughput measurement to verify that its value does not
depend on the offered traffic packet sizes. The tests must be
carefully designed in order to avoid measurement errors due to
physical bandwidth limitations and changes of the base forwarding
performance with packet size.
B.4 Bursty Traffic
RFC2544 section 21 discusses and defines the use of bursty traffic.
It can be used for Flow monitoring testing as well, to gauge the
short-term overload capabilities of the DUT in terms of Flow
monitoring. The benchmark here would not be the Flow Export Rate the
DUT can sustain, but the absolute number of Flow Records the DUT can
process without dropping a single Flow Record. The traffic set-up to
be used for this test is as follows:
a. each sent packet creates a new Cache entry
b. the packet rate is set to the maximum transmission speed of the
DUT interface used for the test
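The maximum packet rate for item b. on an Ethernet interface can be
estimated as sketched below (the 20 bytes of per-frame preamble and
inter-frame gap are an Ethernet-specific assumption; other media
differ):
   # Estimate the burst packet rate at the interface line rate.
   def line_rate_pps(line_rate_bps, frame_size_bytes, per_frame_overhead=20):
       bits_per_frame = (frame_size_bytes + per_frame_overhead) * 8
       return line_rate_bps / bits_per_frame
   # Example: 64-byte frames on a 1 Gbit/s interface give roughly
   # 1.488 million packets per second; in this test every packet
   # creates a new Cache entry.
   print(round(line_rate_pps(1_000_000_000, 64)))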
B.5 Various Flow Monitoring Configurations
This section translates the terminology used in the IPFIX documents
[RFC5470], [RFC5101] and others into the terminology used in this
document. Section B.5.2 proposes another measurement which cannot be
verified in a black-box test manner.
B.5.1 RFC2544 Throughput without Metering Process
If no Metering Process is defined on the DUT, no Flow monitoring
Cache exists and no Flow analysis occurs. The performance measurement
of the DUT in such a case is just a pure [RFC2544] measurement.
B.5.2 RFC2544 Throughput with Metering Process
If only the Metering Process is enabled, Flow analysis on the DUT is
enabled and operational but no Flow Export happens. The performance
measurement of a DUT in such a configuration represents a useful test
of the DUT capabilities (this corresponds to the case
when the network operator uses Flow monitoring, for example, for
manual denial-of-service attack detection and does not wish to use
Flow Export).
The performance testing of this DUT can be performed as discussed in
this document, but it is not possible to verify the operation and
results without interrogating the DUT.
B.5.3 RFC2544 Throughput with Metering and Exporting Process
This test represents the performance testing as discussed in
section 6.
B.6 Tests With Bidirectional Traffic
Bidirectional traffic is not part of the normative benchmarking tests,
based on the discussion and recommendation of the Benchmarking working
group. Experienced participants stated that this kind of traffic
did not provide reproducible results.
The test topology in figure 2 can be expanded to verify Flow
monitoring functionality with bidirectional traffic by using the
interfaces in full duplex mode, i.e. sending and receiving
simultaneously on each of them.
The same rules should be applied for Flow creation in the DUT Cache
(as per sections 4.1 and 4.3.1) - traffic passing through each
Observation Point should always create a new Cache entry, i.e. the
same traffic should not just be looped back on the receiving
interfaces to create the bidirectional traffic flow.
B.7 Instantaneous Flow Export Rate
Additional useful information when analysing the Flow Export data
is the time distribution of the instantaneous Flow Export Rate. It
can be derived during the measurements in two ways:
a. The Collector might provide the capability to decode the Flow
Export data during capture, count the Flow Records at the same
time, and provide the instantaneous (or simply an average over a
shorter time interval than specified in section 5.4) Flow Export
Rate
b. The Flow Export protocol (like IPFIX [RFC5101]) can provide time
stamps in the Flow Export packets, which allow a time-based
analysis and a calculation of the Flow Export Rate as an average
over a much shorter time interval than specified in section 5.4
The accuracy and shortest time average will always be limited by the
precision of the time stamps (1 second for IPFIX) or by the
capabilities of the DUT and the Collector.
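A minimal sketch of option b., binning decoded Flow Records by their
export timestamps (assumed to have 1-second precision, as for IPFIX),
is shown below; the decoding itself is assumed to be done by the
Collector or an offline parser:
   # Instantaneous Flow Export Rate from export timestamps (B.7).
   # 'records' is an iterable of (export_time_seconds, record_count)
   # tuples, one per captured Flow Export packet.
   from collections import Counter
   def export_rate_per_second(records):
       per_second = Counter()
       for export_time, record_count in records:
           per_second[int(export_time)] += record_count
       return dict(sorted(per_second.items()))
   # Example: three export packets spread over two seconds.
   print(export_rate_per_second([(100, 30), (100, 25), (101, 28)]))
   # {100: 55, 101: 28}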