Network Working Group
INTERNET-DRAFT
Expires in: August 2006
Scott Poretsky
Reef Point Systems
Rajiv Papneja
Isocore
Shankar Rao
Qwest Communications
Jean-Louis Le Roux
France Telecom
February 2006
Benchmarking Methodology for MPLS Protection Mechanisms
<draft-poretsky-mpls-protection-meth-05.txt>
Intellectual Property Rights (IPR) statement:
By submitting this Internet-Draft, each author represents that any
applicable patent or other IPR claims of which he or she is aware
have been or will be disclosed, and any of which he or she becomes
aware will be disclosed, in accordance with Section 6 of BCP 79.
Status of this Memo
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as
Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.
The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.
Copyright Notice
Copyright (C) The Internet Society (2006).
ABSTRACT
This draft describes the methodology for benchmarking MPLS
protection mechanisms using the terminology defined in [TERMID].
An overview of existing MPLS protection terminology and
functionality is provided as background for the methodology. The
methodology can be applied to any MPLS protection mechanism, such
as Standby LSP, Fast Reroute Detour Mode, and Fast Reroute Bypass
Mode. The methodology can also be used to benchmark
LSP rerouting due to a Headend Reroute. The data plane is measured
to obtain the benchmarking metrics. Discussion is included to
explain the differences between MPLS Protection mechanisms, the
network events that cause failover, and the benefits of measuring
the data plane for black-box MPLS Protection benchmarking.
Measurements can be used to compare failover performance of
different Label-Switched Routers and evaluate the different MPLS
protection mechanisms.
Table of Contents
1. Introduction ...............................................3
2. Existing definitions........................................4
3. Test Considerations.........................................5
3.1 Failover Events.........................................5
3.2 Failure Detection.......................................5
3.3 Use of Data Traffic for MPLS Protection Benchmarking.......6
3.4 LSP and Route Scaling......................................6
3.5 Selection of IGP...........................................6
3.6 Reversion..................................................6
4. Test Setup..................................................7
4.1 DUT as Ingress.............................................7
4.2 DUT as Failover Node with Link Protection..................7
4.3 DUT as Failover Node with Node Protection..................7
4.4 DUT as Merge Node..........................................7
4.5 DUT as Egress..............................................8
4.6 DUT as Ingress and Failover Node with Link Protection......8
4.7 DUT as Ingress and Failover Node with Node Protection......8
4.8 DUT as Egress and Merge Node...............................8
5. Test Cases..................................................9
5.1 Node Protection............................................9
5.1.1 Ingress..................................................9
5.1.1.1 Local SONET Failure....................................9
5.1.1.2 Local Administrative Shutdown..........................10
5.1.1.3 Remote Failure.........................................11
5.1.2 Failover Node............................................12
5.1.2.1 Ingress Failover Node..................................12
5.1.2.2 Midpoint Failover Node.................................13
5.1.3 Merge Node...............................................14
5.1.4 Merge Node and Egress....................................15
5.2 Link Protection............................................16
5.2.1 Ingress Failover Node....................................16
5.2.2 Midpoint Failover Node...................................17
5.3 Fast Reroute Scalability...................................18
6. Security Considerations.....................................19
7. Acknowledgements............................................19
8. References..................................................19
9. Authors' Addresses.........................................20
1. Introduction
MPLS-TE offers three options for Protection Mechanisms to reroute
traffic from a Primary LSP to a Backup LSP: Standby LSP, Fast
Reroute (FRR) Detour Mode, and Fast Reroute (FRR) Bypass Mode. The
methodology can also be used to benchmark LSP rerouting due to a
Headend Reroute. The purpose of these mechanisms is to provide
protection against link and node failures. Each of the mechanisms
offers a distinct tradeoff between the amount of network
configuration required and the level of protection provided.
Headend Reroute is the default Protection Mechanism for MPLS-TE.
Headend Reroute establishes a backup LSP that is dynamically
signaled after a failure event has occurred. Headend Reroute has
the advantage that a backup is not utilizing resources prior to
failure. Its disadvantage is a long Failover Time [TERMID] that
produces high packet loss during the failure, which can be greater
than the loss incurred by IGP Convergence. The most basic topology
for dynamic
headend reroute is shown in Figure 1.
-------- -------- --------
|Ingress | |MidPoint| | Egress |
|Node |----| Node |----| Node |
-------- -------- --------
| |
| -------- |
---------|Backup |---------
|Midpoint|
--------
Figure 1. Topology for Headend Reroute
The Standby LSP is an extension to RSVP-TE signaling in which a
backup LSP is signaled in advance from primary Ingress to Egress.
Its advantage is that it is faster than Headend Reroute, but it
requires high resource utilization at the ingress to maintain
unused backup LSPs. The topology for Standby LSP is identical
to that for Dynamic Headend Reroute shown in Figure 1, but the
Backup Path is pre-established.
FRR is an extension to TE specified in [MPLS-FRR-EXT] to provide
a Protection Mechanism with fast failover characteristics to
minimize packet loss while reducing processing load at the tunnel
ingress. FRR provides link and node protection. The primary and
backup LSPs are signaled using an optional extension to the
Resource Reservation Protocol with TE extensions (RSVP-TE). With
FRR, midpoint LSRs along the primary path are responsible for
laying out the backup path in the downstream direction, toward
the egress. This provides local protection for link and node
failure and achieves a goal of less than 45 msec failover while
minimizing resource utilization at the ingress. FRR does require
greater network configuration and increases signaling complexity.
There are two FRR modes, Bypass and Detour, each with distinct
advantages for operational configuration. The topology for FRR
is shown in Figure 2.
-------- -------- -------- -------- --------
|Ingress | |Failover| |MidPoint| | Merge | | Egress |
| Node |----| Node |----| Node |----| Node |----| Node |
-------- -------- -------- -------- --------
| |
| -------- |
---------|Backup |---------
|Midpoint|
--------
Figure 2. Topology for Fast Reroute
This draft describes the methodology for benchmarking MPLS
protection mechanisms. The methodology can be applied to any MPLS
Protection mechanism such as Headend Reroute, Standby LSP, Fast
Reroute Detour Mode, and Fast Reroute Bypass Mode. The data plane
is measured to obtain the benchmarking metrics. Discussion is
included to explain the network events that cause failover and
benefits of measuring the data plane for black-box MPLS Protection
benchmarking. Measurements can be used to compare failover
performance of different Label-Switched Routers and evaluate the
different MPLS protection mechanisms.
2. Existing definitions
For the sake of clarity and continuity, this document adopts the
template for definitions set out in Section 2 of RFC 1242.
Definitions are
indexed and grouped together in sections for ease of reference.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
this document are to be interpreted as described in RFC 2119.
The reader is assumed to be familiar with the commonly used MPLS
terminology, some of which is defined in [MPLS-RSVP], [MPLS-RSVP-TE],
and [MPLS-FRR-EXT].
3. Test Considerations
This section discusses the fundamentals of MPLS Protection
testing: the types of network events that cause failover, failure
detection, the use of data traffic, LSP and route scaling, IGP
selection, and reversion.
3.1 Failover Events [TERMID]
Causes of reroute include administrative change, link failure,
node failure, setting of the IGP overload bit, and path
optimization. The indication for these failures will vary
depending on whether the failure is local or remote. The
underlying cause of a failure may lie in the router/switch or in
the network, and different causes can produce different detection
and recovery times.
3.2 Failure Detection [TERMID]
For an administrative change at the ingress, a make-before-break
should occur to prevent any packet loss.
Local failures can be detected via a SONET failure on the link to
a directly connected LSR. The failure indication may vary with the
type of alarm - LOS, AIS, or RDI. Failures on Ethernet technology
links, such as Gigabit Ethernet, rely upon a Layer 3 signaling
indication of failure.
Remote failures are indicated via Control Plane signaling.
Signaling indications are Negative RSVP Indication (loss of
PATH Refresh messages from upstream node, loss of RESV Refresh
messages from downstream node, or loss of RSVP Fast Hellos),
Positive RSVP Indication (receipt of PATHTear Message from
upstream node or receipt of RESVTear Message from downstream
node), or change to the TE-LSDB via the IGP.
Different MPLS Protection Mechanisms and different implementations
use different failure indications. Ethernet technologies such as
Gigabit Ethernet rely upon Layer 3 failure indication mechanisms
since there is no Layer 2 failure indication mechanism. The
methodologies in this document provide remote failure cases to
evaluate failover performance independent of the implemented
signaling indication mechanism.
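As an illustration only, the failure indications above can be
grouped by whether they are local or remote to the DUT. The sketch
below shows how a test harness might record such a grouping; the
string labels are shorthand chosen here for illustration and are
not defined terms of this methodology.

   # Illustrative grouping of the failure indications described in
   # Section 3.2.  The string labels are shorthand chosen here for
   # illustration; they are not defined terms of this methodology.
   LOCAL_INDICATIONS = {"SONET_LOS", "SONET_AIS", "SONET_RDI",
                        "LAYER3_LINK_DOWN"}
   REMOTE_NEGATIVE_RSVP = {"LOSS_OF_PATH_REFRESH",
                           "LOSS_OF_RESV_REFRESH",
                           "LOSS_OF_RSVP_FAST_HELLO"}
   REMOTE_POSITIVE_RSVP = {"PATHTEAR_RECEIVED", "RESVTEAR_RECEIVED"}
   REMOTE_IGP = {"TE_LSDB_CHANGE"}

   def classify_indication(indication):
       """Return 'local' or 'remote' for a failure indication label."""
       if indication in LOCAL_INDICATIONS:
           return "local"
       if indication in (REMOTE_NEGATIVE_RSVP | REMOTE_POSITIVE_RSVP
                         | REMOTE_IGP):
           return "remote"
       raise ValueError("unknown indication: %s" % indication)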
3.3 Use of Data Traffic for MPLS Protection Benchmarking
Customers of service providers use packet loss as the metric for
failover time. Packet loss is an externally observable event
having direct impact on customers' application performance. MPLS
Protection mechanisms exist to minimize packet loss in the event
of failure. For this reason it is important to develop a standard
router benchmarking methodology and terminology for measuring MPLS
Protection that uses packet loss as a metric. At a known offered
forwarding rate, packet loss can be measured and used to calculate
the Failover Time. Measurement of control plane signaling to
establish Backup paths is not enough to verify failover. Failover
is best determined when packets are actually traversing the Backup
Path.
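For example, with a constant offered rate the Failover Time can be
derived directly from the packet loss. A minimal sketch of this
loss-derived calculation, assuming a constant offered rate in
packets per second and no loss outside the failover interval,
follows.

   # Sketch: derive Failover Time from packet loss at a known,
   # constant offered rate.  Assumes all loss occurs during failover.
   def failover_time_seconds(packets_lost, offered_rate_pps):
       """Failover Time = packets lost / offered rate."""
       if offered_rate_pps <= 0:
           raise ValueError("offered rate must be positive")
       return packets_lost / offered_rate_pps

   # Example: 4,500 packets lost at 100,000 packets per second
   # corresponds to a 45 msec Failover Time.
   assert abs(failover_time_seconds(4500, 100000) - 0.045) < 1e-9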
An additional benefit of using packet loss for calculation of
Failover time is that it enables black-box tests to be designed.
Data traffic can be offered at line-rate to the device under test
(DUT), an emulated network event as described above can be forced to
occur, and packet loss can be externally measured to calculate the
Failover Time. Knowledge of the DUT architecture is not required.
There is no need to rely on the DUT to produce the test results.
3.4 LSP and Route Scaling
Failover Time performance may vary with the number of established
primary and backup LSPs and routes learned. The methodologies
may be used for any number of LSPs, N, and number of routes, R.
N and R must be recorded. It is intended with Fast Reroute that
the less than 45 msec failover requirement be maintained when
scaling the number of protected LSPs.
3.5 Selection of IGP
The methodologies can be used with ISIS-TE or OSPF-TE.
3.6 Reversion [TERMID]
Fast Reroute provides a method to restore traffic from the Backup
Path to the original Primary LSP upon recovery from the failure.
This is referred to as Reversion, which can be implemented as
Global Reversion or Local Reversion. Standby LSP and Headend
Reroute also provide the ability to return to the Primary LSP upon
failure recovery. With any mechanism, Reversion should not produce
any packet loss. Each of the test cases in this methodology
document provides a step to verify that there is no packet loss.
The step can be performed regardless of Protection mechanism and
Reversion method.
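A simple way for a tester to perform that verification step is to
compare transmit and receive counters taken across the Reversion
interval; a minimal sketch under that assumption is shown below.

   # Sketch: verify that Reversion produced no packet loss by
   # comparing tester transmit and receive counters taken over the
   # Reversion interval.
   def verify_lossless_reversion(tx_packets, rx_packets):
       lost = tx_packets - rx_packets
       if lost > 0:
           raise AssertionError("Reversion lost %d packets" % lost)

   # Example: all 1,000,000 packets offered during Reversion arrive.
   verify_lossless_reversion(tx_packets=1000000, rx_packets=1000000)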
4. Test Setup
There are numerous test setups for benchmarking a Protection Switching
System [TERMID] that use MPLS. This is due to the many roles that an
LSR can play along the LSP. A single DUT should be tested in each of
these roles. The Test Setups to use are shown in Figures 3 through 10.
4.1 DUT as Ingress
-------- -------- --------
|Ingress | |MidPoint| | Egress |
| DUT |----| Node |----| Node |
-------- -------- --------
| |
| -------- |
---------|Backup |---------
|Midpoint|
--------
Figure 3. Test Setup with DUT as Ingress
4.2 DUT as Failover Node with Link Protection [TERMID]
-------- -------- -------- -------- --------
|Ingress | | FN | |MidPoint| | Merge | | Egress |
| |----| DUT |----| Node |----| Node |----| Node |
-------- -------- -------- -------- --------
| |
| -------- |
---|Backup |--
|Midpoint|
--------
Figure 4. Test Setup with DUT as Failover Node with Link Protection
4.3 DUT as Failover Node with Node Protection [TERMID]
-------- -------- -------- -------- --------
|Ingress | | FN | |MidPoint| | Merge | | Egress |
| |----| DUT |----| Node |----| Node |----| Node |
-------- -------- -------- -------- --------
| |
| -------- |
---------|Backup |---------
|Midpoint|
--------
Figure 5. Test Setup with DUT as Failover Node with Node Protection
4.4 DUT as Merge Node [TERMID]
-------- -------- -------- -------- --------
|Ingress | | FN | |MidPoint| | Merge | | Egress |
| |----| |----| Node |----|Node DUT|----| Node |
-------- -------- -------- -------- --------
| |
| -------- |
---------|Backup |---------
|Midpoint|
--------
Figure 6. Test Setup with DUT as Merge Node
4.5 DUT as Egress
-------- -------- --------
|Ingress | |MidPoint| | Egress |
| |----| Node |----| DUT |
-------- -------- --------
| |
| -------- |
---------|Backup |---------
|Midpoint|
--------
Figure 7. Test Setup with DUT as Egress
4.6 DUT as Ingress and Failover Node with Link Protection
-------- -------- -------- --------
|Ingress/| |MidPoint| | Merge | | Egress |
|FN DUT |----| Node |----| Node |----| Node |
-------- -------- -------- --------
| |
| -------- |
---|Backup |--
|Midpoint|
--------
Figure 8. Test Setup with DUT as Ingress and Failover Node
with Link Protection
4.7 DUT as Ingress and Failover Node with Node Protection
-------- -------- -------- --------
|Ingress/| |MidPoint| | Merge | | Egress |
|FN DUT |----| Node |----| Node |----| Node |
-------- -------- -------- --------
| |
| -------- |
---------|Backup |---------
|Midpoint|
--------
Figure 9. Test Setup with DUT as Ingress and Failover Node
with Node Protection
4.8 DUT as Egress and Merge Node
-------- -------- -------- ---------
|Ingress | | FN | |MidPoint| | Egress/ |
| |----| DUT |----| Node |----|Merge DUT|
-------- -------- -------- ---------
| |
| -------- |
---------|Backup |---------
|Midpoint|
--------
Figure 10. Test Setup with DUT as Egress and
Merge Node
5. Test Cases
5.1 Node Protection [TERMID]
5.1.1 Ingress
5.1.1.1 Local SONET Failure
Objective
To benchmark the MPLS failover time due to a Local
SONET Link failure event at the Ingress.
Test Setup
Use Figure 3. The DUT is the LSP ingress.
The test device(s) will have three interfaces to the
DUT:
1. Transmit IP Traffic to DUT
2. Receive MPLS traffic from DUT's Primary interface
3. Receive MPLS traffic from DUT's Backup interface
after failover
Test Configuration
The ingress can be configured for Headend Reroute,
Standby LSP, or Fast Reroute.
The test device(s) should emulate the Primary Path
Midpoint, Backup Path Midpoint, and Egress Node. The
test device sources an offered load of IP packets to
the DUT ingress interface and receives switched MPLS
from the DUT.
Procedure
1. Enable either IGP-TE (ISIS-TE or OSPF-TE) on the
interfaces.
2. Advertise matching IGP TE-LSDB routes from Tester
to DUT on both primary and backup interfaces.
3. Establish Primary LSP.
4. Establish Backup LSP (Step 4 does not apply for
Headend Reroute).
5. Send IP traffic at maximum Forwarding Rate to DUT.
IP destination address must match FEC for Primary LSP.
6. Verify traffic switched over Primary LSP.
7. Remove SONET on DUT's Ingress Interface to Primary LSP.
8. Observe Packet Loss.
9. Measure Failover Time as DUT detects the link down event
and switches MPLS traffic over the Backup LSP.
10.Recover from failure and verify Reversion does not
produce any packet loss.
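The procedure above lends itself to automation by the test device.
The outline below is a non-normative sketch of such automation; the
'tester' object and its methods are hypothetical placeholders, not
an actual test-device API, and the loss-derived Failover Time
calculation assumes a constant offered rate.

   # Non-normative sketch of the Local SONET Failure procedure.
   # The 'tester' object and its methods are hypothetical.
   def local_sonet_failure_test(tester, dut, offered_rate_pps):
       tester.enable_igp_te("ISIS-TE")                      # Step 1
       tester.advertise_te_lsdb(primary=True, backup=True)  # Step 2
       tester.establish_lsp("primary")                      # Step 3
       tester.establish_lsp("backup")   # Step 4 (omit for Headend Reroute)
       tester.send_ip_traffic(rate_pps=offered_rate_pps)    # Step 5
       assert tester.traffic_on("primary")                  # Step 6
       tester.fail_sonet(dut.primary_interface)             # Step 7
       lost = tester.packet_loss()                          # Step 8
       failover_time = lost / offered_rate_pps              # Step 9
       tester.recover_failure()                             # Step 10
       assert tester.packet_loss_during_reversion() == 0
       return failover_time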
Results
Packet loss upon failure at Ingress's (DUT) primary
interface should be 100%. The Failover Time is the time to
restore 100% of the traffic. The measured failover time is
influenced by the failure indication and Hardware update
time. The result with Headend Reroute is also influenced
by the time to establish the Backup LSP.
5.1.1.2 Local Administrative Shutdown
Objective
To benchmark the MPLS failover time due to a Local
administrative shutdown event at the Ingress.
Test Setup
Use Figure 3. The DUT is the LSP ingress.
The test device(s) will have three interfaces to the
DUT:
1. Transmit IP Traffic to DUT
2. Receive MPLS traffic from DUT's Primary interface
3. Receive MPLS traffic from DUT's Backup interface
after failover
Test Configuration
The ingress can be configured for Headend Reroute,
Standby LSP, or Fast Reroute.
The test device(s) should emulate the Primary Path
Midpoint, Backup Path Midpoint, and Egress Node. The
test device sources an offered load of IP packets to
the DUT ingress interface and receives switched MPLS
from the DUT.
Procedure
1. Enable either IGP-TE (ISIS-TE or OSPF-TE) on the
interfaces.
2. Advertise matching IGP TE-LSDB routes from Tester
to DUT on both primary and backup interfaces.
3. Establish Primary LSP.
4. Establish Backup LSP (Step 4 does not apply for
Headend Reroute).
5. Send IP traffic at maximum Forwarding Rate to DUT.
IP destination address must match FEC for Primary LSP.
6. Verify traffic switched over Primary LSP.
7. Administratively shutdown the DUT's Ingress Interface
to Primary LSP.
8. Observe Packet Loss.
9. Measure Failover Time as DUT detects the link down event
and switches MPLS traffic over the Backup LSP.
10.Recover from failure and verify Reversion does not
produce any packet loss.
Results
Make-before-break should occur so that there is no observed
packet loss.
5.1.1.3 Remote Failure
Objective
To benchmark the MPLS failover time due to a remote failure
event at a remote interface of a mid-point node.
Test Setup
Use Figure 3. The DUT is the LSP ingress. For Headend
Reroute and Standby LSPs, additional mid-point nodes
may be added to the test setup. The test device(s)
will have three interfaces to the DUT:
1. Transmit IP Traffic to DUT
2. Receive MPLS traffic from DUT's Primary interface
3. Receive MPLS traffic from DUT's Backup interface
after failover
Test Configuration
The ingress can be configured for Headend Reroute or
Standby LSP.
The test device(s) should emulate the Primary Path
Midpoint, Backup Path Midpoint, and Egress Node. The
test device sources an offered load of IP packets to
the DUT ingress interface and receives switched MPLS
from the DUT.
Procedure
1. Enable either IGP-TE (ISIS-TE or OSPF-TE) on the
interfaces.
2. Advertise matching IGP TE-LSDB routes from Tester
to DUT on both primary and backup interfaces.
3. Establish Primary LSP.
4. Establish Backup LSP (Step 4 does not apply for
Headend Reroute).
5. Send IP traffic at maximum Forwarding Rate to DUT.
IP destination address must match FEC for Primary LSP.
6. Verify traffic switched over Primary LSP.
7. Receive control plane indication on DUT's Ingress
Interface to Primary LSP.
8. Observe Packet Loss.
9. Measure Failover Time as DUT detects the link down event
and switches MPLS traffic over the Backup LSP.
10.Recover from failure and verify Reversion does not
produce any packet loss.
Results
Failover Time is the period starting with the first occurrence
of packet loss and ending with full restoration of 100% of the
traffic. Traffic loss is first observed upon receipt and
processing of the control plane indication at the DUT. At
this time the DUT stops sending traffic on the primary LSP
and switches it to the Backup LSP. The Failover Time is the
time for the DUT to switch the traffic. The test device
packet generation and forwarding time is inherently not a
factor in the measurement.
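One way the test device can apply this definition is to sample the
received traffic at fixed intervals and take the span from the
first interval showing loss to the first interval in which 100% of
the offered traffic is again received. A minimal sketch, assuming
fixed-length sampling intervals and a constant per-interval offered
load, follows.

   # Sketch: Failover Time as the span from first observed loss to
   # full restoration of the offered traffic.  Assumes fixed-length
   # sampling intervals and a constant offered load per interval.
   def failover_window_seconds(rx_per_interval, offered_per_interval,
                               interval_seconds):
       start = end = None
       for i, rx in enumerate(rx_per_interval):
           if start is None and rx < offered_per_interval:
               start = i                    # first interval with loss
           elif start is not None and rx >= offered_per_interval:
               end = i                      # traffic fully restored
               break
       if start is None:
           return 0.0                       # no loss observed
       if end is None:
           raise ValueError("traffic never fully restored")
       return (end - start) * interval_seconds

   # Example: 1 msec intervals, 100 packets offered per interval,
   # loss during intervals 3-7 gives a 5 msec failover window.
   samples = [100, 100, 100, 60, 0, 0, 0, 40, 100, 100]
   assert abs(failover_window_seconds(samples, 100, 0.001)
              - 0.005) < 1e-12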
5.1.2 Failover Node [TERMID]
5.1.2.1 Ingress Failover Node
Objective
To benchmark the MPLS failover time due to a Local SONET Link
failure event at the Ingress also configured as Failover Node with
node protection.
Test Setup
Use Figure 9. The DUT is the LSP ingress. All other nodes
indicated in the figure are simulated by a test device.
The test device(s) will have three interfaces to the DUT:
1. Transmit IP Traffic to DUT
2. Receive MPLS traffic from DUT's Primary interface
3. Receive MPLS traffic from DUT's Backup interface
after failover
Test Configuration
The ingress is configured for FRR.
The test device(s) should emulate FRR Primary Path
Midpoint, Backup Path Midpoint, and Egress Node. The
test device sources an offered load of IP packets to
the DUT ingress interface and receives switched MPLS
from the DUT.
Procedure
1. Enable either IGP-TE (ISIS-TE or OSPF-TE) on the
interfaces.
2. Advertise matching IGP TE-LSDB routes from Tester
to DUT on both primary and backup interfaces.
3. Establish Primary LSP.
4. Establish Backup LSP. Ensure that the primary LSP and Backup
LSP outgoing interfaces are different.
5. Send IP traffic at maximum Forwarding Rate to DUT.
IP destination address must match FEC for Primary LSP.
6. Verify traffic switched over Primary LSP.
7. Remove SONET on DUT's Ingress Interface to Primary LSP.
8. Observe Packet Loss.
9. Measure Failover Time as DUT detects the link down event
and switches MPLS traffic over the Backup LSP.
10.Recover from failure and verify Reversion does not
produce any packet loss.
Results
Packet loss upon failure at Ingress's (DUT) primary
interface should be 100%. The Failover Time is the time to
restore 100% of the traffic. The measured failover time is
influenced by the failure indication and Hardware update
time.
5.1.2.2 Midpoint Failover Node
Objective
To benchmark the MPLS failover time with node protection
provided, due to a SONET Link failure event at the Midpoint
Failover Node.
Test Setup
Use Figure 5. The DUT is a midpoint on the primary LSP.
The remaining nodes in the indicated topology are simulated
by the test device.
The test device(s) will have three interfaces to the
DUT:
1. Transmit MPLS Traffic to DUT
2. Receive MPLS traffic from DUT's Primary interface
3. Receive MPLS traffic from DUT's Backup interface
after failover
Test Configuration
The DUT as midpoint Failover Node is configured for FRR.
The test device(s) should emulate FRR Ingress, Primary Path
Midpoint, Backup Path Midpoint, and Egress Node. The
test device sources an offered load of MPLS packets to
the DUT ingress interface and receives switched MPLS
from the DUT.
Procedure
1. Enable either IGP-TE (ISIS-TE or OSPF-TE) on the
interfaces.
2. Advertise matching IGP TE-LSDB routes from Tester
to DUT on both primary and backup interfaces.
3. Establish Primary LSP with Tester as the ingress.
4. Establish Backup LSP.
5. Send labeled traffic at maximum Forwarding Rate to DUT.
Label must match the one received from DUT.
6. Verify traffic switched over Primary LSP.
7. Remove SONET on DUT's interface to downstream Midpoint
LSR on Primary LSP path.
8. Observe Packet Loss.
9. Measure Failover Time as DUT detects the link down event
and switches MPLS traffic over the Backup LSP.
10.Recover from failure and verify Reversion does not
produce any packet loss.
Results
Packet loss upon failure on the midpoint LSR's (DUT) primary LSP
interface should be 100%. The Failover Time is the time to
restore 100% of the traffic. The measured failover time is
influenced by the Local failure indication and hardware
update time.
5.1.3 Merge Node [TERMID]
Objective
To benchmark the MPLS failover time and packet loss at the
merge node due to a failure event along the protected path
upstream of the DUT.
Test Setup
Use Figure 6. The DUT is the Merge Node. All other nodes are
simulated by the test device and the failure is assumed to
occur along the protected path upstream of the DUT.
The test device(s) will have three interfaces to the DUT:
1. Transmit MPLS Traffic to DUT inbound primary Interface
2. Transmit MPLS traffic to DUT's Backup interface
3. Receive MPLS traffic from DUT's primary outbound interface
before and after the failover
Test Configuration
The Merge Node can be configured for merging Backup Paths using
the Sender-Template Specific method and merging Detours using the
Path-Specific method. The test device(s) should emulate the Point
of Local Repair, Primary Path Midpoint, Backup Path Midpoint, and
Egress Node. The test device sources an offered load of MPLS
packets to the DUT inbound interfaces and receives switched MPLS
from the DUT as an egress.
Procedure
1. Enable either IGP-TE (ISIS-TE or OSPF-TE) on the
interfaces.
2. Advertise matching IGP TE-LSDB routes from Tester
to DUT on both primary and backup interfaces.
3. Establish Primary LSP with DUT as the potential Merge Node.
4. Establish Backup/Detour LSP with DUT as a Merge Node.
5. Send MPLS traffic at maximum Forwarding Rate to DUT along the
Primary Path.
6. Configure the Merge Node so that the Primary and Backup LSPs
share a common outgoing interface and next-hop tester interface.
7. Verify that the DUT correctly identifies the Protected and
Backup LSP based on Sender-Template specific Method and
Path-Specific Method.
8. Verify traffic received at the emulated egress is not
affected after the failure has occurred and Failover Node has switched
to the Backup LSP.
9. Observe Packet Loss and observe packet re-ordering, if any (a
re-ordering check is sketched after this procedure).
10. If there is packet loss, measure the convergence time until
the DUT stabilizes and switches MPLS traffic to the merged LSP.
11.Recover from failure and verify Reversion does not
produce any packet loss.
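Step 9 above also calls for observing packet re-ordering. One
simple check the tester can apply, assuming it stamps each
transmitted packet with a strictly increasing sequence number, is
sketched below.

   # Sketch: count re-ordered packets from received sequence numbers.
   # Assumes the tester stamps each transmitted packet with a
   # strictly increasing sequence number.
   def count_reordered(received_sequence_numbers):
       highest = None
       reordered = 0
       for seq in received_sequence_numbers:
           if highest is not None and seq < highest:
               reordered += 1           # arrived after a later packet
           else:
               highest = seq
       return reordered

   # Example: packet 5 arrives after packet 6, so one packet is
   # counted as re-ordered.
   assert count_reordered([1, 2, 3, 4, 6, 5, 7]) == 1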
Results
Packet loss should ideally be 0% once the Failover Node switches to the
backup and merging takes place at the DUT. If there is packet loss,
the measured time should be less than that specified in
[MPLS-FRR-EXT].
5.1.4 Merge Node and Egress
Objective
To benchmark the MPLS failover time and packet loss at the
merge node/Egress due to a failure event along the protected path
upstream of the DUT.
Test Setup
Use Figure 10. The DUT is the Merge Node and Egress. All other
nodes are
simulated by the test device and the failure is assumed to
occur along the protected path upstream of the DUT.
The test device(s) will have three interfaces to the DUT:
1. Transmit MPLS Traffic to DUT inbound primary Interface
2. Transmit MPLS traffic to DUT's Backup interface
3. Receive IP traffic from DUT's primary outbound interface
before and after the failover
Test Configuration
The Merge Node can be configured for merging Backup Paths using
the Sender-Template Specific method and merging Detours using the
Path-Specific method. The test device(s) should emulate the Point of
Local Repair, Primary Path Midpoint, Backup Path Midpoint, and
Egress Node. The test device sources an offered load of MPLS
packets to the DUT inbound interfaces and receives switched MPLS
from the DUT as an egress.
Procedure
1. Enable either IGP-TE (ISIS-TE or OSPF-TE) on the
interfaces.
2. Advertise matching IGP TE-LSDB routes from Tester
to DUT on both primary and backup interfaces.
3. Establish Primary LSP with DUT as the potential Merge Node.
4. Establish Backup/Detour LSP with DUT as a Merge Node.
5. Send MPLS traffic at maximum Forwarding Rate to DUT along the
Primary Path.
6. Configure the Merge Node so that the Primary and Backup LSPs
share a common outgoing interface and next-hop tester interface.
7. Verify that the DUT correctly identifies the Protected and
Backup LSP based on Sender-Template specific Method and
Path-Specific Method.
8. Verify traffic received at the emulated egress is not affected
after the failure has occurred and Failover Node has switched
to the Backup LSP.
9. Observe Packet Loss and observe packet re-ordering if any.
10.If there is packet loss, measure the convergence time until the
DUT stabilizes and maps the IP traffic correctly to the
destination.
11.Recover from failure and verify Reversion does not
produce any packet loss.
Results
Packet loss should ideally be 0% once the Failover Node switches to
the backup and merging takes place at the DUT. If there is packet
loss, the measured time should be less than that specified in
[MPLS-FRR-EXT].
5.2 Link Protection [TERMID]
5.2.1 Ingress Failover Node
Objective
To benchmark the MPLS failover time due to a Local
SONET Link failure event at the Ingress with link protection.
Test Setup
Use Figure 8. The DUT is the LSP ingress. All other nodes
indicated in the figure are simulated by a test device.
The test device(s) will have three interfaces to the
DUT:
1. Transmit IP Traffic to DUT
2. Receive MPLS traffic from DUT's Primary interface
3. Receive MPLS traffic from DUT's Backup interface
after failover
Test Configuration
The ingress is configured for FRR.
The test device(s) should emulate FRR Primary Path
Midpoint, Backup Path Midpoint, and Egress Node. The
test device sources an offered load of IP packets to
the DUT ingress interface and receives switched MPLS
from the DUT.
Procedure
1. Enable either IGP-TE (ISIS-TE or OSPF-TE) on the
interfaces.
2. Advertise matching IGP TE-LSDB routes from Tester
to DUT on both primary and backup interfaces.
3. Establish Primary LSP.
4. Establish Backup LSP. Ensure that the primary LSP and
Backup LSP outgoing interfaces are different.
5. Send IP traffic at maximum Forwarding Rate to DUT.
IP destination address must match FEC for Primary LSP.
6. Verify traffic switched over Primary LSP.
7. Remove SONET on DUT's Ingress Interface to Primary LSP.
8. Observe Packet Loss.
9. Measure Failover Time as DUT detects the link down event
and switches MPLS traffic over the Backup LSP.
10.Recover from failure and verify Reversion does not
produce any packet loss.
Results
Packet loss upon failure at Ingress's (DUT) primary
interface should be 100%. The Failover Time is the time to
restore 100% of the traffic. The measured failover time is
influenced by the failure indication and Hardware update
time.
5.2.2 Midpoint Failover Node
Objective
To benchmark the MPLS failover time, with local (link) protection
provided, due to a SONET Link failure event at the Midpoint Failover Node.
Test Setup
Use Figure 4. The DUT is a midpoint on the primary LSP.
The remaining nodes in the indicated topology are simulated
by the test device. The test device(s) will have three
interfaces to the DUT:
1. Transmit MPLS Traffic to DUT
2. Receive MPLS traffic from DUT's Primary interface
3. Receive MPLS traffic from DUT's Backup interface
after failover
Test Configuration
The DUT as midpoint Failover Node is configured for FRR.
The test device(s) should emulate FRR Ingress, Primary Path
Midpoint, Backup Path Midpoint, and Egress Node. The
test device sources an offered load of MPLS packets to
the DUT ingress interface and receives switched MPLS
from the DUT.
Procedure
1. Enable either IGP-TE (ISIS-TE or OSPF-TE) on the
interfaces.
2. Advertise matching IGP TE-LSDB routes from Tester
to DUT on both primary and backup interfaces.
3. Establish Primary LSP with Tester as the ingress.
4. Establish Backup LSP.
5. Send labeled traffic at maximum Forwarding Rate to DUT.
Label must match the one received from DUT.
6. Verify traffic switched over Primary LSP.
7. Remove SONET on DUT's interface to downstream Midpoint
LSR on Primary LSP path.
8. Observe Packet Loss.
9. Measure Failover Time as DUT detects the link down event
and switches MPLS traffic over the Backup LSP.
10.Recover from failure and verify Reversion does not
produce any packet loss.
Results
Packet loss upon failure at the midpoint LSR's (DUT) primary LSP
interface should be 100%. The Failover Time is the time to
restore 100% of the traffic. The measured failover time is
influenced by the failure indication and Hardware update
time.
5.3 Fast Reroute Scalability
Objective
Fast Reroute Recovery Delay could be dependent upon the
number of protected primary LSPs. The purpose of this test
is to benchmark the number of TE LSPs that can be protected
within 50ms.
Test Setup
Use Figure 5. The DUT is the Failover Node. All other nodes indicated
in the figure are simulated by test device(s). The test
device(s) will have three interfaces to the DUT:
1. Transmit MPLS Traffic to DUT
2. Receive MPLS traffic from DUT's Primary interface
3. Receive MPLS traffic from DUT's Backup interface
after failover
Test Configuration
The Failover Node is configured for FRR.
The test device(s) should emulate FRR Primary LSP
Midpoint, Backup Path Midpoint, and Egress Node. The
test device sources an offered load of MPLS packets to
the DUT ingress interface and receives switched MPLS
from the DUT.
Procedure
1. Enable either IGP-TE (ISIS-TE or OSPF-TE) on the
interfaces.
2. Advertise matching IGP TE-LSDB routes from Tester
to DUT on both primary and backup interfaces.
3. Establish a set of N_tot primary TE LSPs. N_tot should be
large (e.g., N_tot = 50,000).
4. Establish a Bypass LSP.
5. Send traffic into each primary TE LSP at an Aggregate Forwarding
Rate, Rg, to the DUT.
6. Verify traffic switched over Primary LSPs.
7. Remove SONET on DUT's interface to downstream Midpoint
LSR on Primary LSP path.
8. Measure the number of packets forwarded in 50 ms, R_50.
9. Calculate the number of LSPs that are recovered within
50ms, N_50. N_50 = (R_50 / Rg) * N_tot.
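A worked example of the calculation in step 9, assuming R_50 and Rg
are both counted over the same 50 ms window, is sketched below.

   # Sketch of step 9: N_50 = (R_50 / Rg) * N_tot, assuming R_50
   # (packets forwarded) and Rg (aggregate packets offered) are both
   # counted over the same 50 ms window after the failure.
   def lsps_recovered_within_50ms(r_50, rg, n_tot):
       return (r_50 / rg) * n_tot

   # Example: with N_tot = 50,000 LSPs, if 25,000 of the 50,000
   # packets offered in the 50 ms window are forwarded, then
   # N_50 = 25,000 LSPs are considered recovered within 50 ms.
   assert lsps_recovered_within_50ms(25000, 50000, 50000) == 25000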
Results
Results will vary depending upon Failover Node implementation.
6. Security Considerations
Documents of this type do not directly affect the security of
the Internet or of corporate networks as long as benchmarking
is not performed on devices or systems connected to operating
networks.
7. Acknowledgements
Thanks to Alia Atlas and Markus Jork for their review and
suggestions for this document. Their efforts to standardize
and implement Fast Reroute ([MPLS-FRR-EXT]) created the need
for this benchmarking methodology. Thanks to Dr. Bijan
Jabbari of Isocore for providing an independent leading edge
laboratory for equipment vendors and Service Providers to
evaluate and discuss MPLS protection mechanisms.
8. References
[MPLS-LDP]     Andersson, L., Doolan, P., Feldman, N., Fredette, A.,
               and B. Thomas, "LDP Specification", RFC 3036,
               January 2001.
[MPLS-RSVP]    Braden, R., Ed., et al., "Resource ReSerVation
               Protocol (RSVP) -- Version 1 Functional
               Specification", RFC 2205, September 1997.
[MPLS-RSVP-TE] Awduche, D., et al., "RSVP-TE: Extensions to RSVP
               for LSP Tunnels", RFC 3209, December 2001.
[MPLS-FRR-EXT] Pan, P., Atlas, A., and G. Swallow, "Fast Reroute
               Extensions to RSVP-TE for LSP Tunnels", RFC 4090,
               May 2005.
[MPLS-ARCH]    Rosen, E., Viswanathan, A., and R. Callon,
               "Multiprotocol Label Switching Architecture",
               RFC 3031, January 2001.
[RFC-WORDS]    Bradner, S., "Key words for use in RFCs to Indicate
               Requirement Levels", RFC 2119, March 1997.
[RFC-IANA]     Narten, T. and H. Alvestrand, "Guidelines for Writing
               an IANA Considerations Section in RFCs", RFC 2434,
               October 1998.
[TERMID]       Poretsky, S., Papneja, R., and T. Kimura,
               "Benchmarking Terminology for Protection
               Performance",
               draft-poretsky-protection-term-00.txt,
               work in progress.
9. Authors' Addresses
Scott Poretsky
Reef Point Systems
8 New England Executive Park
Burlington, MA 01803
USA
Phone: + 1 781 395 5090
EMail: sporetsky@reefpoint.com
Rajiv Papneja
Isocore
12359 Sunrise Valley Drive
Reston, VA 22102
USA
Phone: 1 703 860 9273
Email: rpapneja@isocore.com
Shankar Rao
Qwest Communications,
950 17th Street
Suite 1900
Denver, CO 80210
USA
Phone: + 1 303 437 6643
Email: shankar.rao@qwest.com
Jean-Louis Le Roux
France Telecom
FT R&D DAC/CPN
av Pierre Marzin
22300 Lannion
France
Phone: 00 33 2 96 05 30 20
Email: jeanlouis.leroux@rd.francetelecom.com
Full Copyright Statement
Copyright (C) The Internet Society (2006).
This document is subject to the rights, licenses and restrictions
contained in BCP 78, and except as set forth therein, the authors
retain all their rights.
This document and the information contained herein are provided on an
"AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET
ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE
INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Intellectual Property
The IETF takes no position regarding the validity or scope of any
Intellectual Property Rights or other rights that might be claimed to
pertain to the implementation or use of the technology described in
this document or the extent to which any license under such rights
might or might not be available; nor does it represent that it has
made any independent effort to identify any such rights. Information
on the procedures with respect to rights in RFC documents can be
found in BCP 78 and BCP 79.
Copies of IPR disclosures made to the IETF Secretariat and any
assurances of licenses to be made available, or the result of an
attempt made to obtain a general license or permission for the use of
such proprietary rights by implementers or users of this
specification can be obtained from the IETF on-line IPR repository at
http://www.ietf.org/ipr.
The IETF invites any interested party to bring to its attention any
copyrights, patents or patent applications, or other proprietary
rights that may cover technology that may be required to implement
this standard. Please address the information to the IETF at ietf-
ipr@ietf.org.
Acknowledgement
Funding for the RFC Editor function is currently provided by the
Internet Society.