Internet DRAFT - draft-jacpra-bmwg-pmtest
Network Working Group Sudhin Jacob
Internet Draft Juniper Networks
Intended Status: Informational Praveen Ananthasankaran
Expires: August 03, 2017 Nokia
February 06, 2017
Benchmarking of Y1731 Performance Monitoring
draft-jacpra-bmwg-pmtest-03
Abstract
This draft defines methodologies for benchmarking Y.1731 performance
monitoring on a DUT, including the calculation of near-end and far-end
data. Measurements are taken in scenarios with a pre-defined COS and
without COS in the network. The tests include impairment, high
availability, and soak tests.
Status of This Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on August 03, 2017.
Copyright Notice
Copyright (c) 2016 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Praveen & Sudhin Expires August 03, 2017 [Page 1]
Table of Contents
1. Introduction
   1.1. Requirements Language
   1.2. Terminologies
2. Test Topology
3. Network
4. Test Procedure
5. Test Cases
   5.1. Y.1731 Two-way Delay Measurement Test Procedure
   5.2. Y.1731 One-way Delay Measurement Test Procedure
   5.3. Loss Measurement without COS Test Procedure
   5.4. Loss Measurement with COS Test Procedure
   5.5. Synthetic Loss Measurement Test Procedure
6. Acknowledgements
7. Security Considerations
8. IANA Considerations
1. Introduction
Performance monitoring is defined in ITU-T Y.1731. This document defines
the methodologies for benchmarking Y.1731 performance monitoring over a
point-to-point service. Performance monitoring has been implemented with
many varying designs in order to achieve the intended network
functionality. The scope of this document is to define methodologies for
benchmarking Y.1731 performance measurement. The following protocols
under Y.1731 will be benchmarked:
1. Two-way delay measurement
2. One-way delay measurement
3. Loss measurement
4. Synthetic loss measurement
1.1. Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [RFC2119].
1.2. Terminologies
PM           Performance Monitoring
COS          Class of Service
In-profile   CIR traffic, termed green packets
Out-profile  EIR traffic, termed yellow/amber packets
LMM          Loss Measurement Message
LMR          Loss Measurement Reply
DMM          Delay Measurement Message
DMR          Delay Measurement Reply
P Router     Provider Router
PE Router    Provider Edge Router
CE Router    Customer Edge Router
DUT          Device Under Test
CCM          Continuity Check Message
2. Test Topology
|--- Traffic Generator
+----------+
| |
| PE2 |
| |
+----------+
|
|
+----------+
| |
| Core |
| router |
+----------+
|
|
+----------+
| |
| DUT |
| PE1 |
+----------+
|
|--- Traffic Generator
3. Network
The benchmarking topology consists of three routers and two traffic
generators. The DUT is PE1, which is connected to the CE. The core
router is the P router shown in the topology. A layer two
(point-to-point) service runs from PE1 to PE2, and performance
monitoring (loss, delay, and synthetic measurements) runs on top of it.
PE1 acts as the DUT. The traffic is layer 2 with a VLAN tag. The frame
sizes are 64, 128, 512, 1024 and 1400 bytes, and the tests are carried
out using these various frame sizes. The traffic is unidirectional or
bidirectional.
4. Test Procedure
The tests are defined to benchmark Y.1731 performance monitoring under
high availability, impairment, soak, and scale conditions, with traffic
of various line rates and frame sizes.
4.1 Performance Monitoring with traffic
Traffic is sent with different 802.1p priorities, line rates, and frame
sizes of 64, 128, 512, 1024 and 1400 bytes. The PM values are measured
for each frame size at various line rates.
4.2 High Availability
Traffic flows bidirectionally at "P" packets per second. The traffic
generator measures the Tx and Rx packets; during a routing engine
failover there must not be any packet loss, and the tester must still
show "P" packets per second in both directions. The PM historical data
must not reset.
4.3 Scale
This measures the performance of the DUT when scaled to "X" CFM
sessions with performance monitoring running over them. There must not
be any crashes or memory leaks.
4.4 SOAK
This test measures the performance of the DUT with scaled configuration
and traffic over a period of time "T". In each interval "t1" the
parameters measured are CPU usage, memory usage, and crashes.
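The periodic sampling described above can be sketched as follows. This is a minimal illustration only: the draft does not specify a polling mechanism, so `sample_fn` (standing in for an SNMP or CLI poll of the DUT), and the `duration_s`/`interval_s` parameter names, are assumptions.

```python
import time

def soak_monitor(sample_fn, duration_s, interval_s):
    """Collect resource samples every interval_s seconds ("t1") over a
    total period of duration_s seconds ("T").

    sample_fn is a user-supplied callable returning the DUT's current
    (cpu_percent, memory_bytes); how it polls the DUT is out of scope.
    """
    samples = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        samples.append(sample_fn())   # record one (CPU, memory) sample
        time.sleep(interval_s)
    return samples

# Illustrative stub standing in for a real DUT poll; short times for demo.
samples = soak_monitor(lambda: (12.5, 512 * 1024 * 1024),
                       duration_s=0.05, interval_s=0.01)
```

After the run, the collected samples would be inspected for upward trends in memory (leaks) or CPU, alongside checking the DUT for crashes.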
4.5 Measurement Statistics
The test is repeated "N" times, and the reported value is the average
of the measured values.
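A minimal sketch of this averaging over N repetitions (the delay values below are illustrative, not measured data; reporting a deviation alongside the mean is an optional addition):

```python
from statistics import mean, pstdev

def summarize(trial_values):
    """Average a metric (e.g. two-way delay in microseconds) over N
    repetitions of the same test, per section 4.5."""
    return {
        "n": len(trial_values),
        "avg": mean(trial_values),      # the reported benchmark value
        "stdev": pstdev(trial_values),  # spread across repetitions
    }

# Example: five repetitions of the same delay test
result = summarize([102.0, 98.0, 101.0, 99.0, 100.0])
# result["avg"] == 100.0
```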
5. Test Cases
5.1 Y.1731 Two-way Delay Measurement Test procedure
Basic Testing Objective
Check the round-trip delay of the network under different traffic load
conditions.
Test Procedure
Configure a layer 2 point-to-point service between PE1 and PE2.
Configure Y.1731 two-way delay measurement over the service. Observe
the delay measurement under the following traffic conditions in the
network:
a. Send 80% of line-rate traffic with different priorities and frame sizes.
b. Send 40% of line-rate traffic with different priorities and frame sizes.
c. Without any line traffic.
The results of all three conditions above are noted and correlated.
Test Measurement
The following factors need to be measured to benchmark the result:
1. The average two-way delay
2. The average two-way delay variation
In the above three conditions the results obtained must be similar.
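For reference, the two-way frame delay that feeds these averages is computed from the four timestamps carried in the DMM/DMR exchange defined in ITU-T Y.1731. The sketch below shows that computation; the timestamp values are hypothetical, chosen to show that a responder clock offset cancels out:

```python
def two_way_frame_delay(tx_timestamp_f, rx_timestamp_f, tx_timestamp_b, rx_time_b):
    """ITU-T Y.1731 two-way frame delay from the DMM/DMR timestamps.

    tx_timestamp_f: DMM transmit time at the initiator (initiator clock)
    rx_timestamp_f: DMM receive time at the responder (responder clock)
    tx_timestamp_b: DMR transmit time at the responder (responder clock)
    rx_time_b:      DMR receive time back at the initiator (initiator clock)

    Subtracting the responder's processing time (tx_timestamp_b -
    rx_timestamp_f) removes any fixed clock offset, so the two clocks
    need not be synchronized.
    """
    return (rx_time_b - tx_timestamp_f) - (tx_timestamp_b - rx_timestamp_f)

# Responder clock runs ~4000 units ahead; 10 units of responder
# processing are excluded from the 100-unit round trip.
delay = two_way_frame_delay(1000, 5040, 5050, 1100)
# delay == 90
```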
1. Ideal case
In this case the hardware aspects of processing capacity and link-level
anomalies are not considered; the benchmark is on the protocol
functioning alone. In such an ideal environment the delay variation is
expected to be zero.
2. Practical case
This case benchmarks the results when delay measurement is run on
physical hardware (such as a router). The factors of packet-processing
jitter and link-level delays need to be considered here. The delay
variation in such cases will differ based on the above parameters, and
results will vary with the exact hardware.
Delay Variation
+
^ |
| |
| |
+ |
|
| +----------+
| | |
| | |
+------------+----------+-------------+
Time
----->
Traffic (0 to 100 percent line rate)
Impairment
This benchmarks two-way delay measurement while both data and PDUs are
dropped in the network using an impairment tool.
Measurement
The results must be similar before and after this test.
High Availability
During routing engine failover the historical data must not reset.
Scale
This measures the performance of the DUT when scaled to "X" CFM
sessions with performance monitoring running over them. There must not
be any crashes or memory leaks.
Soak
Bidirectional traffic is sent over the service for 24 to 48 hours;
after the stipulated time there must not be any change in the
performance-monitoring behavior of the network.
Measurement
There must not be any cores, crashes, or memory leaks.
5.2 One-Way delay measurement Test Procedure
Basic Testing Objective
This test measures one-way delay. One-way delay, as defined in Y.1731,
is the time taken by a packet to travel from a specific end-point until
it reaches the other end of the network. This measurement requires the
clocks to be accurately synchronized, as the delay is computed from the
times of two different end-points.
Test Procedure
Configure a layer 2 point-to-point service between PE1 and PE2.
Configure Y.1731 one-way delay measurement over the service. Observe
the delay measurement under the following traffic conditions in the
network:
a. Send 80% of line-rate traffic with different priorities and frame sizes.
b. Send 40% of line-rate traffic with different priorities and frame sizes.
c. Without any line traffic.
The results of all three conditions above are noted and correlated.
Test Measurement
The following factors need to be measured to benchmark the result:
The average one-way delay
The average one-way delay variation
In the above three cases the results obtained must be similar.
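The underlying computation is simpler than the two-way case: one-way delay is the DMM receive time minus its transmit timestamp, which is only meaningful when the two clocks are synchronized. Delay variation between consecutive frames, however, cancels any constant clock offset. The sketch below uses hypothetical timestamp values:

```python
def one_way_frame_delay(tx_timestamp_f, rx_time_f):
    """Y.1731 one-way frame delay: receive time at the far end minus the
    transmit timestamp. Valid only with synchronized clocks."""
    return rx_time_f - tx_timestamp_f

def delay_variations(delays):
    """Inter-frame delay variation: difference between consecutive
    one-way delays. A constant clock offset affects every delay equally
    and so cancels here, which is why variation can be benchmarked even
    without clock synchronization."""
    return [b - a for a, b in zip(delays, delays[1:])]

# Three frames: (tx, rx) timestamp pairs
d = [one_way_frame_delay(t, r) for t, r in [(0, 50), (100, 155), (200, 252)]]
# d == [50, 55, 52]; delay_variations(d) == [5, -3]
```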
1. Ideal case
In this case the hardware aspects of processing capacity and link-level
anomalies are not considered; the benchmark is on the protocol
functioning alone. In such an ideal environment the delay variation is
expected to be zero.
2. Practical case
This case benchmarks the results when delay measurement is run on
physical hardware (such as a router). The factors of packet-processing
jitter and link-level delays need to be considered here. The delay
variation in such cases will differ based on the above parameters, and
results will vary with the exact hardware.
Delay Variation
+
^ |
| |
| |
+ |
|
| +----------+
| | |
| | |
+------------+----------+-------------+
Time
----->
Traffic (0 to 100 percent line rate)
Impairment
This benchmarks one-way delay measurement while both data and PDUs are
dropped in the network using an impairment tool.
Measurement
The results must be similar before and after this test.
High Availability
During routing engine failover the historical data must not reset.
Scale
This measures the performance of the DUT when scaled to "X" CFM
sessions with performance monitoring running over them. There must not
be any crashes or memory leaks.
Soak
Bidirectional traffic is sent over the service for 24 to 48 hours;
after the stipulated time there must not be any change in the
performance-monitoring behavior of the network.
Measurement
There must not be any cores, crashes, or memory leaks.
5.3 Loss measurement without COS Test Procedure
Basic Testing Objective
This test defines a methodology for benchmarking data loss in the
network on real customer traffic. Y.1731 indicates that only in-profile
(green) packets are considered for loss measurement. For this, the
testing needs to be done in multiple environments where:
a. All data packets from the traffic generator are sent with a single
802.1p priority and the network does not have a COS profile defined.
b. All data packets from the traffic generator are sent with 802.1p
priority values 0 to 7 and the network does not have a COS profile
defined.
The objective is to benchmark the protocol behavior under different
networking conditions and correlate the data, not to test the actual
functioning of Y.1731 loss measurement. The loss measurement must count
only in-profile packets; since there is no COS defined, all packets
must be recorded as green.
Test Procedure
Configure a layer 2 point-to-point service between PE1 and PE2.
Configure Y.1731 loss measurement over the service. Observe the loss
measurement under the following traffic conditions in the network:
a. Send 80% of line-rate traffic with different priorities and frame sizes.
b. Send 40% of line-rate traffic with different priorities and frame sizes.
c. Without any line traffic.
The results of all three conditions above are noted and correlated.
Test Measurement
The factor which needs to be considered is the acceptable absolute
loss for the given network.
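The loss figures themselves are derived from the in-profile frame counters exchanged in LMM/LMR frames (TxFCf, RxFCf, TxFCb, RxFCl in Y.1731): loss in one direction over a measurement interval is frames transmitted minus frames received between two consecutive samples. A minimal sketch with illustrative counter values:

```python
def frame_loss(tx_now, tx_prev, rx_now, rx_prev):
    """Frames lost in one direction between two consecutive LM samples.
    Y.1731 computes |TxFC[tc] - TxFC[tp]| - |RxFC[tc] - RxFC[tp]|, i.e.
    frames sent minus frames counted at the other end over the interval
    (absolute values handle counter wrap in real implementations)."""
    return abs(tx_now - tx_prev) - abs(rx_now - rx_prev)

# Far-end loss uses the TxFCf / RxFCf counters: 1000 frames sent in the
# interval, 950 received, so 50 frames lost toward the far end.
far_end = frame_loss(tx_now=2000, tx_prev=1000, rx_now=1950, rx_prev=1000)

# Near-end loss uses the TxFCb / RxFCl counters: no loss this interval.
near_end = frame_loss(tx_now=3000, tx_prev=2000, rx_now=3000, rx_prev=2000)
# far_end == 50, near_end == 0
```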
Impairment
This benchmarks loss measurement while both data and PDUs are dropped
in the network using an impairment tool.
Measurement
When data is dropped, the loss must be shown correctly; when PM PDUs
are dropped, the counting should not be affected and there should not
be any abnormal output.
High Availability
During a routing engine failover the historical data must not reset.
In the ideal case there must be 0 packet loss.
Scale
This measures the performance of the DUT when scaled to "X" CFM
sessions with performance monitoring running over them. There must not
be any crashes or memory leaks. Each session must record loss
measurement correctly.
Soak
Bidirectional traffic is sent over the service for 24 to 48 hours;
after the stipulated time there must not be any change in the
performance-monitoring behavior of the network.
Measurement
There must not be any cores, crashes, or memory leaks.
Result
+-----------------+----------------+
| Traffic sent    |Loss measurement|
| over the service|(without COS)   |
| bidirectionally |                |
+-----------------+----------------+
| 7 streams at    | Near End = 100%|
| 100% line rate  | Far End = 100% |
| with priority   |                |
| from 0 to 7     |                |
+-----------------+----------------+
| Dropping 50%    | Near End 50%   |
| of line rate    | Far End 100%   |
| at near end     | Near End loss  |
|                 | observed 50%   |
+-----------------+----------------+
| Dropping 50%    | Near End 100%  |
| of line rate    | Far End 50%    |
| at far end      | Far End loss   |
|                 | observed 50%   |
+-----------------+----------------+
5.4. Loss measurement with COS Test Procedure
Basic Testing Objective
This test defines a methodology for benchmarking data loss in the
network on real customer traffic. Y.1731 indicates that only in-profile
(green) packets are considered for loss measurement. For this, the
testing needs to be done in multiple environments where:
a. All data packets from the traffic generator are sent with a single
802.1p priority and the network has a pre-defined COS profile.
b. All data packets from the traffic generator are sent with 802.1p
priority values 0 to 7 and the network has a pre-defined COS profile.
The COS profile needs to have two factors:
a. COS needs to treat different 802.1p values as separate classes of
packets.
b. Each class of packets needs a defined CIR for the specific network.
The objective is to benchmark the protocol behavior under different
networking conditions and correlate the data, not to test the actual
functioning of Y.1731 loss measurement. The loss measurement must show
in-profile packets for each COS level, and each COS level must count
only its own defined in-profile packets. Packets termed out-of-profile
by COS marking must not be counted. When the traffic is sent with a
single 802.1p priority, the loss measurement must record a value only
for that particular COS level.
Test Procedure
Configure a layer 2 point-to-point service between PE1 and PE2.
Configure Y.1731 loss measurement over the service. Observe the loss
measurement under the following traffic conditions in the network:
a. Send 80% of line-rate traffic with different priorities and frame sizes.
b. Send 40% of line-rate traffic with different priorities and frame sizes.
c. Without any line traffic.
The results of all three conditions above are noted and correlated.
Test Measurement
The factor which needs to be considered is the acceptable absolute
loss for the given network.
Impairment
This benchmarks loss measurement while both data and PDUs are dropped
in the network using an impairment tool.
Measurement
When data is dropped, the loss must be shown correctly; when PM PDUs
are dropped, the counting should not be affected and there should not
be any abnormal output.
High Availability
During a routing engine failover the historical data must not reset.
In the ideal case there must be 0 packet loss.
Scale
This measures the performance of the DUT when scaled to "X" CFM
sessions with performance monitoring running over them. There must not
be any crashes or memory leaks. Each session must record loss
measurement correctly.
Soak
Bidirectional traffic is sent over the service for 24 to 48 hours;
after the stipulated time there must not be any change in the
performance-monitoring behavior of the network.
Measurement
There must not be any cores, crashes, or memory leaks.
Result
+-----------------+----------------+
| Traffic sent    |Loss measurement|
| over the service|(with COS)      |
| bidirectionally |                |
+-----------------+----------------+
| 7 streams at    | Near End = 100%|
| 100% line rate  | Far End = 100% |
| with priority   |                |
| from 0 to 7     |                |
+-----------------+----------------+
| Dropping 50%    | Near End 50%   |
| of line rate    | Far End 100%   |
| at near end     | Near End loss  |
| for priority    | observed 50%   |
| marked 0        | (priority 0)   |
+-----------------+----------------+
| Dropping 50%    | Near End 100%  |
| of line rate    | Far End 50%    |
| at far end for  | Far End loss   |
| priority 0      | observed 50%   |
|                 | (priority 0)   |
+-----------------+----------------+
5.5. Synthetic Loss Measurement Test Procedure
5.5.1 Basic Testing Objective
This test defines a methodology for benchmarking synthetic loss
measurement in the network. The testing needs to be done in multiple
environments where:
a. All data packets from the traffic generator are sent with a single
802.1p priority and the network does not have a COS profile defined.
The synthetic loss measurement uses the same 802.1p priority as the
traffic.
b. All data packets from the traffic generator are sent with a single
802.1p priority and the network has a pre-defined COS profile. The
synthetic loss measurement uses the same 802.1p priority as the
traffic.
c. All data packets from the traffic generator are sent with 802.1p
priority values 0 to 7 and the network does not have a COS profile
defined. The synthetic loss measurement uses the same 802.1p priorities
as the traffic; hence 8 sessions are tested in parallel.
d. All data packets from the traffic generator are sent with 802.1p
priority values 0 to 7 and the network has a pre-defined COS profile.
The synthetic loss measurement uses the same 802.1p priorities as the
traffic; hence 8 sessions are tested in parallel.
The COS profile needs to have two factors:
1. COS needs to treat different 802.1p values as separate classes of
packets.
2. Each class of packets needs a defined CIR for the specific network.
The objective is to benchmark the protocol behavior under different
networking conditions and correlate the data, not to test the actual
functioning of Y.1731 synthetic loss measurement.
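Unlike LM, synthetic loss measurement derives loss from counts of the SLM/SLR synthetic frames themselves rather than from data-frame counters. A minimal sketch, with one session per 802.1p priority as in cases c and d above; the counter values are illustrative:

```python
def synthetic_loss(tx_count, rx_count):
    """Synthetic frames lost in one direction: SLM (or SLR) frames
    transmitted minus frames counted at the other end."""
    return tx_count - rx_count

def loss_ratio(tx_count, rx_count):
    """Frame loss ratio for the session, 0.0 when nothing was sent."""
    return synthetic_loss(tx_count, rx_count) / tx_count if tx_count else 0.0

# Eight parallel SLM sessions, one per 802.1p priority; each entry is
# (synthetic frames sent, synthetic frames received) for that session.
sessions = {prio: (1000, 1000 - prio) for prio in range(8)}
losses = {prio: synthetic_loss(tx, rx) for prio, (tx, rx) in sessions.items()}
# losses[0] == 0, losses[7] == 7
```

Because each session carries the priority of the traffic it shadows, a drop affecting only one COS level should surface only in that session's counters.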
Test Procedure
Configure a layer 2 point-to-point service between PE1 and PE2.
Configure Y.1731 synthetic loss measurement over the service. Observe
the synthetic loss measurement under the following traffic conditions
in the network:
a. Send 80% of line-rate traffic with different priorities.
b. Send 40% of line-rate traffic with different priorities.
c. Without any line traffic.
The results of all three conditions above are noted and correlated.
Test Measurement
The factor which needs to be considered is the acceptable absolute
loss for the given network.
Impairment
This benchmarks synthetic loss measurement while both data and PDUs
are dropped in the network using an impairment tool.
Measurement
When data is dropped, the SLM counters must not be affected; but if
synthetic frames are dropped, the loss must be shown accordingly.
High Availability
During routing engine failover the historical data must not reset.
Scale
This measures the performance of the DUT when scaled to "X" CFM
sessions with performance monitoring running over them. There must not
be any crashes or memory leaks.
Soak
Bidirectional traffic is sent over the service for 24 to 48 hours;
after the stipulated time there must not be any change in the
performance-monitoring behavior of the network.
Measurement
There must not be any cores, crashes, or memory leaks.
6. Acknowledgements
We would like to thank Al Morton of AT&T for his support and
encouragement. We would also like to thank Giuseppe Fioccola of
Telecom Italia for reviewing our draft and commenting on it.
7. Security Considerations
NA
8. IANA Considerations
NA
Authors' Addresses
Sudhin Jacob
Juniper Networks
Bangalore
Email: sjacob@juniper.net
sudhinjacob@rediffmail.com
Praveen Ananthasankaran
Nokia
Manyata Embassy Tech Park,
Silver Oak (Wing A), Outer Ring Road,
Nagawara, Bangalore-560045
Email: praveen.ananthasankaran@nokia.com