IPPM Working Group B. M Gaonkar
Internet-Draft S. Jacob
Intended status: Standards Track Juniper
Expires: December 26, 2017 G. Fioccola
Telecom Italia
Q. Wu
Huawei
P. Ananthasankaran
Nokia
June 24, 2017
Performance Measurement Models
draft-bhaprasud-ippm-pm-03
Abstract
This document defines performance measurement models for service
level packets that can be implemented in different kinds of network
scenarios. Based on the resulting performance matrix, analytics data
can be pulled from a live network, which is not possible at present.
This data can be used for self-evolving networks.
Status of This Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on December 26, 2017.
Copyright Notice
Copyright (c) 2017 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2
2. Conventions used in this document . . . . . . . . . . . . . . 3
3. Traffic Management Architecture . . . . . . . . . . . . . . . 5
3.1. Selection Process . . . . . . . . . . . . . . . . . . . . 5
3.2. Metering Process . . . . . . . . . . . . . . . . . . . . 6
4. Performance Measurement Models . . . . . . . . . . . . . . . 6
4.1. Complete data measurement (Monitoring all the traffic) . 6
4.2. Color based data measurement . . . . . . . . . . . . . . 7
4.3. CoS based Data measurement . . . . . . . . . . . . . . . 7
4.4. CoS and Color based Data measurement . . . . . . . . . . 8
5. Active and Passive performance measurements . . . . . . . . . 8
6. Use Cases . . . . . . . . . . . . . . . . . . . . . . . . . . 8
7. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 9
8. Security Considerations . . . . . . . . . . . . . . . . . . . 9
9. References . . . . . . . . . . . . . . . . . . . . . . . . . 10
9.1. Normative References . . . . . . . . . . . . . . . . . . 10
9.2. Informative References . . . . . . . . . . . . . . . . . 10
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 10
1. Introduction
Today performance monitoring or tracking of the performance
experienced by customer traffic is a key technology to strengthen
service offering and verify service level agreement between customers
and service providers, perform troubleshooting. The lack of adequate
monitoring tools to detect an interesting subset of a packet stream,
as identified by a particular packet attribute(e.g., commit rate or
DSCP) and measure that packet loss drives an effort to design a new
method for the performance monitoring of live traffic, possibly easy
to implement and deploy. The draft aims to provide fine granularity
loss, delay and delay variation measurement and define a performance
measurement model on customer traffic based on a set of constraints
that are associated with service level agreement such as cos
attribute, color attribute. Each customer traffic is corresponding
to an interesting subset of the same packet stream. The customer or
a interesting packet stream can be identified by a list of source or
destination prefixes, or by ingress or egress interfaces, combing
with packet attributes such as DSCP or commit rate).Unlike Color and
COS identification specified in MEF 23.1, this draft doesn't define
M Gaonkar, et al. Expires December 26, 2017 [Page 2]
Internet-Draft Performance Measurement Models June 2017
new Color and CoS identification mechanism, instead, it stick to
color definition in [RFC2697] and [RFC2698] and COS definition in
[RFC2474].
The network would be provisioned with multiple services (e.g., real-
time service, interactive service) having different network
performance criteria (e.g., bandwidth or packet loss constraints for
the end-to-end path) based on the customers' requirements. These
models aim at performing loss, delay, and delay variation measurement
for these services (belonging to the same customer) independently for
each defined network performance criterion. The class-of-service and
packet color classification defined in the network are key factors to
classify network traffic and to drive the traffic management
mechanisms that achieve the corresponding network performance
criteria for each service. This draft uses the class-of-service model
and the color-based model of a given network to define performance
measurement for various services with different network performance
criteria.
The proposed models are suitable mainly for passive performance
measurements but can be considered for active and hybrid performance
measurements as well.
This solution models loss, delay, and delay variation measurement in
different kinds of network scenarios. The models explained here help
to analyze performance patterns, to better analyze network
congestion, and to better model the network. For instance, loss
measurement is carried out between two endpoints; the underlying
technology could be either active or passive loss measurement.
Any loss measurement requires two counters:
o Number of packets transmitted from one endpoint.
o Number of packets received at the other endpoint.
This draft explains the different ways to model the above data and
obtain meaningful results for loss, delay, and delay variation
measurement. The underlying technology could be an MPLS-based or an
IP-based performance measurement.
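As a non-normative illustration of how these two counters, together
with timestamps, can be turned into loss, delay, and delay variation
results, consider the following sketch. The function names and data
layout are assumptions of this example and are not part of any
measurement protocol.

   def loss_ratio(tx_packets, rx_packets):
       # Loss is the difference between packets transmitted at one
       # endpoint and packets received at the other, expressed as a
       # fraction of the transmitted packets.
       if tx_packets == 0:
           return 0.0
       return (tx_packets - rx_packets) / tx_packets

   def delay_stats(tx_timestamps, rx_timestamps):
       # One-way delay per packet and the variation between
       # consecutive delays.
       delays = [rx - tx for tx, rx in zip(tx_timestamps, rx_timestamps)]
       variation = [abs(b - a) for a, b in zip(delays, delays[1:])]
       return delays, variation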
2. Conventions used in this document
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [RFC2119].
Observation Point An Observation Point is a location in the network
where data packets can be observed. Examples include a line to
which a probe is attached, a shared medium, such as an Ethernet-
based LAN, a single port of a router, or a set of interfaces
(physical or logical) of a router.
Persistence Data Store The Persistence Data Store is a scalable data
   store that collects time-based data, such as streaming data or
   time series data, for network analytics.
Time Series Data Time Series Data is a sequence of data points with
time stamps. The data points are limited to loss, delay and delay
variation measurement results in this document.
Packet Stream A Packet Stream denotes a set of packets from the
Observed Packet Stream that flows past some specified point within
the Metering Process. An example of a Packet Stream is the output
of the Selection Process.
Packet Content The Packet Content denotes the union of the packet
header (which includes link layer, network layer, and other
encapsulation headers) and the packet payload.
Color Identifier: It is used to identify the color that applies to
   the data packet. A color identifier can be assigned to a service
   level packet based on the committed rate and excess rate set for
   the traffic. For example, the service level packet will be marked
   "green" if the traffic is below the committed rate; it will be
   marked "yellow" if the traffic exceeds the committed rate but is
   below the excess rate; and it will be marked "red" if the traffic
   exceeds both the committed and excess rates.
CoS Identifier: It is used to identify the CoS that applies to the
   data packet. A CoS identifier can be assigned based on the dot1p
   value in the C-tag or the DSCP in the IP header.
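As a non-normative illustration of these two identifiers (simplified
with respect to the token-bucket meters of [RFC2697] and [RFC2698]),
the sketch below assigns a color from measured, committed, and excess
rates and derives a CoS identifier from the DSCP. The function names
and rate units are assumptions of this example.

   # Hypothetical color assignment from the measured rate of the
   # stream versus the committed and excess rates (the meters in
   # RFC 2697/2698 use token buckets, not instantaneous rates).
   def color_identifier(measured_rate, committed_rate, excess_rate):
       if measured_rate <= committed_rate:
           return "green"          # within the committed rate
       if measured_rate <= excess_rate:
           return "yellow"         # above committed, below excess
       return "red"                # above both rates

   # Hypothetical CoS identifier taken from the DSCP (the six most
   # significant bits of the IPv4 TOS / IPv6 Traffic Class octet).
   def cos_identifier(tos_byte):
       return tos_byte >> 2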
Complete data measurement: Complete data measurement is a data
   measurement method that monitors every packet and condenses a
   large amount of information about packet arrivals into a small
   number of statistics. The aim of "monitoring every packet" is to
   ensure that the information reported is not dependent on the
   application.
Color based data measurement: Color based data measurement is a data
   measurement method that monitors the data packets with the same
   color identifier. The color identifier could be "green", "yellow",
   or "red".
CoS based data measurement: CoS based data measurement is a data
   measurement method that monitors the data packets with the same
   CoS identifier. The CoS identifier could be the C-Tag Priority
   Code Point (PCP) or the DSCP.
CoS and Color based Data measurement: CoS and Color based data
   measurement is a data measurement method that monitors the data
   packets with a specific CoS Identifier and a specific Color
   Identifier as constraints. The measurement results under the CoS
   Identifier and Color Identifier constraints constitute a Network
   Performance matrix.
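For illustration only, the measurement results keyed by CoS
Identifier and Color Identifier could be organized as a matrix of
counters, as in the following non-normative sketch; the structure and
field names are assumptions of this example.

   # Hypothetical Network Performance matrix keyed by
   # (CoS Identifier, Color Identifier).
   from collections import defaultdict

   matrix = defaultdict(lambda: {"tx": 0, "rx": 0, "delays": []})

   def record(cos_id, color_id, tx, rx, delay):
       # Accumulate transmit/receive counts and per-packet delay
       # samples for one (CoS, Color) cell of the matrix.
       cell = matrix[(cos_id, color_id)]
       cell["tx"] += tx
       cell["rx"] += rx
       cell["delays"].append(delay)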
3. Traffic Management Architecture
A stream of packets is observed at Observation Points at the source
and destination endpoints. Two Observation Points can also be placed
at the same endpoint for node monitoring [I-D.ietf-ippm-alt-mark],
i.e., one at the ingress interface of the endpoint and the other at
the egress interface. A Selection Process inspects each packet to
determine whether or not it is to be selected for data analytics.
The Selection Process is part of the Metering Process, which
constructs a report stream on the selected packets as output, using
the Packet Content and possibly other information such as the arrival
timestamp. The report stream on selected packets is stored in the
Persistence Data Store for real-time data analysis or time series
data analysis.
The following figure indicates the sequence of the three processes
(Selection, Metering, and Storing).
+-----------+ +-----------+
|Persistence| |Persistence|
|Data Store | |Data Store |
Src Endpoint +-----^-----+ Dst Endpoint +------^----+
+------------------+ | +------------------+|
| Metering Process | | | Metering Process ||
Observed | +-----------+ | | | +-----------+ ||
Packet--->| | Selection |------+ Observed | | Selection | ||
Stream | | Process |--------Packet--->| | Process |-----+
| +-----------+ | Stream | +-----------+ |
+------------------+ +------------------+
3.1. Selection Process
This section defines the Selection Process and related objects.
Selection Process: A Selection Process takes the Observed Packet
Stream as its input and selects a subset of that stream as its
output.
Selection State: A Selection Process may maintain state information
   for its own use. At a given time, the Selection State may depend
   on packets observed at and before that time, and on other
   variables. Examples include sequence numbers of packets at the
   input of Selectors, a timestamp of observation of the packet at
   the Observation Point, and indicators of whether the packet was
   selected by a given Selector.
Selector: A Selector defines the action of a Selection Process on a
single packet of its input. If selected, the packet becomes an
element of the output Packet Stream.
The Selector can make use of the following information in
   determining whether a packet is selected (a non-normative sketch
   follows this list):
   * CoS Identifier in the Packet Content;
   * Traffic attribute such as the Color Identifier;
   * Combination of CoS Identifier and Color Identifier.
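A minimal sketch of such a Selector, assuming the packet metadata
already carries the CoS and color identifiers defined in Section 2,
is shown below; it is illustrative only, and the field and function
names are assumptions of this example.

   # Hypothetical Selector: a packet is selected if it matches the
   # configured CoS identifier, the configured Color identifier, or
   # both.
   def select(packet, wanted_cos=None, wanted_color=None):
       if wanted_cos is not None and packet["cos"] != wanted_cos:
           return False
       if wanted_color is not None and packet["color"] != wanted_color:
           return False
       return True   # selected packets enter the output Packet Stream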
3.2. Metering Process
A Metering Process selects packets from the Observed Packet Stream
using a Selection Process, and produces as output a Report Stream
concerning the selected packets.
4. Performance Measurement Models
4.1. Complete data measurement (Monitoring all the traffic)
This model uses the complete data traffic between the two endpoints
to compute loss, delay, and delay variation. This results in loss,
delay, and delay variation measurement for the entire traffic in the
network in one direction. It is primarily used for backbone traffic,
where traffic from different services is aggregated and sent into the
core network. Because all packets are counted, this gives the overall
measurement from one endpoint to the other.
4.2. Color based data measurement
This is the same as the complete data measurement described above,
with one difference: only the data packets with a specific color
identifier are monitored.
In this model, packets are counted in the following way: specific
data traffic with a given color identifier is counted between the two
endpoints for loss, delay, and delay variation measurement. One
example of color based data measurement is to count two types of
colored traffic:
o Count all committed traffic between the two endpoints for loss
measurement.
o Count all excess traffic, beyond the committed traffic, for the
specific network.
o The probe carries the timestamps, which can later be used for
calculating the service outage.
o This method can be used for mapping the overall customer traffic
along with the EIR; based on the EIR, the provider can increase the
bandwidth and charge the customer accordingly.
When both of these counts are combined, the result is the complete
traffic model described in the previous section, as illustrated in
the sketch below.
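A minimal, non-normative sketch of that relation, using hypothetical
counter names and example values, is:

   # Hypothetical per-color transmit counters at one endpoint.
   tx_committed = 120000   # "green" packets, within the committed rate
   tx_excess    =  30000   # "yellow" packets, beyond the committed rate

   # Combining the two color-based counts reproduces the complete
   # data measurement of the previous section.
   tx_total = tx_committed + tx_excess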
In practice, the color of the traffic can be conveyed by any
mechanism based on the network encapsulation. As long as packets can
be treated differently based on the underlying encapsulation, this
mechanism can be used.
This can be used for measuring the whole traffic of a customer who
does not want CoS-level measurement. It is well suited to providers
who extend bandwidth to smaller providers, point-to-point services,
etc.
4.3. CoS based Data measurement
This model uses the data traffic flowing in a specific CoS to measure
loss, delay, and delay variation in the network. Based on the class
of traffic, the transmitted and received packets are counted to
calculate the packets transferred per service level. The timestamp is
captured along with the packet count to measure service downtime.
This model measures the performance per service level. The data can
be stored on the routers and used to plot live analytics.
The primary use of this kind of measurement is to measure packet
loss, delay, and delay variation for a specific service that needs to
meet network performance requirements. The service could be a point-
to-point Layer 2 service or an MPLS-based service.
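A non-normative sketch of the per-CoS bookkeeping described above,
with hypothetical structure and names, might look as follows.

   # Hypothetical per-CoS (per service level) samples with timestamps,
   # usable for loss per class and for locating service downtime.
   cos_counters = {}   # cos_id -> list of (timestamp, tx, rx) samples

   def sample(cos_id, timestamp, tx, rx):
       cos_counters.setdefault(cos_id, []).append((timestamp, tx, rx))

   def loss_per_cos(cos_id):
       samples = cos_counters.get(cos_id, [])
       tx = sum(s[1] for s in samples)
       rx = sum(s[2] for s in samples)
       return tx - rx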
4.4. CoS and Color based Data measurement
This model uses a combination of color based data measurement and CoS
based data measurement. Packets are counted for a specific CoS with a
specific color. This can count both in-profile packets, which are
green, and out-of-profile packets, which are yellow. Red packets,
which do not meet the network performance requirements, are not
counted. Packets are counted per service level against CIR and EIR,
along with timestamps, to find the service outage and loss.
The per-service-level counting by CoS and color gives more granular
data for plotting service graphs. If some service is continuously
exceeding its bandwidth, this data can be used to charge the end
customer for extra bandwidth usage or to increase the bandwidth on a
usage basis.
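A minimal sketch combining both constraints, counting only green
(in-profile) and yellow (out-of-profile) packets per service level,
is given below; it is illustrative only, and the field names are
assumptions of this example.

   # Hypothetical CoS-and-Color counting: red packets are excluded,
   # green and yellow packets are counted per (CoS, color) key.
   def count_cos_color(packets):
       counts = {}   # (cos_id, color) -> packet count
       for p in packets:
           if p["color"] == "red":
               continue
           key = (p["cos"], p["color"])
           counts[key] = counts.get(key, 0) + 1
       return counts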
5. Active and Passive performance measurements
This model reinforces the use of well-known methodologies for passive
performance measurements. A very simple, flexible, and
straightforward mechanism is presented in [I-D.ietf-ippm-alt-mark].
The basic idea is to virtually split traffic flows into consecutive
batches of packets: each block represents a measurable entity
unambiguously recognizable thanks to the alternate marking. This
approach, called the Alternate Marking method, is efficient both for
passive and for active performance monitoring.
Most applications require passive packet loss measurement for better
accuracy. In some cases, however, active delay measurements alone
(e.g., TWAMP or OWAMP) are sufficient.
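A rough, non-normative sketch of the alternate-marking idea is shown
below; it is a simplification of [I-D.ietf-ippm-alt-mark] (which
defines the marking and period handling normatively), and the period
value and names are assumptions of this example.

   # Simplified sketch: the mark bit alternates between consecutive
   # batches of packets, and packets are counted per batch value, so
   # that each batch is a measurable entity at both endpoints.
   MARK_PERIOD = 60.0   # seconds per batch (assumed value)

   def mark_and_count(packet_times, counters):
       # counters: {0: packets in unmarked batches, 1: marked batches}
       for t in packet_times:
           mark = int(t // MARK_PERIOD) % 2
           counters[mark] = counters.get(mark, 0) + 1
       return counters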
6. Use Cases
Consider a provider running a point-to-point service between routers
A and B for its customer "X". Customer "X" has voice traffic that
requires special treatment and also requires attention for database
traffic. Customer "X" has an SLA with the provider. The challenge
faced by the provider is how to measure the traffic of customer "X"
for each class and calculate the bandwidth; moreover, the provider
has to see whether "X" is sending traffic that exceeds the agreed
level so that it can set the tariff accordingly. This problem is
solved by the above models, which measure the packets for each class
of traffic and tabulate the data. At a later point in time, this data
can be pulled for evaluation.
+-------+ +-------+
| | | |
| +--------------+ |
| | P2P service | |
+-------+ +-------+
Router A Router B
Figure 1: P2P
The same considerations are applicable in a multipoint-to-multipoint
scenario (e.g., VPN or Data Center interconnections). In this case,
Customer "X" has multiple ingress endpoints and multiple egress
endpoints. The proposed matrix model is composed of the flows of "X"
in the multipoint scenario together with the class-of-service and
color classification. The SLA matrix is then a reference for the
analysis and evaluation phase.
+--+ +--+
| | | |
+--+ +--+
Router A1 Router B1
+--+ +--+
| | MP2MP service | |
+--+ +--+
Router A2 Router B2
. .
. .
. .
+--+ +--+
| | | |
+--+ +--+
Router An Router Bn
Figure 2: MP2MP
7. Acknowledgements
We would like to thank Brian Trammell for giving us the opportunity
to present our draft. We would also like to thank Greg Mirsky for his
comments.
8. Security Considerations
This document does not introduce security issues beyond those
discussed in [I-D.ietf-ippm-alt-mark].
9. References
9.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, March 1997.
9.2. Informative References
[I-D.ietf-ippm-alt-mark]
Fioccola, G., Capello, A., Cociglio, M., Castaldelli, L.,
Chen, M., Zheng, L., Mirsky, G., and T. Mizrahi,
"Alternate Marking method for passive performance
monitoring", draft-ietf-ippm-alt-mark-04 (work in
progress), March 2017.
[RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black,
"Definition of the Differentiated Services Field (DS Field)
in the IPv4 and IPv6 Headers", RFC 2474, December 1998.
[RFC2697] Heinanen, J. and R. Guerin, "A Single Rate Three Color
Marker", RFC 2697, September 1999.
[RFC2698] Heinanen, J. and R. Guerin, "A Two Rate Three Color
Marker", RFC 2698, September 1999.
Authors' Addresses
Bharat M Gaonkar
Juniper Networks
1133 Innovation Way
Sunnyvale, California 94089
USA
Email: gbharat@juniper.net
Sudhin Jacob
Juniper Networks
1133 Innovation Way
Sunnyvale, California 94089
USA
Email: gbharat@juniper.net
Giuseppe Fioccola
Telecom Italia
Via Reiss Romoli, 274
Torino 10148
Italy
Email: giuseppe.fioccola@telecomitalia.it
Qin Wu
Huawei
101 Software Avenue, Yuhua District
Nanjing, Jiangsu 210012
China
Email: bill.wu@huawei.com
Praveen Ananthasankaran
Nokia
Manyata Embassy Tech Park, Silver Oak (Wing A),
Outer Ring Road, Nagawara
Bangalore 560045
India
Email: praveen.ananthasankaran@nokia.com