Internet-Draft Bhuvaneswaran Vengainathan
Network Working Group Anton Basil
Intended Status: Informational Veryx Technologies
Expires: November 25, 2018 Mark Tassinari
Hewlett-Packard
Vishwas Manral
Nano Sec
Sarah Banks
VSS Monitoring
May 25, 2018
Benchmarking Methodology for SDN Controller Performance
draft-ietf-bmwg-sdn-controller-benchmark-meth-09
Abstract
This document defines methodologies for benchmarking control plane
performance of SDN controllers. The SDN controller is a core component
in software-defined networking architecture that controls the
network behavior. SDN controllers have been implemented with many
varying designs in order to achieve their intended network
functionality. Hence, the authors have taken the approach of
considering an SDN controller as a black box, defining the
methodology in a manner that is agnostic to protocols and network
services supported by controllers. The intent of this document is to
provide a method to measure the performance of all controller
implementations.
Status of this Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current.
Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other documents
at any time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on November 25, 2018.
Copyright Notice
Copyright (c) 2018 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with
respect to this document. Code Components extracted from this
document must include Simplified BSD License text as described in
Section 4.e of the Trust Legal Provisions and are provided without
warranty as described in the Simplified BSD License.
Table of Contents
1. Introduction
2. Scope
3. Test Setup
   3.1. Test setup - Controller working in Standalone Mode
   3.2. Test setup - Controller working in Cluster Mode
4. Test Considerations
   4.1. Network Topology
   4.2. Test Traffic
   4.3. Test Emulator Requirements
   4.4. Connection Setup
   4.5. Measurement Point Specification and Recommendation
   4.6. Connectivity Recommendation
   4.7. Test Repeatability
   4.8. Test Reporting
5. Benchmarking Tests
   5.1. Performance
      5.1.1. Network Topology Discovery Time
      5.1.2. Asynchronous Message Processing Time
      5.1.3. Asynchronous Message Processing Rate
      5.1.4. Reactive Path Provisioning Time
      5.1.5. Proactive Path Provisioning Time
      5.1.6. Reactive Path Provisioning Rate
      5.1.7. Proactive Path Provisioning Rate
      5.1.8. Network Topology Change Detection Time
   5.2. Scalability
      5.2.1. Control Session Capacity
      5.2.2. Network Discovery Size
      5.2.3. Forwarding Table Capacity
   5.3. Security
      5.3.1. Exception Handling
      5.3.2. Denial of Service Handling
   5.4. Reliability
      5.4.1. Controller Failover Time
      5.4.2. Network Re-Provisioning Time
6. References
   6.1. Normative References
   6.2. Informative References
7. IANA Considerations
8. Security Considerations
9. Acknowledgments
Appendix A Benchmarking Methodology using OpenFlow Controllers
   A.1. Protocol Overview
   A.2. Messages Overview
   A.3. Connection Overview
   A.4. Performance Benchmarking Tests
      A.4.1. Network Topology Discovery Time
      A.4.2. Asynchronous Message Processing Time
      A.4.3. Asynchronous Message Processing Rate
      A.4.4. Reactive Path Provisioning Time
      A.4.5. Proactive Path Provisioning Time
      A.4.6. Reactive Path Provisioning Rate
      A.4.7. Proactive Path Provisioning Rate
      A.4.8. Network Topology Change Detection Time
   A.5. Scalability
      A.5.1. Control Sessions Capacity
      A.5.2. Network Discovery Size
      A.5.3. Forwarding Table Capacity
   A.6. Security
      A.6.1. Exception Handling
      A.6.2. Denial of Service Handling
   A.7. Reliability
      A.7.1. Controller Failover Time
      A.7.2. Network Re-Provisioning Time
Authors' Addresses
1. Introduction
This document provides generic methodologies for benchmarking SDN
controller performance. An SDN controller may support many
northbound and southbound protocols, implement a wide range of
applications, and work solely, or as a group to achieve the desired
functionality. This document considers an SDN controller as a black
box, regardless of design and implementation. The tests defined in
this document can be used to benchmark an SDN controller for
performance, scalability, reliability, and security, independent of
northbound and southbound protocols. Terminology related to
benchmarking SDN controllers is described in the companion
terminology document [I-D.sdn-controller-benchmark-term]. These
tests can be performed on an SDN controller running as a virtual
machine (VM) instance or on a bare metal server. This document is
intended for those who want to measure the performance of an SDN
controller as well as to compare the performance of various SDN
controllers.
Conventions used in this document
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in
BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
capitals, as shown here.
2. Scope
This document defines a methodology to measure the networking metrics
of SDN controllers. For the purpose of this memo, the SDN controller
is a function that manages and controls Network Devices. Any SDN
controller without a control capability is out of scope for this
memo. The tests defined in this document enable benchmarking of SDN
Controllers in two ways: as a standalone controller and as a cluster
of homogeneous controllers. These tests are recommended for
execution in lab environments rather than in live network
deployments. Performance benchmarking of a federation of
controllers, i.e., a set of SDN controllers managing different
domains, is beyond the scope of this document.
3. Test Setup
The tests defined in this document enable measurement of an SDN
controller's performance in standalone mode and cluster mode. This
section defines common reference topologies that are later referred
to in individual tests.
3.1. Test setup - Controller working in Standalone Mode
+-----------------------------------------------------------+
| Application Plane Test Emulator |
| |
| +-----------------+ +-------------+ |
| | Application | | Service | |
| +-----------------+ +-------------+ |
| |
+-----------------------------+(I2)-------------------------+
|
| (Northbound interfaces)
+-------------------------------+
| +----------------+ |
| | SDN Controller | |
| +----------------+ |
| |
| Device Under Test (DUT) |
+-------------------------------+
| (Southbound interfaces)
|
+-----------------------------+(I1)-------------------------+
| |
| +-----------+ +-----------+ |
| | Network | | Network | |
| | Device 2 |--..-| Device n-1| |
| +-----------+ +-----------+ |
| / \ / \ |
| / \ / \ |
| l0 / X \ ln |
| / / \ \ |
| +-----------+ +-----------+ |
| | Network | | Network | |
| | Device 1 |..| Device n | |
| +-----------+ +-----------+ |
| | | |
| +---------------+ +---------------+ |
| | Test Traffic | | Test Traffic | |
| | Generator | | Generator | |
| | (TP1) | | (TP2) | |
| +---------------+ +---------------+ |
| |
| Forwarding Plane Test Emulator |
+-----------------------------------------------------------+
Figure 1
3.2. Test setup - Controller working in Cluster Mode
+-----------------------------------------------------------+
| Application Plane Test Emulator |
| |
| +-----------------+ +-------------+ |
| | Application | | Service | |
| +-----------------+ +-------------+ |
| |
+-----------------------------+(I2)-------------------------+
|
| (Northbound interfaces)
+---------------------------------------------------------+
| |
| ------------------ ------------------ |
| | SDN Controller 1 | <--E/W--> | SDN Controller n | |
| ------------------ ------------------ |
| |
| Device Under Test (DUT) |
+---------------------------------------------------------+
| (Southbound interfaces)
|
+-----------------------------+(I1)-------------------------+
| |
| +-----------+ +-----------+ |
| | Network | | Network | |
| | Device 2 |--..-| Device n-1| |
| +-----------+ +-----------+ |
| / \ / \ |
| / \ / \ |
| l0 / X \ ln |
| / / \ \ |
| +-----------+ +-----------+ |
| | Network | | Network | |
| | Device 1 |..| Device n | |
| +-----------+ +-----------+ |
| | | |
| +---------------+ +---------------+ |
| | Test Traffic | | Test Traffic | |
| | Generator | | Generator | |
| | (TP1) | | (TP2) | |
| +---------------+ +---------------+ |
| |
| Forwarding Plane Test Emulator |
+-----------------------------------------------------------+
Figure 2
4. Test Considerations
4.1. Network Topology
The test cases SHOULD use a leaf-spine topology with at least 2
Network Devices in the topology for benchmarking. The test traffic
generators TP1 and TP2 SHOULD be connected to the leaf Network
Device 1 and the leaf Network Device n. To achieve a complete
performance characterization of the SDN controller, it is
recommended that the controller be benchmarked for many network
topologies and a varying number of Network Devices. Further, care
should be taken to make sure that a loop prevention mechanism is
enabled either in the SDN controller, or in the network when the
topology contains redundant network paths.
4.2. Test Traffic
Test traffic is used to notify the controller about the asynchronous
arrival of new flows. The test cases SHOULD use frame sizes of 128,
512 and 1508 bytes for benchmarking. Tests using jumbo frames are
optional.
4.3. Test Emulator Requirements
The Test Emulator SHOULD timestamp the control messages transmitted
to and received from the controller on the established network
connections. The test cases use these values to compute the
controller processing time.
4.4. Connection Setup
There may be controller implementations that support unencrypted and
encrypted network connections with Network Devices. Further, the
controller may have backward compatibility with Network Devices
running older versions of southbound protocols. It may be useful to
measure the controller performance with one or more of the
applicable connection setup methods defined below; a minimal
connection sketch follows the list. For cases with encrypted
communications between the controller and the switch, key management
and key exchange MUST take place before any performance or benchmark
measurements.
1. Unencrypted connection with Network Devices, running same
protocol version.
2. Unencrypted connection with Network Devices, running different
protocol versions.
Example:
a. Controller running current protocol version and switch
running older protocol version
b. Controller running older protocol version and switch
running current protocol version
3. Encrypted connection with Network Devices, running same
protocol version
4. Encrypted connection with Network Devices, running different
protocol versions.
Example:
a. Controller running current protocol version and switch
running older protocol version
b. Controller running older protocol version and switch
running current protocol version
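The following minimal Python sketch, written from an emulated Network
Device's perspective, illustrates the unencrypted and encrypted
connection setup families above. The controller address, port, and
certificate file names are assumptions for illustration; the actual
southbound port depends on the protocol in use (e.g., TCP port 6653
for OpenFlow).

   import socket
   import ssl

   CONTROLLER = "192.0.2.10"   # assumed controller address (TEST-NET-1)
   PORT = 6653                 # assumed southbound port (OpenFlow default)

   def connect_unencrypted(host=CONTROLLER, port=PORT):
       # Cases 1 and 2: plain TCP connection from the Network Device.
       return socket.create_connection((host, port), timeout=5)

   def connect_encrypted(host=CONTROLLER, port=PORT):
       # Cases 3 and 4: TLS connection; keys and certificates MUST be
       # provisioned before any benchmark measurement starts.
       ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
       ctx.load_verify_locations(cafile="controller-ca.crt")  # assumed file
       ctx.load_cert_chain(certfile="device.crt", keyfile="device.key")
       raw = socket.create_connection((host, port), timeout=5)
       return ctx.wrap_socket(raw, server_hostname=host)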
4.5. Measurement Point Specification and Recommendation
The measurement accuracy depends on several factors including the
point of observation where the indications are captured. For
example, the notification can be observed at the controller or test
emulator. The test operator SHOULD make the observations/
measurements at the interfaces of the test emulator unless it is
explicitly stated otherwise in the individual test. In any case,
the locations of measurement points MUST be reported.
4.6. Connectivity Recommendation
The SDN controller in the test setup SHOULD be connected directly
with the forwarding and the management plane test emulators to avoid
any delays or failure introduced by the intermediate devices during
benchmarking tests. When the controller is implemented as a virtual
machine, details of the physical and logical connectivity MUST be
reported.
4.7. Test Repeatability
To increase the confidence in the measured results, it is
RECOMMENDED that each test be repeated a minimum of 10 times.
4.8. Test Reporting
Each test has a reporting format that contains some global and
identical reporting components, and some individual components that
are specific to individual tests. The following test configuration
parameters and controller settings parameters MUST be reflected in
the test report.
Test Configuration Parameters:
1. Controller name and version
2. Northbound protocols and versions
3. Southbound protocols and versions
4. Controller redundancy mode (Standalone or Cluster Mode)
5. Connection setup (Unencrypted or Encrypted)
6. Network Device Type (Physical or Virtual or Emulated)
7. Number of Nodes
8. Number of Links
9. Dataplane Test Traffic Type
10. Controller System Configuration (e.g., Physical or Virtual
Machine, CPU, Memory, Caches, Operating System, Interface
Speed, Storage)
11. Reference Test Setup (e.g., the setup in Section 3.1)
Controller Settings Parameters:
1. Topology re-discovery timeout
2. Controller redundancy mode (e.g., active-standby)
3. Controller state persistence enabled/disabled
To ensure the repeatability of the tests, the following capabilities
of the test emulator SHOULD be reported:
1. Maximum number of Network Devices that the forwarding plane
emulates
2. Control message processing time (e.g., Topology Discovery
Messages)
One way to determine the above two values is to simulate the
required control sessions and messages from the control plane.
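As one hypothetical way to baseline the second value, the sketch
below times message deliveries over a local socket pair, bounding
the emulator's own per-message overhead; it is a sketch of the idea,
not part of the methodology.

   import socket
   import time

   def emulator_overhead(n=1000, payload=b"\x00" * 64):
       # Deliver n messages over a local socket pair; the average
       # delivery time bounds the emulator's per-message processing
       # overhead (a sketch, ignoring partial reads).
       a, b = socket.socketpair()
       t0 = time.perf_counter()
       for _ in range(n):
           a.sendall(payload)
           b.recv(len(payload))      # emulated control-plane peer
       elapsed = time.perf_counter() - t0
       a.close()
       b.close()
       return elapsed / n            # seconds per message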
5. Benchmarking Tests
5.1. Performance
5.1.1. Network Topology Discovery Time
Objective:
The time taken by controller(s) to determine the complete network
topology, defined as the interval starting with the first discovery
message from the controller(s) at its Southbound interface, ending
with all features of the static topology determined.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller MUST support network discovery.
2. The tester should be able to retrieve the discovered topology
   information through either the controller's management interface
   or its northbound interface, to determine whether the discovery
   was successful and complete.
3. Ensure that the controller's topology re-discovery timeout has
been set to the maximum value to avoid initiation of re-discovery
process in the middle of the test.
Procedure:
1. Ensure that the controller is operational, its network
applications, northbound and southbound interfaces are up and
running.
2. Establish the network connections between the controller and the
   Network Devices.
3. Record the time for the first discovery message (Tm1) received
from the controller at forwarding plane test emulator interface
I1.
4. Query the controller every t seconds (RECOMMENDED value for t is
3) to obtain the discovered network topology information through
the northbound interface or the management interface and compare
it with the deployed network topology information.
5. Stop the trial when the discovered topology information matches
   the deployed network topology, or when the discovered topology
   information returns the same details for 3 consecutive queries.
6. Record the time of the last discovery message (Tmn) sent to the
   controller from the forwarding plane test emulator interface (I1)
   when the trial completes successfully (e.g., when the topology
   matches).
Measurement:
Topology Discovery Time Tr1 = Tmn - Tm1.

                                          Tr1 + Tr2 + Tr3 .. Trn
Average Topology Discovery Time (TDm) = -----------------------
                                               Total Trials

                                           SUM[SQUAREOF(Tri-TDm)]
Topology Discovery Time Variance (TDv) = -----------------------
                                              Total Trials - 1
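For illustration, the following Python sketch computes both
statistics from a list of per-trial values (the sample values are
invented); statistics.variance uses the (Total Trials - 1)
denominator, matching the formula above.

   import statistics

   trials = [2.31, 2.27, 2.40, 2.35]  # example Tr values (s), one per trial

   tdm = statistics.mean(trials)      # Average Topology Discovery Time
   tdv = statistics.variance(trials)  # Topology Discovery Time Variance
   print(f"TDm = {tdm:.3f} s, TDv = {tdv:.5f} s^2")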
Reporting Format:
The Topology Discovery Time results MUST be reported in the format
of a table, with a row for each successful iteration. The last row
of the table indicates the Topology Discovery Time variance and the
previous row indicates the average Topology Discovery Time.
If this test is repeated with a varying number of nodes over the
same topology, the results SHOULD be reported in the form of a
graph. The X coordinate SHOULD be the number of nodes (N), and the Y
coordinate SHOULD be the average Topology Discovery Time.
5.1.2. Asynchronous Message Processing Time
Objective:
The time taken by controller(s) to process an asynchronous message,
defined as the interval starting with an asynchronous message from a
network device after the discovery of all the devices by the
controller(s), ending with a response message from the controller(s)
at its Southbound interface.
Reference Test Setup:
This test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller MUST have successfully completed the network
topology discovery for the connected Network Devices.
Procedure:
1. Generate asynchronous messages from every connected Network
Device, to the SDN controller, one at a time in series from the
forwarding plane test emulator for the trial duration.
2. Record every request transmit time (T1) and the corresponding
response received time (R1) at the forwarding plane test emulator
interface (I1) for every successful message exchange.
Measurement:
SUM{Ri} - SUM{Ti}
Asynchronous Message Processing Time Tr1 = -----------------------
Nrx
Where Nrx is the total number of successful messages exchanged.
Tr1 + Tr2 + Tr3..Trn
Average Asynchronous Message Processing Time = --------------------
Total Trials
Asynchronous Message Processing Time Variance (TAMv) =
SUM[SQUAREOF(Tri-TAMm)]
----------------------
Total Trials -1
Where TAMm is the Average Asynchronous Message Processing Time.
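A minimal sketch of this computation, assuming the emulator records
matched (Ti, Ri) timestamp pairs for successful exchanges and tracks
the transmitted count (Ntx) separately:

   def async_processing_time(pairs, ntx):
       # pairs: list of (Ti, Ri) timestamp tuples, successful only
       nrx = len(pairs)
       tr = sum(r - t for t, r in pairs) / nrx  # (SUM{Ri}-SUM{Ti})/Nrx
       loss_pct = (1 - nrx / ntx) * 100         # unsuccessful exchanges, %
       return tr, nrx, loss_pct

   # Example: two successful exchanges out of three transmitted.
   tr1, nrx, loss = async_processing_time(
       [(0.000, 0.004), (0.010, 0.013)], ntx=3)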
Reporting Format:
The Asynchronous Message Processing Time results MUST be reported in
the format of a table with a row for each iteration. The last row of
the table indicates the Asynchronous Message Processing Time
variance and the previous row indicates the average Asynchronous
Message Processing Time.
The report SHOULD capture the following information in addition to
the configuration parameters captured in section 4.8.
- Successful messages exchanged (Nrx)
- Percentage of unsuccessful messages exchanged, computed using the
  formula ((1 - Nrx/Ntx) * 100), where Ntx is the total number of
  messages transmitted to the controller.
If this test is repeated with a varying number of nodes over the
same topology, the results SHOULD be reported in the form of a
graph. The X coordinate SHOULD be the number of nodes (N), and the Y
coordinate SHOULD be the average Asynchronous Message Processing
Time.
5.1.3. Asynchronous Message Processing Rate
Objective:
Measure the number of responses to asynchronous messages (such as
new flow arrival notification message, link down, etc.) for which
the controller(s) performed processing and replied with a valid and
productive (non-trivial) response message.
This test will measure two benchmarks on Asynchronous Message
Processing Rate using a single procedure. The two benchmarks are
(see section 2.3.1.3 of [I-D.sdn-controller-benchmark-term]):
1. Loss-free Asynchronous Message Processing Rate
2. Maximum Asynchronous Message Processing Rate
These two benchmarks are determined through a series of trials in
which messages are sent to the controller(s) and the responses from
the controller(s) are counted over the trial duration. The message
response rate and the message loss ratio are calculated for each
trial.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller(s) MUST have successfully completed the network
topology discovery for the connected Network Devices.
2. Choose and record the Trial Duration (Td), the sending rate step-
   size (STEP), the tolerance on equality for two consecutive trials
   (P%), and the maximum possible message sending rate (Ntx1/Td).
Procedure:
1. Generate asynchronous messages continuously at the maximum
   possible rate on the established connections from all the
   emulated/simulated Network Devices for the given Trial Duration
   (Td).
2. Record the total number of responses received from the controller
(Nrx1) as well as the number of messages sent (Ntx1) to the
controller within the trial duration (Td).
3. Calculate the Asynchronous Message Processing Rate (Tr1) and the
Message Loss Ratio (Lr1). Ensure that the controller(s) have
returned to normal operation.
4. Repeat the trial by reducing the asynchronous message sending rate
   used in the last trial by the STEP size.
5. Continue repeating the trials and reducing the sending rate until
both the maximum value of Nrxn (number of responses received from
the controller) and the Nrxn corresponding to zero loss ratio have
been found.
6. The trials corresponding to the benchmark levels MUST be repeated
using the same asynchronous message rates until the responses
received from the controller are equal (+/-P%) for two consecutive
trials.
7. Record the number of responses received from the controller (Nrxn)
as well as the number of messages sent (Ntxn) to the controller in
the last trial.
Measurement:
Nrxn
Asynchronous Message Processing Rate Trn = -----
Td
Maximum Asynchronous Message Processing Rate = MAX(Trn) for all n
Nrxn
Asynchronous Message Loss Ratio Lrn = 1 - -----
Ntxn
Loss-free Asynchronous Message Processing Rate = MAX(Trn) given
Lrn=0
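The trial loop above can be sketched as follows; send_burst() is a
placeholder for the emulator action that offers messages at a given
rate for Td seconds and returns (Ntx, Nrx), and the step 6
confirmation re-runs are omitted for brevity.

   def find_rate_benchmarks(send_burst, max_rate, step, td):
       results = []                         # (Trn, Lrn) per trial
       rate = max_rate
       while rate > 0:
           ntx, nrx = send_burst(rate, td)  # one trial at this rate
           trn = nrx / td                   # Processing Rate
           lrn = 1 - nrx / ntx              # Message Loss Ratio
           results.append((trn, lrn))
           if lrn == 0:                     # loss-free level reached
               break
           rate -= step                     # step 4: reduce the rate
       maximum = max(t for t, _ in results)
       loss_free = max((t for t, l in results if l == 0), default=None)
       return maximum, loss_free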
Reporting Format:
The Asynchronous Message Processing Rate results MUST be reported in
the format of a table with a row for each trial.
The table should report the following information in addition to the
configuration parameters captured in section 4.8, with columns:
- Offered rate (Ntxn/Td)
- Asynchronous Message Processing Rate (Trn)
- Loss Ratio (Lr)
- Benchmark at this iteration (blank for none, Maximum, Loss-Free)
The results MAY be presented in the form of a graph. The X axis
SHOULD be the Offered rate, and dual Y axes would represent
Asynchronous Message Processing Rate and Loss Ratio, respectively.
If this test is repeated with a varying number of nodes over the
same topology, the results SHOULD be reported in the form of a
graph. The X axis SHOULD be the number of nodes (N), and the Y axis
SHOULD be the Asynchronous Message Processing Rate. Both the Maximum
and the Loss-Free Rates should be plotted for each N.
5.1.4. Reactive Path Provisioning Time
Objective:
The time taken by the controller to set up a path reactively between
source and destination node, defined as the interval starting with
the first flow provisioning request message received by the
controller(s) at its Southbound interface, ending with the last flow
provisioning response message sent from the controller(s) at its
Southbound interface.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document. The number of Network Devices in
the path is a parameter of the test that may be varied from 2 to the
maximum discovery size in repetitions of this test.
Prerequisite:
1. The controller MUST contain the network topology information for
the deployed network topology.
2. The controller should know the location of the destination
   endpoint for which the path has to be provisioned. This can be
   achieved through dynamic learning or static provisioning.
3. Ensure that the default action for 'flow miss' in Network Device
is configured to 'send to controller'.
4. Ensure that each Network Device in a path requires the controller
to make the forwarding decision while paving the entire path.
Procedure:
1. Send a single traffic stream from the test traffic generator TP1
to test traffic generator TP2.
2. Record the time of the first flow provisioning request message
sent to the controller (Tsf1) from the Network Device at the
forwarding plane test emulator interface (I1).
3. Wait for the arrival of the first traffic frame at the Traffic
   Endpoint TP2 or the expiry of the trial duration (Td).
4. Record the time of the last flow provisioning response message
received from the controller (Tdf1) to the Network Device at the
forwarding plane test emulator interface (I1).
Measurement:
Reactive Path Provisioning Time Tr1 = Tdf1-Tsf1.
                                           Tr1 + Tr2 + Tr3 .. Trn
Average Reactive Path Provisioning Time = -----------------------
                                                 Total Trials

                                                   SUM[SQUAREOF(Tri-TRPm)]
Reactive Path Provisioning Time Variance (TRPv) = -----------------------
                                                       Total Trials - 1

Where TRPm is the Average Reactive Path Provisioning Time.
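As a sketch, one trial's value can be derived from a time-ordered
log of flow provisioning messages captured at interface I1; the
event labels below are illustrative stand-ins, not protocol fields.

   def reactive_provisioning_time(events):
       # events: list of (timestamp, kind) captured at I1, one trial
       tsf1 = min(t for t, k in events if k == "flow_request")   # first
       tdf1 = max(t for t, k in events if k == "flow_response")  # last
       return tdf1 - tsf1                                # Tr = Tdf1-Tsf1

   log = [(0.000, "flow_request"), (0.003, "flow_response"),
          (0.004, "flow_request"), (0.009, "flow_response")]
   tr1 = reactive_provisioning_time(log)   # 0.009 s in this example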
Reporting Format:
The Reactive Path Provisioning Time results MUST be reported in the
format of a table with a row for each iteration. The last row of the
table indicates the Reactive Path Provisioning Time variance and the
previous row indicates the Average Reactive Path Provisioning Time.
The report SHOULD capture the following information in addition to
the configuration parameters captured in section 4.8.
- Number of Network Devices in the path
5.1.5. Proactive Path Provisioning Time
Objective:
The time taken by the controller to set up a path proactively between
source and destination node, defined as the interval starting with
the first proactive flow provisioned in the controller(s) at its
Northbound interface, ending with the last flow provisioning
response message sent from the controller(s) at its Southbound
interface.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller MUST contain the network topology information for
the deployed network topology.
2. The controller should know the location of the destination
   endpoint for which the path has to be provisioned. This can be
   achieved through dynamic learning or static provisioning.
3. Ensure that the default action for flow miss in Network Device is
'drop'.
Procedure:
1. Send a single traffic stream from test traffic generator TP1 to
TP2.
2. Install the flow entries to reach from the test traffic generator
   TP1 to the test traffic generator TP2 through the controller's
   northbound or management interface.
3. Wait for the arrival of the first traffic frame at the test
   traffic generator TP2 or the expiry of the trial duration (Td).
4. Record the time when the proactive flow is provisioned in the
Controller (Tsf1) at the management plane test emulator interface
I2.
5. Record the time of the last flow provisioning message received
from the controller (Tdf1) at the forwarding plane test emulator
interface I1.
Measurement:
Proactive Flow Provisioning Time Tr1 = Tdf1-Tsf1.
                                            Tr1 + Tr2 + Tr3 .. Trn
Average Proactive Path Provisioning Time = -----------------------
                                                  Total Trials

                                                    SUM[SQUAREOF(Tri-TPPm)]
Proactive Path Provisioning Time Variance (TPPv) = -----------------------
                                                        Total Trials - 1

Where TPPm is the Average Proactive Path Provisioning Time.
Reporting Format:
The Proactive Path Provisioning Time results MUST be reported in the
format of a table with a row for each iteration. The last row of the
table indicates the Proactive Path Provisioning Time variance and
the previous row indicates the Average Proactive Path Provisioning
Time.
The report SHOULD capture the following information in addition to
the configuration parameters captured in section 4.8.
- Number of Network Devices in the path
5.1.6. Reactive Path Provisioning Rate
Objective:
Measure the maximum number of independent paths a controller can
concurrently establish per second between source and destination
nodes reactively, defined as the number of paths provisioned per
second by the controller(s) at its Southbound interface for the flow
provisioning requests received for path provisioning at its
Southbound interface between the start of the test and the expiry of
given trial duration.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller MUST contain the network topology information for
the deployed network topology.
2. The controller should know the location of the destination
   addresses for which the paths have to be provisioned. This can be
   achieved through dynamic learning or static provisioning.
3. Ensure that the default action for 'flow miss' in Network Device
is configured to 'send to controller'.
4. Ensure that each Network Device in a path requires the controller
to make the forwarding decision while provisioning the entire
path.
Procedure:
1. Send traffic with unique source and destination addresses from
test traffic generator TP1.
2. Record the total number of unique traffic frames (Ndf) received
   at the test traffic generator TP2 within the trial duration (Td).
Measurement:
Ndf
Reactive Path Provisioning Rate Tr1 = ------
Td
                                           Tr1 + Tr2 + Tr3 .. Trn
Average Reactive Path Provisioning Rate = ------------------------
                                                 Total Trials

                                                   SUM[SQUAREOF(Tri-RPPm)]
Reactive Path Provisioning Rate Variance (RPPv) = -----------------------
                                                       Total Trials - 1

Where RPPm is the Average Reactive Path Provisioning Rate.
Reporting Format:
The Reactive Path Provisioning Rate results MUST be reported in the
format of a table with a row for each iteration. The last row of the
table indicates the Reactive Path Provisioning Rate variance and the
previous row indicates the Average Reactive Path Provisioning Rate.
The report SHOULD capture the following information in addition to
the configuration parameters captured in section 4.8.
- Number of Network Devices in the path
- Offered rate
5.1.7. Proactive Path Provisioning Rate
Objective:
Measure the maximum number of independent paths a controller can
concurrently establish per second between source and destination
nodes proactively, defined as the number of paths provisioned per
second by the controller(s) at its Southbound interface for the
paths requested in its Northbound interface between the start of the
test and the expiry of the given trial duration. The measurement is
based on dataplane observations of successful path activation.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller MUST contain the network topology information for
the deployed network topology.
2. The controller should know the location of the destination
   addresses for which the paths have to be provisioned. This can be
   achieved through dynamic learning or static provisioning.
3. Ensure that the default action for flow miss in Network Device is
'drop'.
Procedure:
1. Send traffic continuously with unique source and destination
addresses from test traffic generator TP1.
2. Install the corresponding flow entries to reach from the simulated
   sources at the test traffic generator TP1 to the simulated
   destinations at the test traffic generator TP2 through the
   controller's northbound or management interface.
3. Record the total number of unique traffic frames received (Ndf) at
   the test traffic generator TP2 within the trial duration (Td).
Measurement:
Ndf
Proactive Path Provisioning Rate Tr1 = ------
Td
                                            Tr1 + Tr2 + Tr3 .. Trn
Average Proactive Path Provisioning Rate = -----------------------
                                                  Total Trials

                                                    SUM[SQUAREOF(Tri-PPPm)]
Proactive Path Provisioning Rate Variance (PPPv) = -----------------------
                                                        Total Trials - 1

Where PPPm is the Average Proactive Path Provisioning Rate.
Reporting Format:
The Proactive Path Provisioning Rate results MUST be reported in the
format of a table with a row for each iteration. The last row of the
table indicates the Proactive Path Provisioning Rate variance and
the previous row indicates the Average Proactive Path Provisioning
Rate.
The report SHOULD capture the following information in addition to
the configuration parameters captured in section 4.8.
- Number of Network Devices in the path
- Offered rate
5.1.8. Network Topology Change Detection Time
Objective:
The amount of time required for the controller to detect any changes
in the network topology, defined as the interval starting with the
notification message received by the controller(s) at its Southbound
interface, ending with the first topology rediscovery messages sent
from the controller(s) at its Southbound interface.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller MUST have successfully discovered the network
topology information for the deployed network topology.
2. The periodic network discovery operation should be configured to
twice the Trial duration (Td) value.
Procedure:
1. Trigger a topology change event by bringing down an active
Network Device in the topology.
2. Record the time when the first topology change notification is
sent to the controller (Tcn) at the forwarding plane test emulator
interface (I1).
3. Stop the trial when the controller sends the first topology re-
   discovery message to the Network Device or upon the expiry of the
   trial duration (Td).
4. Record the time when the first topology re-discovery message is
   received from the controller (Tcd) at the forwarding plane test
   emulator interface (I1).
Measurement:
Network Topology Change Detection Time Tr1 = Tcd-Tcn.
Tr1 + Tr2 + Tr3 .. Trn
Average Network Topology Change Detection Time = ------------------
Total Trials
Network Topology Change Detection Time Variance(NTDv) =
SUM[SQUAREOF(Tri-NTDm)]
-----------------------
Total Trials -1
Where NTDm is the Average Network Topology Change Detection Time.
Reporting Format:
The Network Topology Change Detection Time results MUST be reported
in the format of a table with a row for each iteration. The last row
of the table indicates the Network Topology Change Detection Time
variance, and the previous row indicates the average Network Topology
Change Detection Time.
5.2. Scalability
5.2.1. Control Session Capacity
Objective:
Measure the maximum number of control sessions the controller can
maintain, defined as the number of sessions that the controller can
accept from network devices, starting with the first control
session, ending with the last control session that the controller(s)
accepts at its Southbound interface.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Procedure:
1. Establish a control connection with the controller from every
   Network Device emulated in the forwarding plane test emulator.
2. Stop the trial when the controller starts dropping the control
connections.
3. Record the number of successful connections established with the
controller (CCn) at the forwarding plane test emulator.
Measurement:
Control Sessions Capacity = CCn.
Reporting Format:
The Control Session Capacity results MUST be reported in addition to
the configuration parameters captured in section 4.8.
5.2.2. Network Discovery Size
Objective:
Measure the network size (number of nodes, links and hosts) that a
controller can discover, defined as the size of a network that the
controller(s) can discover, starting from a network topology given
by the user for discovery, ending with the topology that the
controller(s) could successfully discover.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller MUST support automatic network discovery.
2. The tester should be able to retrieve the discovered topology
   information through either the controller's management interface
   or its northbound interface.
Procedure:
1. Establish the network connections between the controller and the
   network nodes.
2. Query the controller every t seconds (RECOMMENDED value for t is
30) to obtain the discovered network topology information through
the northbound interface or the management interface.
3. Stop the trial when the discovered network topology information
   remains the same as that of the last two query responses.
4. Compare the obtained network topology information with the
deployed network topology information.
5. If the comparison is successful, increase the number of nodes by 1
and repeat the trial.
If the comparison is unsuccessful, decrease the number of nodes by
1 and repeat the trial.
6. Continue the trial until the comparison of step 5 is successful.
7. Record the number of nodes for the last trial run (Ns) where the
topology comparison was successful.
Measurement:
Network Discovery Size = Ns.
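The search in steps 5-7 can be sketched as below; deploy_and_compare(n)
is a placeholder that deploys an n-node topology, waits for a stable
discovery result, and reports whether the discovered and deployed
topologies match. The sketch assumes discovery eventually fails as n
grows.

   def network_discovery_size(deploy_and_compare, start_n):
       n, ns = start_n, None
       while True:
           if deploy_and_compare(n):
               ns = n                 # remember the last successful size
               n += 1                 # step 5: success, add a node
           else:
               n -= 1                 # step 5: failure, drop a node
               if ns is not None and n <= ns:
                   return ns          # step 7: Ns, largest successful size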
Reporting Format:
The Network Discovery Size results MUST be reported in addition to
the configuration parameters captured in section 4.8.
5.2.3. Forwarding Table Capacity
Objective:
Measure the maximum number of flow entries a controller can manage
in its Forwarding table.
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. The controller Forwarding table should be empty.
2. Flow Idle time MUST be set to a high or infinite value.
3. The controller MUST have successfully completed network topology
discovery.
4. The tester should be able to retrieve the forwarding table
   information through either the controller's management interface
   or its northbound interface.
Procedure:
Reactive Flow Provisioning Mode:
1. Send bi-directional traffic continuously with unique source and
   destination addresses from the test traffic generators TP1 and TP2
   at the asynchronous message processing rate of the controller.
2. Query the controller at a regular interval (e.g., 5 seconds) for
the number of learned flow entries from its northbound interface.
3. Stop the trial when the retrieved value is constant for three
consecutive iterations and record the value received from the last
query (Nrp).
Proactive Flow Provisioning Mode:
1. Install unique flows continuously through the controller's
   northbound or management interface until a failure response is
   received from the controller.
2. Record the total number of successful responses (Nrp).
Note:
Some controller designs for proactive flow provisioning mode may
require the switch to send flow setup requests in order to generate
flow setup responses. In such cases, it is recommended to generate
bi-directional traffic for the provisioned flows.
Measurement:
Proactive Flow Provisioning Mode:
Max Flow Entries = Total number of flows provisioned (Nrp)
Reactive Flow Provisioning Mode:
Max Flow Entries = Total number of learned flow entries (Nrp)
Forwarding Table Capacity = Max Flow Entries.
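For the reactive mode, the polling loop can be sketched as below;
learned_flow_count() is a placeholder for the northbound query
returning the number of learned flow entries.

   import time

   def forwarding_table_capacity(learned_flow_count, interval=5):
       history = []
       while True:
           history.append(learned_flow_count())
           if len(history) >= 3 and \
                   history[-1] == history[-2] == history[-3]:
               return history[-1]     # Nrp: constant for three queries
           time.sleep(interval)       # e.g., poll every 5 seconds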
Reporting Format:
The Forwarding Table Capacity results MUST be tabulated with the
following information in addition to the configuration parameters
captured in section 4.8.
- Provisioning Type (Proactive/Reactive)
5.3. Security
5.3.1. Exception Handling
Objective:
Determine the effect of handling error packets and notifications on
performance tests. The impact MUST be measured for the following
performance tests:
a. Path Provisioning Rate
b. Path Provisioning Time
c. Network Topology Change Detection Time
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. This test MUST be performed after obtaining the baseline
measurement results for the above performance tests.
2. Ensure that the invalid messages are not dropped by the
intermediate devices connecting the controller and Network
Devices.
Procedure:
1. Perform the above listed performance tests and send 1% of messages
from the Asynchronous Message Processing Rate as invalid messages
from the connected Network Devices emulated at the forwarding
plane test emulator.
2. Perform the above listed performance tests and send 2% of messages
from the Asynchronous Message Processing Rate as invalid messages
from the connected Network Devices emulated at the forwarding
plane test emulator.
Note:
Invalid messages can be frames with incorrect protocol fields or any
form of failure notifications sent towards the controller.
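A minimal sketch of the interleaving in steps 1 and 2; send() stands
for the emulator's transmit hook, and the corruption shown (an
invalid version byte) is just one example of an invalid message.

   def corrupt(msg: bytes) -> bytes:
       # Example corruption: overwrite the first (version) byte.
       return b"\xff" + msg[1:]

   def send_with_exceptions(messages, send, percent=1):
       # Replace `percent` out of every 100 messages with invalid ones.
       for i, msg in enumerate(messages):
           send(corrupt(msg) if i % 100 < percent else msg)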
Measurement:
Measurement MUST be done as per the equation defined in the
corresponding performance test measurement section.
Reporting Format:
The Exception Handling results MUST be reported in the format of a
table with a column for each of the parameters below and a row for
each of the listed performance tests.
- Without Exceptions
- With 1% Exceptions
- With 2% Exceptions
5.3.2. Denial of Service Handling
Objective:
Determine the effect of handling DoS attacks on performance and
scalability tests. The impact MUST be measured for the following
tests:
a. Path Provisioning Rate
b. Path Provisioning Time
c. Network Topology Change Detection Time
d. Network Discovery Size
Reference Test Setup:
The test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
This test MUST be performed after obtaining the baseline measurement
results for the above tests.
Procedure:
1. Perform the listed tests and launch a DoS attack towards the
   controller while the trial is running.
Note:
DoS attacks can be launched on one of the following interfaces.
a. Northbound (e.g., Query for flow entries continuously on
northbound interface)
b. Management (e.g., Ping requests to controller's management
interface)
c. Southbound (e.g., TCP SYN messages on southbound interface)
Measurement:
Measurement MUST be done as per the equation defined in the
corresponding test's measurement section.
Reporting Format:
The DoS Attacks Handling results MUST be reported in the format of a
table with a column for each of the parameters below and a row for
each of the listed tests.
- Without any attacks
- With attacks
The report SHOULD also specify the nature of the attack and the
interface in question.
5.4. Reliability
5.4.1. Controller Failover Time
Objective:
The time taken to switch from an active controller to the backup
controller when the controllers work in redundancy mode and the
active controller fails, defined as the interval starting when the
active controller is brought down and ending with the first
re-discovery message received from the new controller at its
Southbound interface.
Reference Test Setup:
The test SHOULD use the test setup described in section 3.2 of this
document.
Prerequisite:
1. Master controller election MUST be completed.
2. Nodes are connected to the controller cluster as per the
Redundancy Mode (RM).
3. The controller cluster should have successfully completed the
network topology discovery.
4. The Network Device MUST send all new flows to the controller when
   it receives traffic from the test traffic generator.
5. The controller should have learned the location of the
   destination (D1) at TP2.
Procedure:
1. Send uni-directional traffic continuously with incremental
   sequence numbers and source addresses from the test traffic
   generator TP1 at a rate that the controller can process without
   any drops.
2. Ensure that there are no packet drops observed at TP2.
3. Bring down the active controller.
4. Stop the trial when the first frame is received on TP2 after the
   failover operation.
5. Record the time at which the last valid frame was received (T1) at
   test traffic generator TP2 before the sequence error and the time
   at which the first valid frame was received (T2) after the
   sequence error at TP2.
Measurement:
Controller Failover Time = (T2 - T1)
Packet Loss = Number of missing packet sequences.
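Both values can be derived from the (sequence number, receive time)
stream captured at TP2, as in the sketch below, where the failover
appears as the first gap in sequence numbers.

   def failover_metrics(frames):
       # frames: list of (seq, rx_time) tuples in arrival order at TP2
       for (s1, t1), (s2, t2) in zip(frames, frames[1:]):
           if s2 != s1 + 1:                  # first sequence error
               return t2 - t1, s2 - s1 - 1   # (T2-T1, missing sequences)
       return None                           # no failover gap observed

   # Example: frames 3..6 lost during failover -> (1.51 s, 4 packets).
   metrics = failover_metrics([(1, 0.00), (2, 0.01), (7, 1.52), (8, 1.53)])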
Reporting Format:
The Controller Failover Time results MUST be tabulated with the
following information.
- Number of cluster nodes
- Redundancy mode
- Controller Failover Time
- Packet Loss
- Cluster keep-alive interval
5.4.2. Network Re-Provisioning Time
Objective:
The time taken by the controller to re-route traffic when there is
a failure in the existing traffic path, defined as the interval
starting from the first failure notification message received by the
controller, ending with the last flow re-provisioning message sent
by the controller at its Southbound interface.
Reference Test Setup:
This test SHOULD use one of the test setups described in section 3.1
or section 3.2 of this document.
Prerequisite:
1. Network with the given number of nodes and redundant paths MUST be
deployed.
2. The controller MUST know the location of the test traffic
   generators TP1 and TP2.
3. Ensure that the controller does not pre-provision the alternate
path in the emulated Network Devices at the forwarding plane test
emulator.
Procedure:
1. Send bi-directional traffic continuously with unique sequence
   numbers from TP1 and TP2.
2. Bring down a link or switch in the traffic path.
3. Stop the trial after receiving the first frame after network re-
   convergence.
4. Record the time of the last received frame prior to the frame loss
   at TP2 (TP2-Tlfr) and the time of the first frame received after
   the frame loss at TP2 (TP2-Tffr). There must be a gap in the
   sequence numbers of these frames.
5. Record the time of the last received frame prior to the frame loss
   at TP1 (TP1-Tlfr) and the time of the first frame received after
   the frame loss at TP1 (TP1-Tffr).
Measurement:
Forward Direction Path Re-Provisioning Time (FDRT)
= (TP2-Tffr - TP2-Tlfr)
Reverse Direction Path Re-Provisioning Time (RDRT)
= (TP1-Tffr - TP1-Tlfr)
Network Re-Provisioning Time = (FDRT+RDRT)/2
Forward Direction Packet Loss = Number of missing sequence frames
at TP1
Reverse Direction Packet Loss = Number of missing sequence frames
at TP2
Reporting Format:
The Network Re-Provisioning Time results MUST be tabulated with the
following information.
- Number of nodes in the primary path
- Number of nodes in the alternate path
- Network Re-Provisioning Time
- Forward Direction Packet Loss
- Reverse Direction Packet Loss
6. References
6.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
          Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
          2119 Key Words", BCP 14, RFC 8174, May 2017.

[I-D.sdn-controller-benchmark-term] Vengainathan, B., Basil, A.,
          Tassinari, M., Manral, V., and S. Banks, "Terminology for
          Benchmarking SDN Controller Performance",
          draft-ietf-bmwg-sdn-controller-benchmark-term-10 (Work in
          Progress), May 25, 2018.
6.2. Informative References
[OpenFlow Switch Specification] ONF, "OpenFlow Switch Specification",
          Version 1.4.0 (Wire Protocol 0x05), October 14, 2013.
7. IANA Considerations
This document does not have any IANA requests.
8. Security Considerations
The benchmarking tests described in this document are limited to the
performance characterization of controllers in a lab environment
with an isolated network.
The benchmarking network topology will be an independent test setup
and MUST NOT be connected to devices that may forward the test
traffic into a production network, or misroute traffic to the test
management network.
Further, benchmarking is performed on a "black-box" basis, relying
solely on measurements observable external to the controller.
Special capabilities SHOULD NOT exist in the controller specifically
for benchmarking purposes. Any implications for network security
arising from the controller SHOULD be identical in the lab and in
production networks.
9. Acknowledgments
The authors would like to thank the following individuals for
providing their valuable comments to the earlier versions of this
document: Al Morton (AT&T), Sandeep Gangadharan (HP), M. Georgescu
(NAIST), Andrew McGregor (Google), Scott Bradner, Jay Karthik
(Cisco), Ramakrishnan (Dell), Khasanov Boris (Huawei), and Brian
Castelli (Spirent).
This document was prepared using 2-Word-v2.0.template.dot.
Appendix A Benchmarking Methodology using OpenFlow Controllers
This section gives an overview of the OpenFlow protocol and provides
a test methodology for benchmarking SDN controllers that support
OpenFlow as a southbound protocol. The OpenFlow protocol is used as
an example to illustrate the methodologies defined in this document.
A.1. Protocol Overview
OpenFlow is an open standard protocol defined by the Open Networking
Foundation (ONF) [OpenFlow Switch Specification], used for
programming the forwarding plane of network switches or routers via
a centralized controller.
A.2. Messages Overview
The OpenFlow protocol supports three message types, namely
controller-to-switch, asynchronous, and symmetric.
Controller-to-switch messages are initiated by the controller and
used to directly manage or inspect the state of the switch. These
messages allow controllers to query/configure the switch (Features
and Configuration messages), collect information from the switch
(Read-State message), send packets on a specified port of the switch
(Packet-out message), and modify the switch forwarding plane and
state (Modify-State, Role-Request messages, etc.).
Asynchronous messages are generated by the switch without a
controller soliciting them. These messages allow switches to update
controllers about the arrival of a new flow (Packet-in), a switch
state change (Flow-Removed, Port-status), and errors (Error).
Symmetric messages are generated in either direction without
solicitation. These messages allow switches and controllers to set
up a connection (Hello), verify liveness (Echo), and offer
additional functionality (Experimenter).
A.3. Connection Overview
The OpenFlow channel is used to exchange OpenFlow messages between
an OpenFlow switch and an OpenFlow controller. The OpenFlow channel
connection can be set up using plain TCP or TLS. By default, a
switch establishes a single connection with the SDN controller. A
switch may establish multiple parallel connections to a single
controller (auxiliary connections) or to multiple controllers to
handle controller failures and load balancing.
A.4. Performance Benchmarking Tests
A.4.1. Network Topology Discovery Time
Procedure:
Network Devices OpenFlow SDN
Controller Application
| | |
| |<Initialize controller |
| |app.,NB and SB interfaces> |
| | |
|<Deploy network with | |
| given no. of OF switches> | |
| | |
| OFPT_HELLO Exchange | |
|<-------------------------->| |
| | |
| PACKET_OUT with LLDP | |
| to all switches | |
(Tm1)|<---------------------------| |
| | |
| PACKET_IN with LLDP| |
| rcvd from switch-1| |
|--------------------------->| |
| | |
| PACKET_IN with LLDP| |
| rcvd from switch-2| |
|--------------------------->| |
| . | |
| . | |
| | |
| PACKET_IN with LLDP| |
| rcvd from switch-n| |
(Tmn)|--------------------------->| |
| | |
| | <Wait for the expiry |
| | of Trial duration (Td)>|
| | |
| | Query the controller for|
| | discovered n/w topo.(Di)|
| |<--------------------------|
| | |
| | <Compare the discovered |
| | & offered n/w topology>|
| | |
Legend:
NB: Northbound
SB: Southbound
OF: OpenFlow
Tm1: Time of reception of the first LLDP (PACKET_OUT) message from
the controller
Tmn: Time of transmission of the last LLDP (PACKET_IN) message to
the controller
Discussion:
The Network Topology Discovery Time can be obtained by calculating
the time difference between the first PACKET_OUT with LLDP message
received from the controller (Tm1) and the last PACKET_IN with LLDP
message sent to the controller (Tmn), i.e., (Tmn - Tm1). The trial
is valid only when the discovered topology matches the offered
topology.
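For illustration, the following Python sketch computes this
benchmark from timestamps assumed to be captured by the test
emulator; the names (tm1, tmn, topology_matches) are hypothetical
and not part of this methodology.

      # A minimal sketch, assuming the emulator records tm1 and tmn
      # (seconds since the epoch) and the result of the topology
      # comparison for each trial.
      def topology_discovery_time(tm1, tmn, topology_matches):
          # Returns DT = Tmn - Tm1, or None when the comparison
          # failed and the trial must be discarded.
          if not topology_matches:
              return None
          return tmn - tm1

      # Example: mean over repeated trials.
      trials = [(10.000, 12.345, True), (10.000, 12.401, True)]
      values = [topology_discovery_time(t1, tn, ok) for t1, tn, ok in trials]
      valid = [v for v in values if v is not None]
      print(sum(valid) / len(valid))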
A.4.2. Asynchronous Message Processing Time
Procedure:
Network Devices OpenFlow SDN
Controller Application
| | |
|PACKET_IN with single | |
|OFP match header | |
(T0)|--------------------------->| |
| | |
| PACKET_OUT with single OFP | |
| action header | |
(R0)|<---------------------------| |
| . | |
| . | |
| . | |
| | |
|PACKET_IN with single OFP | |
|match header | |
(Tn)|--------------------------->| |
| | |
| PACKET_OUT with single OFP | |
| action header| |
(Rn)|<---------------------------| |
| | |
|<Wait for the expiry of | |
|Trial duration> | |
| | |
|<Record the number of | |
|PACKET_INs/PACKET_OUTs | |
|Exchanged (Nrx)> | |
| | |
Legend:
T0, T1, ..., Tn are the transmit timestamps of PACKET_IN messages.
R0, R1, ..., Rn are the receive timestamps of PACKET_OUT messages.
Nrx: Number of successful PACKET_IN/PACKET_OUT message
exchanges
Discussion:
The Asynchronous Message Processing Time is obtained as the sum
((R0-T0) + (R1-T1) + ... + (Rn-Tn)) divided by Nrx.
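The computation can be illustrated with a minimal Python sketch,
assuming the emulator records the paired transmit/receive timestamps
of the successful exchanges (hypothetical inputs):

      # Mean one-way processing delay over Nrx successful exchanges.
      def async_message_processing_time(tx_times, rx_times):
          nrx = len(rx_times)                  # Nrx: successful exchanges
          deltas = (r - t for t, r in zip(tx_times, rx_times))
          return sum(deltas) / nrx

      print(async_message_processing_time([0.0, 1.0, 2.0], [0.1, 1.2, 2.1]))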
A.4.3. Asynchronous Message Processing Rate
Procedure:
Network Devices OpenFlow SDN
Controller Application
| | |
|PACKET_IN with single OFP | |
|match headers | |
|--------------------------->| |
| | |
| PACKET_OUT with single | |
| OFP action headers| |
|<---------------------------| |
| | |
| . | |
| . | |
| . | |
| | |
|PACKET_IN with single OFP | |
|match headers | |
|--------------------------->| |
| | |
| PACKET_OUT with single | |
| OFP action headers| |
|<---------------------------| |
| | |
|<Repeat the steps until the | |
|expiry of Trial Duration> | |
| | |
|<Record the number of OFP | |
(Ntx1)|match headers sent> | |
| | |
|<Record the number of OFP | |
(Nrx1)|action headers rcvd> | |
| | |
Note: Ntx1 in the initial trials should be greater than Nrx1;
repeat the trials until Nrxn for two consecutive trials is equal
within a tolerance of (+/-P%).
Discussion:
This test measures two benchmarks using a single procedure. 1) The
Maximum Asynchronous Message Processing Rate is obtained by
calculating the maximum number of PACKET_OUTs (Nrxn) received from
the controller(s) across n trials. 2) The Loss-Free Asynchronous
Message Processing Rate is obtained by calculating the maximum
number of PACKET_OUTs received from the controller(s) when the Loss
Ratio equals zero. The Loss Ratio is computed as 1 - Nrxn/Ntxn.
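The following Python sketch illustrates the per-trial bookkeeping,
under the assumption that the emulator reports, for each trial, the
number of OFP match headers sent (Ntx) and action headers received
(Nrx); the function names are hypothetical:

      # ntx/nrx are lists of per-trial counters from the emulator.
      def async_message_processing_rates(ntx, nrx, trial_duration_s):
          rates = [n / trial_duration_s for n in nrx]
          max_rate = max(rates)                       # benchmark 1
          # Loss Ratio per trial: 1 - Nrxn/Ntxn; loss-free means 0.
          loss_free = [r for r, t, n in zip(rates, ntx, nrx) if t == n]
          loss_free_rate = max(loss_free) if loss_free else None
          return max_rate, loss_free_rate

      def converged(prev_nrx, curr_nrx, p_percent):
          # Stop rule: two consecutive Nrx values equal within +/-P%.
          return abs(curr_nrx - prev_nrx) <= prev_nrx * p_percent / 100.0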
A.4.4. Reactive Path Provisioning Time
Procedure:
Test Traffic Test Traffic Network Devices OpenFlow
Generator TP1 Generator TP2 Controller
| | | |
| |G-ARP (D1) | |
| |--------------------->| |
| | | |
| | |PACKET_IN(D1) |
| | |------------------>|
| | | |
|Traffic (S1,D1) | |
(Tsf1)|----------------------------------->| |
| | | |
| | | |
| | | |
| | |PACKET_IN(S1,D1) |
| | |------------------>|
| | | |
| | | FLOW_MOD(D1) |
| | |<------------------|
| | | |
| |Traffic (S1,D1) | |
| (Tdf1)|<---------------------| |
| | | |
Legend:
G-ARP: Gratuitous ARP message.
Tsf1: Time of first frame sent from TP1
Tdf1: Time of first frame received from TP2
Discussion:
The Reactive Path Provisioning Time can be obtained by finding the
time difference between the transmit and receive times of the
traffic, i.e., (Tdf1 - Tsf1).
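A minimal Python sketch of this computation, with hypothetical names
for the emulator-captured timestamps, is shown below; the same
computation applies to the Proactive Path Provisioning Time in
A.4.5:

      # tsf1: time the first frame was sent from TP1
      # tdf1: time the first frame was received at TP2
      def path_provisioning_time(tsf1, tdf1):
          return tdf1 - tsf1   # receive time minus transmit time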
A.4.5. Proactive Path Provisioning Time
Procedure:
Test Traffic Test Traffic Network Devices OpenFlow SDN
Generator TP1 Generator TP2 Controller Application
| | | | |
| | | | |
| | | | <Install flow|
| | | | for S1,D1> |
| |G-ARP (D1) | | |
| |-------------->| | |
| | | | |
| | |PACKET_IN(D1) | |
| | |--------------->| |
| | | | |
|Traffic (S1,D1) | | |
(Tsf1)|---------------------------->|               |              |
| | | | |
| | | FLOW_MOD(D1) | |
| | |<---------------| |
| | | | |
| |Traffic (S1,D1)| | |
| (Tdf1)|<--------------| | |
| | | | |
Legend:
G-ARP: Gratuitous ARP message.
Tsf1: Time of first frame sent from TP1
Tdf1: Time of first frame received from TP2
Discussion:
The Proactive Path Provisioning Time can be obtained by finding the
time difference between the transmit and receive times of the
traffic, i.e., (Tdf1 - Tsf1).
A.4.6. Reactive Path Provisioning Rate
Procedure:
Test Traffic Test Traffic Network Devices OpenFlow
Generator TP1 Generator TP2 Controller
| | | |
| | | |
| | | |
| |G-ARP (D1..Dn) | |
| |--------------------| |
| | | |
| | |PACKET_IN(D1..Dn) |
| | |--------------------->|
| | | |
|Traffic (S1..Sn,D1..Dn) | |
|--------------------------------->| |
| | | |
| | |PACKET_IN(S1.Sn,D1.Dn)|
| | |--------------------->|
| | | |
| | | FLOW_MOD(S1) |
| | |<---------------------|
| | | |
| | | FLOW_MOD(D1) |
| | |<---------------------|
| | | |
| | | FLOW_MOD(S2) |
| | |<---------------------|
| | | |
| | | FLOW_MOD(D2) |
| | |<---------------------|
| | | . |
| | | . |
| | | |
| | | FLOW_MOD(Sn) |
| | |<---------------------|
| | | |
| | | FLOW_MOD(Dn) |
| | |<---------------------|
| | | |
| | Traffic (S1..Sn, | |
| | D1..Dn)| |
| |<-------------------| |
| | | |
| | | |
Legend:
G-ARP: Gratuitous ARP
D1..Dn: Destination Endpoint 1, Destination Endpoint 2, ...,
Destination Endpoint n
S1..Sn: Source Endpoint 1, Source Endpoint 2, ..., Source
Endpoint n
Discussion:
The Reactive Path Provisioning Rate can be obtained by finding the
total number of frames received at TP2 at the expiry of the trial
duration.
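For illustration, the following Python sketch assumes the emulator
logs each frame received at TP2 as a (source, destination) tuple;
normalizing by the trial duration to obtain a per-second rate is an
assumption of this sketch, not a requirement of the methodology:

      # Each unique (S, D) pair corresponds to one provisioned path.
      def reactive_path_provisioning_rate(rx_frames, trial_duration_s):
          provisioned_paths = {(s, d) for s, d in rx_frames}
          return len(provisioned_paths) / trial_duration_s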
A.4.7. Proactive Path Provisioning Rate
Procedure:
Test Traffic Test Traffic Network Devices OpenFlow SDN
Generator TP1 Generator TP2 Controller Application
| | | | |
| |G-ARP (D1..Dn) | | |
| |-------------->| | |
| | | | |
| | |PACKET_IN(D1.Dn)| |
| | |--------------->| |
| | | | |
|Traffic (S1..Sn,D1..Dn) | | |
(Tsf1)|---------------------------->|               |              |
| | | | |
| | | | <Install flow|
| | | | for S1,D1> |
| | | | |
| | | | . |
| | | | <Install flow|
| | | | for Sn,Dn> |
| | | | |
| | | FLOW_MOD(S1) | |
| | |<---------------| |
| | | | |
| | | FLOW_MOD(D1) | |
| | |<---------------| |
| | | | |
| | | . | |
| | | FLOW_MOD(Sn) | |
| | |<---------------| |
| | | | |
| | | FLOW_MOD(Dn) | |
| | |<---------------| |
| | | | |
| |Traffic (S1.Sn,| | |
| | D1.Dn)| | |
| (Tdf1)|<--------------| | |
| | | | |
Legend:
G-ARP: Gratuitous ARP
D1..Dn: Destination Endpoint 1, Destination Endpoint 2, ...,
Destination Endpoint n
S1..Sn: Source Endpoint 1, Source Endpoint 2, ..., Source
Endpoint n
Discussion:
The Proactive Path Provisioning Rate can be obtained by finding the
total number of frames received at TP2 at the expiry of the trial
duration.
A.4.8. Network Topology Change Detection Time
Procedure:
Network Devices OpenFlow SDN
Controller Application
| | |
| | <Bring down a link in |
| | switch S1>|
| | |
T0 |PORT_STATUS with link down | |
| from S1 | |
|--------------------------->| |
| | |
|First PACKET_OUT with LLDP | |
|to OF Switch | |
T1 |<---------------------------| |
| | |
| | <Record time of 1st |
| | PACKET_OUT with LLDP T1>|
Discussion:
The Network Topology Change Detection Time can be obtained by
finding the difference between the time the OpenFlow switch S1 sends
the PORT_STATUS message (T0) and the time the OpenFlow controller
sends the first topology re-discovery message (T1) to the OpenFlow
switches, i.e., (T1 - T0).
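A minimal Python sketch, averaging (T1 - T0) over repeated trials
with hypothetical emulator-captured timestamp pairs:

      # trials: list of (t0, t1) pairs, one per trial.
      def topology_change_detection_time(trials):
          return sum(t1 - t0 for t0, t1 in trials) / len(trials)

      print(topology_change_detection_time([(5.000, 5.020), (7.000, 7.025)]))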
A.5. Scalability
A.5.1. Control Sessions Capacity
Procedure:
Network Devices OpenFlow
Controller
| |
| OFPT_HELLO Exchange for Switch 1 |
|<------------------------------------->|
| |
| OFPT_HELLO Exchange for Switch 2 |
|<------------------------------------->|
| . |
| . |
| . |
| OFPT_HELLO Exchange for Switch n |
|X<----------------------------------->X|
| |
Discussion:
The value (n-1), i.e., the number of switches with successfully
established sessions when the nth session attempt fails, provides
the Control Sessions Capacity.
A.5.2. Network Discovery Size
Procedure:
Network Devices OpenFlow SDN
Controller Application
| | |
| | <Deploy network with |
| |given no. of OF switches N>|
| | |
| OFPT_HELLO Exchange | |
|<-------------------------->| |
| | |
| PACKET_OUT with LLDP | |
| to all switches | |
|<---------------------------| |
| | |
| PACKET_IN with LLDP| |
| rcvd from switch-1| |
|--------------------------->| |
| | |
| PACKET_IN with LLDP| |
| rcvd from switch-2| |
|--------------------------->| |
| . | |
| . | |
| | |
| PACKET_IN with LLDP| |
| rcvd from switch-n| |
|--------------------------->| |
| | |
| | <Wait for the expiry |
| | of Trial duration (Td)>|
| | |
| | Query the controller for|
| | discovered n/w topo.(N1)|
| |<--------------------------|
| | |
| | <If N1==N repeat Step 1 |
| |with N+1 nodes until N1<N >|
| | |
| | <If N1<N repeat Step 1 |
| | with N=N1 nodes once and |
| | exit> |
| | |
Legend:
n/w topo: Network Topology
OF: OpenFlow
Discussion:
The value of N1 provides the Network Discovery Size. The trial
duration can be set to the stipulated time within which the user
expects the controller to complete the discovery process.
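The iteration in the procedure above can be sketched in Python as
follows; deploy_and_discover(n) is a hypothetical emulator hook that
offers a topology of n switches and returns the discovered size N1
after the trial duration:

      def network_discovery_size(deploy_and_discover, start_n):
          n = start_n
          while True:
              n1 = deploy_and_discover(n)
              if n1 == n:
                  n += 1            # fully discovered; grow and repeat
              else:
                  # N1 < N: confirm once with N = N1 and exit.
                  return deploy_and_discover(n1)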
A.5.3. Forwarding Table Capacity
Procedure:
Test Traffic Network Devices OpenFlow SDN
Generator TP1 Controller Application
| | | |
| | | |
|G-ARP (H1..Hn) | | |
|----------------->| | |
| | | |
| |PACKET_IN(D1..Dn) | |
| |------------------>| |
| | | |
| | |<Wait for 5 secs>|
| | | |
| | | <Query for FWD |
| | | entry> |(F1)
| | | |
| | |<Wait for 5 secs>|
| | | |
| | | <Query for FWD |
| | | entry> |(F2)
| | | |
| | |<Wait for 5 secs>|
| | | |
| | | <Query for FWD |
| | | entry> |(F3)
| | | |
| | | <Repeat Step 2 |
| | |until F1==F2==F3>|
| | | |
Legend:
G-ARP: Gratuitous ARP
H1..Hn: Host 1 .. Host n
FWD: Forwarding Table
Discussion:
Query the controller's forwarding table entries multiple times until
three consecutive queries return the same value. The last value
retrieved from the controller provides the Forwarding Table Capacity
value. The query interval is user configurable; the 5-second
interval shown in this example is for representational purposes.
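A minimal Python sketch of this polling loop; query_fwd_entries is a
hypothetical northbound hook returning the controller's current
forwarding-table entry count:

      import time

      def forwarding_table_capacity(query_fwd_entries, interval_s=5):
          samples = []
          while True:
              samples.append(query_fwd_entries())
              # Stop when F1 == F2 == F3 for the last three queries.
              if len(samples) >= 3 and samples[-1] == samples[-2] == samples[-3]:
                  return samples[-1]
              time.sleep(interval_s)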
A.6. Security
A.6.1. Exception Handling
Procedure:
Test Traffic Test Traffic Network Devices OpenFlow SDN
Generator TP1 Generator TP2 Controller Application
| | | | |
| |G-ARP (D1..Dn) | | |
| |------------------>| | |
| | | | |
| | |PACKET_IN(D1..Dn)| |
| | |---------------->| |
| | | | |
|Traffic (S1..Sn,D1..Dn) | | |
|----------------------------->| | |
| | | | |
| | |PACKET_IN(S1..Sa,| |
| | | D1..Da)| |
| | |---------------->| |
| | | | |
| | |PACKET_IN(Sa+1.. | |
| | |.Sn,Da+1..Dn) | |
| | |(1% incorrect OFP| |
| | | Match header)| |
| | |---------------->| |
| | | | |
| | | FLOW_MOD(D1..Dn)| |
| | |<----------------| |
| | | | |
| | | FLOW_MOD(S1..Sa)| |
| | | OFP headers| |
| | |<----------------| |
| | | | |
| |Traffic (S1..Sa, | | |
| | D1..Da)| | |
| |<------------------| | |
| | | | |
| | | | <Wait for |
| | | | Test |
| | | | Duration>|
| | | | |
| | | | <Record Rx|
| | | | frames at|
| | | | TP2 (Rn1)>|
| | | | |
| | | | <Repeat |
| | | | Step1 with |
| | | |2% incorrect|
| | | | PACKET_INs>|
| | | | |
| | | | <Record Rx|
| | | | frames at|
| | | | TP2 (Rn2)>|
| | | | |
Legend:
G-ARP: Gratuitous ARP
PACKET_IN(Sa+1..Sn,Da+1..Dn): OpenFlow PACKET_IN with wrong
version number
Rn1: Total number of frames received at Test Port 2 with
1% incorrect frames
Rn2: Total number of frames received at Test Port 2 with
2% incorrect frames
Discussion:
The traffic rate sent towards the OpenFlow switch from Test Port 1
should be 1% higher than the Path Programming Rate. Rn1 provides the
controller's Path Provisioning Rate when handling 1% incorrect
frames, and Rn2 provides the controller's Path Provisioning Rate
when handling 2% incorrect frames.
The procedure defined above provides test steps to determine the
effect of handling error packets on the Path Programming Rate. The
same procedure can be adopted to determine the effects on the other
performance tests listed in this document.
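For illustration, a small Python sketch that derives the rate from
the recorded frame counts; the percentage drop relative to a
baseline rate is an illustrative derived figure, not a benchmark
defined by this document:

      # rn: frames recorded at TP2 (Rn1 or Rn2) over the test duration.
      # baseline_rate: Path Provisioning Rate without incorrect frames.
      def rate_under_errors(rn, test_duration_s, baseline_rate):
          rate = rn / test_duration_s
          drop_pct = 100.0 * (baseline_rate - rate) / baseline_rate
          return rate, drop_pct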
A.6.2. Denial of Service Handling
Procedure:
Test Traffic Test Traffic Network Devices OpenFlow SDN
Generator TP1 Generator TP2 Controller Application
| | | | |
| |G-ARP (D1..Dn) | | |
| |------------------>| | |
| | | | |
| | |PACKET_IN(D1..Dn)| |
| | |---------------->| |
| | | | |
|Traffic (S1..Sn,D1..Dn) | | |
|----------------------------->| | |
| | | | |
| | |PACKET_IN(S1..Sn,| |
| | | D1..Dn)| |
| | |---------------->| |
| | | | |
| | |TCP SYN Attack | |
| | |from a switch | |
| | |---------------->| |
| | | | |
| | |FLOW_MOD(D1..Dn) | |
| | |<----------------| |
| | | | |
| | | FLOW_MOD(S1..Sn)| |
| | | OFP headers| |
| | |<----------------| |
| | | | |
| |Traffic (S1..Sn, | | |
| | D1..Dn)| | |
| |<------------------| | |
| | | | |
| | | | <Wait for |
| | | | Test |
| | | | Duration>|
| | | | |
| | | | <Record Rx|
| | | | frames at|
| | | | TP2 (Rn1)>|
| | | | |
Legend:
G-ARP: Gratuitous ARP
Discussion:
The TCP SYN attack should be launched from one of the
emulated/simulated OpenFlow switches. Rn1 provides the Path
Programming Rate of the controller upon handling the
denial-of-service attack.
The procedure defined above provides test steps to determine the
effect of handling denial of service on the Path Programming Rate.
The same procedure can be adopted to determine the effects on the
other performance tests listed in this document.
A.7. Reliability
A.7.1. Controller Failover Time
Procedure:
Test Traffic Test Traffic Network Device OpenFlow SDN
Generator TP1 Generator TP2 Controller Application
| | | | |
| |G-ARP (D1) | | |
| |------------>| | |
| | | | |
| | |PACKET_IN(D1) | |
| | |---------------->| |
| | | | |
|Traffic (S1..Sn,D1) | | |
|-------------------------->| | |
| | | | |
| | | | |
| | |PACKET_IN(S1,D1) | |
| | |---------------->| |
| | | | |
| | |FLOW_MOD(D1) | |
| | |<----------------| |
| | |FLOW_MOD(S1) | |
| | |<----------------| |
| | | | |
| |Traffic (S1,D1)| | |
| |<------------| | |
| | | | |
| | |PACKET_IN(S2,D1) | |
| | |---------------->| |
| | | | |
| | |FLOW_MOD(S2) | |
| | |<----------------| |
| | | | |
| | |PACKET_IN(Sn-1,D1)| |
| | |---------------->| |
| | | | |
| | |PACKET_IN(Sn,D1) | |
| | |---------------->| |
| | | . | |
| | | . |<Bring down the|
| | | . |active control-|
| | | | ler> |
| | | FLOW_MOD(Sn-1) | |
| | | <-X----------| |
| | | | |
| | |FLOW_MOD(Sn) | |
| | |<----------------| |
| | | | |
| |Traffic (Sn,D1)| | |
| |<------------| | |
| | | | |
| | | |<Stop the test |
| | | |after recv. |
| | | |traffic upon |
| | | | failure> |
Legend:
G-ARP: Gratuitous ARP.
Discussion:
The time difference between the last valid frame received before the
traffic loss and the first frame received after the traffic loss
provides the controller failover time.
If there is no frame loss during the controller failover, the
failover time can be deemed negligible.
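A minimal Python sketch of this measurement, assuming a time-ordered
list of frame arrival times recorded at TP2 and a user-chosen loss
threshold of several normal inter-frame intervals (both hypothetical
inputs):

      def controller_failover_time(rx_timestamps, loss_threshold_s):
          for earlier, later in zip(rx_timestamps, rx_timestamps[1:]):
              if later - earlier > loss_threshold_s:
                  # last frame before the loss to first frame after it
                  return later - earlier
          return 0.0    # no loss observed; failover time negligible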
A.7.2. Network Re-Provisioning Time
Procedure:
Test Traffic Test Traffic Network Devices OpenFlow SDN
Generator TP1 Generator TP2 Controller Application
| | | | |
| |G-ARP (D1) | | |
| |-------------->| | |
| | | | |
| | |PACKET_IN(D1) | |
| | |---------------->| |
| G-ARP (S1) | | |
|---------------------------->| | |
| | | | |
| | |PACKET_IN(S1) | |
| | |---------------->| |
| | | | |
|Traffic (S1,D1,Seq.no (1..n))| | |
|---------------------------->| | |
| | | | |
| | |PACKET_IN(S1,D1) | |
| | |---------------->| |
| | | | |
| |Traffic (D1,S1,| | |
| | Seq.no (1..n))| | |
| |-------------->| | |
| | | | |
| | |PACKET_IN(D1,S1) | |
| | |---------------->| |
| | | | |
| | |FLOW_MOD(D1) | |
| | |<----------------| |
| | | | |
| | |FLOW_MOD(S1) | |
| | |<----------------| |
| | | | |
| |Traffic (S1,D1,| | |
| | Seq.no(1))| | |
| |<--------------| | |
| | | | |
| |Traffic (S1,D1,| | |
| | Seq.no(2))| | |
| |<--------------| | |
| | | | |
| | | | |
| Traffic (D1,S1,Seq.no(1))| | |
|<----------------------------| | |
| | | | |
| Traffic (D1,S1,Seq.no(2))| | |
|<----------------------------| | |
| | | | |
| Traffic (D1,S1,Seq.no(x))| | |
|<----------------------------| | |
| | | | |
| |Traffic (S1,D1,| | |
| | Seq.no(x))| | |
| |<--------------| | |
| | | | |
| | | | |
| | | | <Bring down |
| | | | the switch in|
| | | |active traffic|
| | | | path> |
| | | | |
| | |PORT_STATUS(Sa) | |
| | |---------------->| |
| | | | |
| |Traffic (S1,D1,| | |
| | Seq.no(n-1))| | |
| | X<-----------| | |
| | | | |
| Traffic (D1,S1,Seq.no(n-1))| | |
| X------------------------| | |
| | | | |
| | | | |
| | |FLOW_MOD(D1) | |
| | |<----------------| |
| | | | |
| | |FLOW_MOD(S1) | |
| | |<----------------| |
| | | | |
| Traffic (D1,S1,Seq.no(n))| | |
|<----------------------------| | |
| | | | |
| |Traffic (S1,D1,| | |
| | Seq.no(n))| | |
| |<--------------| | |
| | | | |
| | | |<Stop the test|
| | | | after recv. |
| | | | traffic upon|
| | | | failover> |
Legend:
G-ARP: Gratuitous ARP message.
Seq.no: Sequence number.
Sa: Neighbor switch of the switch that was brought down.
Discussion:
The time difference between the last valid frame received before the
traffic loss (packet with sequence number x) and the first frame
received after the traffic loss (packet with sequence number n)
provides the network path re-provisioning time.
Note that the trial is valid only when the controller provisions the
alternate path upon network failure.
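For illustration, the following Python sketch locates the loss gap
using the per-frame sequence numbers recorded at the receiving test
port (hypothetical input format):

      # frames: list of (seq_no, rx_time) tuples captured at the test port.
      def network_reprovisioning_time(frames):
          frames = sorted(frames, key=lambda f: f[1])   # order by arrival
          for (seq_a, t_a), (seq_b, t_b) in zip(frames, frames[1:]):
              if seq_b != seq_a + 1:     # sequence gap: frames were lost
                  return t_b - t_a       # seq x before loss, seq n after
          return 0.0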
Authors' Addresses
Bhuvaneswaran Vengainathan
Veryx Technologies Inc.
1 International Plaza, Suite 550
Philadelphia
PA 19113
Email: bhuvaneswaran.vengainathan@veryxtech.com
Anton Basil
Veryx Technologies Inc.
1 International Plaza, Suite 550
Philadelphia
PA 19113
Email: anton.basil@veryxtech.com
Mark Tassinari
Hewlett-Packard,
8000 Foothills Blvd,
Roseville, CA 95747
Email: mark.tassinari@hpe.com
Vishwas Manral
Nano Sec,
CA
Email: vishwas.manral@gmail.com
Sarah Banks
VSS Monitoring
930 De Guigne Drive,
Sunnyvale, CA
Email: sbanks@encrypted.net