INTERNET-DRAFT                                                 K. Green
Intended Status: Informational                             T. Alexander
                                                                   Ixia
Expires: August 1, 2012                                      March 2012
    Benchmarking Methodology for Evaluating the Security Effectiveness
                        of Content Aware Devices
                 draft-green-bmwg-seceff-bench-meth-01
Abstract
This document defines a methodology for evaluating the ability of
content-aware network devices to correctly detect and block malicious
or administratively disallowed traffic flows. This benchmark
addresses the issue of classification accuracy under well-defined
conditions. It is not concerned with measuring forwarding
performance, which is covered by other BMWG documents.
Status of this Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on August 1, 2012.
Copyright and License Notice
Copyright (c) 2012 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1 Requirements Language . . . . . . . . . . . . . . . . . . . 5
2 Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1 Existing Terminology . . . . . . . . . . . . . . . . . . . 5
2.1.1 Illegal Traffic . . . . . . . . . . . . . . . . . . . . 5
2.2 New Terminology . . . . . . . . . . . . . . . . . . . . . . 5
2.2.1 Attack . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2.2 Legal Traffic . . . . . . . . . . . . . . . . . . . . . 6
3 Test Setup . . . . . . . . . . . . . . . . . . . . . . . . . . 6
3.1 Application Traffic Mix . . . . . . . . . . . . . . . . . . 6
4 Benchmarking Tests . . . . . . . . . . . . . . . . . . . . . . 7
4.1 Attack-only Blocking Rate . . . . . . . . . . . . . . . . . 7
4.2 Error-free Attack Blocking Rate . . . . . . . . . . . . . . 7
4.3 Attack Blocking Effectiveness . . . . . . . . . . . . . . . 7
5 Security Considerations . . . . . . . . . . . . . . . . . . . . 7
6 IANA Considerations . . . . . . . . . . . . . . . . . . . . . . 8
7 Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 8
8 References . . . . . . . . . . . . . . . . . . . . . . . . . . 8
8.1 Normative References . . . . . . . . . . . . . . . . . . . 8
8.2 Informative References . . . . . . . . . . . . . . . . . . 8
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 8
1 Introduction
Networks of the 21st century exist in an environment flooded with
complex and highly sophisticated security threats. There is an
intense and enduring arms race under way between those developing and
distributing attack technology and those developing and supplying
defense technology.
In addition there is a growing need to limit access by users inside
private or corporate networks to Internet sites or services deemed
undesirable and to ensure that intellectual property and other
private information is not allowed to pass freely from inside the
protected network to the outside world.
In response to this dynamic and constantly expanding range of
security threats and privacy requirements there is a growing
diversity of network devices that provide a variety of defensive
services including, but not limited to, firewall, intrusion
detection, intrusion prevention, anti-virus, anti-malware, anti-spam,
anti-DoS, anti-DDoS, unified threat management, data leakage
prevention and
more. These content-aware devices use a mixture of stateless and
stateful L3 to L7 technologies, including deep packet inspection
(DPI), to categorize traffic flows.
What all of these defensive solutions have in common is the
requirement that they reliably and accurately distinguish between
legal (benign and allowed) traffic and illegal (malicious or
disallowed) traffic.
Categorization of traffic as either legal or illegal is fundamental
to the operation of these devices since it is a prerequisite to all
security functions.
Security Effectiveness is a measure of how accurately the device
under test (DUT) categorizes traffic:
o No false negatives = correctly blocks all illegal traffic
o No false positives = never blocks legal traffic
In contrast, Security Performance is the characterization of the
DUT's forwarding performance while under attack. Security Performance
measures how well the device forwards legal traffic with security
features enabled and in the presence of illegal traffic. This is
addressed in [HAMILTON].
Security Effectiveness is orthogonal to Security Performance.
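As a non-normative illustration, the Python sketch below shows how
per-flow verdicts from a hypothetical test harness could be tallied
into false-negative and false-positive counts; the flow record and
its field names are assumptions of the sketch, not interfaces defined
by this document.

   # Illustrative only: tally false negatives and false positives
   # from per-flow verdicts.  The Flow record and its fields are
   # assumptions of this sketch, not defined by this document.
   from dataclasses import dataclass

   @dataclass
   class Flow:
       is_illegal: bool    # ground truth: the flow is illegal traffic
       was_blocked: bool   # observed: the DUT blocked the flow

   def tally(flows):
       false_negatives = sum(1 for f in flows
                             if f.is_illegal and not f.was_blocked)
       false_positives = sum(1 for f in flows
                             if not f.is_illegal and f.was_blocked)
       return false_negatives, false_positives

A DUT with perfect Security Effectiveness yields (0, 0): every
illegal flow is blocked and no legal flow is blocked.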
1.1 Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in [RFC2119].
2 Terminology
2.1 Existing Terminology
2.1.1 Illegal Traffic
Definition:
Illegal traffic is defined by [RFC2647] as "Packets specified for
rejection in the rule set of the DUT/SUT".
Discussion:
In this context, that definition is interpreted to mean any illicit
traffic flow that should be blocked by the DUT. That means illegal
traffic is either malicious (i.e., part of a deliberate attack) or
administratively banned (i.e., disallowed from passing into or out of
the protected network due to its content and/or destination).
Unit of measurement:
not applicable
Issues:
2.2 New Terminology
2.2.1 Attack
Definition:
An attack is an attempt to transmit illegal traffic across the
DUT.
Discussion:
Attacks can be classified as follows:
Attempted = injected into the DUT
Blocked = dropped by the DUT
Successful = passed through the DUT
A successful attack indicates a failure by the DUT to recognize
and block the illegal traffic.
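As a non-normative illustration, the relationship between these
classes can be expressed as in the Python sketch below; the counter
names are assumptions of the sketch.

   # Illustrative only: counters for the three attack classes and the
   # invariant relating them.  Names are assumptions of this sketch.
   class AttackCounters:
       def __init__(self):
           self.attempted = 0    # injected into the DUT
           self.blocked = 0      # dropped by the DUT
           self.successful = 0   # passed through the DUT (a failure)

       def record(self, passed_through_dut):
           self.attempted += 1
           if passed_through_dut:
               self.successful += 1
           else:
               self.blocked += 1
           # Every attempted attack is either blocked or successful.
           assert self.attempted == self.blocked + self.successful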
Unit of measurement:
not applicable
Issues:
See also:
legal traffic
2.2.2 Legal Traffic
Definition:
Legal traffic is any traffic flow which should not be blocked by
the DUT.
Discussion:
Legal traffic is implicitly benign and allowed.
Unit of measurement:
not applicable
Issues:
See also:
illegal traffic
3 Test Setup
3.1 Application Traffic Mix
Some test cases require the test equipment to inject legal traffic
mixed with the illegal traffic. The purpose of the legal traffic is
to force the DUT to distinguish between legal and illegal traffic; it
is not used to quantify the forwarding performance of the DUT from an
application perspective.
Given this purpose, and in order to protect the integrity and
repeatability of the benchmark, a single, fixed definition of the
legal traffic application mix is provided. No attempt is made to
accurately model any particular mix of application traffic such as
might be seen in an operational network.
Rather, the traffic mix includes an appropriate variety of traffic
types to ensure that the security engine cannot blindly assume that
every packet is either legal or illegal and so deliver
unrealistically high
performance or otherwise undermine the benchmark.
In those test scenarios where application traffic is specified, the
following mix MUST be used:
*** TBD but likely to include at least UDP, TCP, HTTP ***
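As a non-normative illustration of the point above (not of the TBD
mix itself), the Python sketch below interleaves legal and illegal
flows with a fixed seed, so that the DUT must classify every flow and
repeated runs offer the same load; the flow lists are hypothetical
inputs, not interfaces defined by this document.

   # Illustrative only: interleave legal and illegal flows so the DUT
   # must classify each flow, using a fixed seed for repeatability.
   # legal_flows and illegal_flows are hypothetical inputs.
   import random

   def build_offered_load(legal_flows, illegal_flows, seed=1):
       flows = [("legal", f) for f in legal_flows]
       flows += [("illegal", f) for f in illegal_flows]
       random.Random(seed).shuffle(flows)
       return flows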
4 Benchmarking Tests
4.1 Attack-only Blocking Rate
Attack-only Blocking Rate (attacks/second) is defined as the largest
number of attacks per second at which 100% of attacks are blocked,
with no application traffic present.
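One way to locate this rate, offered as a non-normative illustration,
is an iterative search similar in spirit to the throughput search of
[RFC2544]; the sketch below assumes a hypothetical run_trial(rate)
harness call returning the number of attacks attempted and blocked at
the offered rate.

   # Illustrative only: binary search for the largest attack rate
   # (attacks/second) at which 100% of attacks are blocked, with no
   # application traffic present.  run_trial is a hypothetical
   # harness call returning (attempted, blocked) for one trial.
   def attack_only_blocking_rate(run_trial, low=1, high=100000,
                                 resolution=1):
       best = 0
       while high - low >= resolution:
           rate = (low + high) // 2
           attempted, blocked = run_trial(rate)
           if attempted > 0 and blocked == attempted:  # 100% blocked
               best, low = rate, rate + 1              # try higher
           else:
               high = rate                             # back off
       return best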
4.2 Error-free Attack Blocking Rate
Error-free Attack Blocking Rate (attacks/second) is defined as the
largest number of attacks per second at which 100% of attacks are
blocked in the presence of legal traffic and 0% of the legal traffic
is blocked or dropped.
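The same search structure applies here; only the trial pass criterion
changes, as in the non-normative sketch below (the counter names are
assumptions of the sketch).

   # Illustrative only: pass criterion for the error-free variant,
   # which also requires that no legal traffic is blocked or dropped.
   def error_free_trial_passed(attacks_attempted, attacks_blocked,
                               legal_offered, legal_delivered):
       all_attacks_blocked = (attacks_attempted > 0 and
                              attacks_blocked == attacks_attempted)
       no_legal_loss = (legal_delivered == legal_offered)
       return all_attacks_blocked and no_legal_loss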
4.3 Attack Blocking Effectiveness
Attack Blocking Effectiveness (percentage) is the ratio of blocked
attacks to attempted attacks, counted over the full set of attack
types, in the presence of legal traffic and with 0% of the legal
traffic dropped or blocked.
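Expressed as a non-normative Python sketch over hypothetical per-type
counters (the result is only meaningful when 0% of the legal traffic
was dropped or blocked):

   # Illustrative only: Attack Blocking Effectiveness as a
   # percentage, aggregated over all attack types exercised.
   # per_type_counts is a hypothetical list of (attempted, blocked)
   # pairs, one per attack type.
   def attack_blocking_effectiveness(per_type_counts):
       attempted = sum(a for a, b in per_type_counts)
       blocked = sum(b for a, b in per_type_counts)
       return 100.0 * blocked / attempted if attempted else 0.0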
5 Security Considerations
Benchmarking activities as described in this memo are limited to
technology characterization using controlled stimuli in a laboratory
environment, with dedicated address space and the other constraints
defined in [RFC2544].
The benchmarking network topology will be an independent test setup
and MUST NOT be connected to devices that may forward the test
traffic into a production network, or misroute traffic to the test
management network.
Further, benchmarking is performed on a "black-box" basis, relying
solely on measurements observable external to the DUT/SUT.
Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
benchmarking purposes. Any implications for network security arising
from the DUT/SUT SHOULD be identical in the lab and in production
networks.
6 IANA Considerations
This memo includes no request to IANA.
7 Acknowledgements
Thanks to X, Y & Z for their review and comments.
8 References
8.1 Normative References
[RFC1242] Bradner, S., "Benchmarking Terminology for Network
Interconnection Devices", RFC 1242, July 1991.
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, March 1997.
[RFC2647] Newman, D., "Benchmarking Terminology for Firewall
Performance", RFC 2647, August 1999.
8.2 Informative References
[RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
Network Interconnect Devices", RFC 2544, March 1999.
[RFC3511] Hickman, B., Newman, D., Tadjudin, S., and T. Martin,
"Benchmarking Methodology for Firewall Performance",
RFC 3511, April 2003.
[HAMILTON] "Benchmarking Methodology for Content Aware Network
Devices", draft-hamilton-bmwg-ca-bench-07.txt
Authors' Addresses
Kenneth Green
Ixia
Australia
EMail: kgreen@ixiacom.com
Tom Alexander
Ixia
USA
EMail: talexander@ixiacom.com