Internet Engineering Task Force                             M. Hamilton
Internet-Draft                                    BreakingPoint Systems
Intended status: Informational                                 S. Banks
Expires: January 12, 2012                                 Cisco Systems
                                                           July 11, 2011

      Benchmarking Methodology for Content-Aware Network Devices
                 draft-hamilton-bmwg-ca-bench-meth-07
Abstract
The purpose of this document is to define a set of test scenarios
which may be used to create a series of statistics that will help to
better understand the performance of network devices that operate at
network layers above IP. More specifically, these scenarios are
designed to most accurately predict performance of these devices when
subjected to dynamic traffic patterns. This document will operate
within the constraints of the Benchmarking Working Group charter,
namely black box characterization in a laboratory environment.
Status of this Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on January 12, 2012.
Copyright Notice
Copyright (c) 2011 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
   1. Introduction
      1.1. Requirements Language
   2. Scope
   3. Test Setup
      3.1. Test Considerations
      3.2. Clients and Servers
      3.3. Traffic Generation Requirements
      3.4. Discussion of Network Mathematics
      3.5. Framework for Traffic Specification
      3.6. Multiple Client/Server Testing
      3.7. Device Configuration Considerations
         3.7.1. Network Addressing
         3.7.2. Network Address Translation
         3.7.3. TCP Stack Considerations
         3.7.4. Other Considerations
   4. Benchmarking Tests
      4.1. Maximum Application Flow Rate
         4.1.1. Objective
         4.1.2. Setup Parameters
            4.1.2.1. Application-Layer Parameters
         4.1.3. Procedure
         4.1.4. Measurement
            4.1.4.1. Maximum Application Flow Rate
            4.1.4.2. Application Flow Duration
            4.1.4.3. Packet Loss
            4.1.4.4. Application Flow Latency
      4.2. Application Throughput
         4.2.1. Objective
         4.2.2. Setup Parameters
            4.2.2.1. Parameters
         4.2.3. Procedure
         4.2.4. Measurement
            4.2.4.1. Maximum Throughput
            4.2.4.2. Packet Loss
            4.2.4.3. Maximum Application Flow Rate
            4.2.4.4. Application Flow Duration
            4.2.4.5. Packet Loss
            4.2.4.6. Application Flow Latency
      4.3. Malicious Traffic Handling
         4.3.1. Objective
         4.3.2. Setup Parameters
            4.3.2.1. Parameters
         4.3.3. Procedure
         4.3.4. Measurement
      4.4. Malformed Traffic Handling
         4.4.1. Objective
         4.4.2. Setup Parameters
         4.4.3. Procedure
         4.4.4. Measurement
   5. Appendix A: Example Test Case
   6. IANA Considerations
   7. Security Considerations
   8. References
      8.1. Normative References
      8.2. Informative References
   Authors' Addresses
1. Introduction
Content-aware and deep packet inspection (DPI) device penetration has
grown significantly over the last decade. No longer are devices
simply using Ethernet headers and IP headers to make forwarding
decisions. Devices that could historically be classified as
'stateless' or raw forwarding devices are now seeing more DPI
functionality. Devices such as core and edge routers are now being
developed with DPI functionality to make more intelligent routing and
forwarding decisions.
The Benchmarking Working Group (BMWG) has historically produced
Internet-Drafts and Requests for Comments (RFCs) that are focused
specifically on creating output metrics that are derived from a very
specific and well-defined set of input parameters that are completely
and unequivocally reproducible from testbed to testbed. The end goal
of such methodologies is, in the words of the BMWG charter, to "reduce
specmanship" among network equipment manufacturers (NEMs). Existing
BMWG work has certainly met this stated goal.
Today, device sophistication has expanded beyond existing
methodologies, allowing vendors to reengage in specmanship. In order
to achieve the stated BMWG goals, the methodologies designed to hold
vendors accountable must evolve with the enhanced device
functionality.
The BMWG has historically avoided the use of the term "realistic"
throughout all of its drafts and RFCs. While this document will not
explicitly use this term, the end goal of the terminology and
methodology is to generate performance metrics that will be as close
as possible to equivalent metrics in a production environment. It
should be further noted that any metrics acquired from a production
network MUST be captured according to the policies and procedures of
the IPPM or PMOL working groups.
An explicit non-goal of this document is to replace existing
methodology/terminology pairs such as RFC 2544 [1]/RFC 1242 [2] or
RFC 3511 [3]/RFC 2647 [4]. The explicit goal of this document is to
create a methodology and terminology pair that is more suited for
modern devices while complementing the data acquired using existing
BMWG methodologies. Existing BMWG work generally revolves around
completely repeatable input stimulus, expecting fully repeatable
output. This document departs from this mantra due to the nature of
modern traffic and is more focused on output repeatability than on
static input stimulus.
Some of the terms used throughout this draft have previously been
defined in "Benchmarking Terminology for Firewall Performance" RFC
2647 [4], which SHOULD be consulted prior to using this document.
1.1. Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [5].
2. Scope
Content-aware devices take many forms, shapes and architectures.
These devices are advanced network interconnect devices that inspect
deep into the application payload of network data packets to do
classification. They may be as simple as a firewall that uses
application data inspection for rule set enforcement, or they may
have advanced functionality such as performing protocol decoding and
validation, anti-virus, anti-spam and even application exploit
filtering.
It shall be explicitly stated that this methodology does not involve
replaying traffic captured from live networks.
This document is strictly focused on examining performance and
robustness across a focused set of metrics that may be used to more
accurately predict device performance when deployed in modern
networks. These metrics will be implementation independent.
It should also be noted that the purpose of this document is not to
perform functional testing of the potential features in the Device/
System Under Test (DUT/SUT)[4] nor specify the configurations that
should be tested. Various definitions of proper operation and
configuration may be appropriate within different contexts. While
the definition of these parameters is outside the scope of this
document, the specific configuration of both the DUT and tester
SHOULD be published with the test results for repeatability and
comparison purposes.
While a list of devices that fall under this category will quickly
become obsolete, an initial list of devices that would be well served
by utilizing this type of methodology should prove useful. Devices
such as firewalls, intrusion detection and prevention devices,
application delivery controllers, deep packet inspection devices, and
unified threat management systems generally fall into the content-
aware category.
3. Test Setup
This document will be applicable to most test configurations and will
not be confined to a discussion on specific test configurations.
Since each DUT/SUT will have its own unique configuration, users
MUST configure their device with the same parameters that would be
used in the actual deployment of the device. The DUT configuration
MUST be published with the final benchmarking results. If available,
command-line scripts used to configure the DUT and any configuration
information for the tester SHOULD be published with the final
results.
3.1. Test Considerations
3.2. Clients and Servers
Content-aware device testing SHOULD involve multiple clients and
multiple servers. As with RFC 3511 [3], this methodology will use
the terms virtual clients/servers because both the client and server
will be represented by the tester and not actual clients/servers.
Similarly defined in RFC 3511 [3], a data source may emulate multiple
clients and/or servers within the context of the same test scenario.
The test report MUST indicate the number of virtual clients/servers
used during the test. In Appendix C of RFC 2544 [1], the range of IP
addresses assigned to the BMWG by the IANA is listed. This address
range SHOULD be adhered to in accordance with RFC 2544 [1].
Additionally, section 5.2 of RFC 5180 [6] SHOULD be consulted for the
appropriate address ranges when testing IPv6-enabled configurations.
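
   As a non-normative illustration, the following Python sketch derives
   virtual client and server addresses from the benchmarking ranges
   cited above (198.18.0.0/15 from RFC 2544 Appendix C; 2001:2::/48
   from RFC 5180, Section 5.2). The helper name, host counts, and
   subnet split are examples only.

      # Illustrative sketch: carve virtual client/server addresses
      # out of the IANA benchmarking ranges. Counts are examples.
      import ipaddress

      BENCH_V4 = ipaddress.ip_network("198.18.0.0/15")  # RFC 2544 App. C
      BENCH_V6 = ipaddress.ip_network("2001:2::/48")    # RFC 5180 5.2

      def virtual_hosts(network, count):
          """Return the first `count` usable host addresses."""
          hosts = network.hosts()
          return [next(hosts) for _ in range(count)]

      # Split the IPv4 range so clients and servers do not overlap.
      v4_subnets = list(BENCH_V4.subnets(new_prefix=16))
      clients = virtual_hosts(v4_subnets[0], 100)  # from 198.18.0.0/16
      servers = virtual_hosts(v4_subnets[1], 10)   # from 198.19.0.0/16
      print(clients[0], servers[0])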
3.3. Traffic Generation Requirements
The explicit purposes of content-aware devices vary widely, but these
devices use information deeper inside the application flow to make
decisions and classify traffic. This methodology will utilize
traffic flows that resemble real application traffic without
utilizing captures from live production networks. Application Flows,
as defined in RFC 2722 [7], can be well defined without simply
referring to a network capture. The traffic template is defined and
listed in the appendix of this document.
The test tool MUST be able to create application flows between every
client and server, regardless of direction. The tester MUST be able
to open TCP connections on multiple destination ports and MUST be
able to direct UDP traffic to multiple destination ports.
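
   A minimal, non-normative sketch of this requirement follows; the
   server address and the port sets are placeholders for whatever the
   traffic template specifies.

      # Sketch: exercise TCP connections and UDP datagrams across
      # multiple destination ports. Address/ports are placeholders.
      import socket

      SERVER = "198.19.0.1"       # a virtual server (placeholder)
      TCP_PORTS = (25, 80, 143)   # e.g. SMTP, HTTP, IMAP
      UDP_PORTS = (53, 20000)     # e.g. DNS, RTP

      for port in TCP_PORTS:
          # Open, use, and close one TCP connection per port.
          with socket.create_connection((SERVER, port), timeout=5) as s:
              s.sendall(b"probe")

      with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as u:
          for port in UDP_PORTS:
              u.sendto(b"probe", (SERVER, port))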
While it is duly noted that no two production networks look alike,
this document will illustrate an example mix of what traffic may look
like on a sample production network. A user of this methodology is
free to utilize the sample mix as provided in the appendix. If a
user of this methodology understands the traffic patterns in their
production network, that user SHOULD use the framework provided in
the appendix to create a traffic mix appropriate to their
environment.
3.4. Discussion of Network Mathematics
Prior to executing the methodology as outlined in the following
sections, it is imperative to understand the implications of utilizing
representative application flows for the actual traffic content of
the benchmarking effort. One interesting aspect of utilizing
application flows is that each flow is inherently different from
every other application flow. The content of each flow will vary
from application to application, and in most cases, even varies
within the same type of application flow. The following description
of the methodology will benchmark each individual type
and subset of application flow, prior to performing similar tests
with a traffic mix as specified either by the sample mix in the
appendix, or as defined by the user of this methodology.
The purpose of this process is to ensure that any performance
implications that are discovered during the mixed testing aren't due
to the inherent physical network limitations. As an example of this
phenomenon, let's take a single-homed network device, as illustrated
in the following diagram.
               +----------+
   +---+  1gE  |   DUT/   |  1gE  +---+
   |C/S|-------|   SUT    |-------|C/S|
   +---+       +----------+       +---+

             Simple DUT Configuration

          Figure 1: Single-Homed Example
For the purpose of this discussion, let's take a theoretical
application flow that utilizes UDP for the transport layer. Assume
that the sample transaction we will be using to model this particular
flow requires 10 UDP datagrams to complete the transaction. For
simplicity, each datagram within the flow is exactly 64 bytes,
including associated Ethernet, IP, and UDP overhead. With any
network device, there are always three metrics which interact with
each other: concurrent application flows, application flows per
second, and throughput.
Our example testbed is a single-homed device connected with 1 Gigabit
Ethernet links. The purpose of this benchmark effort is to quantify
the number of application flows per second that may be processed
through our device under test. Let's assume that the result from our
scenario is that the DUT is able to process 10,000 application flows
per second. The question is whether that ceiling is the actual
ceiling of the device, or if it is actually being gated by one of the
other metrics. If we do the appropriate math, 10,000 flows per
second, with each flow carrying 640 total bytes (5,120 bits), means
that we are achieving a throughput of roughly 51 Mbps. This is
dramatically less than the
1 gigabit physical link we are using. We can conclude that 10,000
flows per second is in fact the performance limit of the device.
If we change the example slightly and change the size of each
datagram to 1312 bytes, then we'll need to redo our math. Assuming
the same observed DUT limitation of 10,000 flows per second, we need
to ensure that this ceiling is imposed by the DUT, and not by physical
limitations. For each flow, we'll require 104,960 bits. 10,000 flows
per second implies a throughput of roughly 1 Gbps. At this point, we
cannot definitively answer whether the DUT is actually limited to
10,000 flows per second. If we are able to modify the scenario, and
utilize 10 Gigabit interfaces, then perhaps the flow per second
ceiling will be reached at a higher number than 10,000.
This example illustrates why a user of this methodology MUST
benchmark each application variant individually to ensure that each
flow's ceilings are true ceilings, rather than an artifact of a
different limitation.
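
   The arithmetic above can be captured in a short, non-normative
   Python helper for sanity-checking results. The 90% margin used to
   flag link-limited measurements is an arbitrary example threshold,
   not part of this methodology.

      # Sanity check: is an observed flow-rate ceiling a credible DUT
      # limit, or merely an artifact of link capacity?
      def offered_bps(flows_per_sec, datagrams_per_flow, dgram_bytes):
          """Aggregate offered load implied by a given flow rate."""
          return flows_per_sec * datagrams_per_flow * dgram_bytes * 8

      LINK_BPS = 1_000_000_000               # 1 Gigabit Ethernet

      small = offered_bps(10_000, 10, 64)    # ~51 Mbps
      large = offered_bps(10_000, 10, 1312)  # ~1.05 Gbps

      for label, bps in (("64-byte flows", small),
                         ("1312-byte flows", large)):
          verdict = ("possibly link-limited" if bps >= 0.9 * LINK_BPS
                     else "credible DUT limit")
          print(f"{label}: {bps / 1e6:,.1f} Mbps -> {verdict}")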
3.5. Framework for Traffic Specification
The following table MUST be specified for each application flow
variant.
o Flow Size in Bits
o Percentage of Aggregate Flows: 25%
o Transport Protocol(s): TCP, UDP
o Destination Port(s): 80
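
   One possible machine-readable encoding of this framework is
   sketched below; the Python field names simply mirror the list above
   and carry no normative weight.

      # Sketch: encode the Section 3.5 traffic-spec framework.
      from dataclasses import dataclass
      from typing import Tuple

      @dataclass
      class FlowSpec:
          name: str                    # application flow variant
          flow_size_bits: int          # Flow Size in Bits
          percent_of_flows: float      # Percentage of Aggregate Flows
          transports: Tuple[str, ...]  # Transport Protocol(s)
          dest_ports: Tuple[int, ...]  # Destination Port(s)

      # Example entry mirroring the sample values listed above.
      example = FlowSpec("Example", 8 * 1024, 25.0,
                         ("TCP", "UDP"), (80,))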
3.6. Multiple Client/Server Testing
In actual network deployments, connections are being established
between multiple clients and multiple servers simultaneously. Device
vendors have been known to optimize the operation of their devices
for easily defined patterns. The connection sequence ordering
scenarios a device will see on a network will likely be much less
deterministic. In fact, many application flows have multiple layer 4
connections within a single flow, with client and server reversing
roles. This methodology makes no assumptions about flow initiation
sequence across multiple ports.
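
   The following non-normative sketch illustrates such non-
   determinism; start_flow is a hypothetical hook into the test tool,
   not a defined interface.

      # Sketch: randomize client/server pairing and flow initiation
      # order so the DUT never sees a fixed establishment pattern.
      import random

      def launch_order(clients, servers, flow_count):
          pairs = [(random.choice(clients), random.choice(servers))
                   for _ in range(flow_count)]
          random.shuffle(pairs)       # non-deterministic sequence
          return pairs

      # for src, dst in launch_order(clients, servers, 1000):
      #     start_flow(src, dst)      # hypothetical test-tool hook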
3.7. Device Configuration Considerations
The configuration of the DUT may have an effect on the observed
results of the following methodology. A comprehensive, but certainly
not exhaustive, list of potential considerations is listed below.
3.7.1. Network Addressing
The IANA has issued a range of IP addresses to the BMWG for purposes
of benchmarking. Please refer to RFC 2544 [1] for more details.
3.7.2. Network Address Translation
Many content-aware devices are capable of performing Network Address
Translation (NAT)[4]. If the final deployment of the DUT will have
this functionality enabled, then the DUT MUST also have it enabled
during the execution of this methodology. It MAY be beneficial to
perform the test series in both modes in order to determine the
performance differential when using NAT. The test report MUST
indicate whether NAT was enabled during the testing process.
3.7.3. TCP Stack Considerations
As with RFC 3511 [3], TCP options SHOULD remain constant across all
devices under test in order to ensure truly comparable results. This
document does not attempt to specify which TCP options should be
used, but all devices tested SHOULD be subject to the same
configuration options.
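
   As a non-normative illustration, a tester might pin its TCP options
   once and reuse the same socket factory against every DUT; the
   specific options below are examples, not recommendations.

      # Sketch: hold tester TCP options constant across all DUTs.
      import socket

      def make_test_socket():
          s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          s.setsockopt(socket.IPPROTO_TCP,
                       socket.TCP_NODELAY, 1)      # disable Nagle
          s.setsockopt(socket.SOL_SOCKET,
                       socket.SO_RCVBUF, 65536)    # fixed buffer size
          return s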
3.7.4. Other Considerations
Various content-aware devices will have widely varying feature sets.
In the interest of representative test results, the DUT features that
will likely be enabled in the final deployment SHOULD be used. This
methodology is not intended to advise on which features should be
enabled, but to suggest using actual deployment configurations.
4. Benchmarking Tests
Each of the following benchmark scenarios SHOULD be run with each of
the single application flow templates. Upon completion of all
iterations, the mixed test SHOULD be completed, subject to the
traffic mix as defined by the user.
4.1. Maximum Application Flow Rate
4.1.1. Objective
To determine the maximum rate at which a device is able to
establish and complete application flows as defined by RFC 2647 [4].
4.1.2. Setup Parameters
The following parameters MUST be defined for all tests:
4.1.2.1. Application-Layer Parameters
For each application protocol in use during the test run, the table
provided in Section 3.5 MUST be published.
4.1.3. Procedure
The test SHOULD generate application network traffic that meets the
conditions of Section 3.3. The traffic pattern SHOULD begin with an
application flow rate of 10% of expected maximum. The test SHOULD be
configured to increase the attempt rate in units of 10% up through
110% of expected maximum. The duration of each loading phase SHOULD
be at least 30 seconds. This test MAY be repeated, each subsequent
iteration beginning at 5% of expected maximum and increasing session
establishment rate to 10% more than the maximum observed from the
previous test run.
This procedure MAY be repeated any number of times with the results
being averaged together.
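
   The loading schedule described above can be expressed as a small,
   non-normative generator; it assumes only the percentages and phase
   duration given in this section.

      # Sketch of the Section 4.1.3 ramp: 10% to 110% of the expected
      # maximum in 10% steps, each phase held at least 30 seconds.
      def ramp_schedule(expected_max_fps, start_pct=10, stop_pct=110,
                        step_pct=10, phase_seconds=30):
          """Yield (attempted flows/s, phase duration in s) pairs."""
          pct = start_pct
          while pct <= stop_pct:
              yield expected_max_fps * pct / 100.0, phase_seconds
              pct += step_pct

      for rate, hold in ramp_schedule(expected_max_fps=10_000):
          print(f"offer {rate:,.0f} flows/s for {hold} s")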
4.1.4. Measurement
The following metrics MAY be determined from this test, and SHOULD be
observed for each application protocol within the traffic mix:
4.1.4.1. Maximum Application Flow Rate
The test tool SHOULD report the maximum rate at which application
flows were completed, as defined by RFC 2647 [4], Section 3.7. This
rate SHOULD be reported individually for each application protocol
present within the traffic mix.
4.1.4.2. Application Flow Duration
The test tool SHOULD report the minimum, maximum and average
application duration, as defined by RFC 2647 [4], Section 3.9. This
duration SHOULD be reported individually for each application
protocol present within the traffic mix.
4.1.4.3. Packet Loss
The test tool SHOULD report the number of flow packets lost or
dropped from source to destination.
4.1.4.4. Application Flow Latency
The test tool SHOULD report the minimum, maximum and average amount
of time an application flow member takes to traverse the DUT, as
defined by RFC 1242 [2], Section 3.13. This latency SHOULD be reported
individually for each application protocol present within the traffic
mix.
4.2. Application Throughput
4.2.1. Objective
To determine the maximum rate at which a device is able to
forward bits when using application flows as defined in the previous
sections.
4.2.2. Setup Parameters
The following parameters MUST be defined and reported for all tests:
4.2.2.1. Parameters
The same parameters as described in Section 4.1.2 MUST be used.
4.2.3. Procedure
This test will attempt to send application flows through the device
at a flow rate of 30% of the maximum, as observed in Section 4.1.
This procedure MAY be repeated with the results from each iteration
averaged together.
4.2.4. Measurement
The following metrics MAY be determined from this test, and SHOULD be
observed for each application protocol within the traffic mix:
4.2.4.1. Maximum Throughput
The test tool SHOULD report the minimum, maximum and average
application throughput.
4.2.4.2. Packet Loss
The test tool SHOULD report the number of network packets lost or
dropped from source to destination.
4.2.4.3. Maximum Application Flow Rate
The test tool SHOULD report the maximum rate at which application
flows were completed, as defined by RFC 2647 [4], Section 3.7. This
rate SHOULD be reported individually for each application protocol
present within the traffic mix.
4.2.4.4. Application Flow Duration
The test tool SHOULD report the minimum, maximum and average
application duration, as defined by RFC 2647 [4], Section 3.9. This
duration SHOULD be reported individually for each application
protocol present within the traffic mix.
4.2.4.5. Packet Loss
The test tool SHOULD report the number of flow packets lost or
dropped from source to destination.
4.2.4.6. Application Flow Latency
The test tool SHOULD report the minimum, maximum and average amount
of time an application flow member takes to traverse the DUT, as
defined by RFC 1242 [2], Section 3.13. This latency SHOULD be reported
individually for each application protocol present within the traffic
mix.
4.3. Malicious Traffic Handling
4.3.1. Objective
To determine the effects on performance that malicious traffic may
have on the DUT. While this test is not designed to characterize
accuracy of detection or classification, it MAY be useful to record
these measurements as specified below.
4.3.2. Setup Parameters
4.3.2.1. Parameters
The same parameters as described in Section 4.1.2 MUST be used.
Additionally, the following parameters MUST be defined and reported
for all tests:
o Attack List: A listing of the malicious traffic that was generated
by the test.
4.3.3. Procedure
This test will utilize the procedures specified previously in
Section 4.1.3 and Section 4.2.3. When performing the procedures
listed previously, the tester should generate malicious traffic
representative of the final network deployment. The mix of attacks
MAY include software vulnerability exploits, network worms, back-door
access attempts, network probes and other malicious traffic.
If a DUT may be run with and without attack mitigation, both
procedures SHOULD be run with and without the feature enabled on the
DUT to determine the effects of the malicious traffic on the baseline
metrics previously derived. If a DUT does not have active attack
mitigation capabilities, this procedure SHOULD be run regardless.
Certain malicious traffic could affect device performance even if the
DUT does not actively inspect packet data for malicious traffic.
4.3.4. Measurement
The metrics specified by Section 4.1.4 and Section 4.2.4 SHOULD be
determined from this test.
4.4. Malformed Traffic Handling
4.4.1. Objective
To determine the effects on performance and stability that malformed
traffic may have on the DUT.
4.4.2. Setup Parameters
The same Transport-Layer and Application-Layer parameters MUST be
used as previously specified in Section 4.1.2 and
Section 4.2.2.
4.4.3. Procedure
This test will utilize the procedures specified previously in
Section 4.1.3 and Section 4.2.3. When performing the procedures
listed previously, the tester should generate malformed traffic at
all protocol layers. This is commonly known as fuzzed traffic.
Fuzzing techniques generally modify portions of packets, introducing
checksum errors, invalid protocol options, and improper protocol
conformance. This test SHOULD be run on a DUT regardless of whether
it has built-in mitigation capabilities.
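
   The following toy mutation routine is a non-normative illustration
   of the idea; real fuzzers are far more systematic, and the checksum
   offset here is hypothetical.

      # Toy sketch of fuzzing: corrupt a checksum field and flip a
      # few random payload bits. Offsets are hypothetical.
      import random

      def fuzz(packet: bytes, checksum_offset: int) -> bytes:
          mutated = bytearray(packet)
          mutated[checksum_offset] ^= 0xFF   # invalidate the checksum
          for _ in range(4):                 # flip four random bits
              i = random.randrange(len(mutated))
              mutated[i] ^= 1 << random.randrange(8)
          return bytes(mutated)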
4.4.4. Measurement
For each protocol present in the traffic mix, the metrics specified
by Section 4.1.4 and Section 4.2.4 MAY be determined. This data may
be used to ascertain the effects of fuzzed traffic on the DUT.
5. Appendix A: Example Test Case
This appendix shows an example of a protocol mix that may be
used with this methodology.
+---------------------------+-----------------------+-------------+
| Protocol                  | Label                 | Value       |
+---------------------------+-----------------------+-------------+
| Web 1kB                   |                       |             |
|                           | Flow Size             | 1 kB        |
|                           | Flow Percentage       | 15%         |
|                           | Transport Protocol(s) | TCP         |
|                           | Destination Port(s)   | 80          |
| Web 10kB                  |                       |             |
|                           | Flow Size             | 10 kB       |
|                           | Flow Percentage       | 15%         |
|                           | Transport Protocol(s) | TCP         |
|                           | Destination Port(s)   | 80          |
| Web 100kB                 |                       |             |
|                           | Flow Size             | 100 kB      |
|                           | Flow Percentage       | 15%         |
|                           | Transport Protocol(s) | TCP         |
|                           | Destination Port(s)   | 80          |
| BitTorrent Movie Download |                       |             |
|                           | Flow Size             | 500 MB      |
|                           | Flow Percentage       | 5%          |
|                           | Transport Protocol(s) | TCP         |
|                           | Destination Port(s)   | 6881-6889   |
| SMTP Email                |                       |             |
|                           | Flow Size             | 50 kB       |
|                           | Flow Percentage       | 10%         |
|                           | Transport Protocol(s) | TCP         |
|                           | Destination Port(s)   | 25          |
| IMAP Email                |                       |             |
|                           | Flow Size             | 100 kB      |
|                           | Flow Percentage       | 15%         |
|                           | Transport Protocol(s) | TCP         |
|                           | Destination Port(s)   | 143         |
| DNS                       |                       |             |
|                           | Flow Size             | 2 kB        |
|                           | Flow Percentage       | 10%         |
|                           | Transport Protocol(s) | UDP         |
|                           | Destination Port(s)   | 53          |
| RTP                       |                       |             |
|                           | Flow Size             | 100 MB      |
|                           | Flow Percentage       | 10%         |
|                           | Transport Protocol(s) | UDP         |
|                           | Destination Port(s)   | 20000-65535 |
+---------------------------+-----------------------+-------------+
Table 1: Sample Traffic Pattern
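
   As a non-normative aid, the percentages in Table 1 can be totaled
   mechanically; note that this sample mix specifies 95% of aggregate
   flows.

      # Sketch: total the flow percentages from Table 1.
      sample_mix = {"Web 1kB": 15, "Web 10kB": 15, "Web 100kB": 15,
                    "BitTorrent": 5, "SMTP": 10, "IMAP": 15,
                    "DNS": 10, "RTP": 10}
      print(sum(sample_mix.values()), "% of aggregate flows specified")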
6. IANA Considerations
This memo includes no request to IANA.
All drafts are required to have an IANA considerations section (see
the update of RFC 2434 [8] for a guide). If the draft does not
require IANA to do anything, the section contains an explicit
statement that this is the case (as above). If there are no
requirements for IANA, the section will be removed during conversion
into an RFC by the RFC Editor.
7. Security Considerations
Benchmarking activities as described in this memo are limited to
technology characterization using controlled stimuli in a laboratory
environment, with dedicated address space and the other constraints of
RFC 2544 [1].
The benchmarking network topology will be an independent test setup
and MUST NOT be connected to devices that may forward the test
traffic into a production network, or misroute traffic to the test
management network.
8. References
8.1. Normative References
[1] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
Network Interconnect Devices", RFC 2544, March 1999.
[2] Bradner, S., "Benchmarking terminology for network
interconnection devices", RFC 1242, July 1991.
[3] Hickman, B., Newman, D., Tadjudin, S., and T. Martin,
"Benchmarking Methodology for Firewall Performance", RFC 3511,
April 2003.
[4] Newman, D., "Benchmarking Terminology for Firewall Performance",
RFC 2647, August 1999.
[5] Bradner, S., "Key words for use in RFCs to Indicate Requirement
Levels", BCP 14, RFC 2119, March 1997.
[6] Popoviciu, C., Hamza, A., Van de Velde, G., and D. Dugatkin,
"IPv6 Benchmarking Methodology for Network Interconnect
Devices", RFC 5180, May 2008.
[7] Brownlee, N., Mills, C., and G. Ruth, "Traffic Flow Measurement:
Architecture", RFC 2722, October 1999.
8.2. Informative References
[8] Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA
Considerations Section in RFCs", BCP 26, RFC 5226, May 2008.
Authors' Addresses
Mike Hamilton
BreakingPoint Systems
Austin, TX 78717
US
Phone: +1 512 636 2303
Email: mhamilton@breakingpoint.com
Sarah Banks
Cisco Systems
San Jose, CA 95134
US
Email: sabanks@cisco.com