<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
<?rfc toc="yes"?>
<?rfc tocompact="yes"?>
<?rfc tocdepth="3"?>
<?rfc tocindent="yes"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="no"?>
<rfc category="info" docName="draft-ietf-ippm-reporting-metrics-04"
ipr="pre5378Trust200902">
<front>
<title abbrev="Reporting Metrics">Reporting Metrics: Different Points of
View</title>
<author fullname="Al Morton" initials="A." surname="Morton">
<organization>AT&amp;T Labs</organization>
<address>
<postal>
<street>200 Laurel Avenue South</street>
<city>Middletown</city>
<region>NJ</region>
<code>07748</code>
<country>USA</country>
</postal>
<phone>+1 732 420 1571</phone>
<facsimile>+1 732 368 1192</facsimile>
<email>acmorton@att.com</email>
<uri>http://home.comcast.net/~acmacm/</uri>
</address>
</author>
<author fullname="Gomathi Ramachandran" initials="G."
surname="Ramachandran">
<organization>AT&amp;T Labs</organization>
<address>
<postal>
<street>200 Laurel Avenue South</street>
<city>Middletown</city>
<region>NJ</region>
<code>07748</code>
<country>USA</country>
</postal>
<phone>+1 732 420 2353</phone>
<email>gomathi@att.com</email>
</address>
</author>
<author fullname="Ganga Maguluri" initials="G." surname="Maguluri">
<organization>AT&amp;T Labs</organization>
<address>
<postal>
<street>200 Laurel Avenue</street>
<city>Middletown</city>
<region>NJ</region>
<code>07748</code>
<country>USA</country>
</postal>
<phone>+1 732 420 2486</phone>
<email>gmaguluri@att.com</email>
</address>
</author>
<date day="25" month="October" year="2010" />
<abstract>
<t>Consumers of IP network performance metrics have many different uses
in mind. This memo provides "long-term" reporting considerations (e.g.,
days, weeks, or months, as opposed to 10 seconds), based on analysis of
the two key audience points-of-view. It describes how the audience
categories affect the selection of metric parameters and options when
seeking information that serves their needs.</t>
</abstract>
<note title="Requirements Language">
<t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in <xref
target="RFC2119">RFC 2119</xref>.</t>
</note>
</front>
<middle>
<section title="Introduction">
<t>When designing measurements of IP networks and presenting the
results, knowledge of the audience is a key consideration. To present a
useful and relevant portrait of network conditions, one must answer the
following question:</t>
<t>"How will the results be used?"</t>
<t>There are two main audience categories:</t>
<t><list style="numbers">
<t>Network Characterization - describes conditions in an IP network
for quality assurance, troubleshooting, modeling, Service Level
Agreements (SLA), etc. This point-of-view looks inward, toward the
network, and the consumer intends to act on the network itself.</t>
<t>Application Performance Estimation - describes the network
conditions in a way that facilitates determining effects on user
applications, and ultimately on the users themselves. This
point-of-view looks outward, toward the user(s), accepting the
network as-is. This consumer intends to estimate a network-dependent
aspect of performance, or to design some aspect of an application's
accommodation of the network. (These are *not* application metrics;
they are defined at the IP layer.)</t>
</list>This memo considers how these different points-of-view affect
both the measurement design (parameters and options of the metrics) and
the statistics reported to serve the consumers' needs.</t>
<t>The IPPM framework <xref target="RFC2330"></xref> and other RFCs
describing IPPM metrics provide a background for this memo.</t>
</section>
<section title="Purpose and Scope">
<t>The purpose of this memo is to clearly delineate two points-of-view
(POV) for using measurements, and describe their effects on the test
design, including the selection of metric parameters and reporting the
results.</t>
<t>The scope of this memo primarily covers the design and reporting of
the loss and delay metrics <xref target="RFC2680"></xref> <xref
target="RFC2679"></xref>. It will also discuss the delay variation <xref
target="RFC3393"></xref> and reordering metrics <xref
target="RFC4737"></xref> where applicable.</t>
<t>With capacity metrics growing in relevance to the industry, the memo
also covers POV and reporting considerations for metrics resulting from
the Bulk Transfer Capacity Framework <xref target="RFC3148"></xref> and
Network Capacity Definitions <xref target="RFC5136"></xref>. These memos
effectively describe two different categories of metrics,</t>
<t><list style="symbols">
<t><xref target="RFC3148"></xref> with congestion flow-control and
the notion of unique data bits delivered, and</t>
<t><xref target="RFC5136"></xref> using a definition of raw capacity
without the restrictions of data uniqueness or
congestion-awareness.</t>
</list>It might seem at first glance that each of these metrics has an
obvious audience (Raw = Network Characterization, Restricted =
Application Performance), but reality is more complex and consistent
with the overall topic of capacity measurement and reporting. For
example, TCP is usually used in Restricted capacity measurement methods,
while UDP appears in Raw capacity measurement. The Raw and Restricted
capacity metrics will be treated in separate sections, although they
share one common reporting issue: representing variability in capacity
metric results as part of a long-term report.</t>
<t>Sampling, or the design of the active packet stream that is the basis
for the measurements, is also discussed.</t>
</section>
<section title="Reporting Results">
<t>This section gives an overview of recommendations, followed by
additional considerations for reporting results in the "long-term",
based on the discussion and conclusions of the major sections that
follow.</t>
<section title="Overview of Metric Statistics">
<t>This section gives an overview of reporting recommendations for the
loss, delay, and delay variation metrics.</t>
<t>The minimal report on measurements MUST include both Loss and Delay
Metrics.</t>
<t>For Packet Loss, the loss ratio defined in <xref
target="RFC2680"></xref> is a sufficient starting point, especially
the guidance for setting the loss threshold waiting time. We have
calculated a waiting time (in the Loss Threshold section below) that
should be sufficient to differentiate between packets that are truly
lost and those that have long finite delays under general measurement
circumstances: 51 seconds. Knowledge of specific conditions can help
to reduce this threshold, but 51 seconds is considered to be
manageable in practice.</t>
<t>We note that a loss ratio calculated according to <xref
target="Y.1540"></xref> would exclude errored packets from the
numerator. In practice, the difference between these two loss metrics
is small, if any, depending on whether the last link prior to the
destination contributes errored packets.</t>
<t>For Packet Delay, we recommend providing both the mean delay and
the median delay with lost packets designated undefined (as permitted
by <xref target="RFC2679"></xref>). Both statistics are based on a
conditional distribution, and the condition is packet arrival prior to
a waiting time dT, where dT has been set to take maximum packet
lifetimes into account, as discussed below. Using a long dT helps to
ensure that delay distributions are not truncated.</t>
<t>For Packet Delay Variation (PDV), the minimum delay of the
conditional distribution should be used as the reference delay for
computing PDV according to <xref target="Y.1540"></xref> or <xref
target="RFC5481"></xref> and <xref target="RFC3393"></xref>. A useful
value to report is a pseudo range of delay variation based on
calculating the difference between a high percentile of delay and the
minimum delay. For example, the 99.9%-ile minus the minimum will give
a value that can be compared with objectives in <xref
target="Y.1541"></xref>.</t>
</section>
<section title="Long-Term Reporting Considerations">
<t><xref target="I-D.ietf-ippm-reporting"></xref> describes methods to
conduct measurements and report the results on a near-immediate time
scale (10 seconds, which we consider to be "short-term").</t>
<t>Measurement intervals and reporting intervals need not be the same
length. Sometimes, the user is only concerned with the performance
levels achieved over a relatively long interval of time (e.g., days,
weeks, or months, as opposed to 10 seconds). However, there can be
risks involved with running a measurement continuously over a long
period without recording intermediate results:</t>
<t><list style="symbols">
<t>Temporary power failure may cause loss of all the results to
date.</t>
<t>Measurement system timing synchronization signals may
experience a temporary outage, causing sub-sets of measurements to
be in error or invalid.</t>
<t>Maintenance may be necessary on the measurement system, or its
connectivity to the network under test.</t>
</list>For these and other reasons, such as <list style="symbols">
<t>the constraint to collect measurements on intervals similar to
user session length, or</t>
<t>the dual-use of measurements in monitoring activities where
results are needed on a time scale of a few minutes,</t>
</list>there is value in conducting measurements on intervals that
are much shorter than the reporting interval.</t>
<t>There are several approaches for aggregating a series of
measurement results over time in order to make a statement about the
longer reporting interval. One approach requires the storage of all
metric singletons collected throughout the reporting interval, even
though the measurement interval stops and starts many times.</t>
<t>Another approach is described in <xref target="RFC5835"></xref> as
"temporal aggregation". This approach would estimate the results for
the reporting interval based on many individual measurement interval
statistics (results) alone. The result would ideally appear in the
same form as though a continuous measurement had been conducted. A
memo addressing the details of temporal aggregation is yet to be
prepared.</t>
<t>Yet another approach requires a numerical objective for the metric,
and the results of each measurement interval are compared with the
objective. Every measurement interval where the results meet the
objective contributes to the fraction of time with performance as
specified. When the reporting interval contains many measurement
intervals, it is possible to present the results as "metric A was less
than or equal to objective X during Y% of the time."</t>
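<t>A minimal sketch of this summarization follows (the interval
results and the objective X are purely illustrative; as noted below,
the IETF sets no numerical thresholds):</t>
<figure>
<artwork align="left"><![CDATA[
# Fraction of measurement intervals meeting an objective, reported
# as "metric A was <= objective X during Y% of the time".
interval_results = [0.8, 1.2, 0.9, 0.7, 2.5, 0.95]  # metric A per interval
objective_x = 1.0                                   # illustrative only

met = sum(1 for r in interval_results if r <= objective_x)
percent_y = 100.0 * met / len(interval_results)
print("metric A <= %.1f during %.1f%% of intervals"
      % (objective_x, percent_y))
]]></artwork>
</figure>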
<t>NOTE that numerical thresholds of acceptability are not set in IETF
performance work and are explicitly excluded from the IPPM
charter.</t>
<t>In all measurement, it is important to avoid unintended
synchronization with network events. This topic is treated in <xref
target="RFC2330"></xref> for Poisson-distributed inter-packet time
streams, and <xref target="RFC3432"></xref> for Periodic streams. Both
avoid synchronization through use of random start times.</t>
<t>There are network conditions where it is simply more useful to
report the connectivity status of the Source-Destination path, and to
distinguish time intervals where connectivity can be demonstrated from
other time intervals (where connectivity does not appear to exist).
<xref target="RFC2678"></xref> specifies a number of one-way and two
connectivity metrics of increasing complexity. In this memo, we
RECOMMEND that long term reporting of loss, delay, and other metrics
be limited to time intervals where connectivity can be demonstrated,
and other intervals be summarized as percent of time where
connectivity does not appear to exist. We note that this same approach
has been adopted in ITU-T Recommendation <xref target="Y.1540"></xref>
where performance parameters are only valid during periods of service
"availability" (evaluated according to a function based on packet
loss, and sustained periods of loss ratio greater than a threshold are
declared "unavailable").</t>
</section>
</section>
<section title="Effect of POV on the Loss Metric">
<t>This section describes the ways in which the Loss metric can be tuned
to reflect the preferences of the two audience categories, or different
POV. The waiting time to declare a packet lost, or loss threshold, is
one area where there would appear to be a difference, but the ability to
post-process the results may resolve it.</t>
<section title="Loss Threshold">
<t><xref target="RFC2680">RFC 2680</xref> defines the concept of a
waiting time for packets to arrive, beyond which they are declared
lost. The text of the RFC declines to recommend a value, instead
saying that "good engineering, including an understanding of packet
lifetimes, will be needed in practice." Later, in the methodology,
the RFC gives reasons for waiting "a reasonable period of time",
leaving the definition of "reasonable" intentionally vague.</t>
<section title="Network Characterization">
<t>Practical measurement experience has shown that unusual network
circumstances can cause long delays. One such circumstance is when
routing loops form during IGP re-convergence following a failure or
drastic link cost change. Packets will loop between two routers
until new routes are installed, or until the IPv4 Time-to-Live (TTL)
field (or the IPv6 Hop Limit) decrements to zero. Very long delays
on the order of several seconds have been measured <xref
target="Casner"></xref> <xref target="Cia03"></xref>.</t>
<t>Therefore, network characterization activities prefer a long
waiting time in order to distinguish these events from other causes
of loss (such as packet discard at a full queue, or tail drop). This
way, the metric design helps to distinguish more reliably between
packets that might yet arrive, and those that are no longer
traversing the network.</t>
<t>It is possible to calculate a worst-case waiting time, assuming
that a routing loop is the cause. We model the path between Source
and Destination as a series of delays in links (t) and queues (q),
as these two are the dominant contributors to delay. The normal path
delay across n hops without encountering a loop, D, is<figure
anchor="eqD" title="Normal Path Delay">
<preamble></preamble>
<artwork align="center"><![CDATA[ n
---
\
D = t + > t + q
0 / i i
---
i = 1]]></artwork>
<postamble></postamble>
</figure></t>
<t>and the time spent in the loop with L hops, is</t>
<t><figure anchor="eqR" title="Delay due to Rotations in a Loop">
<preamble></preamble>
<artwork align="center"><![CDATA[ i + L-1
---
\ (TTL - n)
R = C > t + q where C = ---------
/ i i max L
---
i ]]></artwork>
<postamble></postamble>
</figure></t>
<t>and where C is the number of times a packet circles the loop.</t>
<t>If we take the delays of all links and queues as 100ms each, the
TTL=255, the number of hops n=5 and the hops in the loop L=4,
then</t>
<t>D = 1.1 sec and R ~= 50 sec, and D + R ~= 51.1 seconds</t>
<t>We note that the link delays of 100ms would span most continents,
and a constant queue length of 100ms is also very generous. When a
loop occurs, it is almost certain to be resolved in 10 seconds or
less. The value calculated above is an upper limit for almost any
realistic circumstance.</t>
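<t>The arithmetic above can be reproduced with a short Python sketch
(using the same generous per-hop assumptions from the text; it is an
illustration, not a general tool):</t>
<figure>
<artwork align="left"><![CDATA[
# Worst-case waiting time, assuming a routing loop is the cause.
t = q = 0.100      # link and queue delay per hop, seconds
ttl = 255          # IPv4 TTL at the source
n = 5              # hops on the normal path
hops_in_loop = 4   # L, hops in the routing loop

d_normal = t + n * (t + q)            # D = t0 + sum(ti + qi)
c = (ttl - n) / hops_in_loop          # C = (TTL - n) / L
r_loop = c * hops_in_loop * (t + q)   # R = C * sum over the loop

print(d_normal, r_loop, d_normal + r_loop)  # 1.1, 50.0, 51.1 seconds
]]></artwork>
</figure>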
<t>A waiting time threshold parameter, dT, set consistent with this
calculation would not truncate the delay distribution (possibly
causing a change in its mathematical properties), because the
packets that might arrive have been given sufficient time to
traverse the network.</t>
<t>It is worth noting that packets that are stored and deliberately
forwarded at a much later time constitute a replay attack on the
measurement system, and are beyond the scope of normal performance
reporting.</t>
</section>
<section title="Application Performance">
<t>Fortunately, application performance estimation activities are
not adversely affected by the estimated worst-case transfer time.
Although the designer's tendency might be to set the Loss Threshold
at a value equivalent to a particular application's threshold, this
specific threshold can be applied when post-processing the
measurements. A shorter waiting time can be enforced by locating
packets with delays longer than the application's threshold, and
re-designating such packets as lost. Thus, the measurement system
can use a single loss threshold and support both application and
network performance POVs simultaneously.</t>
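<t>The post-processing described above amounts to a simple
re-designation pass over the stored singletons, sketched below in
Python (the application threshold value is hypothetical):</t>
<figure>
<artwork align="left"><![CDATA[
# Measured with a long loss threshold (e.g., 51 s); re-designate
# packets exceeding a shorter application threshold as lost.
measured = [0.021, 0.045, 3.2, 0.022, None]  # seconds; None = lost
app_threshold = 0.5                          # hypothetical app limit

redesignated = [d if (d is not None and d <= app_threshold) else None
                for d in measured]
app_loss_ratio = redesignated.count(None) / len(redesignated)
print(redesignated, app_loss_ratio)
]]></artwork>
</figure>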
</section>
</section>
<section title="Errored Packet Designation">
<t>RFC 2680 designates packets that arrive containing errors as lost
packets. Many packets that are corrupted by bit errors are discarded
within the network and do not reach their intended destination.</t>
<t>This is consistent with applications that would check the payload
integrity at higher layers, and discard the packet. However, some
applications prefer to deal with errored payloads on their own; for
them, even a corrupted payload is better than no packet at all.</t>
<t>To address this possibility, and to make network characterization
more complete, it is recommended to distinguish between packets that
do not arrive (lost) and errored packets that arrive (conditionally
lost).</t>
</section>
<section title="Causes of Lost Packets">
<t>Although many measurement systems use a waiting time to determine
if a packet is lost or not, most of the waiting is in vain. The
packets are no longer traversing the network, and will never reach
their destination.</t>
<t>There are many causes of packet loss, including:</t>
<t><list style="numbers">
<t>Queue drop, or discard</t>
<t>Corruption of the IP header, or other essential header
information</t>
<t>TTL expiration (or use of a TTL value that is too small)</t>
<t>Link or router failure</t>
</list>After waiting sufficient time, packet loss can probably be
attributed to one of these causes.</t>
</section>
<section title="Summary for Loss">
<t>Given that measurement post-processing is possible (even encouraged
in the definitions of IPPM metrics), measurements of loss can easily
serve both points of view:</t>
<t><list style="symbols">
<t>Use a long waiting time to serve network characterization and
revise results for specific application delay thresholds as
needed.</t>
<t>Distinguish between errored packets and lost packets when
possible to aid network characterization, and combine the results
for application performance if appropriate.</t>
</list></t>
</section>
</section>
<section title="Effect of POV on the Delay Metric">
<t>This section describes the ways in which the Delay metric can be
tuned to reflect the preferences of the two consumer categories, or
different POV.</t>
<section title="Treatment of Lost Packets">
<t>The Delay Metric <xref target="RFC2679"></xref> specifies the
treatment of packets that do not successfully traverse the network:
their delay is undefined.</t>
<t>" >>The *Type-P-One-way-Delay* from Src to Dst at T is
undefined (informally, infinite)<< means that Src sent the first
bit of a Type-P packet to Dst at wire-time T and that Dst did not
receive that packet."</t>
<t>It is an accepted, but informal practice to assign infinite delay
to lost packets. We next look at how these two different treatments
align with the needs of measurement consumers who wish to characterize
networks or estimate application performance. Also, we look at the way
that lost packets have been treated in other metrics: delay variation
and reordering.</t>
<section title="Application Performance">
<t>Applications need to perform different functions, depending on
whether or not each packet arrives within some finite tolerance. In
other words, a receiver's packet processing takes one of two
directions (or "forks" in the road):</t>
<t><list style="symbols">
<t>Packets that arrive within expected tolerance are handled by
processes that remove headers, restore smooth delivery timing
(as in a de-jitter buffer), restore sending order, check for
errors in payloads, and many other operations.</t>
<t>Packets that do not arrive when expected spawn other
processes that attempt recovery from the apparent loss, such as
retransmission requests, loss concealment, or forward error
correction to replace the missing packet.</t>
</list>So, it is important to maintain a distinction between
packets that actually arrive, and those that do not. Therefore, it
is preferable to leave the delay of lost packets undefined, and to
characterize the delay distribution as a conditional distribution
(conditioned on arrival).</t>
</section>
<section title="Network Characterization">
<t>In this discussion, we assume that both loss and delay metrics
will be reported for network characterization (at least).</t>
<t>Assume packets that do not arrive are reported as Lost, usually
as a fraction of all sent packets. If these lost packets are
assigned undefined delay, then the network's inability to deliver them
(in a timely way) is captured only in the loss metric when we report
statistics on the Delay distribution conditioned on the event of
packet arrival (within the Loss waiting time threshold). We can say
that the Delay and Loss metrics are Orthogonal, in that they convey
non-overlapping information about the network under test.</t>
<t>However, if we assign infinite delay to all lost packets,
then:</t>
<t><list style="symbols">
<t>The delay metric results are influenced both by packets that
arrive and those that do not.</t>
<t>The delay singleton and the loss singleton do not appear to
be orthogonal (Delay is finite when Loss=0, Delay is infinite
when Loss=1).</t>
<t>The network is penalized in both the loss and delay metrics,
effectively double-counting the lost packets.</t>
</list></t>
<t>As further evidence of overlap, consider the Cumulative
Distribution Function (CDF) of Delay when the value positive
infinity is assigned to all lost packets. <xref target="CDF"></xref>
shows a CDF where a small fraction of packets are lost.</t>
<t><figure anchor="CDF"
title="Cumulative Distribution Function for Delay when Loss = +Infinity">
<preamble></preamble>
<artwork align="center"><![CDATA[ 1 | - - - - - - - - - - - - - - - - - -+
| |
| _..----''''''''''''''''''''
| ,-''
| ,'
| / Mass at
| / +infinity
| / = fraction
|| lost
|/
0 |_____________________________________
0 Delay +o0]]></artwork>
<postamble></postamble>
</figure></t>
<t>We note that a Delay CDF that is conditioned on packet arrival
would not exhibit this apparent overlap with loss.</t>
<t>Although infinity is a familiar mathematical concept, it is
somewhat disconcerting to see any time-related metric reported as
infinity, in the opinion of the authors. Questions are bound to
arise, and tend to detract from the goal of informing the consumer
with a performance report.</t>
</section>
<section title="Delay Variation">
<t><xref target="RFC3393"></xref> excludes lost packets from
samples, effectively assigning an undefined delay to packets that do
not arrive in a reasonable time. Section 4.1 describes this
specification and its rationale (ipdv = inter-packet delay variation
in the quote below).</t>
<t>"The treatment of lost packets as having "infinite" or
"undefined" delay complicates the derivation of statistics for ipdv.
Specifically, when packets in the measurement sequence are lost,
simple statistics such as sample mean cannot be computed. One
possible approach to handling this problem is to reduce the event
space by conditioning. That is, we consider conditional statistics;
namely we estimate the mean ipdv (or other derivative statistic)
conditioned on the event that selected packet pairs arrive at the
destination (within the given timeout). While this itself is not
without problems (what happens, for example, when every other packet
is lost), it offers a way to make some (valid) statements about
ipdv, at the same time avoiding events with undefined outcomes."</t>
<t>We note that the argument above applies to all forms of packet
delay variation that can be constructed using the "selection
function" concept of <xref target="RFC3393"></xref>. In recent work
the two main forms of delay variation metrics have been compared and
the results are summarized in <xref target="RFC5481"></xref>.</t>
</section>
<section title="Reordering">
<t><xref target="RFC4737"></xref>defines metrics that are based on
evaluation of packet arrival order, and include a waiting time to
declare a packet lost (to exclude them from further processing).</t>
<t>If non-arriving packets were assigned infinite delay, then the
reordering metric would declare them reordered, because their
sequence numbers will surely be less than the "Next Expected"
threshold when (or if) they arrive. But this practice
and the loss metric. Confusion can be avoided by designating the
delay of non-arriving packets as undefined, and reserving delay
values only for packets that arrive within a sufficiently long
waiting time.</t>
</section>
</section>
<section title="Preferred Statistics">
<t>Today in network characterization, the sample mean is one statistic
that is almost ubiquitously reported. It is easily computed and
understood by virtually everyone in this audience category. Also, the
sample is usually filtered on packet arrival, so that the mean is
based on a conditional distribution.</t>
<t>The median is another statistic that summarizes a distribution,
having somewhat different properties from the sample mean. The median
is stable whether a distribution has a few outliers or none.
However, the median's stability prevents it from indicating when a
large fraction of the distribution changes value: 50% or more of the
values would need to change for the median to capture the change.</t>
<t>Both the median and sample mean have difficulty with bimodal
distributions. The median will reside in only one of the modes, and
the mean may not lie in either mode range. For this and other reasons,
additional statistics such as the minimum, maximum, and 95%-ile have
value when summarizing a distribution.</t>
<t>When both the sample mean and median are available, a comparison
will sometimes be informative, because these two statistics are equal
only when the delay distribution is perfectly symmetrical.</t>
<t>Also, these statistics are generally useful from the Application
Performance POV, so there is a common set that should satisfy
audiences.</t>
<t>Plots of the delay distribution may also be useful when
single-value statistics indicate that new conditions are present. An
empirically-derived probability distribution function will usually
describe multiple modes more efficiently than any other form of
result.</t>
</section>
<section title="Summary for Delay">
<t>From the perspectives of:</t>
<t><list style="numbers">
<t>application/receiver analysis, where subsequent processing
depends on whether the packet arrives or times out,</t>
<t>straightforward network characterization without
double-counting defects, and</t>
<t>consistency with Delay variation and Reordering metric
definitions,</t>
</list></t>
<t>the most efficient practice is to distinguish between truly lost
and delayed packets with a sufficiently long waiting time, and to
designate the delay of non-arriving packets as undefined.</t>
</section>
</section>
<section title="Effect of POV on Raw Capacity Metrics">
<t>This section describes the ways that raw capacity metrics can be
tuned to reflect the preferences of the two audiences, or different
Points-of-View (POV). Raw capacity refers to the metrics defined in
<xref target="RFC5136"></xref> which do not include restrictions such as
data uniqueness or flow-control response to congestion.</t>
<t>In summary, the metrics considered are IP-layer Capacity, Utilization
(or used capacity), and Available Capacity, for individual links and
complete paths. These three metrics form a triad: knowing one metric
constrains the other two (within their allowed range), and knowing two
determines the third. The link metrics have another key aspect in
common: they are single-measurement-point metrics at the egress of a
link. The path Capacity and Available Capacity are derived by examining
the set of single-point link measurements and taking the minimum
value.</t>
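<t>The triad relationship and the min-based path metrics can be
illustrated with a short sketch (link values are hypothetical, in
Mbit/s):</t>
<figure>
<artwork align="left"><![CDATA[
link_capacity = [1000, 100, 1000, 400]  # per-link IP-layer Capacity
link_used     = [ 200,  60,  300, 100]  # per-link Utilization (used)

# Knowing two members of the triad determines the third:
link_available = [c - u for c, u in zip(link_capacity, link_used)]

path_capacity = min(link_capacity)    # path Capacity
path_available = min(link_available)  # path Available Capacity
print(link_available, path_capacity, path_available)
]]></artwork>
</figure>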
<section title="Type-P Parameter">
<t>The concept of "packets of type-P" is defined in <xref
target="RFC2330"></xref>. The type-P categorization has critical
relevance in all forms of capacity measurement and reporting. The
ability to categorize packets based on header fields for assignment to
different queues and scheduling mechanisms is now commonplace. When
unused resources are shared across queues, the conditions in all
packet categories will affect capacity and related measurements. This
is one source of variability in the results that all audiences would
prefer to see reported in a useful and easily understood way.</t>
<t>Type-P in OWAMP and TWAMP is essentially confined to the Diffserv
Codepoint [ref]; the DSCP is the most common qualifier for type-P.</t>
<t>Each audience will have a set of type-P qualifications and value
combinations that are of interest. Measurements and reports SHOULD
have the flexibility to report both per-type and aggregate
performance.</t>
</section>
<section title="a priori Factors">
<t>The audience for Network Characterization may have detailed
information about each link that comprises a complete path (due to
ownership, for example), or some of the links in the path but not
others, or none of the links.</t>
<t>There are cases where the measurement audience only has information
on one of the links (the local access link), and wishes to measure one
or more of the raw capacity metrics. This scenario is quite common,
and has spawned a substantial number of experimental measurement
methods [ref to CAIDA survey page, etc.]. Many of these methods
respect that their users want a result fairly quickly and in one
trial. Thus, the measurement interval is kept short (a few seconds
to a minute). For long-term reporting, a sample of short-term results
needs to be summarized.</t>
</section>
<section title="IP-layer Capacity">
<t>For links, this metric's theoretical maximum value can be
determined from the physical layer bit rate and the bit rate reduction
due to the layers between the physical layer and IP. When measured,
this metric takes additional factors into account, such as the ability
of the sending device to process and forward traffic under various
conditions. For example, the arrival of routing updates may spawn high
priority processes that reduce the sending rate temporarily. Thus, the
measured capacity of a link will be variable, and the maximum capacity
observed applies to a specific time, time interval, and other relevant
circumstances.</t>
<t>For paths composed of a series of links, it is easy to see how the
sources of variability for the results grow with each link in the
path. Results variability will be discussed in more detail below.</t>
</section>
<section title="IP-layer Utilization">
<t>The ideal metric definition of Link Utilization <xref
target="RFC5136"></xref> is based on the actual usage (bits
successfully received during a time interval) and the Maximum Capacity
for the same interval.</t>
<t>In practice, Link Utilization can be calculated by counting the
IP-layer (or other layer) octets received over a time interval and
dividing by the theoretical maximum of octets that could have been
delivered in the same interval. A commonly used time interval is 5
minutes, and this interval has been sufficient to support network
operations and design for some time. 5 minutes is somewhat long
compared with the expected download time for web pages, but short with
respect to large file transfers and TV program viewing. It is fair to
say that considerable variability is concealed by reporting a single
(average) Utilization value for each 5 minute interval. Some
performance management systems have begun to make 1 minute averages
available.</t>
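<t>A sketch of this practical calculation for one 5-minute interval
follows (the link rate and octet count are assumed values):</t>
<figure>
<artwork align="left"><![CDATA[
link_rate_bps = 1_000_000_000    # theoretical maximum, 1 Gbit/s
interval_s = 300                 # the common 5-minute interval
octets_received = 9_000_000_000  # assumed IP-layer octet count

max_octets = link_rate_bps * interval_s / 8
utilization = octets_received / max_octets
print("%.1f%%" % (100 * utilization))  # -> 24.0%
]]></artwork>
</figure>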
<t>There is also a limit on the smallest useful measurement interval.
Intervals on the order of the serialization time for a single Maximum
Transmission Unit (MTU) packet will observe on/off behavior and report
100% or 0%. The smallest interval needs to be some multiple of MTU
serialization time for averaging to be effective.</t>
</section>
<section title="IP-layer Available Capacity">
<t>The Available Capacity of a link can be calculated using the
Capacity and Utilization metrics.</t>
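<t>Continuing the sketch above (with the same assumed values), the
calculation is direct:</t>
<figure>
<artwork align="left"><![CDATA[
capacity_bps = 1_000_000_000
utilization = 0.24    # fraction, from the previous sketch
available_bps = capacity_bps * (1 - utilization)
print(available_bps)  # 760 Mbit/s of Available Capacity
]]></artwork>
</figure>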
<t>When Available capacity of a link or path is estimated through some
measurement technique, the following parameters SHOULD be
reported:</t>
<t><list style="symbols">
<t>Name and reference to the exact method of measurement</t>
<t>IP packet length, octets (including IP header)</t>
<t>Maximum Capacity that can be assessed in the measurement
configuration</t>
<t>The time and duration of the measurement</t>
<t>All other parameters specific to the measurement method</t>
</list>Many methods of Available capacity measurement have a maximum
capacity that they can measure, and this maximum may be less than the
actual Available capacity of the link or path. Therefore, it is
important to know the capacity value beyond which there will be no
measured improvement.</t>
<t>The Application Design audience may have a target capacity value
and simply wish to assess whether there is sufficient Available
Capacity. This case simplifies measurement of link and path capacity
to some degree, as long as the measurable maximum exceeds the target
capacity.</t>
</section>
<section title="Variability in Utilization and Avail. Capacity">
<t>As with most metrics and measurements, assessing the consistency or
variability in the results gives the user an intuitive feel for the
degree of confidence that any one value is representative of other
results, or of the underlying distribution from which these singleton
measurements have come.</t>
<t>Two questions are raised here for further discussion:</t>
<t>In what ways can Utilization be measured and summarized to
describe the potential variability in a useful way?</t>
<t>How can the variability in Available Capacity estimates be
reported, so that the confidence in the results is also conveyed?</t>
</section>
</section>
<section title="Effect of POV on Restricted Capacity Metrics">
<t>This section describes the ways that restricted capacity metrics can
be tuned to reflect the preferences of the two audiences, or different
Points-of-View (POV). Restricted capacity refers to the metrics defined
in <xref target="RFC3148"></xref>, which include restrictions such as data
uniqueness or flow-control response to congestion.</t>
<t>The primary metric considered is Bulk Transfer Capacity (BTC) for
complete paths. <xref target="RFC3148"></xref> defines</t>
<t> BTC = data_sent / elapsed_time</t>
<t>for a connection with congestion-aware flow control, where data_sent
is the total of unique payload bits (no headers).</t>
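<t>A sketch with assumed values shows the ratio (only unique payload
bits count; headers and retransmitted duplicates do not):</t>
<figure>
<artwork align="left"><![CDATA[
unique_payload_bits = 8 * 100_000_000  # 100 MB of unique payload
elapsed_time_s = 12.5                  # whatever phases were included

btc_bps = unique_payload_bits / elapsed_time_s
print("BTC = %.1f Mbit/s" % (btc_bps / 1e6))  # 64.0 Mbit/s
]]></artwork>
</figure>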
<t>We note that this definition *differs* from the raw capacity
definition in Section 2.3.1 of <xref target="RFC5136"></xref>, where
IP-layer Capacity *includes* all bits in the IP header and payload. This
means that Restricted Capacity BTC is already operating at a
disadvantage when compared to the raw capacity at layers below TCP.
Further, there are cases where "the IP layer" is encapsulated in another
IP layer or other form of tunneling protocol, designating more and more
of the fundamental transport capacity as header bits that are pure
overhead to the BTC measurement.</t>
<t>When thinking about the triad of raw capacity metrics, BTC is most
akin to the "IP-Type-P Available Path Capacity", at least in the eyes of
a network user who seeks to know what transmission performance a path
might support.</t>
<section title="Type-P Parameter and Type-C Parameter">
<t>The concept of "packets of type-P" is defined in <xref
target="RFC2330"></xref>. The considerations for Restricted Capacity
are identical to the raw capacity section on this topic, with the
addition that the various fields and options in the TCP header MUST be
included in the description.</t>
<t>The vast array of TCP flow control options are not well-captured by
Type-P, because they do not exist in the TCP header bits. Therefore,
we introduce a new notion here: TCP Configuration of "Type-C". The
elements of Type-C describe all of the settings for TCP options and
congestion control algorithm variables, including the main form of
congestion control in use.</t>
</section>
<section title="a priori Factors">
<t>The audience for Network Characterization may have detailed
information about each link that comprises a complete path (due to
ownership, for example), or some of the links in the path but not
others, or none of the links.</t>
<t>There are cases where the measurement audience only has information
on one of the links (the local access link), and wishes to measure one
or more BTC metrics. This scenario is quite common, and has spawned a
substantial number of experimental measurement methods [ref to CAIDA
survey page, etc.]. Many of these methods respect that their users
want a result fairly quickly and in a one-trial. Thus, the measurement
interval is kept short (a few seconds to a minute). For long-term
reporting, a sample of short term results need to be summarized.</t>
</section>
<section title="Measurement Interval">
<t>There are limits on a useful measurement interval for BTC. Three
factors that influence the interval duration are listed below:<list
style="numbers">
<t>Measurements may choose to include or exclude the 3-way
handshake of TCP connection establishment, which requires at least
1.5 * RTT and contains both the delay of the path and the host
processing time for responses. However, user experience includes
the 3-way handshake for all new TCP connections.</t>
<t>Measurements may choose to include or exclude Slow-Start,
preferring instead to focus on a portion of the transfer that
represents "equilibrium" <<<< which needs a definition
for this purpose >>>>. However, user experience
includes the Slow-Start for all new TCP connections.</t>
<t>Measurements may choose to use a fixed block of data to
transfer, where the size of the block has a relationship to the
file size of the application of interest. This approach yields
variable size measurement intervals, where a path faster BTC is
measured for less time than a slower path, an this has
implications when path impairments are time-varying, or transient.
Users are likely to turn their immediate attention elsewhere when
a very large file must be transferred, thus they do not directly
experience such a long transfer -- they see the result (success or
fail) and possibly an objective measurement of the transfer time
(which will likely include the 3-way handshake, Slow-start, and
application file management processing time as well as the
BTC).</t>
</list></t>
<t>Individual measurement intervals may be short or long, but there is
a need to report the results on a long-term basis that captures the
BTC variability experienced between each interval. Consistent BTC is a
valuable commodity along with the value attained.</t>
</section>
<section title="Bulk Transfer Capacity Reporting">
<t>When BTC of a link or path is estimated through some measurement
technique, the following parameters SHOULD be reported:</t>
<t><list style="symbols">
<t>Name and reference to the exact method of measurement</t>
<t>Maximum Transmission Unit (MTU)</t>
<t>Maximum BTC that can be assessed in the measurement
configuration</t>
<t>The time and duration of the measurement</t>
<t>The number of BTC connections used simultaneously</t>
<t>*All* other parameters specific to the measurement method,
especially the Congestion Control algorithm in use</t>
</list></t>
<t>See also
[http://tools.ietf.org/wg/ippm/draft-ietf-ippm-tcp-throughput-tm/]</t>
<t>Many methods of Bulk Transfer Capacity measurement have a maximum
capacity that they can measure, and this maximum may be less than the
available capacity of the link or path. Therefore, it is important to
specify the measured BTC value beyond which there will be no measured
improvement.</t>
<t>The Application Design audience may have a target capacity value
and simply wish to assess whether there is sufficient BTC. This case
simplifies measurement of link and path capacity to some degree, as
long as the measurable maximum exceeds the target capacity.</t>
</section>
<section title="Variability in Bulk Transfer Capacity">
<t>As with most metrics and measurements, assessing the consistency or
variability in the results gives the user an intuitive feel for the
degree of confidence that any one value is representative of other
results, or of the underlying distribution from which these singleton
measurements have come.</t>
<t>Two questions are raised here for further discussion:</t>
<t>In what ways can BTC be measured and summarized to describe the
potential variability in a useful way?</t>
<t>How can the variability in BTC estimates be reported, so that the
confidence in the results is also conveyed?</t>
</section>
</section>
<section title="Test Streams and Sample Size">
<t>This section discusses two key aspects of measurement that are
sometimes omitted from the report: the description of the test stream on
which the measurements are based, and the sample size.</t>
<section title="Test Stream Characteristics">
<t>Network Characterization has traditionally used Poisson-distributed
inter-packet spacing, as this provides an unbiased sample. The average
inter-packet spacing may be selected to allow observation of specific
network phenomena. Other test streams are designed to sample some
property of the network, such as the presence of congestion, link
bandwidth, or packet reordering.</t>
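<t>A sketch of generating such a Poisson stream schedule follows (the
mean spacing and start window are arbitrary choices; the random start
time follows the synchronization-avoidance advice discussed
earlier):</t>
<figure>
<artwork align="left"><![CDATA[
import random

mean_gap_s = 0.1                   # chosen average inter-packet spacing
start = random.uniform(0.0, 60.0)  # random start avoids synchronization

send_times, t = [], start
for _ in range(100):
    send_times.append(t)
    t += random.expovariate(1.0 / mean_gap_s)  # exponential gaps
print(send_times[:3])
]]></artwork>
</figure>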
<t>If measuring a network in order to make inferences about
applications or receiver performance, then there are usually
efficiencies derived from a test stream that has similar
characteristics to the sender. In some cases, it is essential to
synthesize the sender stream, as with Bulk Transfer Capacity
estimates. In other cases, it may be sufficient to sample with a
"known bias", e.g., a Periodic stream to estimate real-time
application performance.</t>
</section>
<section title="Sample Size">
<t>Sample size is directly related to the accuracy of the results, and
plays a critical role in the report. Even if only the sample size (in
terms of number of packets) is given for each value or summary
statistic, it imparts a notion of the confidence in the result.</t>
<t>In practice, the sample size will be selected taking both
statistical and practical factors into account. Among these factors
are:</t>
<t><list style="numbers">
<t>The estimated variability of the quantity being measured</t>
<t>The desired confidence in the result (although this may be
dependent on assumption of the underlying distribution of the
measured quantity).</t>
<t>The effects of active measurement traffic on user traffic</t>
<t>etc.</t>
</list>A sample size may sometimes be referred to as "large". This
is a relative and qualitative term. It is preferable to describe what
one is attempting to achieve with the sample. For example, stating
an implication may be helpful: this sample is large enough such that a
single outlying value at ten times the "typical" sample mean (the mean
without the outlying value) would influence the mean by no more than
X.</t>
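<t>The example implication above can be made concrete with a short
sketch (all values are assumptions for illustration):</t>
<figure>
<artwork align="left"><![CDATA[
# How much a single outlier at ten times the "typical" mean shifts
# the sample mean, as a function of sample size n (shift = 9/n).
typical_mean = 0.020   # seconds, assumed
for n in (10, 100, 1000):
    sample = [typical_mean] * (n - 1) + [10 * typical_mean]
    new_mean = sum(sample) / n
    shift = (new_mean - typical_mean) / typical_mean
    print(n, "mean inflated by %.1f%%" % (100 * shift))
]]></artwork>
</figure>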
</section>
</section>
<section anchor="IANA" title="IANA Considerations">
<t>This document makes no request of IANA.</t>
<t>Note to RFC Editor: this section may be removed on publication as an
RFC.</t>
</section>
<section anchor="Security" title="Security Considerations">
<t>The security considerations that apply to any active measurement of
live networks are relevant here as well. See <xref
target="RFC4656"></xref>.</t>
</section>
<section anchor="Acknowledgements" title="Acknowledgements">
<t>The authors thank: Phil Chimento for his suggestion to employ
conditional distributions for Delay, Steve Konish Jr. for his careful
review and suggestions, Dave McDysan and Don McLachlan for useful
comments based on their long experience with measurement and reporting,
and Matt Zekauskas for suggestions on organizing the memo for easier
consumption.</t>
</section>
</middle>
<back>
<references title="Normative References">
<?rfc include="reference.RFC.2119"?>
<?rfc include='reference.RFC.2330'?>
<?rfc include='reference.RFC.2679'?>
<?rfc include='reference.RFC.2680'?>
<?rfc include='reference.RFC.2678'?>
<?rfc include='reference.RFC.3148'?>
<?rfc include='reference.RFC.3393'?>
<?rfc include='reference.RFC.4656'?>
<?rfc include='reference.RFC.3432'?>
<?rfc include='reference.RFC.4737'?>
<?rfc include='reference.RFC.5136'?>
</references>
<references title="Informative References">
<reference anchor="Casner">
<front>
<title>A Fine-Grained View of High Performance Networking, NANOG 22
Conf.; http://www.nanog.org/mtg-0105/agenda.html</title>
<author fullname="S. Casner, C. Alaettinoglu, and C. Kuan,"
surname="">
<organization></organization>
</author>
<date month="May 20-22" year="2001" />
</front>
</reference>
<reference anchor="Cia03">
<front>
<title>Standardized Active Measurements on a Tier 1 IP Backbone,
IEEE Communications Mag., pp 90-97.</title>
<author fullname="L.Ciavattone, A.Morton, and G.Ramachandran">
<organization></organization>
</author>
<date month="June" year="2003" />
</front>
</reference>
<reference anchor="Y.1540">
<front>
<title>Internet protocol data communication service - IP packet
transfer and availability performance parameters</title>
<author fullname="" surname="ITU-T Recommendation Y.1540">
<organization></organization>
</author>
<date month="December " year="2002" />
</front>
</reference>
<reference anchor="Y.1541">
<front>
<title>Network Performance Objectives for IP-Based Services</title>
<author fullname="" surname="ITU-T Recommendation Y.1540">
<organization></organization>
</author>
<date month="February " year="2006" />
</front>
</reference>
<?rfc include='reference.RFC.5835'?>
<?rfc include='reference.I-D.ietf-ippm-reporting'?>
<?rfc include='reference.RFC.5481'?>
</references>
</back>
</rfc>