<?xml version="1.0" encoding="US-ASCII"?>
<!-- This template is for creating an Internet Draft using xml2rfc,
which is available here: http://xml.resource.org. -->
<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
<?xml-stylesheet type='text/xsl' href='rfc2629.xslt' ?>
<!-- used by XSLT processors -->
<!-- For a complete list and description of processing instructions (PIs),
please see http://xml.resource.org/authoring/README.html. -->
<!-- Below are generally applicable Processing Instructions (PIs) that most I-Ds might want to use.
(Here they are set differently than their defaults in xml2rfc v1.32) -->
<?rfc strict="yes" ?>
<!-- give errors regarding ID-nits and DTD validation -->
<!-- control the table of contents (ToC) -->
<?rfc toc="yes"?>
<!-- generate a ToC -->
<?rfc tocdepth="4"?>
<!-- the number of levels of subsections in ToC. default: 3 -->
<!-- control references -->
<?rfc symrefs="yes"?>
<!-- use symbolic reference tags, i.e., [RFC2119] instead of [1] -->
<?rfc sortrefs="yes" ?>
<!-- sort the reference entries alphabetically -->
<!-- control vertical white space
(using these PIs as follows is recommended by the RFC Editor) -->
<?rfc compact="yes" ?>
<!-- do not start each main section on a new page -->
<?rfc subcompact="no" ?>
<!-- keep one blank line between list items -->
<!-- end of list of popular I-D processing instructions -->
<rfc category="std" docName="draft-ietf-ippm-metrictest-01" ipr="trust200902">
<!-- ipr= "trust200811" old template -->
<!-- ipr= "full3667" older template -->
<!-- category values: std, bcp, info, exp, and historic
ipr values: full5378, noModification5378, noDerivatives5378
you can add the attributes updates="NNNN" and obsoletes="NNNN"
they will automatically be output with "(if approved)" -->
<!-- ***** FRONT MATTER ***** -->
<front>
<!-- The abbreviated title is used in the page header - it is only necessary if the
full title is longer than 39 characters -->
<title abbrev="IPPM standard advancement testing">IPPM standard
advancement testing</title>
<!-- add 'role="editor"' below for the editors if appropriate -->
<author fullname="Ruediger Geib" initials="R." role="editor"
surname="Geib">
<organization>Deutsche Telekom</organization>
<address>
<postal>
<street>Heinrich Hertz Str. 3-7</street>
<!-- Reorder these if your country does things differently -->
<code>64295</code>
<city>Darmstadt</city>
<region></region>
<country>Germany</country>
</postal>
<phone>+49 6151 628 2747</phone>
<email>Ruediger.Geib@telekom.de</email>
<!-- uri and facsimile elements may also be added -->
</address>
</author>
<author fullname="Al Morton" initials="A." surname="Morton">
<organization>AT&amp;T Labs</organization>
<address>
<postal>
<street>200 Laurel Avenue South</street>
<!-- Reorder these if your country does things differently -->
<code>07748</code>
<city>Middletown</city>
<region>NJ</region>
<country>USA</country>
</postal>
<phone>+1 732 420 1571</phone>
<facsimile>+1 732 368 1192</facsimile>
<email>acmorton@att.com</email>
<uri>http://home.comcast.net/~acmacm/</uri>
<!-- uri and facsimile elements may also be added -->
</address>
</author>
<author fullname="Reza Fardid" initials="R." surname="Fardid">
<organization>Cariden Technologies</organization>
<address>
<postal>
<street>888 Villa Street, Suite 500</street>
<!-- Reorder these if your country does things differently -->
<city>Mountain View</city>
<region>CA</region>
<code>94041</code>
<country>USA</country>
</postal>
<phone></phone>
<email>rfardid@cariden.com</email>
</address>
</author>
<author fullname="Alexander Steinmitz" initials="A." surname="Steinmitz">
<organization>HS Fulda</organization>
<address>
<postal>
<street>Marquardstr. 35</street>
<!-- Reorder these if your country does things differently -->
<city>Fulda</city>
<region></region>
<code>36039</code>
<country>Germany</country>
</postal>
<phone></phone>
<email>steinionline@gmx.de</email>
<!-- uri and facsimile elements may also be added -->
</address>
</author>
<date day="24" month="October" year="2010" />
<!-- If the month and year are both specified and are the current ones, xml2rfc will fill
in the current day for you. If only the current year is specified, xml2rfc will fill
in the current day and month for you. If the year is not the current one, it is
necessary to specify at least a month (xml2rfc assumes day="1" if not specified for the
purpose of calculating the expiry date). With drafts it is normally sufficient to
specify just the year. -->
<!-- Meta-data Declarations -->
<area>Transport</area>
<workgroup>Internet Engineering Task Force</workgroup>
<!-- WG name at the upper left corner of the doc,
IETF is fine for individual submissions.
If this element is not present, the default is "Network Working Group",
which is used by the RFC Editor as a nod to the history of the IETF. -->
<keyword>inter-operability, equivalence, measurement, compliance,
metric</keyword>
<!-- Keywords will be incorporated into HTML output
files in a meta tag but they have no effect on text or nroff
output. If you submit your draft to the RFC Editor, the
keywords will be used for the search engine. -->
<abstract>
<t>This document specifies tests to determine if multiple independent
instantiations of a performance metric RFC have implemented the
specifications in the same way. This is the performance metric
equivalent of interoperability, required to advance RFCs along the
standards track. Results from different implementations of metric RFCs
will be collected under the same underlying network conditions and
compared using state of the art statistical methods. The goal is an
evaluation of the metric RFC itself, whether its definitions are clear
and unambiguous to implementors and therefore a candidate for
advancement on the IETF standards track.</t>
</abstract>
</front>
<middle>
<section title="Introduction">
<t>The Internet Standards Process <xref target="RFC2026">RFC2026 </xref>
requires that for an IETF specification to advance beyond the Proposed
Standard level, at least two genetically unrelated implementations must
be shown to interoperate correctly with all features and options. This
requirement can be met by supplying:<list style="symbols">
<t>evidence that (at least a sub-set of) the specification has been
implemented by multiple parties, thus indicating adoption by the
IETF community and the extent of feature coverage.</t>
<t>evidence that each feature of the specification is sufficiently
well-described to support interoperability, as demonstrated through
testing and/or user experience with deployment.</t>
</list></t>
<t>In the case of a protocol specification, the notion of
"interoperability" is reasonably intuitive - the implementations must
successfully "talk to each other", while exercising all features and
options. To achieve interoperability, two implementors need to interpret
the protocol specifications in equivalent ways. In the case of IP
Performance Metrics (IPPM), this definition of interoperability is only
useful for test and control protocols like the One-Way Active
Measurement Protocol, OWAMP <xref target="RFC4656"></xref>, and the
Two-Way Active Measurement Protocol, TWAMP <xref
target="RFC5357"></xref>.</t>
<t>A metric specification RFC describes one or more metric definitions,
methods of measurement and a way to report the results of measurement.
One example would be a way to test and report the One-way Delay that
data packets incur while being sent from one network location to
another, i.e., the One-way Delay metric.</t>
<t>In the case of metric specifications, the conditions that satisfy the
"interoperability" requirement are less obvious, and there was a need
for IETF agreement on practices to judge metric specification
"interoperability" in the context of the IETF Standards Process. This
memo provides methods which should be suitable to evaluate metric
specifications for standards track advancement. The methods proposed
here MAY be generally applicable to metric specification RFCs beyond
those developed under the IPPM Framework <xref
target="RFC2330"></xref>.</t>
<t>Since many implementations of IP metrics are embedded in measurement
systems that do not interact with one another (they were built before
OWAMP and TWAMP), the interoperability evaluation called for in the IETF
standards process cannot be determined by observing that independent
implementations interact properly for various protocol exchanges.
Instead, verifying that different implementations give statistically
equivalent results under controlled measurement conditions takes the
place of interoperability observations. Even when evaluating OWAMP and
TWAMP RFCs for standards track advancement, the methods described here
are useful to evaluate the measurement results because their validity
would not be ascertained in typical interoperability testing.</t>
<t>The standards advancement process aims at producing confidence that
the metric definitions and supporting material are clearly worded and
unambiguous, or reveals ways in which the metric definitions can be
revised to achieve clarity. The process also permits identification of
options that were not implemented, so that they can be removed from the
advancing specification. Thus, the product of this process is
information about the metric specification RFC itself: determination of
the specifications or definitions that are clear and unambiguous and
those that are not (as opposed to an evaluation of the implementations
which assist in the process).</t>
<t>This document defines a process to verify that implementations (or
practically, measurement systems) have interpreted the metric
specifications in equivalent ways, and produce equivalent results.</t>
<t>Testing for statistical equivalence requires ensuring identical test
setups (or awareness of differences) to the best possible extent. Thus,
producing identical test conditions is a core goal of the memo. Another
important aspect of this process is to test individual implementations
against specific requirements in the metric specifications using
customized tests for each requirement. These tests can distinguish
equivalent interpretations of each specific requirement.</t>
<t>Conclusions on equivalence are reached by two measures.</t>
<t>First, implementations are compared against individual metric
specifications to make sure that differences in implementation are
minimised or at least known.</t>
<t>Second, a test setup is proposed ensuring identical networking
conditions so that unknowns are minimized and comparisons are
simplified. The resulting separate data sets may be seen as samples
taken from the same underlying distribution. Using state of the art
statistical methods, the equivalence of the results is verified. To
illustrate application of the process and methods defined here,
evaluation of the <xref target="RFC2679">One-way Delay Metric </xref> is
provided in an Appendix. While test setups will vary with the metrics to
be validated, the general methodology of determining equivalent results
will not. Documents defining test setups to evaluate other metrics
should be developed once the process proposed here has been agreed and
approved.</t>
<t>The metric RFC advancement process begins with a request for protocol
action accompanied by a memo that documents the supporting tests and
results. The procedures of <xref target="RFC2026"></xref> are expanded
in <xref target="RFC5657"></xref>, including sample implementation and
interoperability reports. Section 3 of <xref
target="morton-advance-metrics-01"></xref> can serve as a template for a
metric RFC report which accompanies the protocol action request to the
Area Director, including description of the test set-up, procedures,
results for each implementation and conclusions.</t>
<t>Changes from WG -00 to WG -01 draft</t>
<t><list style="symbols">
<t>Discussion on merits and requirements of a distributed lab test
using only local load generators.</t>
<t>Proposal of metrics suitable for tests using the proposed
measurement configuration.</t>
<t>Hint on delay caused by software-based L2TPv3 implementations.</t>
<t>Added an appendix with a test configuration allowing remote tests
comparing different implementations across the network.</t>
<t>Proposal for maximum error of "equivalence", based on performance
comparison of identical implementations. This may be useful for both
ADK and non-ADK comparisons.</t>
</list></t>
<t>Changes from prior ID -02 to WG -00 draft</t>
<t><list style="symbols">
<t>Incorporation of aspects of reporting to support the protocol
action request in the Introduction and section 3.5</t>
<t>Overhaul of section 3.2 regarding tunneling: Added generic
tunneling requirements and L2TPv3 as an example tunneling mechanism
fulfilling the tunneling requirements. Removed and adapted some of
the prior references to other tunneling protocols</t>
<t>Softened a requirement within section 3.4 (MUST to SHOULD on
precision) and removed some comments of the authors.</t>
<t>Updated contact information of one author and added a new
author.</t>
<t>Added example C++ code of an Anderson-Darling two sample test
implementation.</t>
</list></t>
<t>Changes from ID -01 to ID -02 version</t>
<t><list style="symbols">
<t>Major editorial review, rewording and clarifications on all
contents.</t>
<t>Additional text on parallel testing using VLANs and GRE or
Pseudowire tunnels.</t>
<t>Additional examples and a glossary.</t>
</list></t>
<t>Changes from ID -00 to ID -01 version</t>
<t><list style="symbols">
<t>Addition of a comparison of individual metric implementations
against the metric specification (trying to pick up <xref
target="morton-advance-metrics">problems and solutions for metric
advancement</xref>).</t>
<t>More emphasis on the requirement to carefully design and document
the measurement setup of the metric comparison.</t>
<t>Proposal of testing conditions under identical WAN network
conditions using IP in IP tunneling or Pseudo Wires and parallel
measurement streams.</t>
<t>Proposing the requirement to document the smallest resolution at
which an ADK test was passed by 95%. As no minimum resolution is
specified, IPPM metric compliance is not linked to a particular
performance of an implementation.</t>
<t>Reference to RFC 2330 and RFC 2679 for the 95% confidence
interval as preferred criterion to decide on statistical
equivalence.</t>
<t>Reducing the proposed statistical test to ADK with 95%
confidence.</t>
</list></t>
<section title="Requirements Language">
<t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in <xref
target="RFC2119">RFC 2119</xref>.</t>
</section>
</section>
<section title="Basic idea">
<t>The implementation of a standard-compliant metric is expected to meet
the requirements of the related metric specification. So before
comparing two metric implementations, each metric implementation is
individually compared against the metric specification.</t>
<t>Most metric specifications leave freedom to implementors on
non-fundamental aspects of an individual metric (or options). Comparing
different measurement results using a statistical test with the
assumption of identical test path and testing conditions requires
knowledge of all differences in the overall test setup. Metric
specification options chosen by implementors have to be documented. It
is REQUIRED to use identical implementation options wherever possible
for any test proposed here. Calibrations proposed by metric standards
should be performed to further identify (and possibly reduce) potential
sources of errors in the test setup.</t>
<t>The Framework for <xref target="RFC2330">IP Performance Metrics
</xref> expects that a "methodology for a metric should have the
property that it is repeatable: if the methodology is used multiple
times under identical conditions, it should result in consistent
measurements." This means an implementation is expected to repeatedly
measure a metric with consistent results (repeatability with the same
result). Small deviations in the test setup are expected to lead to
small deviations in results only. To characterise statistical
equivalence in the case of small deviations, RFC 2330 and <xref
target="RFC2679">RFC 2679</xref> suggest applying a 95% confidence
interval. Quoting RFC 2679, "95 percent was chosen because ... a
particular confidence level should be specified so that the results of
independent implementations can be compared."</t>
<t>Two different implementations are expected to produce statistically
equivalent results if they both measure a metric under the same
networking conditions. Formulating in statistical terms: separate metric
implementations collect separate samples from the same underlying
statistical process (the same network conditions). The statistical
hypothesis to be tested is the expectation that both samples do not
expose statistically different properties. This requires careful test
design:</t>
<t><list style="symbols">
<t>The measurement test setup must be self-consistent to the largest
possible extent. To minimize the influence of the test and
measurement setup on the result, network conditions and paths MUST
be identical for the compared implementations to the largest
possible degree. This includes both the stability and non-ambiguity
of routes taken by the measurement packets. See RFC 2330 for a
discussion on self-consistency.</t>
<t>The error induced by the sample size must be small enough to
minimize its influence on the test result. This may have to be
respected, especially if two implementations measure with different
average probing rates.</t>
<t>Every comparison must be repeated several times based on
different measurement data to avoid random indications of
compatibility (or the lack of it).</t>
<t>To minimize the influence of implementation options on the
result, metric implementations SHOULD use identical options and
parameters for the metric under evaluation.</t>
<t>The implementation with the lowest probing frequency determines
the smallest temporal interval for which samples can be
compared.</t>
</list></t>
<t>The metric specifications themselves are the primary focus of
evaluation, rather than the implementations of metrics. The
documentation produced by the advancement process should identify which
metric definitions and supporting material were found to be clearly
worded and unambiguous, OR, it should identify ways in which the metric
specification text should be revised to achieve clarity and unified
interpretation.</t>
<t>The process should also permit identification of options that were
not implemented, so that they can be removed from the advancing
specification (this is an aspect more typical of protocol advancement
along the standards track).</t>
<t>Note that this document does not propose to base interoperability
indications of performance metric implementations on comparisons of
individual singletons. Individual singletons may be impacted by many
statistical effects while they are measured. Comparing two singletons of
different implementations may result in failures with higher probability
than comparing samples.</t>
</section>
<section title="Verification of conformance to a metric specification">
<t>This section specifies how to verify compliance of two or more IPPM
implementations against a metric specification. This document only
proposes a general methodology. Compliance criteria for a specific metric
implementation need to be defined for each individual metric
specification. The only exception is the statistical test comparing two
metric implementations which are simultaneously tested. This test is
applicable without metric specific decision criteria.</t>
<t>Several testing options exist to compare two or more
implementations:</t>
<t><list style="symbols">
<t>Use a single test lab to compare the implementations and emulate
the Internet with an impairment generator.</t>
<t>Use a single test lab to compare the implementations and measure
across the Internet.</t>
<t>Use remotely separated test labs to compare the implementations
and emulate the Internet with two "identically" configured
impairment generators.</t>
<t>Use remotely separated test labs to compare the implementations
and measure across the Internet.</t>
<t>Use remotely separated test labs to compare the implementations
and measure across the Internet and include a single impairment
generator to impact all measurement flows in a non-discriminatory
way.</t>
</list></t>
<t>The first two approaches work, but cause higher expenses than the
other ones (due to travel and/or shipping+installation). For the third
option, ensuring two identically configured impairment generators
requires well defined test cases and possibly identical hard- and
software. &gt;&gt;&gt;Comment: for some specific tests, impairment
generator accuracy requirements are less demanding than others, and in
such cases there is more flexibility in impairment generator
configuration.&lt;&lt;&lt;</t>
<t>It is a fair question whether the last two options can result in an
applicable test setup at all. An experimental approach is given in
Appendix C, but one tradeoff probably can't be avoided: the measurement
packets of the different sites traverse the same path segments, yet
each in a different order of segments.</t>
<t>The question of which option above results in identical networking
conditions and is broadly accepted can't be answered without more
practical experience in comparing implementations. The last proposal has
the advantage that, while the measurement equipment is remotely
distributed, a single network impairment generator and the Internet can
be used in combination to impact all measurement flows.</t>
<section title="Tests of an individual implementation against a metric specification">
<t>A metric implementation MUST support the requirements classified as
"MUST" and "REQUIRED" of the related metric specification to be
compliant with the latter.</t>
<t>Further, supported options of a metric implementation SHOULD be
documented in sufficient detail. The documentation of chosen options
is RECOMMENDED to minimise (and recognise) differences in the test
setup if two metric implementations are compared. Further, this
documentation is used to validate and improve the underlying metric
specification options and to remove options which saw no implementation,
or which are badly specified, from the metric specification to be promoted
to a standard. This documentation SHOULD be made for all
implementation-relevant specifications of a metric picked for a
comparison, which aren't explicitly marked as "MUST" or "REQUIRED" in
the metric specification. This applies for the following sections of
all metric specifications:</t>
<t><list style="symbols">
<t>Singleton Definition of the Metric.</t>
<t>Sample Definition of the Metric.</t>
<t>Statistics Definition of the Metric. As statistics are compared
by the test specified here, this documentation is required even in
the case that the metric specification does not contain a
Statistics Definition.</t>
<t>Timing and Synchronisation related specification (if relevant
for the Metric).</t>
<t>Any other technical part present or missing in the metric
specification, which is relevant for the implementation of the
Metric.</t>
</list></t>
<t>RFC2330 and RFC2679 emphasise precision as an aim of IPPM metric
implementations. A single IPPM-conformant implementation MUST produce
precise results for repeated measurements of the same metric under
otherwise identical network conditions.</t>
<t>RFC 2330 prefers the "empirical distribution function" (EDF) to
describe collections of measurements. RFC 2330 determines that
"unless otherwise stated, IPPM goodness-of-fit tests are done using 5%
significance." The goodness-of-fit test determines with which precision
two or more samples of a metric implementation belong to the same
underlying distribution (of measured network performance events). The
goodness-of-fit test to be applied is the <xref
target="ADK">Anderson-Darling K-sample test (ADK sample test; K stands
for the number of samples to be compared)</xref>. Please note that
RFC 2330 and RFC 2679 apply an Anderson-Darling goodness-of-fit test
too.</t>
<t>The results of a repeated test with a single implementation MUST
pass an ADK sample test with a confidence level of 95%. The resolution
for which the ADK test has been passed with the specified confidence
level MUST be documented. To formulate this differently: The
requirement is to document the smallest resolution, at which the
results of the tested metric implementation pass an ADK test with a
confidence level of 95%. The minimum resolution available in the
reported results from each implementation MUST be taken into account
in the ADK test.</t>
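<t>The following sketch illustrates one way the ADK comparison could
be implemented. It computes the K-sample Anderson-Darling statistic in
its continuous, tie-free form and standardises it as described in
<xref target="ADK"></xref>. The code, the toy example data and the use
of the tabulated critical value 1.960 (valid for K=2 at 95%
confidence) are the authors' illustration only and are not a normative
part of this memo.</t>
<figure>
<artwork align="left"><![CDATA[
// adk.cpp - sketch of the Anderson-Darling K-sample test
// (continuous case, no ties), after Scholz/Stephens [ADK].
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

using Sample = std::vector<double>;

// K-sample Anderson-Darling statistic A2kN for K result sets.
double a2kn(const std::vector<Sample>& samples) {
    Sample pooled;
    for (const Sample& s : samples)
        pooled.insert(pooled.end(), s.begin(), s.end());
    std::sort(pooled.begin(), pooled.end());
    const size_t N = pooled.size();

    double a2 = 0.0;
    for (const Sample& raw : samples) {
        Sample s = raw;
        std::sort(s.begin(), s.end());
        const double ni = double(s.size());
        double sum = 0.0;
        for (size_t j = 1; j < N; ++j) {
            // Mij: observations of this sample <= j-th pooled value
            const double Mij = double(std::upper_bound(
                s.begin(), s.end(), pooled[j - 1]) - s.begin());
            const double t = double(N) * Mij - double(j) * ni;
            sum += t * t / (double(j) * double(N - j));
        }
        a2 += sum / ni;
    }
    return a2 / double(N);
}

// Standardised statistic T = (A2kN - (K-1)) / sigmaN, compared
// against the tabulated critical value for K = 2 at 5%
// significance, t1(0.05) = 1.960 (Scholz/Stephens, Table 1).
bool adk_pass_95(const std::vector<Sample>& samples) {
    const double K = double(samples.size());
    size_t N = 0;
    double H = 0.0;
    for (const Sample& s : samples) {
        N += s.size();
        H += 1.0 / double(s.size());
    }
    double h = 0.0;
    for (size_t i = 1; i <= N - 1; ++i)
        h += 1.0 / double(i);
    double g = 0.0;
    for (size_t i = 1; i <= N - 2; ++i)
        for (size_t j = i + 1; j <= N - 1; ++j)
            g += 1.0 / (double(N - i) * double(j));

    const double a = (4.0 * g - 6.0) * (K - 1.0)
                   + (10.0 - 6.0 * g) * H;
    const double b = (2.0 * g - 4.0) * K * K + 8.0 * h * K
                   + (2.0 * g - 14.0 * h - 4.0) * H
                   - 8.0 * h + 4.0 * g - 6.0;
    const double c = (6.0 * h + 2.0 * g - 2.0) * K * K
                   + (4.0 * h - 4.0 * g + 6.0) * K
                   + (2.0 * h - 6.0) * H + 4.0 * h;
    const double d = (2.0 * h + 6.0) * K * K - 4.0 * h * K;
    const double n = double(N);
    const double var = (a * n * n * n + b * n * n + c * n + d)
                     / ((n - 1.0) * (n - 2.0) * (n - 3.0));
    const double T = (a2kn(samples) - (K - 1.0)) / std::sqrt(var);
    return T < 1.960;  // valid for K = 2 only
}

int main() {
    // Toy data; real samples SHOULD hold 100+ singletons each.
    std::vector<Sample> s = {{10.1, 10.3, 10.2, 10.4, 10.6},
                             {10.15, 10.35, 10.25, 10.45, 10.65}};
    std::cout << (adk_pass_95(s) ? "pass" : "fail") << "\n";
}
]]></artwork>
</figure>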
</section>
<section title="Test setup resulting in identical live network testing conditions">
<t>Two major issues complicate tests for metric compliance across live
networks under identical testing conditions. One is the general point
that metric definition implementations cannot be conveniently examined
in field measurement scenarios. The other one is more broadly
described as "parallelism in devices and networks", including
mechanisms like those that achieve load balancing (see <xref
target="RFC4928"></xref>).</t>
<t>This section proposes two measures to deal with both issues.
Tunneling mechanisms can be used to avoid parallel processing of
different flows in the network. Measuring by separate parallel probe
flows results in repeated collection of data. If both measures are
combined, WAN network conditions are identical for a number of
independent measurement flows, no matter what the network conditions
are in detail.</t>
<t>Any measurement setup MUST be designed so that the probing traffic
itself does not impede the metric measurement. The created measurement
load MUST NOT result in congestion at the access link connecting the
measurement implementation to the WAN. The created measurement load
MUST NOT overload the measurement implementation itself, e.g., by
causing a high CPU load or by creating imprecisions due to internal
transmit (or receive, respectively) probe packet collisions.</t>
<t>Tunneling multiple flows reaching a network element on a single
physical port may allow all packets of the tunnel to be transmitted via
the same path. Applying tunnels to avoid undesired influence of standard
routing for measurement purposes is a concept known from the literature;
see, e.g., <xref target="GU+Duffield">GRE encapsulated multicast
probing</xref>. An existing IP in IP tunnel protocol can be applied to
avoid Equal-Cost Multi-Path (ECMP) routing of different measurement
streams if it meets the following criteria:</t>
<t><list style="symbols">
<t>Inner IP packets from different measurement implementations are
mapped into a single tunnel with single outer IP origin and
destination address as well as origin and destination port
numbers which are identical for all packets.</t>
<t>An easily accessible commodity tunneling protocol allows a
metric test to be carried out from more test sites.</t>
<t>A low operational overhead may enable a broader audience to set
up a metric test with the desired properties.</t>
<t>The tunneling protocol should be reliable and stable in set up
and operation to avoid disturbances or influence on the test
results.</t>
<t>The tunneling protocol should not incur any extra cost for
those interested in setting up a metric test.</t>
</list></t>
<t>An illustration of a test setup with two tunnels and two flows
between two linecards of one implementation is given in <xref
target="Figure 1"> </xref>.</t>
<figure align="center" anchor="Figure 1">
<preamble />
<artwork align="left"><![CDATA[
Implementation ,---. +--------+
+~~~~~~~~~~~/ \~~~~~~| Remote |
+------->-----F2->-| / \ |->---+ |
| +---------+ | Tunnel 1( ) | | |
| | transmit|-F1->-| ( ) |->+ | |
| | LC1 | +~~~~~~~~~| |~~~~| | | |
| | receive |-<--+ ( ) | F1 F2 |
| +---------+ | |Internet | | | | |
*-------<-----+ F2 | | | | | |
+---------+ | | +~~~~~~~~~| |~~~~| | | |
| transmit|-* *-| | | |--+<-* |
| LC2 | | Tunnel 2( ) | | |
| receive |-<-F1-| \ / |<-* |
+---------+ +~~~~~~~~~~~\ /~~~~~~| Router |
`-+-' +--------+
]]></artwork>
<postamble>Illustration of a test setup with two tunnels. For
simplicity, only two linecards of one implementation and two flows F
between them are shown.</postamble>
</figure>
<t><xref target="Figure 2"> </xref> shows the network elements
required to set up GRE tunnels or as shown by figure 1.</t>
<figure align="center" anchor="Figure 2">
<preamble />
<artwork align="left"><![CDATA[
Implementation
+-----+ ,---.
| LC1 | / \
+-----+ / \ +------+
| +-------+ ( ) +-------+ |Remote|
+--------+ | | | | | | | |
|Ethernet| | Tunnel| |Internet | | Tunnel| | |
|Switch |--| Head |--| |--| Head |--| |
+--------+ | Router| | | | Router| | |
| | | ( ) | | |Router|
+-----+ +-------+ \ / +-------+ +------+
| LC2 | \ /
+-----+ `-+-' ]]></artwork>
<postamble>Illustration of a hardware setup to realise the test
setup illustrated by figure 1 with GRE tunnels or
Pseudowires.</postamble>
</figure>
<t>If tunneling is applied, two tunnels MUST carry all test traffic
between the test site and the remote site. For example, if 802.1Q
Ethernet Virtual LANs (VLAN) are applied and the measurement streams
are carried in different VLANs, the IP tunnel or Pseudo Wires
respectively MUST be set up in physical port mode to avoid setting up
Pseudo Wires per VLAN (which may see different paths due to ECMP
routing), see <xref target="RFC4448"></xref>. The remote router and the
Ethernet switch shown in figure 2 must support 802.1Q in this setup.</t>
<t>The IP packet size of the metric implementation SHOULD be chosen
small enough to avoid fragmentation due to the added Ethernet and
tunnel headers. Otherwise, the impact of tunnel overhead on
fragmentation and interface MTU size MUST be understood and taken into
account (see <xref target="RFC4459"></xref>).</t>
<t>An Ethernet port mode IP tunnel carrying several 802.1Q VLANs each
containing measurement traffic of a single measurement system was set
up as a proof of concept using <xref target="RFC4719">RFC4719</xref>,
Transport of Ethernet Frames over L2TPv3. Ethernet over L2TPv3 seems
to fulfill most of the desired tunneling protocol criteria mentioned
above.</t>
<t>The following headers may have to be accounted for when calculating
total packet length, if VLANs and Ethernet over L2TPv3 tunnels are
applied:</t>
<t><list style="symbols">
<t>Ethernet 802.1Q: 22 Byte.</t>
<t>L2TPv3 Header: 4-16 Byte for L2TPv3 data messages over IP;
16-28 Byte for L2TPv3 data messages over UDP.</t>
<t>IPv4 Header (outer IP header): 20 Byte.</t>
<t>MPLS Labels may be added by a carrier. Each MPLS Label has a
length of 4 Bytes. At the time of writing, between 1 and 4 Labels
seems to be a fair guess of what can be expected.</t>
</list></t>
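<t>As a plausibility check, the header sizes listed above can be added
up as sketched below. The 100 byte inner IP packet, the
L2TPv3-over-UDP worst case and the assumption of four carrier MPLS
labels are illustrative values only.</t>
<figure>
<artwork align="left"><![CDATA[
// overhead.cpp - sketch: on-wire size of a tunneled probe packet,
// using the header sizes listed above (illustrative values).
#include <iostream>

int main() {
    const int inner_ip = 100;    // probe packet incl. inner IP header
    const int dot1q    = 22;     // Ethernet 802.1Q framing
    const int l2tpv3   = 28;     // L2TPv3 over UDP, worst case
    const int outer_ip = 20;     // outer IPv4 header
    const int mpls     = 4 * 4;  // assumed 4 carrier MPLS labels
    std::cout << "on-wire size: "
              << inner_ip + dot1q + l2tpv3 + outer_ip + mpls
              << " byte" << std::endl;  // 186 byte
    // Keep this below the path MTU (see RFC 4459) to avoid
    // fragmentation of the tunneled measurement packets.
}
]]></artwork>
</figure>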
<t>The applicability of one or more of the following tunneling
protocols may be investigated by interested parties if Ethernet over
L2TPv3 is felt to be not suitable: <xref target="RFC2003">IP in
IP</xref> or <xref target="RFC2784">Generic Routing Encapsulation
(GRE)</xref>. <xref target="RFC4928">RFC 4928</xref> proposes measures
to avoid ECMP treatment in MPLS networks.</t>
<t>L2TP is a commodity tunneling protocol <xref
target="RFC2661"></xref>. By the time of writing, L2TPv3 <xref
target="RFC3931"></xref>is the latest version of L2TP. If L2TPv3 is
applied, software based implementations of this protocol are not
suitable for the test set up, as such implementations may cause
uncalculable delay shifts.</t>
<t>Ethernet Pseudo Wires may also be set up on <xref
target="RFC4448">MPLS networks</xref>. While there's no technical
issue with this solution, MPLS interfaces are mostly found in the
network provider domain. Hence not all of the above tunneling criteria
are met.</t>
<t>Appendix C provides an experimental tunneling setup for metric
implementation testing between two (or more) remote sites.</t>
<t>Each test is repeated several times. WAN conditions may change over
time. Sequential testing is desirable, but may not be a useful metric
test option. It is RECOMMENDED that tests be carried out by
establishing N different parallel measurement flows. Two or three
linecards per implementation serving to send or receive measurement
flows should be sufficient to create 5 or more parallel measurement
flows. If three linecards are used, each card sends and receives 2
flows. Other options are to separate flows by DiffServ marks (without
deploying any QoS in the inner or outer tunnel) or using a single CBR
flow and evaluating every n-th singleton to belong to a specific
measurement flow.</t>
<t>Some additional rules to calculate and compare samples have to be
respected to perform a metric test:</t>
<t><list style="symbols">
<t>Comparing different probes of a common underlying distribution
in terms of metrics characterising a communication network
requires respecting the temporal interval for which the assumption
of a common underlying distribution may hold. Any singletons or
samples to be compared MUST be captured within the same time
interval.</t>
<t>Whenever statistical events like singletons or rates are used
to characterise measured metrics of a time-interval, at least 5
singletons of a relevant metric SHOULD be present to ensure a
minimum confidence in the reported value (see <xref
target="rule-of-thumb">Wikipedia on confidence</xref>). Note that
this criterion also is to be respected e.g. when comparing packet
loss metrics. Any packet loss measurement interval to be compared
with the results of another implementation SHOULD contain at least
five lost packets to have a minimum confidence that the observed
loss rate wasn't caused by a small number of random packet
drops.</t>
<t>The minimum number of singletons or samples to be compared by
an Anderson-Darling test SHOULD be 100 per tested metric
implementation. Note that the Anderson-Darling test detects small
differences in distributions fairly well and will fail for a high
number of compared results (RFC2330 mentions an example with 8192
measurements where an Anderson-Darling test always failed).</t>
<t>Generally, the Anderson-Darling test is sensitive to
differences in the accuracy or bias associated with varying
implementations or test conditions. These dissimilarities may
result in differing averages of samples to be compared. An example
may be different packet sizes, resulting in a constant delay
difference between compared samples. Therefore samples to be
compared by an Anderson-Darling test MAY be calibrated by the
difference of the average values of the samples (see the sketch after
this list). Any calibration of this kind MUST be documented in the
test result.</t>
</list></t>
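<t>A minimal sketch of the constant-offset calibration permitted by
the last list item is given below; the function name and the choice to
shift the second sample towards the first are illustrative
assumptions.</t>
<figure>
<artwork align="left"><![CDATA[
// calibrate.cpp - sketch: remove a constant offset (e.g. a
// serialization delay difference due to unequal packet sizes)
// before the ADK comparison. Any such shift MUST be documented.
#include <numeric>
#include <vector>

// Shifts sample b so that its mean matches the mean of sample a.
std::vector<double> calibrate(const std::vector<double>& a,
                              std::vector<double> b) {
    const double ma =
        std::accumulate(a.begin(), a.end(), 0.0) / a.size();
    const double mb =
        std::accumulate(b.begin(), b.end(), 0.0) / b.size();
    for (double& x : b)
        x += ma - mb;  // constant-offset calibration
    return b;
}

int main() {
    std::vector<double> a = {10.0, 10.2, 10.4};
    // b carries a roughly constant 2 ms offset against a
    std::vector<double> b = calibrate(a, {12.1, 12.2, 12.3});
    // b is now centred on a's mean, ready for the ADK comparison
}
]]></artwork>
</figure>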
</section>
<section title="Tests of two or more different implementations against a metric specification">
<t>RFC2330 expects "a methodology for a given metric [to] exhibit
continuity if, for small variations in conditions, it results in small
variations in the resulting measurements. Slightly more precisely, for
every positive epsilon, there exists a positive delta, such that if
two sets of conditions are within delta of each other, then the
resulting measurements will be within epsilon of each other." A small
variation in conditions in the context of the metric test proposed
here can be seen as different implementations measuring the same
metric along the same path.</t>
<t>IPPM metric specifications, however, allow implementor options to
the largest possible degree. It can't be expected that two
implementors pick identical options for their implementations.
Implementors SHOULD, to the highest degree possible, pick the same
configurations for their systems when comparing their implementations
by a metric test.</t>
<t>In some cases, a goodness of fit test may not be possible or show
disappointing results. To clarify the difficulties arising from
different implementation options, the individual options picked for
every compared implementation SHOULD be documented in sufficient
detail. Based on this documentation, the underlying metric
specification should be improved before it is promoted to a
standard.</t>
<t>The same statistical test that is applied to quantify the precision
of a single metric implementation MUST be passed to compare metric
conformance of different implementations. To document compatibility,
the smallest measurement resolution at which the compared
implementations passed the ADK sample test MUST be documented.</t>
<t>For different implementations of the same metric, "variations in
conditions" are reasonably expected. The ADK test comparing samples of
the different implementations may result in a lower precision than the
test for precision of each implementation individually.</t>
</section>
<section title="Clock synchronisation">
<t>Clock synchronization effects require special attention. Accuracy
of one-way active delay measurements for any metrics implementation
depends on clock synchronization between the source and destination of
tests. Ideally, one-way active delay measurement (<xref
target="RFC2679">RFC 2679,</xref>) test endpoints either have direct
access to independent GPS or CDMA-based time sources or indirect
access to nearby NTP primary (stratum 1) time sources, equipped with
GPS receivers. Access to these time sources may not be available at
all test locations associated with different Internet paths, for a
variety of reasons out of scope of this document.</t>
<t>When secondary (stratum 2 and above) time sources are used with NTP
running across the same network, whose metrics are subject to
comparative implementation tests, network impairments can affect clock
synchronization, distort sample one-way values and their interval
statistics. It is RECOMMENDED to discard sample one-way delay values
for any implementation, when one of the following reliability
conditions is met:</t>
<t><list style="symbols">
<t>Delay is measured and is finite in one direction, but not the
other.</t>
<t>Absolute value of the difference between the sum of one-way
measurements in both directions and round-trip measurement is
greater than X% of the latter value.</t>
</list></t>
<t>Examination of the second condition requires an RTT measurement for
reference, e.g., based on TWAMP (<xref target="RFC5357">RFC
5357</xref>), in conjunction with one-way delay measurement.</t>
<t>Specification of X% to strike a balance between identification of
unreliable one-way delay samples and misidentification of reliable
samples under a wide range of Internet path RTTs probably requires
further study.</t>
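<t>The two discard conditions could be implemented as sketched below.
Since the value of X is left for further study, the 5% default used
here is a pure placeholder.</t>
<figure>
<artwork align="left"><![CDATA[
// sanity.cpp - sketch of the one-way delay discard conditions;
// x defaults to a placeholder of 5%, pending further study.
#include <cmath>
#include <iostream>

// fwd/rev: one-way delays in seconds (NaN if not measured/finite),
// rtt: reference round-trip time, e.g. from TWAMP (RFC 5357).
bool discard(double fwd, double rev, double rtt, double x = 0.05) {
    const bool f = std::isfinite(fwd), r = std::isfinite(rev);
    if (f != r) return true;  // finite in one direction only
    if (!f) return true;      // no finite sample at all
    return std::fabs(fwd + rev - rtt) > x * rtt;  // sum vs. RTT
}

int main() {
    // forward 30 ms, reverse 40 ms, TWAMP RTT 62 ms:
    // |70 - 62| ms exceeds 5% of 62 ms, so the sample is dropped
    std::cout << std::boolalpha
              << discard(0.030, 0.040, 0.062) << std::endl;
}
]]></artwork>
</figure>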
<t>An IPPM-compliant metric implementation whose measurement requires
synchronized clocks is nevertheless expected to provide precise
measurement results. Any IPPM metric implementation SHOULD achieve a
precision of 1 ms (+/- 500 us) with a confidence of 95% if the metric
is captured along an Internet path which is stable and not congested
during a measurement duration of an hour or more.</t>
</section>
<section title="Recommended Metric Verification Measurement Process">
<t>In order to meet its obligations under the IETF Standards Process,
the IESG must be convinced that each metric specification advanced to
Draft Standard or Internet Standard status is clearly written, that
there are the required multiple verifiably equivalent implementations,
and that all options have been implemented.</t>
<t>In the context of this document, metrics are designed to measure
some characteristic of a data network. An aim of any metric definition
should be that it is specified in a way that allows the specific
characteristic to be measured reliably and repeatably across multiple
independent implementations.</t>
<t>Each metric, statistic or option of those to be validated MUST be
compared against a reference measurement or another implementation by
at least 5 different basic data sets, each one with sufficient size to
reach the specified level of confidence, as specified by this
document.</t>
<t>Finally, the metric definitions, embodied in the text of the RFCs,
are the objects that require evaluation and possible revision in order
to advance to the next step on the standards track.</t>
<t>IF two (or more) implementations do not measure an equivalent
metric as specified by this document,</t>
<t>AND sources of measurement error do not adequately explain the lack
of agreement,</t>
<t>THEN the details of each implementation should be audited along
with the exact definition text, to determine if there is a lack of
clarity that has caused the implementations to vary in a way that
affects the correspondence of the results.</t>
<t>IF there was a lack of clarity or multiple legitimate
interpretations of the definition text,</t>
<t>THEN the text should be modified and the resulting memo proposed
for consensus and (possible) advancement along the standards
track.</t>
<t>Finally, all the findings MUST be documented in a report that can
support advancement on the standards track, similar to those described
in <xref target="RFC5657"></xref>. The list of measurement devices
used in testing satisfies the implementation requirement, while the
test results provide information on the quality of each specification
in the metric RFC (the surrogate for feature interoperability).</t>
<t>The complete process of advancing a metric specification to a
standard as defined by this document is illustrated in <xref
target="Figure 3"> </xref>.</t>
<figure align="center" anchor="Figure 3">
<preamble />
<artwork align="center"><![CDATA[
,---.
/ \
( Start )
\ / Implementations
`-+-' +-------+
| /| 1 `.
+---+----+ / +-------+ `.-----------+ ,-------.
| RFC | / |Check for | ,' was RFC `. YES
| | / |Equivalence.... clause x ------+
| |/ +-------+ |under | `. clear? ,' |
| Metric \.....| 2 ....relevant | `---+---' +----+-----+
| Metric |\ +-------+ |identical | No | |Report |
| Metric | \ |network | +--+----+ |results + |
| ... | \ |conditions | |Modify | |Advance |
| | \ +-------+ | | |Spec +--+RFC |
+--------+ \| n |.'+-----------+ +-------+ |request(?)|
+-------+ +----------+
]]></artwork>
<postamble>Illustration of the metric standardisation
process</postamble>
</figure>
<t>Any recommendation for the advancement of a metric specification
MUST be accompanied by an implementation report, as is the case with
all requests for the advancement of IETF specifications. The
implementation report needs to include the tests performed, the
applied test setup, the specific metrics in the RFC and reports of the
tests performed with two or more implementations. The test plan needs
to specify the precision reached for each measured metric and thus
define the meaning of "statistically equivalent" for the specific
metrics being tested.</t>
<t>Ideally, the test plan would co-evolve with the development of the
metric, since that's when people have the most context in their
thinking regarding the different subtleties that can arise.</t>
<t>In particular, the implementation report MUST as a minimum
document:</t>
<t><list style="symbols">
<t>The metric compared and the RFC specifying it. This includes
statements as required by the section "Tests of an individual
implementation against a metric specification" of this
document.</t>
<t>The measurement configuration and setup.</t>
<t>A complete specification of the measurement stream (mean rate,
statistical distribution of packets, packet size or mean packet
size and their distribution), DSCP and any other measurement
stream properties which could result in deviating results.
Deviations in results can also be caused if the chosen IP addresses
and ports of different implementations result in different
layer 2 or layer 3 paths due to operation of Equal Cost Multi-Path
routing in an operational network.</t>
<t>The duration of each measurement to be used for a metric
validation, the number of measurement points collected for each
metric during each measurement interval (i.e. the probe size) and
the level of confidence derived from this probe size for each
measurement interval.</t>
<t>The result of the statistical tests performed for each metric
validation as required by the section "Tests of two or more
different implementations against a metric specification" of this
document.</t>
<t>A parameterization of laboratory conditions and applied traffic
and network conditions allowing reproduction of these laboratory
conditions for readers of the implementation report.</t>
<t>The documentation helping to improve metric specifications, as
defined by this section.</t>
</list></t>
<t>All of the tests for each set SHOULD be run in a test setup as
specified in the section "Test setup resulting in identical live
network testing conditions."</t>
<t>If a different test setup is chosen, it is RECOMMENDED to avoid
effects caused by real data networks (like parallelism in devices and
networks) which falsify the results of validation measurements. Data
networks may forward packets differently in the case of:</t>
<t><list style="symbols">
<t>Different packet sizes chosen for different metric
implementations. A proposed countermeasure is selecting the same
packet size when validating results of two samples or a sample
against an original distribution.</t>
<t>Selection of differing IP addresses and ports used by different
metric implementations during metric validation tests. If ECMP is
applied on IP or MPLS level, different paths can result (note that
it may be impossible to detect an MPLS ECMP path from an IP
endpoint). A proposed countermeasure is to connect the
measurement equipment to be compared through a NAT device, or to
establish a single tunnel to transport all measurement traffic.
The aim is to have the same IP addresses and ports for all
measurement packets or to avoid ECMP-based local routing diversion
by using a layer 2 tunnel.</t>
<t>Different IP options.</t>
<t>Different DSCP.</t>
<t>Capture of the N measurements using sequential measurements
instead of simultaneous ones; in this case, time-varying paths
and load conditions come into play.</t>
</list></t>
</section>
<section title="Miscellaneous">
<t>A minimum number of singletons per metric is required if results
are to be compared. To avoid accidental singletons from impacting a
metric comparison, a minimum number of 5 singletons per compared
interval was proposed above. Commercial Internet service is not
operated to reliably create enough rare singleton events to
characterize bad measurement engineering or bad implementations. In
the case that a metric validation requires capturing rare events, an
impairment generator may have to be added to the test setup.
Inclusion of an impairment generator and the parameterisation of the
impairments generated MUST be documented.</t>
<t>A metric characterising a common impairment condition would be one
which, by expectation, creates a singleton result for each measured
packet. Delay or Delay Variation are examples of this type, and in
such cases, the Internet may be used to compare metric
implementations.</t>
<t>Rare events are those where, by expectation, no or only a low
number of "event is present" singletons are captured during a
measurement interval. Packet duplications, packet loss rates above
one-digit percentages, loss patterns and packet reordering are
examples. Note especially that a packet reordering or loss pattern
metric implementation comparison may require a more sophisticated test
setup than described here. Spatial and temporal effects combine in the
case of packet re-ordering, and measurements with different packet
rates may always lead to different results.</t>
<t>As specified above, 5 singletons are the recommended basis to
minimise interference of random events with the statistical test
proposed by this document. In the case of ratio measurements (like
packet loss), the underlying sum of basic events, against which the
metric's monitored singletons are "rated", determines the
resolution of the test. A packet loss statistic with a resolution of
1% requires one packet loss statistic-datapoint to consist of 500
delay singletons (of which at least 5 were lost). To compare EDFs on
packet loss requires one hundred such statistics per flow. That means,
all in all at least 50 000 delay singletons are required per single
measurement flow. Live network packet loss is assumed to be present
during main traffic hours only. Let this interval be 5 hours. The
required minimum rate of a single measurement flow in that case is 2.8
packets/sec (assuming a loss of 1% during 5 hours). If this
measurement is too demanding under live network conditions, an
impairment generator should be used.</t>
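<t>The arithmetic of this example can be reproduced as sketched below;
all constants are the example values from the paragraph above and
would change with the loss resolution actually required.</t>
<figure>
<artwork align="left"><![CDATA[
// rate.cpp - sketch of the probe-rate arithmetic above
// (example values only).
#include <iostream>

int main() {
    const double resolution = 0.01;         // 1% loss resolution
    const int    min_events = 5;            // min. lost packets
    const int    edf_points = 100;          // statistics per flow
    const double window_s   = 5 * 3600.0;   // assumed busy hours

    const int per_point  = int(min_events / resolution);  // 500
    const int singletons = per_point * edf_points;        // 50000
    std::cout << "minimum rate: " << singletons / window_s
              << " packets/s" << std::endl;  // roughly 2.8
}
]]></artwork>
</figure>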
</section>
<section title="Proposal to determine an "equivalence" threshold for each metric evaluated">
<t>This section describes a proposal for maximum error of
"equivalence", based on performance comparison of identical
implementations. This comparison may be useful for both ADK and
non-ADK comparisons.</t>
<t>Each metric is tested by two or more implementations
(cross-implementation testing).</t>
<t>Each metric is also tested twice simultaneously by the *same*
implementation, using different Src/Dst Address pairs and other
differences such that the connectivity differences of the
cross-implementation tests are also experienced and measured by the
same implementation.</t>
<t>Comparative results for the same implementation represent a bound
on cross-implementation equivalence. This should be particularly
useful when the metric does *not* produce a continuous distribution
of singleton values, such as with a loss metric or a duplication
metric. Appendix A indicates how the ADK will work for One-way delay,
and should be likewise applicable to distributions of delay
variation.</t>
<t>Proposal: the largest difference in homogeneous comparison results
among the implementations is the lower bound on the equivalence
threshold, noting that there may be other systematic errors to account
for when comparing between implementations.</t>
<t>Thus, when evaluating equivalence in cross-implementation
results:</t>
<t>Maximum_Error = Same_Implementation_Error + Systematic_Error</t>
<t>and only the systematic error need be decided beforehand.</t>
<t>In the case of ADK comparison, the largest same-implementation
resolution of distribution equivalence can be used as a limit on
cross-implementation resolutions (at the same confidence level).</t>
</section>
</section>
<section anchor="Acknowledgements" title="Acknowledgements">
<t>Gerhard Hasslinger commented on a first version of this document,
suggested statistical tests and the evaluation of time series
information. Henk Uijterwaal and Lars Eggert have encouraged and helped
to organise this work. Mike Hamilton, Scott Bradner, David McDysan and
Emile Stephan commented on this draft. Carol Davids reviewed the 01
version of the ID before it was promoted to a WG draft.</t>
</section>
<section anchor="Contributors" title="Contributors">
<t>Scott Bradner, Vern Paxson and Allison Mankin drafted <xref
target="bradner-metrictest">bradner-metrictest</xref>, and major parts
of it are included in this document.</t>
</section>
<!-- Possibly a 'Contributors' section. -->
<section anchor="IANA" title="IANA Considerations">
<t>This memo includes no request to IANA.</t>
</section>
<section anchor="Security" title="Security Considerations">
<t>This draft does not raise any specific security issues.</t>
</section>
</middle>
<!-- *****BACK MATTER ***** -->
<back>
<!-- References split into informative and normative -->
<!-- There are 2 ways to insert reference entries from the citation libraries:
1. define an ENTITY at the top, and use "ampersand character"RFC2629; here (as shown)
2. simply use a PI "less than character"?rfc include="reference.RFC.2119.xml"?> here
(for I-Ds: include="reference.I-D.narten-iana-considerations-rfc2434bis.xml")
Both are cited textually in the same manner: by using xref elements.
If you use the PI option, xml2rfc will, by default, try to find included files in the same
directory as the including file. You can also define the XML_LIBRARY environment variable
with a value containing a set of directories to search. These can be either in the local
filing system or remote ones accessed by http (http://domain/dir/... ).-->
<references title="Normative References">
<?rfc include="http://xml.resource.org/public/rfc/bibxml/reference.RFC.2119.xml"?>
<?rfc include="http://xml.resource.org/public/rfc/bibxml/reference.RFC.2026.xml"?>
<?rfc include="http://xml.resource.org/public/rfc/bibxml/reference.RFC.2679.xml"?>
<?rfc include="http://xml.resource.org/public/rfc/bibxml/reference.RFC.2330.xml"?>
<?rfc include="http://xml.resource.org/public/rfc/bibxml/reference.RFC.2003.xml"?>
<?rfc include="http://xml.resource.org/public/rfc/bibxml/reference.RFC.2784.xml"?>
<?rfc include="http://xml.resource.org/public/rfc/bibxml/reference.RFC.2661.xml"?>
<?rfc include="http://xml.resource.org/public/rfc/bibxml/reference.RFC.3931.xml"?>
<?rfc include="http://xml.resource.org/public/rfc/bibxml/reference.RFC.4448.xml"?>
<?rfc include="http://xml.resource.org/public/rfc/bibxml/reference.RFC.4459.xml"?>
<?rfc include="http://xml.resource.org/public/rfc/bibxml/reference.RFC.4928.xml"?>
<?rfc include='http://xml.resource.org/public/rfc/bibxml/reference.RFC.2680.xml'?>
<?rfc include='http://xml.resource.org/public/rfc/bibxml/reference.RFC.2681.xml'?>
<?rfc include='http://xml.resource.org/public/rfc/bibxml/reference.RFC.4656.xml'?>
<?rfc include='http://xml.resource.org/public/rfc/bibxml/reference.RFC.4719.xml'?>
<?rfc include='http://xml.resource.org/public/rfc/bibxml/reference.RFC.5657.xml'?>
</references>
<references title="Informative References">
<!-- Here we use entities that we defined at the beginning. -->
<?rfc include="http://xml.resource.org/public/rfc/bibxml/reference.RFC.5357.xml"?>
<reference anchor="bradner-metrictest">
<front>
<title>Advancement of metrics specifications on the IETF Standards
Track</title>
<author fullname="Scott Bradner" initials="S." surname="Bradner">
<organization abbrev="Harvard University">Harvard
University</organization>
</author>
<author fullname="Allison Mankin" initials="A." surname="Mankin">
<organization abbrev="USC/ISI">Harvard University</organization>
</author>
<author fullname="Vern Paxson" initials="V." surname="Paxson">
<organization abbrev="ACIRI">Harvard University</organization>
</author>
<date month="July" year="2007" />
</front>
<seriesInfo name="draft"
value="-bradner-metricstest-03, (work in progress)" />
</reference>
<reference anchor="morton-advance-metrics">
<front>
<title>Problems and Possible Solutions for Advancing Metrics on the
Standards Track</title>
<author fullname="Al Morton" initials="A." surname="Morton">
<organization abbrev="AT&T Labs">AT&T Labs</organization>
</author>
<date day="4" month="July" year="2009" />
</front>
<seriesInfo name="draft"
value="-morton-ippm-advance-metrics-00, (work in progress)" />
</reference>
<reference anchor="morton-advance-metrics-01">
<front>
<title>Lab Test Results for Advancing Metrics on the Standards
Track</title>
<author fullname="Al Morton" initials="A." surname="Morton">
<organization abbrev="AT&T Labs">AT&T Labs</organization>
</author>
<date day="25" month="June" year="2010" />
</front>
<seriesInfo name="draft"
value="-morton-ippm-advance-metrics-01, (work in progress)" />
</reference>
<reference anchor="GU+Duffield">
<front>
<title>GRE Encapsulated Multicast Probing: A Scalable Technique for
Measuring One-Way Loss</title>
<author fullname="Yu Gu" initials="Y." surname="Gu">
<organization
abbrev="University of Massachusetts, Amherst">University of
Massachusetts, Amherst</organization>
</author>
<author fullname="Nick Duffield" initials="N." surname="Duffield">
<organization abbrev="AT&T ">AT&T Labs –
Research</organization>
</author>
<author fullname="Lee Breslau" initials="L." surname="Breslau">
<organization abbrev="AT&T ">AT&T Labs –
Research</organization>
</author>
<author fullname="Subhabrata Sen" initials="S." surname="Sen">
<organization abbrev="AT&T ">AT&T Labs –
Research</organization>
</author>
<date month="June" year="2007" />
</front>
<seriesInfo name="SIGMETRICS’07"
value="San Diego, California, USA" />
</reference>
<reference anchor="ADK">
<front>
<title>K-sample Anderson-Darling Tests of fit, for continuous and
discrete cases</title>
<author initials="F.W." surname="Scholz">
<!-- fullname="F.W. Scholz" -->
<organization abbrev="Boeing">Boeing Computer
Services</organization>
</author>
<author initials="M.A." surname="Stephens">
<!-- fullname="M.A. Stephens" -->
<organization>Simon Fraser University</organization>
</author>
<date month="May" year="1986" />
</front>
<seriesInfo name="University of Washington, Technical Report"
value="No. 81" />
</reference>
<reference anchor="Rule of thumb">
<front>
<title>Confidence interval</title>
<author fullname="Michael Hardy" initials="M." surname="Hardy">
<organization abbrev="Wikipedia">Wikipedia</organization>
</author>
<date month="March" year="2010" />
</front>
</reference>
</references>
<section anchor="Appendix A"
title="An example on a One-way Delay metric validation">
<t>The text of this appendix is not binding. It is an example of how
parts of a One-way Delay metric test could look.</t>
<section title="Compliance to Metric specification requirements">
<t>One-way Delay, Loss threshold, RFC 2679</t>
<t>This test determines if implementations use the same configured
maximum waiting time delay from one measurement to another under
different delay conditions, and correctly declare packets arriving in
excess of the waiting time threshold as lost. See Section 3.5 of
RFC2679, 3rd bullet point and also Section 3.8.2 of RFC2679.</t>
<t><list style="format (%d)">
<t>Configure a path with 1 sec one-way constant delay.</t>
<t>Measure one-way delay with 2 or more implementations, using
identical waiting time thresholds for loss set at 2 seconds.</t>
<t>Configure the path with 3 sec one-way delay.</t>
<t>Repeat measurements.</t>
<t>Observe that the increase measured in step 4 caused all packets
to be declared lost, and that all packets that arrive successfully
in step 2 are assigned a valid one-way delay (see the sketch
following this list).</t>
</list></t>
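<t>As a non-binding illustration of the loss declaration above, the
following C++ sketch classifies a singleton as lost once its one-way
delay exceeds the configured waiting time threshold. The Singleton
type and the function name are hypothetical, not taken from RFC
2679.</t>
<figure align="center">
<artwork align="left"><![CDATA[
#include <vector>

/* Hypothetical singleton: send and receive timestamps in
 * seconds; a negative recv_time denotes a packet that
 * never arrived.
 */
struct Singleton {
    double send_time;
    double recv_time;
};

/* True if the packet must be declared lost under the given
 * waiting time threshold (cf. RFC 2679, Section 3.5):
 * packets arriving after the threshold count as lost.
 */
bool is_lost(const Singleton &s, double threshold_s)
{
    if (s.recv_time < 0)
        return true;                  // never arrived
    return (s.recv_time - s.send_time) > threshold_s;
}
]]></artwork>
</figure>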
<t>One-way Delay, First-bit to Last-bit, RFC 2679</t>
<t>This test determines if implementations register the same relative
increase in delay from one measurement to another under different
delay conditions. This test tends to cancel the sources of error which
may be present in an implementation. See Section 3.7.2 of RFC2679, and
Section 10.2 of RFC2330.</t>
<t><list style="format (%d)">
<t>Configure a path with X ms one-way constant delay, and ideally
including a low-speed link.</t>
<t>Measure one-way delay with 2 or more implementations, using
identical options and equal size small packets (e.g., 100 octet IP
payload).</t>
<t>Maintain the same path with X ms one-way delay.</t>
<t>Measure one-way delay with 2 or more implementations, using
identical options and equal size large packets (e.g., 1500 octet
IP payload).</t>
<t>Observe that the increase measured in steps 2 and 4 is
equivalent to the increase in ms expected due to the larger
serialization time for each implementation (see the sketch after
this list). Most of the measurement errors in each system should
cancel, if they are stationary.</t>
</list></t>
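<t>The expected increase follows from the difference in packet size
and the rate of the low-speed link. The sketch below shows the
arithmetic; the function name and the example link rate are
assumptions of this appendix, not requirements of RFC 2679.</t>
<figure align="center">
<artwork align="left"><![CDATA[
/* Expected increase in one-way delay (ms) when growing the
 * probe packet from small_octets to large_octets on a link
 * of rate link_bps.
 */
double serialization_increase_ms(double small_octets,
                                 double large_octets,
                                 double link_bps)
{
    return (large_octets - small_octets) * 8.0
           / link_bps * 1000.0;
}
/* Example: 100 -> 1500 octet IP payload on a 2 Mbit/s link
 * yields 1400 * 8 / 2e6 * 1000 = 5.6 ms.
 */
]]></artwork>
</figure>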
<t>One-way Delay, RFC 2679</t>
<t>This test determines if implementations register the same relative
increase in delay from one measurement to another under different
delay conditions. This test tends to cancel the sources of error which
may be present in an implementation. This test is intended to evaluate
measurements in sections 3 and 4 of RFC2679.</t>
<t><list style="format (%d)">
<t>Configure a path with X ms one-way constant delay.</t>
<t>Measure one-way delay with 2 or more implementations, using
identical options.</t>
<t>Configure the path with X+Y ms one-way delay.</t>
<t>Repeat measurements.</t>
<t>Observe that the increase measured in steps 2 and 4 is ~Y ms
for each implementation (a minimal check follows this list). Most
of the measurement errors in each system should cancel, if they
are stationary.</t>
</list></t>
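<t>A minimal check of the observation in step 5 could look as
follows; the tolerance parameter is an assumption of this sketch, as
RFC 2679 does not prescribe a comparison threshold.</t>
<figure align="center">
<artwork align="left"><![CDATA[
#include <cmath>

/* True if the increase registered between the two
 * measurements matches the configured increase Y_ms
 * within tol_ms.
 */
bool increase_matches(double delay_before_ms,
                      double delay_after_ms,
                      double Y_ms, double tol_ms)
{
    double increase = delay_after_ms - delay_before_ms;
    return std::fabs(increase - Y_ms) <= tol_ms;
}
]]></artwork>
</figure>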
<t>Error Calibration, RFC 2679</t>
<t>This is a simple check to determine if an implementation reports
the error calibration as required in Section 4.8 of RFC2679. Note that
the context (Type-P) must also be reported.</t>
</section>
<section title="Examples related to statistical tests for One-way Delay">
<t>A one-way delay measurement may pass an ADK test with a timestamp
resolution of 1 ms. The same test may fail if timestamps with a
resolution of 100 microseconds are evaluated. The implementation is
then conforming to the metric specification up to a timestamp
resolution of 1 ms.</t>
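<t>Limiting the evaluated timestamp resolution can be done by
rounding the measured delays to the coarser resolution before the
comparison. A minimal sketch, assuming delays in seconds (the
function name is hypothetical):</t>
<figure align="center">
<artwork align="left"><![CDATA[
#include <cmath>

/* Rounds a delay to a coarser timestamp resolution, e.g.,
 * res_s = 0.001 for 1 ms, before running the ADK test, so
 * that conformance can be stated up to that resolution.
 */
double quantize(double delay_s, double res_s)
{
    return std::floor(delay_s / res_s + 0.5) * res_s;
}
]]></artwork>
</figure>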
<t>Let's assume another one-way delay measurement comparison between
implementation 1, probing with a frequency of 2 probes per second,
and implementation 2, probing at a rate of 2 probes every 3 minutes.
To ensure reasonable confidence in the results, sample metrics are
calculated from at least 5 singletons per compared time interval.
This means that sample delay values are calculated for each system
for identical 6 minute intervals for the whole test duration. Per 6
minute interval, the sample metric is calculated from 720 singletons
for implementation 1 and from 6 singletons for implementation 2.
Note that if outliers are not filtered, moving averages are an
option for an evaluation too. The minimum move of an averaging
interval is three minutes in this example.</t>
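<t>The following sketch shows one way to calculate such per-interval
sample averages from the singletons of one implementation; the input
format (probe timestamps and delays) is an assumption of this
example.</t>
<figure align="center">
<artwork align="left"><![CDATA[
#include <cstddef>
#include <vector>

/* Averages the delay singletons over fixed intervals of
 * length interval_s starting at start_s, so that both
 * implementations are compared on identical intervals
 * despite their different probe rates.
 */
std::vector<double> interval_averages(
    const std::vector<double> &times_s,
    const std::vector<double> &delays,
    double start_s, double interval_s, int n_intervals)
{
    std::vector<double> sum(n_intervals, 0.0);
    std::vector<int> cnt(n_intervals, 0);
    for (std::size_t i = 0; i < times_s.size(); i++) {
        int k = (int)((times_s[i] - start_s) / interval_s);
        if (k >= 0 && k < n_intervals) {
            sum[k] += delays[i];
            cnt[k]++;
        }
    }
    std::vector<double> avg(n_intervals, 0.0);
    for (int k = 0; k < n_intervals; k++)
        if (cnt[k] > 0)
            avg[k] = sum[k] / cnt[k];
    return avg;
}
]]></artwork>
</figure>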
<t>The data in table 1 may result from measuring One-way Delay with
implementation 1 (see column Implemnt_1) and implementation 2 (see
column Implemnt_2). Each data point in the table represents a
(rounded) average of the sampled delay values per interval. The
resolution of the clock is one microsecond. The difference in the
delay values may result, e.g., from different probe packet sizes.</t>
<texttable anchor="table_example_data">
<ttcol align="center">Implemnt_1</ttcol>
<ttcol align="center">Implemnt_2</ttcol>
<ttcol align="center">Implemnt_2 - Delta_Averages</ttcol>
<c>5000</c>
<c>6549</c>
<c>4997</c>
<c>5008</c>
<c>6555</c>
<c>5003</c>
<c>5012</c>
<c>6564</c>
<c>5012</c>
<c>5015</c>
<c>6565</c>
<c>5013</c>
<c>5019</c>
<c>6568</c>
<c>5016</c>
<c>5022</c>
<c>6570</c>
<c>5018</c>
<c>5024</c>
<c>6573</c>
<c>5021</c>
<c>5026</c>
<c>6575</c>
<c>5023</c>
<c>5027</c>
<c>6577</c>
<c>5025</c>
<c>5029</c>
<c>6580</c>
<c>5028</c>
<c>5030</c>
<c>6585</c>
<c>5033</c>
<c>5032</c>
<c>6586</c>
<c>5034</c>
<c>5034</c>
<c>6587</c>
<c>5035</c>
<c>5036</c>
<c>6588</c>
<c>5036</c>
<c>5038</c>
<c>6589</c>
<c>5037</c>
<c>5039</c>
<c>6591</c>
<c>5039</c>
<c>5041</c>
<c>6592</c>
<c>5040</c>
<c>5043</c>
<c>6599</c>
<c>5047</c>
<c>5046</c>
<c>6606</c>
<c>5054</c>
<c>5054</c>
<c>6612</c>
<c>5060</c>
</texttable>
<t>Average values of sample metrics captured during identical time
intervals are compared. This excludes random differences caused by
differing probing intervals or differing temporal distance of
singletons resulting from their Poisson distributed sending times.</t>
<t>In the example, 20 values have been picked (note that at least 100
values are recommended for a single run of a real test). Data must be
ordered by ascending rank. The data of Implemnt_1 and Implemnt_2 as
shown in the first two columns of table 1 clearly fails an ADK test
with 95% confidence.</t>
<t>The results of Implemnt_2 are now reduced by the difference of the
averages of column 2 (rounded to 6581 us) and column 1 (rounded to
5029 us), which is 1552 us. The result may be found in column 3 of
table 1. Comparing column 1 and column 3 of the table by an ADK test
shows that the data contained in these columns passes an ADK test
with 95% confidence.</t>
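<t>A sketch of this mean-shift step, driving the Appendix B routine,
is shown below. It assumes the routine is wrapped in the function
compute_adk_2_sample() as shown in Appendix B, and that vec1/vec2
carry a dummy element at index 0.</t>
<figure align="center">
<artwork align="left"><![CDATA[
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

extern std::vector<double> vec1, vec2;  // Appendix B
extern double adk_result;               // Appendix B
extern double adk_criterium;            // Appendix B
void compute_adk_2_sample();            // wrapper, App. B

/* Shifts sample 2 by the difference of the sample means,
 * sorts both samples in ascending order, fills the global
 * vectors of Appendix B and runs the ADK computation.
 */
bool adk_after_mean_shift(std::vector<double> s1,
                          std::vector<double> s2)
{
    double m1 = std::accumulate(s1.begin(), s1.end(), 0.0)
                / s1.size();
    double m2 = std::accumulate(s2.begin(), s2.end(), 0.0)
                / s2.size();
    for (std::size_t i = 0; i < s2.size(); i++)
        s2[i] -= (m2 - m1);       // remove the mean offset
    std::sort(s1.begin(), s1.end());
    std::sort(s2.begin(), s2.end());
    vec1.assign(1, 0.0);          // dummy element at [0]
    vec1.insert(vec1.end(), s1.begin(), s1.end());
    vec2.assign(1, 0.0);
    vec2.insert(vec2.end(), s2.begin(), s2.end());
    compute_adk_2_sample();
    return adk_result <= adk_criterium;
}
]]></artwork>
</figure>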
<t>Comment: Extensive averaging was used in this example because of
the vastly different sampling frequencies. As a result, the
distributions compared do not exactly align with a metric in <xref
target="RFC2679"></xref>, but illustrate the ADK process
adequately.</t>
</section>
</section>
<section anchor="Appendix B" title="Anderson-Darling 2 sample C++ code">
<figure align="center" anchor=" ">
<preamble />
<artwork align="left"><![CDATA[
/* Routines for computing the Anderson-Darling 2 sample
* test statistic.
*
* Implemented based on the description in
* "Anderson-Darling K Sample Test" Heckert, Alan and
* Filliben, James, editors, Dataplot Reference Manual,
* Chapter 15 Auxiliary, NIST, 2004.
* Official reference as of 2010:
* Heckert, N. A. (2001). Dataplot website at the
* National Institute of Standards and Technology:
* http://www.itl.nist.gov/div898/software/dataplot.html/
* June 2001.
*/
#include <iostream>
#include <fstream>
#include <vector>
#include <sstream>
using namespace std;
vector<double> vec1, vec2;
double adk_result;
double adk_criterium = 1.993;
/* vec1 and vec2 to be initialised with sample 1 and
* sample 2 values in ascending order.
*/
/* example for iterating the vectors
 * for (vector<double>::iterator it = vec1.begin();
 *      it != vec1.end(); it++)
 * {
 *     cout << *it << endl;
 * }
 */
static int k, val_st_z_samp1, val_st_z_samp2,
val_eq_z_samp1, val_eq_z_samp2,
j, n_total, n_sample1, n_sample2;
static double z, sum_adk_samp1,
sum_adk_samp2, z_aux;
static double H_j, F1j, hj, F2j, denom_1_aux, denom_2_aux;
static bool next_z_sample2, equal_z_both_samples;
static int stop_loop1, stop_loop2, stop_loop3;
/* The statements below are wrapped in a function so that
 * this fragment is valid C++ (the draft listed them at
 * file scope). Call compute_adk_2_sample() after filling
 * vec1 and vec2.
 */
void compute_adk_2_sample()
{
k = 2;
n_sample1 = vec1.size() - 1;
n_sample2 = vec2.size() - 1;
// -1 because vec[0] is a dummy value
n_total = n_sample1 + n_sample2;
/* value equal to the line with a value = zj in sample 1.
* Here j=1, so the line is 1.
*/
val_eq_z_samp1 = 1;
/* value equal to the line with a value = zj in sample 2.
* Here j=1, so the line is 1.
*/
val_eq_z_samp2 = 1;
/* value equal to the last line with a value < zj
* in sample 1. Here j=1, so the line is 0.
*/
val_st_z_samp1 = 0;
/* value equal to the last line with a value < zj
* in sample 1. Here j=1, so the line is 0.
*/
val_st_z_samp2 = 0;
sum_adk_samp1 = 0;
sum_adk_samp2 = 0;
j = 1;
// as mentioned above, j=1
equal_z_both_samples = false;
next_z_sample2 = false;
//assuming the next z to be of sample 1
stop_loop1 = n_sample1 + 1;
// + 1 because vec[0] is a dummy, see n_sample1 declaration
stop_loop2 = n_sample2 + 1;
stop_loop3 = n_total + 1;
/* The required z values are calculated until all values
* of both samples have been taken into account. See the
* lines above for the stoploop values. Construct required
* to avoid a mathematical operation in the While condition
*/
while (((stop_loop1 > val_eq_z_samp1)
|| (stop_loop2 > val_eq_z_samp2)) && stop_loop3 > j)
{
if(val_eq_z_samp1 < n_sample1+1)
{
/* here, a preliminary zj value is set.
* See below how to calculate the actual zj.
*/
z = vec1[val_eq_z_samp1];
/* this while sequence calculates the number of values
* equal to z.
*/
while ((val_eq_z_samp1+1 < n_sample1)
&& z == vec1[val_eq_z_samp1+1] )
{
val_eq_z_samp1++;
}
}
else
{
val_eq_z_samp1 = 0;
val_st_z_samp1 = n_sample1;
// this should be val_eq_z_samp1 - 1 = n_sample1
}
if(val_eq_z_samp2 < n_sample2+1)
{
z_aux = vec2[val_eq_z_samp2];
/* this while sequence calculates the number of values
* equal to z_aux
*/
while ((val_eq_z_samp2+1 < n_sample2)
&& z_aux == vec2[val_eq_z_samp2+1] )
{
val_eq_z_samp2++;
}
/* the smaller of the two actual data values is picked
* as the next zj.
*/
if(z > z_aux)
{
z = z_aux;
next_z_sample2 = true;
}
else
{
if (z == z_aux)
{
equal_z_both_samples = true;
}
/* This is the case, if the last value of column1 is
* smaller than the remaining values of column2.
*/
if (val_eq_z_samp1 == 0)
{
z = z_aux;
next_z_sample2 = true;
}
}
}
else
{
val_eq_z_samp2 = 0;
val_st_z_samp2 = n_sample2;
// this should be val_eq_z_samp2 - 1 = n_sample2
}
/* in the following, sum j = 1 to L is calculated for
* sample 1 and sample 2.
*/
if (equal_z_both_samples)
{
/* hj is the number of values in the combined sample
* equal to zj
*/
hj = val_eq_z_samp1 - val_st_z_samp1
+ val_eq_z_samp2 - val_st_z_samp2;
/* H_j is the number of values in the combined sample
* smaller than zj plus one half the number of
* values in the combined sample equal to zj
* (that's hj/2).
*/
H_j = val_st_z_samp1 + val_st_z_samp2
+ hj / 2;
/* F1j is the number of values in the 1st sample
* which are less than zj plus one half the number
* of values in this sample which are equal to zj.
*/
F1j = val_st_z_samp1 + (double)
(val_eq_z_samp1 - val_st_z_samp1) / 2;
/* F2j is the number of values in the 2nd sample
* which are less than zj plus one half the number
* of values in this sample which are equal to zj.
*/
F2j = val_st_z_samp2 + (double)
(val_eq_z_samp2 - val_st_z_samp2) / 2;
/* set the line of values equal to zj to the
* actual line of the last value picked for zj.
*/
val_st_z_samp1 = val_eq_z_samp1;
/* Set the line of values equal to zj to the actual
* line of the last value picked for zjof each
* sample. This is required as data smaller than zj
* is accounted differently than values equal to zj.
*/
val_st_z_samp2 = val_eq_z_samp2;
/* next the lines of the next values z, ie. zj+1
* are addressed.
*/
val_eq_z_samp1++;
/* next the lines of the next values z, ie.
* zj+1 are addressed
*/
val_eq_z_samp2++;
}
else
{
/* the smaller z value was contained in sample 2,
* hence this value is the zj to base the following
* calculations on.
*/
if (next_z_sample2)
{
/* hj is the number of values in the combined
* sample equal to zj, in this case these are
* within sample 2 only.
*/
hj = val_eq_z_samp2 - val_st_z_samp2;
/* H_j is the number of values in the combined sample
* smaller than zj plus one half the number of
* values in the combined sample equal to zj
* (that's hj/2).
*/
H_j = val_st_z_samp1 + val_st_z_samp2
+ hj / 2;
/* F1j is the number of values in the 1st sample which
* are less than zj plus one half the number of values in
* this sample which are equal to zj.
* As val_eq_z_samp2 < val_eq_z_samp1, these are the
* val_st_z_samp1 only.
*/
F1j = val_st_z_samp1;
/* F2j is the number of values in the 2nd sample which
* are less than zj plus one half the number of values in
* this sample which are equal to zj. The latter are from
* sample 2 only in this case.
*/
F2j = val_st_z_samp2 + (double)
(val_eq_z_samp2 - val_st_z_samp2) / 2;
/* Set the line of values equal to zj to the actual line
* of the last value picked for zj of sample 2 only in
* this case.
*/
val_st_z_samp2 = val_eq_z_samp2;
/* next the line of the next value z, ie. zj+1 is
* addressed. Here, only sample 2 must be addressed.
*/
val_eq_z_samp2++;
if (val_eq_z_samp1 == 0)
{
val_eq_z_samp1 = stop_loop1;
}
}
/* the smaller z value was contained in sample 1,
* hence this value is the zj to base the following
* calculations on.
*/
else
{
/* hj is the number of values in the combined
* sample equal to zj, in this case these are
* within sample 1 only.
*/
hj = val_eq_z_samp1 - val_st_z_samp1;
/* H_j is the number of values in the combined
* sample smaller than zj plus one half the number
* of values in the combined sample equal to zj
* (that's hj/2).
*/
H_j = val_st_z_samp1 + val_st_z_samp2
+ hj / 2;
/* F1j is the number of values in the 1st sample which
 * are less than zj plus one half the number of values
 * in this sample which are equal to zj. The latter are
 * from sample 1 only in this case.
 */
F1j = val_st_z_samp1 + (double)
(val_eq_z_samp1 - val_st_z_samp1) / 2;
/* F2j is the number of values in the 2nd sample which
* are less than zj plus one half the number of values
* in this sample which are equal to zj. As
* val_eq_z_samp1 < val_eq_z_samp2, these are the
* val_st_z_samp2 only.
*/
F2j = val_st_z_samp2;
/* Set the line of values equal to zj to the actual line
* of the last value picked for zj of sample 1 only in
* this case
*/
val_st_z_samp1 = val_eq_z_samp1;
/* next the line of the next value z, ie. zj+1 is
* addressed. Here, only sample 1 must be addressed.
*/
val_eq_z_samp1++;
if (val_eq_z_samp2 == 0)
{
val_eq_z_samp2 = stop_loop2;
}
}
}
denom_1_aux = n_total * F1j - n_sample1 * H_j;
denom_2_aux = n_total * F2j - n_sample2 * H_j;
sum_adk_samp1 = sum_adk_samp1 + hj
* (denom_1_aux * denom_1_aux) /
(H_j * (n_total - H_j)
- n_total * hj / 4);
sum_adk_samp2 = sum_adk_samp2 + hj
* (denom_2_aux * denom_2_aux) /
(H_j * (n_total - H_j)
- n_total * hj / 4);
next_z_sample2 = false;
equal_z_both_samples = false;
/* index counting the z values. It is only required to
 * prevent the while loop from executing endlessly.
 */
j++;
}
// calculating the adk value is the final step.
adk_result = (double) (n_total - 1) / (n_total
* n_total * (k - 1))
* (sum_adk_samp1 / n_sample1
+ sum_adk_samp2 / n_sample2);
/* if (adk_result <= adk_criterium), the
 * adk_2_sample test is passed.
 */
}
]]></artwork>
<postamble />
</figure>
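<t>A hypothetical driver, appended to the listing above, may look as
follows. The sample values are the first five entries of columns 1
and 3 of table 1; a real test would use far more values (at least
100 are recommended).</t>
<figure align="center">
<artwork align="left"><![CDATA[
/* Fill vec1/vec2 (index 0 is a dummy, remaining values in
 * ascending order), run the computation and evaluate the
 * result against the criterium.
 */
int main()
{
    double a[] = { 0, 5000, 5008, 5012, 5015, 5019 };
    double b[] = { 0, 4997, 5003, 5012, 5013, 5016 };
    vec1.assign(a, a + 6);
    vec2.assign(b, b + 6);
    compute_adk_2_sample();
    cout << "ADK statistic: " << adk_result << endl;
    cout << (adk_result <= adk_criterium ?
             "passed" : "failed") << endl;
    return 0;
}
]]></artwork>
</figure>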
</section>
<section anchor="Appendix C"
title="A tunneling set up for remote metric implementation testing">
<t>For parties interested in testing metric compliance, it is most
convenient if all involved parties can stay in their local test
laboratories. Figure 4 shows a test configuration which may enable
remote metric compliance testing.</t>
<figure align="center" anchor="Figure 4">
<preamble />
<artwork align="left"><![CDATA[
+----+ +----+ +----+ +----+
|LC10| |LC11| ,---. |LC20| |LC21|
+----+ +----+ / \ +-------+ +----+ +----+
| V10 | V11 / \ | Tunnel| | V20 | V21
| | ( ) | Head | | |
+--------+ +------+ | | | Router|__+----------+
|Ethernet| |Tunnel| |Internet | +---B---+ |Ethernet |
|Switch |--|Head |-| | | |Switch |
+-+--+---+ |Router| | | +---+---+ +--+--+----+
|__| +--A---+ ( )--|Option.| |__|
\ / |Impair.|
Bridge \ / |Gener. | Bridge
V20 to V21 `-+-' +-------+ V10 to V11
]]></artwork>
<postamble />
</figure>
<t>LC10 and the other LCxy identify measurement clients / line
cards. V10 and the others denote VLANs. All VLANs use the same
tunnel from A to B and in the reverse direction. The remote site
VLANs are U-bridged at the local site Ethernet switch. The
measurement packets of site 1 travel tunnel A->B first, are
U-bridged at site 2 and travel tunnel B->A second. Measurement
packets of site 2 travel tunnel B->A first, are U-bridged at site 1
and travel tunnel A->B second. So all measurement packets pass the
same tunnel segments, but in a different segment order. An
experiment to validate or reject the test setup shown in figure 4
has been agreed upon between Deutsche Telekom and RIPE but has not
yet been scheduled.</t>
<t>Figure 4 includes an optional impairment generator. If this
impairment generator is inserted in the IP path between the tunnel
head end routers, it equally impacts all measurement packets and
flows. This avoids the difficulty of ensuring an identical test
setup by configuring two separate impairment generators identically
(which was another proposal to allow remote metric compliance
testing).</t>
</section>
<section anchor="Appendix D" title="Glossary">
<texttable anchor="table_glossary">
<ttcol align="left"></ttcol>
<ttcol align="left"></ttcol>
<c>ADK</c>
<c>Anderson-Darling K-Sample test, a test used to check whether two
samples have the same statistical distribution.</c>
<c>ECMP</c>
<c>Equal Cost Multipath, a load balancing mechanism evaluating MPLS
label stacks, IP addresses and ports.</c>
<c>EDF</c>
<c>The "Empirical Distribution Function" of a set of scalar
measurements is a function F(x) which for any x gives the fractional
proportion of the total measurements that were smaller than or equal
to x.</c>
<c>Metric</c>
<c>A measured quantity related to the performance and reliability of
the Internet, expressed by a value. This could be a singleton (single
value), a sample of single values or a statistic based on a sample of
singletons.</c>
<c>OWAMP</c>
<c>One-way Active Measurement Protocol, a protocol for communication
between IPPM measurement systems specified by IPPM.</c>
<c>OWD</c>
<c>One-Way Delay, a performance metric specified by IPPM.</c>
<c>Sample metric</c>
<c>A sample metric is derived from a given singleton metric by
evaluating a number of distinct instances together.</c>
<c>Singleton metric</c>
<c>A singleton metric is, in a sense, one atomic measurement of a
metric.</c>
<c>Statistical metric</c>
<c>A 'statistical' metric is derived from a given sample metric by
computing some statistic of the values defined by the singleton metric
on the sample.</c>
<c>TWAMP</c>
<c>Two-way Active Measurement Protocol, a protocol for communication
between IPPM measurement systems specified by IPPM.</c>
</texttable>
</section>
<!-- Change Log
v00 2008-10-13 RG Initial version
v00 2008-12-17 RG after internal review
v00 2009-06-25 RG own review and comments of Fardid Reza
v00 2009-07-01 RG including Fardid Rezas input and small changes
v00 2009-07-06 RG comments of Al Morton, Scott Bradner and Mike Hamilton, submitted as -00
v01 2009-10-14 RG restructured simplified new version picking up some of the ideas of Al Mortons draft
v02 2009-12-18 RG inclusion of most remaining ideas of Al Mortons, changing the appendix to contain an example on OWD and editorial improvements
v02 2010-01-20 AM review and textual changes, editorial comments of Carroll Davids
v02 2010-02-10 RG clarification on sections commented by Al Morton, addition of a figure and text on tunnels,
added contents suggested by Carroll Davids last ID version, from here on WG draft
-->
<!--v00 2010-07-01 RG Added tunnel requirements agreed with Al Morton and included results of the "validation of the IPPM metric test" thesis
RF supported by Deutsche Telekom and changes following comments by Reza.
v01 2010-10-22 RG Mainly added discussion on test set up options and remote testing across the Internet test set up (appendix C in this version).
AM Edits and proposal to determine comparison thresholds. -->
</back>
</rfc>