<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
<?rfc toc="yes"?>
<?rfc tocompact="yes"?>
<?rfc tocdepth="3"?>
<?rfc tocindent="yes"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="no"?>
<rfc category="info" docName="draft-morton-ippm-testplan-rfc2679-01"
ipr="pre5378Trust200902">
<front>
<title abbrev="Stds Track Tests RFC2679">Test Plan and Results for
Advancing RFC 2679 on the Standards Track</title>
<author fullname="Len Ciavattone" initials="L." surname="Ciavattone">
<organization>AT&amp;T Labs</organization>
<address>
<postal>
<street>200 Laurel Avenue South</street>
<city>Middletown</city>
<region>NJ</region>
<code>07748</code>
<country>USA</country>
</postal>
<phone>+1 732 420 1239</phone>
<facsimile></facsimile>
<email>lencia@att.com</email>
<uri></uri>
</address>
</author>
<author fullname="Ruediger Geib" initials="R." surname="Geib">
<organization>Deutsche Telekom</organization>
<address>
<postal>
<street>Heinrich Hertz Str. 3-7</street>
<code>64295</code>
<city>Darmstadt</city>
<region></region>
<country>Germany</country>
</postal>
<phone>+49 6151 58 12747</phone>
<email>Ruediger.Geib@telekom.de</email>
</address>
</author>
<author fullname="Al Morton" initials="A." surname="Morton">
<organization>AT&amp;T Labs</organization>
<address>
<postal>
<street>200 Laurel Avenue South</street>
<city>Middletown</city>
<region>NJ</region>
<code>07748</code>
<country>USA</country>
</postal>
<phone>+1 732 420 1571</phone>
<facsimile>+1 732 368 1192</facsimile>
<email>acmorton@att.com</email>
<uri>http://home.comcast.net/~acmacm/</uri>
</address>
</author>
<author fullname="Matthias Wieser" initials="M." surname="Wieser">
<organization>University of Applied Sciences Darmstadt</organization>
<address>
<postal>
<street>Birkenweg 8 Department EIT</street>
<code>64295</code>
<city>Darmstadt</city>
<region></region>
<country>Germany</country>
</postal>
<phone></phone>
<email>matthias.wieser@stud.h-da.de</email>
</address>
</author>
<date day="29" month="June" year="2011" />
<abstract>
<t>This memo proposes to advance a performance metric RFC along the
standards track, specifically RFC 2679 on One-way Delay Metrics.
Observing that the metric definitions themselves should be the primary
focus rather than the implementations of metrics, this memo describes
the test procedures to evaluate specific metric requirement clauses to
determine if the requirement has been interpreted and implemented as
intended. Two completely independent implementations have been tested
against the key specifications of RFC 2679.</t>
</abstract>
<note title="Requirements Language">
<t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in <xref
target="RFC2119">RFC 2119</xref>.</t>
</note>
</front>
<middle>
<section title="Introduction">
<t>The IETF (IP Performance Metrics working group, IPPM) has considered
how to advance their metrics along the standards track since 2001, with
the initial publication of Bradner/Paxson/Mankin's memo [ref to work in
progress, draft-bradner-metricstest-]. The original proposal was to
compare the results of implementations of the metrics, because the usual
procedures for advancing protocols did not appear to apply. It was found
to be difficult to achieve consensus on exactly how to compare
implementations, since there were many legitimate sources of variation
that would emerge in the results despite the best attempts to keep the
network paths equal, and because considerable variation was allowed in
the parameters (and therefore implementation) of each metric.
Flexibility in metric definitions, essential for customization and broad
appeal, made the comparison task quite difficult.</t>
<t>A renewed work effort sought to investigate ways in which the
measurement variability could be reduced and thereby simplify the
problem of comparison for equivalence.</t>
<t>There is *preliminary* consensus <xref
target="I-D.ietf-ippm-metrictest"></xref> that the metric definitions
should be the primary focus of evaluation rather than the
implementations of metrics, and equivalent results are deemed to be
evidence that the metric specifications are clear and unambiguous. This
is the metric specification equivalent of protocol interoperability. The
advancement process either produces confidence that the metric
definitions and supporting material are clearly worded and unambiguous,
OR, identifies ways in which the metric definitions should be revised to
achieve clarity.</t>
<t>The process should also permit identification of options that were
not implemented, so that they can be removed from the advancing
specification (this is an aspect more typical of protocol advancement
along the standards track).</t>
<t>This memo's purpose is to implement the current approach for <xref
target="RFC2679"></xref>. It was prepared to help progress discussions
on the topic of metric advancement, both through e-mail and at the
upcoming IPPM meeting at IETF.</t>
<t>In particular, consensus is sought on the extent of tolerable errors
when assessing equivalence in the results. In discussions, the IPPM
working group agreed that test plan and procedures should include the
threshold for determining equivalence, and this information should be
available in advance of cross-implementation comparisons. This memo
includes procedures for same-implementation comparisons to help set the
equivalence threshold.</t>
<t>Another aspect of the metric RFC advancement process is the
requirement to document the work and results. The procedures of <xref
target="RFC2026"></xref> are expanded in<xref target="RFC5657"></xref>,
including sample implementation and interoperability reports. This memo
follows the template in <xref
target="I-D.morton-ippm-advance-metrics"></xref> for the report that
accompanies the protocol action request submitted to the Area Director,
including description of the test set-up, procedures, results for each
implementation and conclusions.</t>
<section title="RFC 2679 Coverage">
<t>This plan, in its first draft version, does not cover all critical
requirements and sections of <xref target="RFC2679"></xref>. Material
will be added as it is "discovered" (not all requirements use
requirements language).</t>
</section>
</section>
<section title="A Definition-centric metric advancement process">
<t>The process described in Section 3.5 of <xref
target="I-D.ietf-ippm-metrictest"></xref> takes as a first principle
that the metric definitions, embodied in the text of the RFCs, are the
objects that require evaluation and possible revision in order to
advance to the next step on the standards track.</t>
<t>IF two implementations do not measure an equivalent singleton or
sample, or produce an equivalent statistic,</t>
<t>AND sources of measurement error do not adequately explain the lack
of agreement,</t>
<t>THEN the details of each implementation should be audited along with
the exact definition text, to determine if there is a lack of clarity
that has caused the implementations to vary in a way that affects the
correspondence of the results.</t>
<t>IF there was a lack of clarity or multiple legitimate interpretations
of the definition text,</t>
<t>THEN the text should be modified and the resulting memo proposed for
consensus and advancement along the standards track.</t>
<t>Finally, all the findings MUST be documented in a report that can
support advancement on the standards track, similar to those described
in <xref target="RFC5657"></xref>. The list of measurement devices used
in testing satisfies the implementation requirement, while the test
results provide information on the quality of each specification in the
metric RFC (the surrogate for feature interoperability).</t>
<t>The figure below illustrates this process:</t>
<t><figure>
<preamble></preamble>
<artwork><![CDATA[ ,---.
/ \
( Start )
\ / Implementations
`-+-' +-------+
| /| 1 `.
+---+----+ / +-------+ `.-----------+ ,-------.
| RFC | / |Check for | ,' was RFC `. YES
| | / |Equivalence..... clause x -------+
| |/ +-------+ |under | `. clear? ,' |
| Metric \.....| 2 ....relevant | `---+---' +----+---+
| Metric |\ +-------+ |identical | No | |Report |
| Metric | \ |network | +---+---. |results+|
| ... | \ |conditions | |Modify | |Advance |
| | \ +-------+ | | |Spec +----+ RFC |
+--------+ \| n |.'+-----------+ +-------+ |request?|
+-------+ +--------+
]]></artwork>
<postamble></postamble>
</figure></t>
</section>
<section title="Test configuration">
<t>One metric implementation used was NetProbe version 5.8.5 (an
earlier version is used in the WIPM system and deployed world-wide).
NetProbe uses UDP packets of variable size, and can produce test streams
with Periodic <xref target="RFC3432"></xref> or Poisson <xref
target="RFC2330"></xref> sample distributions.</t>
<t>The other metric implementation used was Perfas+ version 3.1,
developed by Deutsche Telekom. Perfas+ uses UDP unicast packets of
variable size (but also supports TCP and multicast). Test streams with
periodic, Poisson or uniform sample distributions may be used.</t>
<t>Figure 2 shows a view of the test path as each Implementation's test
flows pass through the Internet and the L2TPv3 tunnel IDs (1 and 2),
based on Figure 1 of <xref
target="I-D.ietf-ippm-metrictest"></xref>.</t>
<t><figure align="center" anchor="L2TPv3_tunnel">
<preamble />
<artwork align="center"><![CDATA[ +----+ +----+ +----+ +----+
|Imp1| |Imp1| ,---. |Imp2| |Imp2|
+----+ +----+ / \ +-------+ +----+ +----+
| V100 | V200 / \ | Tunnel| | V300 | V400
| | ( ) | Head | | |
+--------+ +------+ | |__| Router| +----------+
|Ethernet| |Tunnel| |Internet | +---B---+ |Ethernet |
|Switch |--|Head |-| | | |Switch |
+-+--+---+ |Router| | | +---+---+--+--+--+----+
|__| +--A---+ ( ) |Network| |__|
\ / |Emulat.|
U-turn \ / |"netem"| U-turn
V300 to V400 `-+-' +-------+ V100 to V200
Implementations ,---. +--------+
+~~~~~~~~~~~/ \~~~~~~| Remote |
+------->-----F2->-| / \ |->---. |
| +---------+ | Tunnel ( ) | | |
| | transmit|-F1->-| ID 1 ( ) |->. | |
| | Imp 1 | +~~~~~~~~~| |~~~~| | | |
| | receive |-<--+ ( ) | F1 F2 |
| +---------+ | |Internet | | | | |
*-------<-----+ F1 | | | | | |
+---------+ | | +~~~~~~~~~| |~~~~| | | |
| transmit|-* *-| | | |<-* | |
| Imp 2 | | Tunnel ( ) | | |
| receive |-<-F2-| ID 2 \ / |<----* |
+---------+ +~~~~~~~~~~~\ /~~~~~~| Switch |
`-+-' +--------+
]]></artwork>
<postamble>Illustrations of a test setup with a bi-directional
tunnel. The upper diagram emphasizes the VLAN connectivity and
geographical location. The lower diagram shows example flows
traveling between two measurement implementations (for simplicity,
only two flows are shown).</postamble>
</figure></t>
<t>The testing employs the Layer 2 Tunnel Protocol, version 3 (L2TPv3)
<xref target="RFC3931"></xref> tunnel between test sites on the
Internet. The tunnel IP and L2TPv3 headers are intended to conceal the
test equipment addresses and ports from hash functions that would tend
to spread different test streams across parallel network resources, with
likely variation in performance as a result.</t>
<t>At each end of the tunnel, one pair of VLANs encapsulated in the
tunnel is looped back so that test traffic is returned to each test
site. Thus, test streams traverse the L2TP tunnel twice, but appear to
be one-way tests from the test equipment point of view.</t>
<t>The network emulator is a host running Fedora 14 Linux
[http://fedoraproject.org/] with IP forwarding enabled and the "netem"
Network emulator as part of the Fedora Kernel 2.6.35.11
[http://www.linuxfoundation.org/collaborate/workgroups/networking/netem]
loaded and operating. Connectivity across the netem/Fedora host was
accomplished by bridging Ethernet VLAN interfaces together with "brctl"
commands (e.g., eth1.100 &lt;-&gt; eth2.100). The netem emulator was
activated on one interface (eth1) and operated only on test streams
traveling in one direction. In some tests, independent netem instances
operated separately on each VLAN.</t>
<t>The links between the netem emulator host and router and switch were
found to be 100baseTx-HD (100Mbps half duplex) as reported by
"mii-tool"when the testing was complete. Use of Half Duplex was not
intended, but probably added a small amount of delay variation that
could have been avoided in full duplex mode.</t>
<t>Each individual test was run with common packet rates (1 pps, 10
pps), Poisson/Periodic distributions, and IP packet sizes of 64, 340, and 500
Bytes.</t>
<t>For these tests, a stream of at least 300 packets was sent from
Source to Destination in each implementation. Periodic streams (as per
<xref target="RFC3432"></xref>) with 1 second spacing were used, except
as noted.</t>
<t>With the L2TPv3 tunnel in use, the metric name for the testing
configured here (with respect to the IP header exposed to Internet
processing) is:</t>
<t>Type-IP-protocol-115-One-way-Delay-&lt;StreamType&gt;-Stream</t>
<t>With (Section 4.2 of <xref target="RFC2679"></xref>) Metric
Parameters:</t>
<t>+ Src, the IP address of a host (12.3.167.16 or 193.159.144.8)</t>
<t>+ Dst, the IP address of a host (193.159.144.8 or 12.3.167.16)</t>
<t>+ T0, a time</t>
<t>+ Tf, a time</t>
<t>+ lambda, a rate in reciprocal seconds</t>
<t>+ Thresh, a maximum waiting time in seconds (see Section 3.8.2 of
<xref target="RFC2679"></xref>)</t>
<t>And (Section 4.3 of <xref target="RFC2679"></xref>) Metric Units: A
sequence of pairs; the elements of each pair are:</t>
<t>+ T, a time, and</t>
<t>+ dT, either a real number or an undefined number of seconds.</t>
<t>The values of T in the sequence are monotonic increasing. Note that T
would be a valid parameter to Type-P-One-way-Delay, and that dT would be
a valid value of Type-P-One-way-Delay.</t>
<t>Also, Section 3.8.4 of <xref target="RFC2679"></xref> recommends that
the path SHOULD be reported. In this test set-up, most of the path
details will be concealed from the implementations by the L2TPv3
tunnels; thus, a more informative path traceroute can be conducted by
the routers at each location.</t>
<t>When NetProbe is used in production, a traceroute is conducted in
parallel with, and at the outset of, measurements.</t>
<t>Perfas+ does not support traceroute.</t>
<t><figure>
<preamble></preamble>
<artwork><![CDATA[IPLGW#traceroute 193.159.144.8
Type escape sequence to abort.
Tracing the route to 193.159.144.8
1 12.126.218.245 [AS 7018] 0 msec 0 msec 4 msec
2 cr84.n54ny.ip.att.net (12.123.2.158) [AS 7018] 4 msec 4 msec
cr83.n54ny.ip.att.net (12.123.2.26) [AS 7018] 4 msec
3 cr1.n54ny.ip.att.net (12.122.105.49) [AS 7018] 4 msec
cr2.n54ny.ip.att.net (12.122.115.93) [AS 7018] 0 msec
cr1.n54ny.ip.att.net (12.122.105.49) [AS 7018] 0 msec
4 n54ny02jt.ip.att.net (12.122.80.225) [AS 7018] 4 msec 0 msec
n54ny02jt.ip.att.net (12.122.80.237) [AS 7018] 4 msec
5 192.205.34.182 [AS 7018] 0 msec
192.205.34.150 [AS 7018] 0 msec
192.205.34.182 [AS 7018] 4 msec
6 da-rg12-i.DA.DE.NET.DTAG.DE (62.154.1.30) [AS 3320] 88 msec 88 msec
88 msec
7 217.89.29.62 [AS 3320] 88 msec 88 msec 88 msec
8 217.89.29.55 [AS 3320] 88 msec 88 msec 88 msec
9 * * *
]]></artwork>
<postamble></postamble>
</figure></t>
<t>It was only possible to conduct the traceroute for the measured path
on one of the tunnel-head routers (the normal trace facilities of the
measurement systems are confounded by the L2TPv3 tunnel
encapsulation).</t>
</section>
<section title="Error Calibration, RFC 2679">
<t>An implementation is required to report on its error calibration in
Section 3.8 of <xref target="RFC2679"></xref> (also required in Section
4.8 for sample metrics). Sections 3.6, 3.7, and 3.8 of <xref
target="RFC2679"></xref> give the detailed formulation of the errors and
uncertainties for calibration. In summary, Section 3.7.1 of <xref
target="RFC2679"></xref> describes the total time-varying uncertainty
as:</t>
<t>Esynch(t) + Rsource + Rdest</t>
<t>where:</t>
<t>Esynch(t) denotes an upper bound on the magnitude of clock
synchronization uncertainty.</t>
<t>Rsource and Rdest denote the resolution of the source clock and the
destination clock, respectively.</t>
<t>Further, Section 3.7.2 of <xref target="RFC2679"></xref> describes
the total wire-time uncertainty as</t>
<t>Hsource + Hdest</t>
<t>referring to the upper bounds on host-time to wire-time for source
and destination, respectively.</t>
<t>Section 3.7.3 of <xref target="RFC2679"></xref> describes a test with
small packets over an isolated minimal network where the results can be
used to estimate systematic and random components of the sum of the
above errors or uncertainties. In a test with hundreds of singletons,
the median is the systematic error and when the median is subtracted
from all singletons, the remaining variability is the random error.</t>
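<t>As a minimal sketch of this decomposition (in the "R" software
environment used elsewhere in this memo; the vector owd and its values
are hypothetical), the systematic and random error components could be
estimated as follows:<figure>
<preamble></preamble>
<artwork><![CDATA[# owd: one-way delay singletons (in microseconds) from a calibration
# test over an isolated, minimal network (hypothetical values)
owd <- c(110, 95, 102, 130, 99, 108, 121, 89, 105, 117)

systematic.error <- median(owd)        # systematic component (Sec. 3.7.3)
random.error <- owd - systematic.error # per-singleton random component

systematic.error                       # e.g., 106.5 us here
summary(random.error)                  # spread of the random component
max(abs(random.error))                 # bound on the random error
]]></artwork>
<postamble>Hypothetical estimation of systematic and random error
components from calibration singletons</postamble>
</figure></t>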
<t>The test context, or Type-P of the test packets, must also be
reported, as required in Section 3.8 of <xref target="RFC2679"></xref>
and all metrics defined there. Type-P is defined in Section 13 of <xref
target="RFC2330"></xref> (as are many terms used below).</t>
<section title="NetProbe Error and Type-P">
<t>Type-P for this test was IP-UDP with Best Effort DSCP. These
headers were encapsulated according to the L2TPv3 specifications <xref
target="RFC3931"></xref>, and thus may not influence the treatment
received as the packets traversed the Internet.</t>
<t>In general, NetProbe error is dependent on the specific version and
installation details.</t>
<t>NetProbe operates using host time above the UDP layer, which is
different from the wire-time preferred in <xref
target="RFC2330"></xref>, but can be identified as a source of error
according to Section 3.7.2 of <xref target="RFC2679"></xref>.</t>
<t>Accuracy of NetProbe measurements is usually limited by NTP
synchronization performance (which is typically taken as ~+/-1ms error
or greater), although the installation used in this testing often
exhibits errors much less than typical for NTP. The primary stratum 1
NTP server is closely located on a sparsely utilized network
management LAN, thus it avoids many concerns raised in Section 10
of<xref target="RFC2330"></xref> (in fact, smooth adjustment,
long-term drift analysis and compensation, and infrequent adjustment
all lead to stability during measurement intervals, the main
concern).</t>
<t>The resolution of the reported results is 1us (us = microsecond) in
the version of NetProbe tested here, which contributes to at least
+/-1us error.</t>
<t>NetProbe implements a time-keeping sanity check on sending and
receiving time-stamping processes. When a significant process
interruption takes place, individual test packets are flagged as
possibly containing unusual time errors, and are excluded from the
sample used for all "time" metrics.</t>
<t>We performed a NetProbe calibration of the type described in
Section 3.7.3 of <xref target="RFC2679"></xref>, using 64 Byte packets
over a cross-connect cable. The results estimate systematic and random
components of the sum of the Hsource + Hdest errors or uncertainties.
In a test with 300 singletons conducted over 30 seconds (periodic
sample with 100ms spacing), the median is the systematic error and the
remaining variability is the random error. One set of results is
tabulated below:</t>
<t><figure>
<preamble>(Results from the "R" software environment for
statistical computing and graphics - http://www.r-project.org/
)</preamble>
<artwork><![CDATA[> summary(XD4CAL)
CAL1 CAL2 CAL3
Min. : 89.0 Min. : 68.00 Min. : 54.00
1st Qu.: 99.0 1st Qu.: 77.00 1st Qu.: 63.00
Median :110.0 Median : 79.00 Median : 65.00
Mean :116.8 Mean : 83.74 Mean : 69.65
3rd Qu.:127.0 3rd Qu.: 88.00 3rd Qu.: 74.00
Max. :205.0 Max. :177.00 Max. :163.00
> ]]></artwork>
<postamble>NetProbe Calibration with Cross-Connect Cable, one-way
delay values in microseconds (us)</postamble>
</figure></t>
<t>The median or systematic error can be as high as 110 us, and the
range of the random error is also on the order of 116 us for all
streams.</t>
<t>Also, anticipating the Anderson-Darling K-sample (ADK) comparisons
to follow, we corrected the CAL2 values for the difference between
the means of CAL2 and CAL3 (as specified in <xref
target="I-D.ietf-ippm-metrictest"></xref>), and found strong support
for the Null Hypothesis that the samples are from the same
distribution (resolution of 1 us and alpha equal to 0.05 and 0.01).<figure>
<preamble></preamble>
<artwork><![CDATA[> XD4CVCAL2 <- XD4CAL$CAL2 - (mean(XD4CAL$CAL2)-mean(XD4CAL$CAL3))
> boxplot(XD4CVCAL2,XD4CAL$CAL3)
> XD4CV2_ADK <- adk.test(XD4CVCAL2, XD4CAL$CAL3)
> XD4CV2_ADK
Anderson-Darling k-sample test.
Number of samples: 2
Sample sizes: 300 300
Total number of values: 600
Number of unique values: 97
Mean of Anderson Darling Criterion: 1
Standard deviation of Anderson Darling Criterion: 0.75896
T = (Anderson Darling Criterion - mean)/sigma
Null Hypothesis: All samples come from a common population.
t.obs P-value extrapolation
not adj. for ties 0.71734 0.17042 0
adj. for ties -0.39553 0.44589 1
> ]]></artwork>
<postamble></postamble>
</figure></t>
</section>
<section title="Perfas Error and Type-P">
<t>Perfas+ is configured to use GPS synchronization and uses NTP
synchronization as a fall-back or default. GPS synchronization worked
throughout this test, with the exception of the calibration reported
here (where one instance was NTP-synchronized only). The time stamp
accuracy is typically 0.1 ms.</t>
<t>The resolution of the results reported by Perfas+ is 1us (us =
microsecond) in the version tested here, which contributes to at least
+/-1us error.</t>
<t><figure>
<preamble></preamble>
<artwork><![CDATA[Port 5001 5002 5003
Min. -227 -226 294
Median -169 -167 323
Mean -159 -157 335
Max. 6 -52 376
s 102 102 93]]></artwork>
<postamble>Perfas Calibration with Cross-Connect Cable, one-way
delay values in microseconds (us)</postamble>
</figure></t>
<t>The median or systematic error can be as high as 323 us, and the
range of the random error is also less than 232 us for all
streams.</t>
</section>
</section>
<section title="Pre-determined Limits on Equivalence">
<t>In this section, we provide the numerical limits on comparisons
between implementations, in order to declare that the results are
equivalent and therefore, the tested specification is clear.</t>
<t>A key point is that the allowable errors, corrections, and confidence
levels only need to be sufficient to detect mis-interpretation of the
tested specification resulting in diverging implementations.</t>
<t>Also, the allowable error must be sufficient to compensate for
measured path differences. It was simply not possible to measure fully
identical paths in the VLAN-loopback test configuration used, and this
practical compromise must be taken into account.</t>
<t>For Anderson-Darling K-sample (ADK) comparisons, the required
confidence factor for the cross-implementation comparisons SHALL be the
smallest of:</t>
<t><list style="symbols">
<t>0.95 confidence factor at 1ms resolution, or</t>
<t>the smallest confidence factor (in combination with resolution)
of the two same-implementation comparisons for the same test
conditions.</t>
</list>A constant time accuracy error of as much as +/-0.5ms MAY be
removed from one implementation's distributions (all singletons) before
the ADK comparison is conducted.</t>
<t>A constant propagation delay error (due to use of different sub-nets
between the switch and measurement devices at each location) of as much
as +2ms MAY be removed from one implementation's distributions (all
singletons) before the ADK comparison is conducted.</t>
<t>For comparisons involving the mean of a sample or other central
statistics, the limits on both the time accuracy error and the
propagation delay error constants given above also apply.</t>
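<t>A minimal sketch of how these limits might be applied before a
cross-implementation comparison follows, using the adk.test function
shown in Section 4.1; the sample vectors, the offset, and the
quantization step are hypothetical:<figure>
<preamble></preamble>
<artwork><![CDATA[library(adk)   # Anderson-Darling k-sample test, as used in Section 4.1

# imp1, imp2: one-way delay samples (microseconds) from two
# implementations under identical conditions (hypothetical values)
imp1 <- rnorm(300, mean = 100500, sd = 2000)
imp2 <- rnorm(300, mean = 101200, sd = 2000)

# Remove a constant time-accuracy error, limited to +/-500 us above
offset <- mean(imp2) - mean(imp1)
imp2.corr <- imp2 - max(min(offset, 500), -500)

# Quantize both samples to the 1 ms comparison resolution, then test
# against the 0.95 confidence factor
res <- 1000                            # resolution in microseconds
adk.test(round(imp1 / res) * res, round(imp2.corr / res) * res)
]]></artwork>
<postamble></postamble>
</figure></t>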
</section>
<section title="Tests to evaluate RFC 2679 Specifications">
<t>This section describes some results from real-world (cross-Internet)
tests with measurement devices implementing IPPM metrics and a network
emulator to create relevant conditions, to determine whether the metric
definitions were interpreted consistently by implementors.</t>
<t>The procedures are slightly modified from the original procedures
contained in Appendix A.1 of <xref
target="I-D.ietf-ippm-metrictest"></xref>. The modifications include the
use of the mean statistic for comparisons.</t>
<t>Note that there are only five instances of the requirement term
"MUST" in <xref target="RFC2679"></xref> outside of the boilerplate and
<xref target="RFC2119"></xref> reference.</t>
<section title="One-way Delay, ADK Sample Comparison - Same & Cross Implementation">
<t>This test determines if implementations produce results that appear
to come from a common delay distribution, as an overall evaluation of
Section 4 of <xref target="RFC2679"></xref>, "A Definition for Samples
of One-way Delay". Same-implementation comparison results help to set
the threshold of equivalence that will be applied to
cross-implementation comparisons.</t>
<t>This test is intended to evaluate measurements in sections 3 and 4
of <xref target="RFC2679"></xref>.</t>
<t>By testing the extent to which the distributions of one-way delay
singletons from two implementations of <xref target="RFC2679"></xref>
appear to be from the same distribution, we economize on comparisons,
because comparing a set of individual summary statistics (as defined
in Section 5 of <xref target="RFC2679"></xref>) would require another
set of individual evaluations of equivalence. Instead, we can simply
check which statistics were implemented, and report on those
facts.</t>
<t><list style="numbers">
<t>Configure an L2TPv3 path between test sites, and each pair of
measurement devices to operate tests in their designated pair of
VLANs.</t>
<t>Measure a sample of one-way delay singletons with 2 or more
implementations, using identical options and network emulator
settings (if used).</t>
<t>Measure a sample of one-way delay singletons with *four*
instances of the *same* implementations, using identical options,
noting that connectivity differences SHOULD be the same as for the
cross implementation testing.</t>
<t>Apply the ADK comparison procedures (see Appendix C of <xref
target="I-D.ietf-ippm-metrictest"></xref>) and determine the
resolution and confidence factor for distribution equivalence of
each same-implementation comparison and each cross-implementation
comparison (a sketch of this step appears after this list).</t>
<t>Take the coarsest resolution and confidence factor for
distribution equivalence from the same-implementation pairs, or
the limit defined in Section 5 above, as a limit on the
equivalence threshold for these experimental conditions.</t>
<t>Apply constant correction factors to all singletons of the
sample distributions, as described and limited in Section 5
above.</t>
<t>Compare the cross-implementation ADK performance with the
equivalence threshold determined in step 5 to determine if
equivalence can be declared.</t>
</list></t>
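<t>The pairwise comparisons of step 4 can be automated; the following
sketch (with hypothetical sample vectors, again using the adk.test
function shown in Section 4.1) produces the kind of pairwise
same-implementation matrix reported in the next two
subsections:<figure>
<preamble></preamble>
<artwork><![CDATA[library(adk)

# streams: one-way delay samples from four instances of the same
# implementation (hypothetical values)
streams <- list(s1 = rnorm(300, 1e5, 2e3), s2 = rnorm(300, 1e5, 2e3),
                sA = rnorm(300, 1e5, 2e3), sB = rnorm(300, 1e5, 2e3))

# Run the ADK test on every pair of streams and print each result
for (i in 1:(length(streams) - 1)) {
  for (j in (i + 1):length(streams)) {
    cat("\n===", names(streams)[i], "vs", names(streams)[j], "===\n")
    print(adk.test(streams[[i]], streams[[j]]))
  }
}
]]></artwork>
<postamble></postamble>
</figure></t>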
<t>The common parameters used for tests in this section are:</t>
<t><list style="symbols">
<t>IP header + payload = 64 octets</t>
<t>Periodic sampling at 1 packet per second</t>
<t>Test duration = 300 seconds (March 29)</t>
</list>The netem emulator was set for 100ms average delay, with
uniform delay variation of +/-50ms. In this experiment, the netem
emulator was configured to operate independently on each VLAN and thus
the emulator itself is a potential source of error when comparing
streams that traverse the test path in different directions.</t>
<t>In the result analysis of this section:</t>
<t><list style="symbols">
<t>All comparisons used 1 microsecond resolution.</t>
<t>No Correction Factors were applied.</t>
<t>The 0.95 confidence factor (1.960 for paired stream comparison)
was used.</t>
</list></t>
<section title="NetProbe Same-implementation results">
<t>A single same-implementation comparison fails the ADK criterion
(s1 &lt;-&gt; sB). We note that these streams traversed the test
path in opposite directions, making live network factors a possible
explanation for the difference.</t>
<t>All other pair comparisons pass the ADK criterion.</t>
<t><figure title="NetProbe ADK Results for same-implementation">
<preamble></preamble>
<artwork align="center"><![CDATA[+------------------------------------------------------+
| | | | |
| ti.obs (P) | s1 | s2 | sA |
| | | | |
.............|.............|.............|.............|
| | | | |
| s2 | 0.25 (0.28) | | |
| | | | |
...........................|.............|.............|
| | | | |
| sA | 0.60 (0.19) |-0.80 (0.57) | |
| | | | |
...........................|.............|.............|
| | | | |
| sB | 2.64 (0.03) | 0.07 (0.31) |-0.52 (0.48) |
| | | | |
+------------+-------------+-------------+-------------+ ]]></artwork>
<postamble></postamble>
</figure></t>
<t></t>
</section>
<section title="Perfas Same-implementation results">
<t>All pair comparisons pass the ADK criterion.</t>
<t><figure title="Perfas ADK Results for same-implementation">
<preamble></preamble>
<artwork align="center"><![CDATA[+------------------------------------------------------+
| | | | |
| ti.obs (P) | p1 | p2 | p3 |
| | | | |
.............|.............|.............|.............|
| | | | |
| p2 | 0.06 (0.32) | | |
| | | | |
.........................................|.............|
| | | | |
| p3 | 1.09 (0.12) | 0.37 (0.24) | |
| | | | |
...........................|.............|.............|
| | | | |
| p4 |-0.81 (0.57) |-0.13 (0.37) | 1.36 (0.09) |
| | | | |
+------------+-------------+-------------+-------------+]]></artwork>
<postamble></postamble>
</figure></t>
<t></t>
</section>
<section title="One-way Delay, Cross-Implementation ADK Comparison">
<t>The cross-implementation results are compared using a combined
ADK analysis [ref], where all NetProbe results are compared with all
Perfas results after testing that the combined same-implementation
results pass the ADK criterion.</t>
<t>When 4 (same) samples are compared, the ADK criterion for 0.95
confidence is 1.915, and when all 8 (cross) samples are compared it
is 1.85.</t>
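<t>As a sketch of how such a combined analysis might be invoked (the
data-set objects and values are hypothetical, and the ad.test.combined
function of the "kSamples" R package is assumed here as a successor of
the adk package used elsewhere in this memo), the call takes one list
of sample vectors per implementation:<figure>
<preamble></preamble>
<artwork><![CDATA[library(kSamples)

# Hypothetical one-way delay sample vectors, one list per implementation
netprobe <- list(rnorm(299, 1e5, 2e3), rnorm(297, 1e5, 2e3),
                 rnorm(298, 1e5, 2e3), rnorm(300, 1e5, 2e3))
perfas   <- list(rnorm(300, 1e5, 2e3), rnorm(300, 1e5, 2e3),
                 rnorm(298, 1e5, 2e3), rnorm(300, 1e5, 2e3))

# Combined Anderson-Darling k-sample test across both data sets
ad.test.combined(netprobe, perfas)
]]></artwork>
<postamble></postamble>
</figure>The output of the combined analysis for this memo is
reproduced below.</t>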
<t><figure>
<preamble></preamble>
<artwork><![CDATA[Combination of Anderson-Darling K-Sample Tests.
Sample sizes within each data set:
Data set 1 : 299 297 298 300 (NetProbe)
Data set 2 : 300 300 298 300 (Perfas)
Total sample size per data set: 1194 1198
Number of unique values per data set: 1188 1192
...
Null Hypothesis:
All samples within a data set come from a common distribution.
The common distribution may change between data sets.
NetProbe ti.obs P-value extrapolation
not adj. for ties 0.64999 0.21355 0
adj. for ties 0.64833 0.21392 0
Perfas
not adj. for ties 0.55968 0.23442 0
adj. for ties 0.55840 0.23473 0
Combined Anderson-Darling Criterion:
tc.obs P-value extrapolation
not adj. for ties 0.85537 0.17967 0
adj. for ties 0.85329 0.18010 0
]]></artwork>
<postamble></postamble>
</figure>The combined same-implementation samples and the combined
cross-implementation comparison all pass the ADK criteria at
P>=0.18 and support the Null Hypothesis (both data sets come from
a common distribution).</t>
<t>We also see that the paired ADK comparisons are rather critical.
Although the NetProbe s1-sB comparison failed, the combined data set
from 4 streams passed the ADK criterion easily.</t>
</section>
<section title="Conclusions on the ADK Results for One-way Delay">
<t>Similar testing was repeated many times in the months of March
and April 2011. There were many experiments where a single test
stream from NetProbe or Perfas proved to be different from the
others in paired comparisons (even same-implementation comparisons).
When the outlier stream was removed from the comparison, the remaining
streams passed the combined ADK criterion. Also, the application of correction
factors resulted in higher comparison success.</t>
<t>We conclude that the two implementations are capable of producing
equivalent one-way delay distributions based on their interpretation
of <xref target="RFC2679"></xref> .</t>
</section>
</section>
<section title="One-way Delay, Loss threshold, RFC 2679">
<t>This test determines if implementations use the same configured
maximum waiting time delay from one measurement to another under
different delay conditions, and correctly declare packets arriving in
excess of the waiting time threshold as lost.</t>
<t>See Section 3.5 of <xref target="RFC2679"></xref>, 3rd bullet point
and also Section 3.8.2 of <xref target="RFC2679"></xref>.</t>
<t><list style="numbers">
<t>configure an L2TPv3 path between test sites, and each pair of
measurement devices to operate tests in their designated pair of
VLANs.</t>
<t>configure the network emulator to add 1.0 sec one-way constant
delay in one direction of transmission.</t>
<t>measure (average) one-way delay with 2 or more implementations,
using identical waiting time thresholds (Thresh) for loss set at 3
seconds.</t>
<t>configure the network emulator to add 3 sec one-way constant
delay in one direction of transmission, equivalent to 2 seconds of
additional one-way delay (or change the path delay while the test is
in progress, once there are sufficient packets at the first delay
setting)</t>
<t>repeat/continue measurements</t>
<t>observe that the increase measured in step 5 caused all packets
with 2 sec additional delay to be declared lost, and that all
packets that arrive successfully in step 3 are assigned a valid
one-way delay.</t>
</list></t>
<t>The common parameters used for tests in this section are:</t>
<t><list style="symbols">
<t>IP header + payload = 64 octets</t>
<t>Poisson sampling at lambda = 1 packet per second</t>
<t>Test duration = 900 seconds total (March 21)</t>
</list>The netem emulator was set to add constant delays as
specified in the procedure above.</t>
<section title="NetProbe results for Loss Threshold">
<t>In NetProbe, the Loss Threshold is implemented uniformly over all
packets as a post-processing routine. With the Loss Threshold set at
3 seconds, all packets with one-way delay >3 seconds are marked
"Lost" and included in the Lost Packet list with their transmission
time (as required in Section 3.3 of <xref target="RFC2680"></xref>).
This resulted in 342 packets designated as lost in one of the test
streams (with average delay = 3.091 sec).</t>
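<t>A minimal sketch of such a post-processing step (in R, with
hypothetical values) stores each singleton as a (T, dT) pair, declares
any packet whose delay exceeds Thresh to be lost, and retains its
transmission time:<figure>
<preamble></preamble>
<artwork><![CDATA[# owd.sample: (T, dT) pairs; dT in seconds (hypothetical values)
owd.sample <- data.frame(
  T  = 1:10,
  dT = c(0.09, 0.11, 3.09, 0.10, 3.12, 0.08, 0.11, 3.05, 0.09, 0.10))

Thresh <- 3.0                         # maximum waiting time, seconds

# Packets exceeding the waiting time are declared lost; their delay
# becomes undefined (NA), but the transmission time T is retained.
owd.sample$lost <- owd.sample$dT > Thresh
owd.sample$dT[owd.sample$lost] <- NA

sum(owd.sample$lost)                  # packets declared lost
mean(owd.sample$dT, na.rm = TRUE)     # average delay of arrived packets
]]></artwork>
<postamble></postamble>
</figure></t>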
</section>
<section title="Perfas Results for Loss Threshold">
<t>Perfas uses a fixed Loss Threshold which was not adjustable
during this study. The Loss Threshold is approximately one minute,
and emulation of a delay of this size was not attempted. However, it
is possible to implement any delay threshold desired with a
post-processing routine and subsequent analysis. Using this method,
195 packets would be declared lost (with average delay = 3.091
sec).</t>
</section>
<section title="Conclusions for Loss Threshold">
<t>Both implementations assume that any constant delay value desired
can be used as the Loss Threshold, since all delays are stored as a
pair &lt;Time, Delay&gt; as required in <xref
target="RFC2679"></xref>. This is a simple way to enforce the
constant loss threshold envisioned in <xref target="RFC2679"></xref>
(see specific section references above). We take the position that
the assumption of post-processing is compliant, and that the text of
the RFC should be revised slightly to include this point.</t>
</section>
</section>
<section title="One-way Delay, First-bit to Last bit, RFC 2679">
<t>This test determines if implementations register the same relative
change in delay from one packet size to another, indicating that the
first-to-last time-stamping convention has been followed. This test
tends to cancel the sources of error which may be present in an
implementation.</t>
<t>See Section 3.7.2 of <xref target="RFC2679"></xref>, and Section
10.2 of <xref target="RFC2330"></xref>.</t>
<t><list style="numbers">
<t>configure an L2TPv3 path between test sites, and each pair of
measurement devices to operate tests in their designated pair of
VLANs, and ideally including a low-speed link (it was not possible
to change the link configuration during testing, so the lowest
speed link present was the basis for serialization time
comparisons).</t>
<t>measure (average) one-way delay with 2 or more implementations,
using identical options and equal size small packets (64 octet IP
header and payload)</t>
<t>maintain the same path with additional emulated 100 ms one-way
delay</t>
<t>measure (average) one-way delay with 2 or more implementations,
using identical options and equal size large packets (500 octet IP
header and payload)</t>
<t>observe that the increase measured between steps 2 and 4 is
equivalent to the increase in ms expected due to the larger
serialization time for each implementation. Most of the
measurement errors in each system should cancel, if they are
stationary.</t>
</list></t>
<t>The common parameters used for tests in this section are:</t>
<t><list style="symbols">
<t>IP header + payload = 64 octets</t>
<t>Periodic sampling at 1 packet per second</t>
<t>Test duration = 300 seconds total (April 12)</t>
</list>The netem emulator was set to add constant 100ms delay.</t>
<section title="NetProbe and Perfas Results for Serialization">
<t>When the IP header + payload size was increased from 64 octets to
500 octets, there was a delay increase observed.</t>
<t><figure>
<preamble></preamble>
<artwork><![CDATA[Mean Delays in us
NetProbe
Payload s1 s2 sA sB
500 190893 191179 190892 190971
64 189642 189785 189747 189467
Diff 1251 1394 1145 1505
Perfas
Payload p1 p2 p3 p4
500 190908 190911 191126 190709
64 189706 189752 189763 190220
Diff 1202 1159 1363 489
]]></artwork>
<postamble>Serialization tests, all values in
microseconds</postamble>
</figure></t>
<t>The typical delay increase when the larger packets were used was
1.1 to 1.5 ms (with one outlier). The typical measurements indicate
that a link with approximately 3 Mbit/s capacity is present on the
path.</t>
<t>Through investigation of the facilities involved, it was
determined that the lowest speed link was approximately 45 Mbit/s,
and therefore the estimated difference should be about 0.077 ms. The
observed differences are much higher.</t>
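<t>The serialization arithmetic can be checked directly; the short
sketch below (in R) reproduces the 0.077 ms estimate for the 45 Mbit/s
link and the roughly 3 Mbit/s capacity implied by the observed 1.1 to
1.5 ms differences:<figure>
<preamble></preamble>
<artwork><![CDATA[size.diff.bits <- (500 - 64) * 8   # increase in IP packet size, bits

# Expected delay increase for the slowest known link on the path
size.diff.bits / 45e6              # ~7.75e-05 s, i.e., about 0.077 ms

# Link capacity implied by a typical observed difference of ~1.2 ms
size.diff.bits / 1.2e-3            # ~2.9e+06 bit/s, i.e., about 3 Mbit/s
]]></artwork>
<postamble></postamble>
</figure></t>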
<t>The unexpected large delay difference was also the outcome when
testing serialization times in a lab environment, using the NIST Net
Emulator and NetProbe [ref to earlier lab tests].</t>
</section>
<section title="Conclusions for Serialization">
<t>Since it was not possible to confirm the estimated serialization
time increases in field tests, we resort to examination of the
implementations to determine compliance.</t>
<t>NetProbe performs all time stamping above the IP-layer, accepting
that some compromises must be made to achieve extreme portability
and measurement scale. Therefore, the first-to-last bit convention
is supported because the serialization time is included in the
one-way delay measurement, enabling comparison with other
implementations.</t>
<t>Perfas
>>>>>>>>>>>>>>> TBD</t>
</section>
</section>
<section title="One-way Delay, Difference Sample Metric (Lab)">
<t>This test determines if implementations register the same relative
increase in delay from one measurement to another under different
delay conditions. This test tends to cancel the sources of error which
may be present in an implementation.</t>
<t>This test is intended to evaluate measurements in sections 3 and 4
of <xref target="RFC2679"></xref>.</t>
<t><list style="numbers">
<t>configure an L2TPv3 path between test sites, and each pair of
measurement devices to operate tests in their designated pair of
VLANs.</t>
<t>measure (average) one-way delay with 2 or more implementations,
using identical options</t>
<t>configure the path with X+Y ms one-way delay</t>
<t>repeat measurements</t>
<t>observe that the (average) increase measured in steps 2 and 4
is ~Y ms for each implementation. Most of the measurement errors
in each system should cancel, if they are stationary.</t>
</list>In this test, X=1000ms and Y=1000ms.</t>
<t>The common parameters used for tests in this section are:</t>
<t><list style="symbols">
<t>IP header + payload = 64 octets</t>
<t>Poisson sampling at lambda = 1 packet per second</t>
<t>Test duration = 900 seconds total (March 21)</t>
</list>The netem emulator was set to add constant delays as
specified in the procedure above.</t>
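<t>A minimal sketch of the check in step 5 above, with hypothetical
before/after sample vectors: the difference of the two sample means
should be close to the added delay Y.<figure>
<preamble></preamble>
<artwork><![CDATA[# Delays (microseconds) measured before and after the emulator change
# from X = 1000 ms to X + Y = 2000 ms of added delay (hypothetical)
pre  <- rnorm(450, mean = 1089868, sd = 300)
post <- rnorm(450, mean = 2089686, sd = 300)

y.observed <- mean(post) - mean(pre)  # should be close to 1e6 us (Y = 1 s)
y.observed - 1e6                      # residual error, microseconds
]]></artwork>
<postamble></postamble>
</figure></t>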
<section title="NetProbe results for Differential Delay">
<t></t>
<t><figure title="Average delays before/after 1 second increase">
<preamble></preamble>
<artwork align="center"><![CDATA[Average pre-increase delay, microseconds 1089868.0
Average post 1s additional, microseconds 2089686.0
Difference (should be ~= Y = 1s) 999818.0]]></artwork>
<postamble></postamble>
</figure></t>
<t>The NetProbe implementation observed a 1 second increase with a
182 microsecond error (assuming that the netem emulated delay
difference is exact).</t>
<t>We note that this differential delay test has been run under lab
conditions and published in prior work [ref to "advance metrics"
draft]. The error was 6 microseconds.</t>
</section>
<section title="Perfas results for Differential Delay">
<figure title="Average delays before/after 1 second increase">
<preamble></preamble>
<artwork align="center"><![CDATA[Average pre-increase delay, microseconds 1089794.0
Average post 1s additional, microseconds 2089801.0
Difference (should be ~= Y = 1s) 1000007.0]]></artwork>
<postamble></postamble>
</figure>
<t></t>
<t>The Perfas implementation observed a 1 second increase with a 7
microsecond error.</t>
</section>
<section title="Conclusions for Differential Delay">
<t>Again, the live network conditions appear to have influenced the
results, but both implementations measured the same delay increase
within their calibration accuracy.</t>
</section>
</section>
<section title="Implementation of Statistics for One-way Delay">
<t>The ADK tests the extent to which the sample distributions of
one-way delay singletons from two implementations of <xref
target="RFC2679"></xref> appear to be from the same overall
distribution. By testing this way, we economize on the number of
comparisons, because comparing a set of individual summary statistics
(as defined in Section 5 of <xref target="RFC2679"></xref>) would
require another set of individual evaluations of equivalence. Instead,
we can simply check which statistics were implemented, and report on
those facts, noting that Section 5 of <xref target="RFC2679"></xref>
does not specify the calculations exactly, and gives only some
illustrative examples.<figure>
<preamble></preamble>
<artwork><![CDATA[ NetProbe Perfas
5.1. Type-P-One-way-Delay-Percentile yes no
5.2. Type-P-One-way-Delay-Median yes no
5.3. Type-P-One-way-Delay-Minimum yes yes
5.4. Type-P-One-way-Delay-Inverse-Percentile no no
]]></artwork>
<postamble>Implementation of Section 5 Statistics</postamble>
</figure></t>
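<t>For illustration only, the sketch below gives one possible
realization (among several consistent with the examples in Section 5 of
<xref target="RFC2679"></xref>) of the percentile, median, and minimum
statistics over a sample of singletons, treating lost packets
(undefined delay) as larger than any finite delay; the vector dT and
its values are hypothetical:<figure>
<preamble></preamble>
<artwork><![CDATA[# dT: one-way delay singletons in seconds; NA marks lost packets
# (undefined delay), treated as larger than any finite delay
dT <- c(0.101, 0.094, NA, 0.122, 0.097, 0.110, NA, 0.103)

owd.percentile <- function(dT, x) {
  s <- sort(dT, na.last = TRUE)       # undefined values sort last
  s[ceiling(x / 100 * length(s))]     # smallest delay with EDF >= x/100
}

owd.percentile(dT, 95)                # Type-P-One-way-Delay-Percentile
owd.percentile(dT, 50)                # one realization of the Median
min(dT, na.rm = TRUE)                 # Type-P-One-way-Delay-Minimum
]]></artwork>
<postamble></postamble>
</figure></t>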
</section>
</section>
<section anchor="Security" title="Security Considerations">
<t>The security considerations that apply to any active measurement of
live networks are relevant here as well. See <xref
target="RFC4656"></xref> and <xref target="RFC5357"></xref>.</t>
</section>
<section anchor="IANA" title="IANA Considerations">
<t>This memo makes no requests of IANA, and hopes that IANA will be as
accepting of our new computer overlords as the authors intend to be.</t>
</section>
<section anchor="Acknowledgements" title="Acknowledgements">
<t>The authors thank Lars Eggert for his continued encouragement to
advance the IPPM metrics during his tenure as AD Advisor.</t>
<t>Nicole Kowalski supplied the needed CPE router for the NetProbe side
of the test set-up, and graciously managed her testing in spite of
issues caused by dual-use of the router. Thanks Nicole!</t>
<t>The "NetProbe Team" also acknowledges many useful discussions with
Ganga Maguluri.</t>
</section>
</middle>
<back>
<references title="Normative References">
<?rfc include="reference.RFC.2119"?>
<?rfc include='reference.RFC.2026'?>
<?rfc include='reference.RFC.2330'?>
<?rfc include='reference.RFC.2679'?>
<?rfc include='reference.RFC.2680'?>
<?rfc include='reference.RFC.3432'?>
<?rfc include='reference.RFC.4656'?>
<?rfc include='reference.RFC.4814'?>
<?rfc include='reference.RFC.5226'?>
<?rfc include='reference.RFC.5357'?>
<?rfc include='reference.RFC.5657'?>
<?rfc include='reference.I-D.ietf-ippm-metrictest'?>
</references>
<references title="Informative References">
<?rfc include='reference.I-D.morton-ippm-advance-metrics'?>
<?rfc include='reference.RFC.3931'?>
</references>
</back>
</rfc>