<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
<?xml-stylesheet type='text/xsl' href='rfc2629.xslt' ?>
<?rfc toc="yes" ?>
<?rfc symrefs="yes" ?>
<?rfc strict="no" ?>
<rfc category="bcp" docName="draft-manyfolks-ippm-metric-registry-00"
ipr="trust200902" obsoletes="" updates="">
<front>
<title abbrev="Registry for Performance Metrics">Registry for Performance
Metrics</title>
<author fullname="Marcelo Bagnulo" initials="M." surname="Bagnulo">
<organization abbrev="UC3M">Universidad Carlos III de
Madrid</organization>
<address>
<postal>
<street>Av. Universidad 30</street>
<city>Leganes</city>
<region>Madrid</region>
<code>28911</code>
<country>SPAIN</country>
</postal>
<phone>34 91 6249500</phone>
<email>marcelo@it.uc3m.es</email>
<uri>http://www.it.uc3m.es</uri>
</address>
</author>
<author fullname="Benoit Claise" initials="B." surname="Claise">
<organization abbrev="Cisco Systems, Inc.">Cisco Systems,
Inc.</organization>
<address>
<postal>
<street>De Kleetlaan 6a b1</street>
<city>1831 Diegem</city>
<country>Belgium</country>
</postal>
<email>bclaise@cisco.com</email>
</address>
</author>
<author fullname="Philip Eardley" initials="P." surname="Eardley">
<organization abbrev="BT">British Telecom</organization>
<address>
<postal>
<street>Adastral Park, Martlesham Heath</street>
<city>Ipswich</city>
<country>ENGLAND</country>
</postal>
<email>philip.eardley@bt.com</email>
</address>
</author>
<author fullname="Al Morton" initials="A." surname="Morton">
        <organization abbrev="AT&amp;T Labs">AT&amp;T Labs</organization>
<address>
<postal>
<street>200 Laurel Avenue South</street>
<city>Middletown, NJ</city>
<country>USA</country>
</postal>
<email>acmorton@att.com</email>
</address>
</author>
<date day="12" month="February" year="2014"/>
<abstract>
      <t>This document specifies the common aspects of the IANA registry for
      performance metrics, covering both active and passive categories. This document
also gives a set of guidelines for Registered Performance Metric
requesters and reviewers.</t>
</abstract>
</front>
<middle>
<section title="Open Issues and Resolutions">
<t><list style="numbers">
<t>I believe that the Performance Metrics Experts and the
Performance Metric Directorate will be a different group of people.
Reason: every single time a new expert is added, the IESG needs to
approve her/him. To be discussed with the Area Directors. *** (v7)
Has this discussion taken place? If these are different groups, we
don't need to define Performance Metrics Directorate.</t>
<t>We should expand on the different roles and responsibilities of
the Performance Metrics Experts versus the Performance Metric
Directorate. At least, the Performance Metric Directorate one should
be expanded. --- (v7) If these are different entities, our only
concern is the role of the "PM Experts".</t>
          <t>Not sure whether it is interesting for this document to go into
          the details of the LMAP control protocol versus the report protocol
          (see the 'Interoperability' section). (The text currently does this
          in several sections; S5 comes to mind. - Closed)</t>
<t>Marcelo, not sure what you mean by 'Single point of reference'.
(Closed - see S5.3)</t>
<t>Define 'Measurement Parameter'. Even if this is active monitoring
specific term, we need it in this draft. Done in v3 Terminology
section as "Input Parameter". - Closed in v7 as "Parameter".</t>
          <t>Performance Metric Description: part of this document or of the
          active/passive monitoring documents? -- Closed: will be part of the
          Active &amp; Passive docs.</t>
<t>Many aspects of the Naming convention are TBD, and need
discussion. For example, we have distinguished RTCP-XR metrics as
End-Point (neither active nor passive in the traditional sense, so
not Act_ or Pas_). Also, the Act_ or Pas_ component is not
consistent with "camel_case", as Marcelo points out. Even though we
may not cast all naming conventions in stone at the start, it will
be helpful to look at several examples of passive metric names
now.</t>
<t>RTCP-XR metrics are currently referred to as "end-point", and
          have aspects that are similar to active (the measured stream
characteristics are known a priori and measurement commonly takes
place at the end-points of the path) and passive (there is no
additional traffic dedicated to measurement, with the exception of
the RTCP report packets themselves). We have one example expressing
an end-point metric in the active sub-registry memo.</t>
<t>Revised Registry Entries: Keep for history (deprecated) or
Delete?</t>
<t>In section 7 defining the Registry Common Columns, ~all column
names begin with "Performance Metric". Al recommends deleting this
prefix in each sub-section as redundant.</t>
</list></t>
</section>
<section title="Introduction">
<t>The IETF specifies and uses Performance Metrics of protocols and
applications transported over its protocols. Performance metrics are
such an important part of the operations of IETF protocols that <xref
target="RFC6390"/> specifies guidelines for their development.</t>
<t>The definition and use of Performance Metrics in the IETF happens in
      various working groups (WGs), most notably: <list>
<t>The "IP Performance Metrics" (IPPM) WG is the WG primarily
focusing on Performance Metrics definition at the IETF.</t>
<t>The "Metric Blocks for use with RTCP's Extended Report Framework"
(XRBLOCK) WG recently specified many Performance Metrics related to
"RTP Control Protocol Extended Reports (RTCP XR)" <xref
target="RFC3611"/>, which establishes a framework to allow new
information to be conveyed in RTCP, supplementing the original
report blocks defined in "RTP: A Transport Protocol for Real-Time
Applications", <xref target="RFC3550"/>.</t>
<t>The "Benchmarking Methodology" WG (BMWG) defined many Performance
Metrics for use in laboratory benchmarking of inter-networking
technologies.</t>
          <t>In the "IP Flow Information eXport" (IPFIX) WG, Information
          Elements related to Performance Metrics are currently proposed.</t>
          <t>The concluded "Performance Metrics for Other Layers" (PMOL) WG
          defined some Performance Metrics related to Session Initiation
          Protocol (SIP) voice quality <xref target="RFC6035"/>.</t>
</list></t>
<t>It is expected that more Performance Metrics will be defined in the
future, not only IP-based metrics, but also metrics which are
protocol-specific and application-specific.</t>
<t>However, despite the importance of Performance Metrics, there are two
      related problems for the industry. First, how to ensure that when one
      party asks another party to measure (or report, or in some way act
      on) a particular performance metric, both parties have exactly the
      same understanding of which performance metric is being referred to.
      Second, how to discover which Performance Metrics have been specified,
      so as to avoid developing a new performance metric that is very similar
      to an existing one. These problems can be addressed by creating a
      registry of performance metrics. The usual way in which the IETF
      organizes namespaces is with Internet Assigned Numbers Authority (IANA)
      registries, and there is currently no Performance Metrics Registry
      maintained by IANA.</t>
<t>This document therefore proposes the creation of a Performance
Metrics Registry. It also provides best practices on how to define new
or updated entries in the Performance Metrics Registry.</t>
</section>
<section title="Terminology">
<t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in <xref
target="RFC2119"/>.</t>
<t>The terms Performance Metric and Performance Metrics Directorate are
      defined in <xref target="RFC6390"/>, and are copied into this document
      for the reader's convenience.</t>
<t><list style="hanging">
<t hangText="Registered Performance Metric:">A Registered
Performance Metric (or Registered Metric) is a quantitative measure
of performance (see section 6.1 of <xref target="RFC2330"/>)
expressed as an entry in the Performance Metric Registry, and
comprised of a specifically named metric which has met all the
registry review criteria, is under the curation of IETF Performance
Metrics Experts, and whose changes are controlled by IANA.</t>
<t hangText="Registry or Performance Metrics Registry:">The IANA
registry containing Registered Performance Metrics.</t>
<t hangText="Non-IANA Registry:">A set of metrics that are
registered locally (and not by IANA).</t>
          <t hangText="Performance Metrics Experts:">The Performance Metrics
          Experts are a group of experts selected by the IESG to validate
          Performance Metrics before the Performance Metrics Registry is
          updated. The Performance Metrics Experts work closely with
          IANA.</t>
<t hangText="Performance Metrics Directorate:">The Performance
Metrics Directorate is a directorate that provides guidance for
Performance Metrics development in the IETF. The Performance Metrics
Directorate should be composed of experts in the performance
community, potentially selected from the IP Performance Metrics
(IPPM), Benchmarking Methodology (BMWG), and Performance Metrics for
Other Layers (PMOL) WGs.</t>
<t hangText="Parameter:">An input factor defined as a variable in
the definition of a metric. A numerical or other specified factor
forming one of a set that defines a metric or sets the conditions of
its operation. Most Input Parameters do not change the fundamental
nature of the metric's definition, but others have substantial
influence. All Input Parameters must be known to measure using a
metric and interpret the results.</t>
<t hangText="Active Measurement Method:">Methods of Measurement
conducted on traffic which serves only the purpose of measurement
and is generated for that reason alone, and whose traffic
characteristics are known a priori. An Internet user's host can
generate active measurement traffic (virtually all typical
user-generated traffic is not dedicated to active measurement, but
it can produce such traffic with the necessary application
operating).</t>
<t hangText="Passive Measurement Method:">Methods of Measurement
conducted on Internet user traffic such that sensitive information
is present and may be stored in the measurement system, or
observations of traffic from other sources for monitoring and
measurement purposes.</t>
<t hangText="Hybrid Measurement Method:">Methods of Measurement
which use a combination of Active Measurement and Passive
Measurement methods.</t>
</list></t>
</section>
<section title="Scope">
      <t>The intended audience of this document includes those who prepare and
      submit a request for a Registered Performance Metric, and the
      Performance Metric Experts who review such requests.</t>
<t>This document specifies a Performance Metrics Registry in IANA. This
      Performance Metric Registry is applicable to Performance Metrics produced
      by Active Measurement, Passive Measurement, or end-point
      calculation. This registry is designed to encompass performance metrics
developed throughout the IETF and especially for the following working
groups: IPPM, XRBLOCK, IPFIX, BMWG, and possibly others. This document
      analyzes a prior attempt to set up a Performance Metric Registry, and
the reasons why this design was inadequate <xref target="RFC6248"/>.
Finally, this document gives a set of guidelines for requesters and
expert reviewers of candidate Registered Performance Metrics.</t>
<t>This document serves as the foundation for further work. It specifies
the set of columns describing common aspects necessary for all entries
in the Performance Metrics Registry.</t>
<t>Two documents describing sub-registries will be developed separately:
one for active Registered Metrics and another one for the passive
Registered Metrics. Indeed, active and passive performance metrics
appear to have different characteristics which must be documented in
      their respective sub-registries. For example, active measurement methods
      must specify the packet stream characteristics they generate and
      measure, so it is essential to include the stream specifications in the
      registry entry. In the case of passive Performance Metrics, there is
      instead a need to specify the sampling distribution in the registry.
      While it would be possible to force the definition of the registry field
      to include both types of distributions in the same registry column, we
      believe it is cleaner and clearer to have separate sub-registries with
      narrowly defined columns.
<t>It is possible that future metrics may be a hybrid of active and
      passive measurement methods, and it may be possible to register hybrid
      metrics in one of the two planned sub-registries (active or
      passive), or it may prove more efficient to define a third sub-registry with
unique columns. The current design with sub-registries allows for
growth, and this is a recognized option for extension.</t>
<t>This document makes no attempt to populate the registry with initial
entries.</t>
<t>Based on <xref target="RFC5226"/> Section 4.3, this document is
processed as Best Current Practice (BCP) <xref target="RFC2026"/>.</t>
</section>
<section title="Design Considerations for the Registry and Registered Metrics">
<t>In this section, we detail several design considerations that are
relevant for understanding the motivations and expected use of the
metric registry.</t>
<section title="Interoperability">
      <t>As with any IETF registry, the primary use for a registry is to manage a
      namespace for its use within one or more protocols. In this particular
case of the metric registry, there are two types of protocols that
will use the values defined in the registry for their operation: <list
style="symbols">
          <t>Control protocol: this type of protocol is used to allow one
          entity to request that another entity perform a measurement using a
          specific metric defined by the registry. One particular example is
          the LMAP framework <xref target="I-D.ietf-lmap-framework"/>. Using
          the LMAP terminology, the registry is used in the LMAP Control
          protocol to allow a Controller to request a measurement task from
          one or more Measurement Agents. In order to enable this use case,
          the entries of the metric registry must be defined well enough to
          allow a Measurement Agent implementation to trigger a specific
          measurement task upon the reception of a control protocol message.
          This requirement heavily constrains the type of entries that are
acceptable for the Metric registry. <!--Further considerations about
this are captured in the Guidelines for metric registry
allocations (cross reference to another section of this document
or to a different document).--></t>
          <t>Report protocol: this type of protocol is used to allow an
          entity to report measurement results to another entity. By
          referencing a specific metric registry entry, it is possible to
properly characterize the measurement result data being
transferred. Using the LMAP terminology, the registry is used in
the Report protocol to allow a Measurement Agent to report
measurement results to a Collector.</t>
</list></t>
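        <t>As a non-normative illustration, the two uses above can be sketched
        in a few lines of Python. The message shapes, field names, and the
        example latency task below are assumptions for illustration only; they
        are not taken from the LMAP framework or any protocol
        specification.</t>
        <figure><artwork><![CDATA[
```python
# Hypothetical sketch: a control message carries a Registered Performance
# Metric identifier, and a Measurement Agent maps that identifier to a
# concrete measurement task. All names here are illustrative.

# Illustrative task table: registry identifier -> measurement routine.
MEASUREMENT_TASKS = {
    1: lambda params: {"metric_id": 1, "result_ms": 42.0},
}

def handle_control_message(msg):
    """Dispatch a measurement task from the identifier in a control message."""
    task = MEASUREMENT_TASKS.get(msg["metric_id"])
    if task is None:
        raise KeyError("unknown Registered Performance Metric identifier")
    return task(msg.get("parameters", {}))

def build_report_message(result):
    """Label results with the registry identifier so a Collector can
    interpret them unambiguously."""
    return {"metric_id": result["metric_id"], "results": result}
```
]]></artwork></figure>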
</section>
<section title="Criteria for Registered Performance Metrics">
<t>It is neither possible nor desirable to populate the registry with
all combinations of input parameters of all performance metrics. The
Registered Performance Metrics should be: <list style="numbers">
<t>interpretable by the user.</t>
<t>implementable by the software designer.</t>
<t>deployable by network operators, without major impact on the
networks.</t>
          <t>accurate, for interoperability and deployment across
          vendors.</t>
</list>In essence, there needs to be evidence that a candidate
registry entry has significant industry interest, or has seen
deployment, and there is agreement that the candidate Registered
Metric serves its intended purpose.</t>
</section>
      <section title="Single Point of Reference for Performance Metrics">
<t>A registry for Performance metrics serves as a single point of
reference for performance metrics defined in different working groups
in the IETF. As we mentioned earlier, there are several WGs that
define performance metrics in the IETF and it is hard to keep track of
all them. This results in multiple definitions of similar metrics that
attempt to measure the same phenomena but in slightly different (and
incompatible) ways. Having a registry would allow both the IETF
community and external people to have a single list of relevant
performance metrics defined by the IETF (and others, where
appropriate). The single list is also an essential aspect of
communication about metrics, where different entities that request
measurements, execute measurements, and report the results can benefit
from a common understanding of the referenced metric.</t>
</section>
      <section title="Side Benefits">
<t>There are a couple of side benefits of having such a registry.
        First, the registry could serve as an inventory of useful and used
        metrics that are normally supported by different implementations of
        measurement agents. Second, the results of the metrics would be
comparable even if they are performed by different implementations and
in different networks, as the metric is properly defined. BCP 176
<xref target="RFC6576"/> examines whether the results produced by
independent implementations are equivalent in the context of
evaluating the completeness and clarity of metric specifications. This
BCP defines the standards track advancement testing for (active) IPPM
metrics, and the same process will likely suffice to determine whether
registry entries are sufficiently well specified to result in
comparable (or equivalent) results. Registry entries which have
undergone such testing SHOULD be noted, with a reference to the test
results.</t>
</section>
</section>
<section title="Performance Metric Registry: Prior attempt">
<t>There was a previous attempt to define a metric registry <xref
target="RFC4148">RFC 4148</xref>. However, it was obsoleted by <xref
target="RFC6248">RFC 6248</xref> because it was "found to be
insufficiently detailed to uniquely identify IPPM metrics... [there was
too much] variability possible when characterizing a metric exactly"
which led to the RFC4148 registry having "very few users, if any".</t>
<t>A couple of interesting additional quotes from RFC 6248 might help
understand the issues related to that registry. <list style="numbers">
<t>"It is not believed to be feasible or even useful to register
every possible combination of Type P, metric parameters, and Stream
parameters using the current structure of the IPPM Metrics
Registry."</t>
<t>"The registry structure has been found to be insufficiently
detailed to uniquely identify IPPM metrics."</t>
<t>"Despite apparent efforts to find current or even future users,
no one responded to the call for interest in the RFC 4148 registry
during the second half of 2010."</t>
</list></t>
<t>The current approach learns from this by tightly defining each entry
in the registry with only a few parameters open, if any. The idea is
that entries in the registry represent different measurement methods
which require input parameters to set factors like source and
destination addresses (which do not change the fundamental nature of the
      measurement). The downside of this approach is that it could result in a
      large number of entries in the registry. We believe that less is more in
      this context: it is better to have a reduced set of useful metrics
      than a large set of metrics of questionable usefulness.
      Therefore, this document requires that the registry include only metrics
      that are well defined and that have proven to be operationally useful.
      In order to guarantee these two characteristics, we require that a set of
      experts review each allocation request to verify that the metric is well
      defined and operationally useful.</t>
      <section title="Why This Attempt Will Succeed">
<t>The registry defined in this document addresses the main issues
        identified in the previous attempt. As mentioned in the previous
        section, one of the main issues with the previous registry was that
        the metrics it contained were too generic to be useful.
In this registry, the registry requests are evaluated by an expert
group that will make sure that the metric is properly defined. This
document provides guidelines to assess if a metric is properly
defined.</t>
<t>Another key difference between this attempt and the previous one is
that in this case there is at least one clear user for the registry:
the LMAP framework and protocol. Because the LMAP protocol will use
the registry values in its operation, this actually helps to determine
if a metric is properly defined. In particular, since we expect that
the LMAP control protocol will enable a controller to request a
measurement agent to perform a measurement using a given metric by
embedding the metric registry value in the protocol, a metric is
        properly specified if it is defined well enough that it is possible
        (and practical) to implement the metric in the measurement agent. This
        was clearly not the case for the previous attempt: defining a metric
        with an undefined Type P makes its implementation impractical.</t>
</section>
</section>
<section title="Common Columns of the Performance Metric Registry">
<t>The metric registry is composed of two sub-registries: the registry
for active performance metrics and the registry for passive performance
      metrics. The rationale for having two sub-registries (as opposed to
      a single registry for all metrics) is that the set of registry
      columns must support unambiguous registry entries, and there are
fundamental differences in the methods to collect active and passive
metrics and the required input parameters. Forcing them into a single,
generalized registry would result in a less meaningful structure for
some entries in the registry. Nevertheless, it is desirable that the two
sub-registries share the same structure as much as possible. In
      particular, both registries will share the following columns: the
      identifier, the name, the status, the requester, the revision, the
      revision date, the description, and the reference specification(s).
      All these fields are described below. The design of
these two sub-registries is work-in-progress.</t>
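    <t>As a non-normative illustration, an entry restricted to these common
    columns can be pictured as a simple record. The field names below mirror
    the column names described in the following subsections; the types and
    example values are assumptions, not part of the registry design.</t>
    <figure><artwork><![CDATA[
```python
from dataclasses import dataclass

# Illustrative record for the common columns shared by both sub-registries.
@dataclass
class RegistryEntry:
    identifier: int      # unique 16-bit identifier (0..65535)
    name: str            # e.g. "Act_UDP_Latency_Poisson_99mean"
    status: str          # 'current' or 'deprecated'
    requester: str       # a document (e.g. an RFC) or a person
    revision: int        # starts at 0, incremented on each revision
    revision_date: str   # date of acceptance or most recent revision
    description: str     # written representation of the entry
    reference: str       # reference specification(s)
```
]]></artwork></figure>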
<!--
<section title="Performance Metrics: Active or Passive?">
<t>BENOIT: WE NEED TO SAY WHAT WE MEAN BY ACTIVE AND BY PASSIVE.</t>
<t> MARCELO: I am uncertain about this. The terminology section already provides a definition for active and passive metrics, do we need anything else?</t>
<t>>>> Al: See the definitions section.</t>
</section>
-->
<section title="Performance Metrics Identifier">
<t>A numeric identifier for the Registered Performance Metric. This
identifier must be unique within the Performance Metric Registry and
sub-registries.</t>
<t>The Registered Performance Metric unique identifier is a 16-bit
integer (range 0 to 65535). When adding newly Registered Performance
Metrics to the Performance Metric Registry, IANA should assign the
lowest available identifier to the next active monitoring Registered
Performance Metric, and the highest available identifier to the next
passive monitoring Registered Performance Metric.</t>
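      <t>The assignment policy above (lowest available identifier for active
      metrics, highest available for passive metrics, within the 16-bit space)
      can be sketched as follows; the function and structure names are
      illustrative only, not part of the registry specification.</t>
      <figure><artwork><![CDATA[
```python
# Hypothetical sketch of the identifier-assignment policy: active metrics
# grow upward from 0, passive metrics grow downward from 65535.

IDENTIFIER_MIN, IDENTIFIER_MAX = 0, 65535

def assign_identifier(assigned, category):
    """Return the next free identifier for an 'active' or 'passive' metric."""
    used = set(assigned)
    if category == "active":
        candidates = range(IDENTIFIER_MIN, IDENTIFIER_MAX + 1)
    elif category == "passive":
        candidates = range(IDENTIFIER_MAX, IDENTIFIER_MIN - 1, -1)
    else:
        raise ValueError("category must be 'active' or 'passive'")
    for ident in candidates:
        if ident not in used:
            return ident
    raise RuntimeError("identifier space exhausted")
```
]]></artwork></figure>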
</section>
<section title="Performance Metrics Name">
<t>As the name of a Registered Performance Metric is the first thing a
potential implementor will use when determining whether it is suitable
for a given application, it is important to be as precise and
descriptive as possible. Names of Registered Performance Metrics:
<list style="numbers">
<t>"must be chosen carefully to describe the Registered
Performance Metric and the context in which it will be used."</t>
<t>"should be unique within the Performance Metric Registry
(including sub-registries)."</t>
<t>"must use capital letters for the first letter of each
component <!-- except for the first one (aka "camel case")
MARCELO: I am confused by this. If the name of the metric will
start with Act_ or Pas_ which has its first letter capitalized,
then there is no exception, right?-->. All other letters are lowercase,
even for acronyms. Exceptions are made for acronyms containing a
mixture of lowercase and capital letters, such as 'IPv4' and
'IPv6'."</t>
<t>"must use '_' between each component composing the Registered
Performance Metric name."</t>
<t>"must start with prefix Act_ for active measurement Registered
Performance Metric."</t>
<t>"must start with prefix Pass_ for passive monitoring Registered
Performance Metric." AL COMMENTS: how about just 3 letters for
consistency: "Pas_"</t>
<t>MARCELO: I am uncertain whether we should give more guidance
here for the naming convention. In particular, the second
component could be the highest protocol used in the metric (e.g.
UDP, TCP, DNS, SIP, ICMP, IPv4, etc). the third component should
be a descriptive name (like latency, packet loss or similar). the
fourth component could be stream distribution. the fifth component
could be the output type (99mean, 95interval). this is of course
very active metric oriented, would be good if we could figure out
what is the minimum common structure for both passive and active.
TBD. AL COMMENTS: Let's see some examples for passive monitoring.
It may not make sense to have common name components, except for
Act_ and Pas_.</t>
<t>BENOIT proposes (approximately this, Al's wording) : The
remaining rules for naming are left to the Performance Experts to
determine as they gather experience, so this is an area of planned
update by a future RFC.</t>
</list></t>
      <t>An example is "Act_UDP_Latency_Poisson_99mean" for an active
monitoring UDP latency metric using a Poisson stream of packets and
producing the 99th percentile mean as output.</t>
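      <t>One plausible reading of the rules above can be encoded as a small
      checker; since several naming rules are still open issues, this sketch
      is illustrative only. It accepts the Act_ prefix, the (proposed) Pas_
      prefix, and components beginning with a capital letter or a digit
      (the example name above includes the output component "99mean").</t>
      <figure><artwork><![CDATA[
```python
import re

# Illustrative name checker for one reading of the draft's naming rules.
# A component starts with a capital letter or digit, followed by letters
# and digits (so acronyms such as "IPv4" and outputs such as "99mean" pass).
COMPONENT = re.compile(r"^[A-Z0-9][A-Za-z0-9]*$")

def is_valid_metric_name(name):
    """Check prefix and component shape of a candidate metric name."""
    parts = name.split("_")
    if len(parts) < 2 or parts[0] not in ("Act", "Pas"):
        return False
    return all(COMPONENT.match(p) for p in parts[1:])
```
]]></artwork></figure>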
<t>>>>> NEED passive naming examples.</t>
</section>
<section title="Performance Metrics Status">
<t>The status of the specification of this Registered Performance
Metric. Allowed values are 'current' and 'deprecated'. All newly
      defined Registered Performance Metrics have 'current' status.</t>
</section>
<section title="Performance Metrics Requester">
      <t>The requester for the Registered Performance Metric. The requester
      may be a document, such as an RFC, or a person.</t>
</section>
<section title="Performance Metrics Revision">
      <t>The revision number of a Registered Performance Metric, starting at
      0 at the time of definition and incremented by one for each
      revision.</t>
</section>
<section title="Performance Metrics Revision Date">
      <t>The date of acceptance, or of the most recent revision, of the
      Registered Performance Metric.</t>
</section>
<section title="Performance Metrics Description">
<t>A Registered Performance Metric Description is a written
representation of a particular registry entry. It supplements the
metric name to help registry users select relevant Registered
Performance Metrics.</t>
</section>
<section title="Reference Specification(s)">
<t>Registry entries that follow the common columns must provide the
reference specification(s) on which the Registered Performance Metric
is based.</t>
</section>
</section>
<section title="The Life-Cycle of Registered Metrics">
<t>Once a Performance Metric or set of Performance Metrics has been
identified for a given application, candidate registry entry
specifications in accordance with Section X are submitted to IANA to
follow the process for review by the Performance Metric Experts, as
defined below. This process is also used for other changes to the
Performance Metric Registry, such as deprecation or revision, as
described later in this section.</t>
<t>It is also desirable that the author(s) of a candidate registry entry
seek review in the relevant IETF working group, or offer the opportunity
for review on the WG mailing list.</t>
<section title="The Process for Review by the Performance Metric Experts">
<t>Requests to change Registered Metrics in the Performance Metric
Registry or a linked sub-registry are submitted to IANA, which
forwards the request to a designated group of experts (Performance
Metric Experts) appointed by the IESG; these are the reviewers called
      for by the Expert Review <xref target="RFC5226"/> policy defined for the Performance
Metric Registry. The Performance Metric Experts review the request for
such things as compliance with this document, compliance with other
applicable Performance Metric-related RFCs, and consistency with the
currently defined set of Registered Performance Metrics.</t>
<t>Authors are expected to review compliance with the specifications
in this document to check their submissions before sending them to
IANA.</t>
<t>The Performance Metric Experts should endeavor to complete referred
reviews in a timely manner. If the request is acceptable, the
Performance Metric Experts signify their approval to IANA, which
changes the Performance Metric Registry. If the request is not
acceptable, the Performance Metric Experts can coordinate with the
requester to change the request to be compliant. The Performance
Metric Experts may also choose in exceptional circumstances to reject
clearly frivolous or inappropriate change requests outright.</t>
<t>This process should not in any way be construed as allowing the
Performance Metric Experts to overrule IETF consensus. Specifically,
any Registered Metrics that were added with IETF consensus require
IETF consensus for revision or deprecation.</t>
<t>Decisions by the Performance Metric Experts may be appealed as in
      Section 7 of <xref target="RFC5226"/>.</t>
</section>
<section title="Revising Registered Performance Metrics">
<t>Requests to revise the Performance Metric Registry or a linked
sub-registry are submitted to IANA, which forwards the request to a
designated group of experts (Performance Metric Experts) appointed by
the IESG; these are the reviewers called for by the Expert Review
      <xref target="RFC5226"/> policy defined for the Performance Metric Registry. The
Performance Metric Experts review the request for such things as
compliance with this document, compliance with other applicable
Performance Metric-related RFCs, and consistency with the currently
defined set of Registered Performance Metrics.</t>
<t>A request for Revision is ONLY permissible when the changes
maintain backward-compatibility with implementations of the prior
registry entry describing a Registered Metric (entries with lower
revision numbers, but the same Identifier and Name).</t>
<t>The purpose of the Status field in the Performance Metric Registry
is to indicate whether the entry for a Registered Metric is 'current'
or 'deprecated'.</t>
      <t>Until now, no policy has been defined for revising IANA Performance
      Metric entries or addressing errors therein. To be clear, changes
      and deprecations within the Performance Metric Registry are not
      encouraged, and should be avoided to the extent possible. However, in
      recognition that change is inevitable, the provisions of this section
      address the need for revisions.</t>
<t>Revisions are initiated by sending a candidate Registered
Performance Metric definition to IANA, as in Section X, identifying
the existing registry entry.</t>
<t>The primary requirement in the definition of a policy for managing
changes to existing Registered Performance Metrics is avoidance of
interoperability problems; Performance Metric Experts must work to
maintain interoperability above all else. Changes to Registered
Performance Metrics already in use may only be done in an
inter-operable way; necessary changes that cannot be done in a way to
allow interoperability with unchanged implementations must result in
deprecation of the earlier metric.</t>
<t>A change to a Registered Performance Metric is held to be
backward-compatible only when: <list style="numbers">
<t>"it involves the correction of an error that is obviously only
editorial; or"</t>
<t>"it corrects an ambiguity in the Registered Performance
Metric's definition, which itself leads to issues severe enough to
prevent the Registered Performance Metric's usage as originally
defined; or"</t>
<t>"it corrects missing information in the metric definition
without changing its meaning (e.g., the explicit definition of
'quantity' semantics for numeric fields without a Data Type
Semantics value); or"</t>
<t>"it harmonizes with an external reference that was itself
corrected."</t>
<t>"BENOIT: NOTE THAT THERE ARE MORE RULES IN RFC 7013 SECTION 5
BUT THEY WOULD ONLY APPLY TO THE ACTIVE/PASSIVE DRAFTS. TO BE
DISCUSSED."</t>
</list></t>
<t>If a change is deemed permissible by the Performance Metric
Experts, IANA makes the change in the Performance Metric Registry. The
name of the change's requester is appended to the Requester field of
the registry entry.</t>
<t>Each Registered Performance Metric in the Registry has a revision
number, starting at zero. Each change to a Registered Performance
Metric following this process increments the revision number by
one.</t>
<t>COMMENT: Al (and Phil) think we should keep old/revised entries
as-is, marked as deprecated.</t>
<t>Since any revision must be interoperable according to the criteria
above, there is no need for the Performance Metric Registry to store
information about old revisions.</t>
<t>When a revised Registered Performance Metric is accepted into the
Performance Metric Registry, the date of acceptance of the most recent
revision is placed into the Revision Date column of the registry for
that Registered Performance Metric.</t>
<t>Where applicable, additions to registry entries in the form of
text Comments or Remarks should include the date, but such additions
do not constitute a revision according to this process.</t>
</section>
<section title="Deprecating Registered Performance Metrics">
<t>Changes that are not permissible under the above criteria for
revising Registered Performance Metrics may only be handled by
deprecation. A Registered Performance Metric MAY be deprecated and
replaced when: <list style="numbers">
<t>"the Registered Performance Metric definition has an error or
shortcoming that cannot be permissibly changed as described in
Section 'Revising Registered Performance Metrics'; or"</t>
<t>"the deprecation harmonizes with an external reference that was
itself deprecated through that reference's accepted deprecation
method."</t>
</list></t>
<t>A request for deprecation is sent to IANA, which passes it to the
Performance Metric Experts for review, as described in Section 'The
Process for Review by the Performance Metric Experts'. When
deprecating a Performance Metric, the Performance Metric description
in the Performance Metric Registry must be updated to explain the
deprecation, as well as to refer to any new Performance Metrics
created to replace the deprecated Performance Metric.</t>
<t>The revision number of a Registered Performance Metric is
incremented upon deprecation, and the Revision Date updated, as with
any revision.</t>
<t>The use of deprecated Registered Performance Metrics should result
in a log entry or human-readable warning by the respective
application.</t>
<t>The Names and Metric IDs of deprecated Registered Performance
Metrics must not be reused.</t>
</section>
</section>
<section title="Performance Metric Registry and other Registries">
<t>BENOIT: TBD.</t>
<t>THE BASIC IDEA IS THAT PEOPLE COULD DIRECTLY DEFINE PERF. METRICS
IN OTHER EXISTING REGISTRIES, FOR A SPECIFIC PROTOCOL/ENCODING.
EXAMPLE: IPFIX. IDEALLY, ALL PERF. METRICS SHOULD BE DEFINED IN THIS
REGISTRY AND REFERENCED FROM OTHER REGISTRIES.</t>
</section>
<section title="Security considerations">
<t>This document does not introduce any new security considerations
for the Internet. However, the definition of Performance Metrics may
introduce some security concerns, and should be reviewed with security
in mind.</t>
</section>
<section title="IANA Considerations">
<t>This document specifies the procedure for Performance Metrics
Registry setup. IANA is requested to create a new registry for
performance metrics called "Registered Performance Metrics".</t>
<t>This Performance Metrics Registry contains two sub-registries, one
for active and another for passive performance metrics. These
sub-registries are not defined in this document. However, these two
sub-registries MUST contain the following columns: the identifier, the
name, the requester, the revision, the revision date, and the
description, as specified in this document.</t>
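<t>As a purely illustrative sketch (all values below are hypothetical
and not proposed for registration), a row in one of these
sub-registries might look as follows:</t>
<figure>
<artwork><![CDATA[
  ID  Name           Requester  Revision  Revision Date  Description
  --  -------------  ---------  --------  -------------  -----------
  7   ExampleMetric  J. Doe     0         2014-02-12     An example
                                                         metric ...
]]></artwork>
</figure>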
<t>New assignments to the Performance Metric Registry will be
administered by IANA through Expert Review [RFC5226], i.e., review by
one of a group of experts, the Performance Metric Experts, appointed
by the IESG upon recommendation of the Transport Area Directors. The
experts will initially be drawn from the Working Group Chairs and
document editors of the Performance Metrics Directorate
[performance-metrics-directorate].</t>
</section>
<section title="Acknowledgments">
<t>Thanks to Brian Trammell and Bill Cerveny, IPPM chairs, for leading
some brainstorming sessions on this topic.</t>
</section>
</middle>
<back>
<references title="Normative References">
<?rfc include="reference.RFC.2119"?>
<?rfc include="reference.RFC.2026"?>
<?rfc include='reference.RFC.2330'?>
<?rfc include='reference.RFC.4148'?>
<?rfc include="reference.RFC.5226"?>
<?rfc include='reference.RFC.6248'?>
<?rfc include="reference.RFC.6390" ?>
<?rfc include='reference.RFC.6576'?>
</references>
<references title="Informative References">
<?rfc include="reference.RFC.3611"?>
<?rfc include="reference.RFC.3550"?>
<?rfc include="reference.RFC.6035"?>
<?rfc include='reference.I-D.ietf-lmap-framework'?>
</references>
</back>
</rfc>