<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd" [
<!ENTITY RFC2119 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2119.xml">
<!ENTITY RFC5286 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5286.xml">
<!ENTITY RFC3906 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3906.xml">
<!ENTITY RFC4090 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4090.xml">
<!ENTITY RFC5714 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5714.xml">
<!ENTITY RFC5305 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5305.xml">
<!ENTITY RFC5715 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5715.xml">
<!ENTITY RFC3630 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3630.xml">
<!ENTITY RFC5443 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5443.xml">
<!ENTITY RFC6571 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.6571.xml">
<!ENTITY REMOTE-LFA SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.ietf-rtgwg-remote-lfa.xml">
<!ENTITY OFIB SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.6976.xml">
<!ENTITY PLSN SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.ietf-rtgwg-microloop-analysis.xml">
<!ENTITY LFA-MANAGEABILITY SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.ietf-rtgwg-lfa-manageability.xml">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?> <!-- used by XSLT processors -->
<!-- OPTIONS, known as processing instructions (PIs) go here. -->
<!-- For a complete list and description of PIs,
please see http://xml.resource.org/authoring/README.html. -->
<!-- Below are generally applicable PIs that most I-Ds might want to use. -->
<?rfc strict="yes" ?> <!-- give errors regarding ID-nits and DTD validation -->
<!-- control the table of contents (ToC): -->
<?rfc toc="yes"?> <!-- generate a ToC -->
<?rfc tocdepth="3"?> <!-- the number of levels of subsections in ToC. default: 3 -->
<!-- control references: -->
<?rfc symrefs="yes"?> <!-- use symbolic references tags, i.e, [RFC2119] instead of [1] -->
<?rfc sortrefs="yes" ?> <!-- sort the reference entries alphabetically -->
<!-- control vertical white space:
(using these PIs as follows is recommended by the RFC Editor) -->
<?rfc compact="yes" ?> <!-- do not start each main section on a new page -->
<?rfc subcompact="no" ?> <!-- keep one blank line between list items -->
<!-- end of popular PIs -->
<rfc category="std" docName="draft-litkowski-rtgwg-uloop-delay-03" ipr="trust200902">
<front>
<title abbrev="uloop-delay">Microloop prevention by introducing a local convergence delay</title>
<author fullname="Stephane Litkowski" initials="S" surname="Litkowski">
<organization>Orange</organization>
<address>
<!-- postal><street/><city/><region/><code/><country/></postal -->
<!-- <phone/> -->
<!-- <facsimile/> -->
<email>stephane.litkowski@orange.com</email>
<!-- <uri/> -->
</address>
</author>
<author fullname="Bruno Decraene" initials="B" surname="Decraene">
<organization>Orange</organization>
<address>
<!-- postal><street/><city/><region/><code/><country/></postal -->
<!-- <phone/> -->
<!-- <facsimile/> -->
<email>bruno.decraene@orange.com</email>
<!-- <uri/> -->
</address>
</author>
<author fullname="Clarence Filsfils" initials="C" surname="Filsfils">
<organization>Cisco Systems</organization>
<address>
<!-- postal><street/><city/><region/><code/><country/></postal -->
<!-- <phone/> -->
<!-- <facsimile/> -->
<email>cfilsfil@cisco.com</email>
<!-- <uri/> -->
</address>
</author>
<author fullname="Pierre Francois" initials="P" surname="Francois">
<organization>IMDEA Networks</organization>
<address>
<!-- postal><street/><city/><region/><code/><country/></postal -->
<!-- <phone/> -->
<!-- <facsimile/> -->
<email>pierre.francois@imdea.org</email>
<!-- <uri/> -->
</address>
</author>
<date year="2014" />
<area></area>
<workgroup>Routing Area Working Group</workgroup>
<!-- <keyword/> -->
<!-- <keyword/> -->
<!-- <keyword/> -->
<!-- <keyword/> -->
<abstract>
<t>
This document describes a mechanism for link-state routing protocols
to prevent local transient forwarding loops in case of link failure.
This mechanism proposes a two-step convergence by introducing a delay between the convergence of the node adjacent to the topology change and the network-wide convergence.
</t>
<t>
As this mechanism delays the IGP convergence, it may only be used for planned maintenance or when fast reroute protects the traffic between the link failure and the IGP convergence.
</t>
<t>
Simulations using real network topologies have been performed and show that local loops are a significant portion (>50%) of the total forwarding loops.
</t>
</abstract>
<note title="Requirements Language">
<t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in <xref
target="RFC2119"/>.</t>
</note>
</front>
<middle>
<section anchor="intro" title="Introduction">
<t>
Micro-forwarding loops and some potential solutions are well described in <xref target="RFC5715"/>.
This document describes a simple targeted mechanism that solves micro-loops local to the failure; based on network analysis, these are a significant portion of the micro-forwarding loops.
A simple and easily deployable solution to these local micro-loops is critical because these local loops cause traffic loss after an advanced fast-reroute alternate has been used (see <xref target="side-effects-frr"/>).
</t>
<t>
Consider the case in Figure 1 where S does not have an LFA to protect its traffic to D.
That means that all non-D neighbors of S on the topology will send to S any traffic destined to D; if a neighbor did not, then that neighbor would be loop-free. Regardless of the advanced fast-reroute technique used, when S converges to the new topology, it will send its traffic to a neighbor that was not loop-free and thus cause a local micro-loop.
The deployment of advanced fast-reroute techniques motivates this simple router-local mechanism to solve this targeted problem. This solution can work with the various techniques described in <xref target="RFC5715"/>.
</t>
<figure>
<artwork>
1
D ------ C
| |
1 | | 5
| |
S ------ B
1
Figure 1
</artwork>
<postamble>
When S-D fails, a transient forwarding loop may appear between S and B if S updates its forwarding entry to D before B.
</postamble>
</figure>
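<t>
The transient loop of Figure 1 can be checked with a small shortest-path computation. The sketch below is an illustration only (the graph model and helper names are assumptions, not part of this specification): before B reconverges, B still forwards traffic for D to S, while S, once converged, forwards it to B.
</t>
<figure>
<artwork>
```python
import heapq

# Links of Figure 1 with their metrics, before the S-D failure.
LINKS = {("D", "C"): 1, ("D", "S"): 1, ("C", "B"): 5, ("S", "B"): 1}
INF = float("inf")

def graph(links):
    # Build a bidirectional adjacency map from the link list.
    g = {}
    for (a, b), m in links.items():
        g.setdefault(a, {})[b] = m
        g.setdefault(b, {})[a] = m
    return g

def next_hop(g, src, dst):
    # Dijkstra from src, tracking the first hop taken out of src.
    best = {src: 0}
    pq = [(0, src, None)]
    while pq:
        d, node, first = heapq.heappop(pq)
        if node == dst:
            return first
        if d > best.get(node, INF):
            continue  # stale queue entry
        for nbr, m in g[node].items():
            nd = d + m
            if best.get(nbr, INF) > nd:
                best[nbr] = nd
                heapq.heappush(pq, (nd, nbr, nbr if first is None else first))
    return None

g_before = graph(LINKS)
g_after = graph({k: v for k, v in LINKS.items() if k != ("D", "S")})
```
</artwork>
</figure>
<t>
With these definitions, next_hop(g_before, "B", "D") is S while next_hop(g_after, "S", "D") is B: if S updates its forwarding entry before B, packets to D bounce on the S-B link until B has converged, at which point next_hop(g_after, "B", "D") becomes C.
</t>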
</section>
<section anchor="side-effects" title="Transient forwarding loops side effects">
<t>Even if they are very limited in duration, transient forwarding loops may cause significant damage to the network.</t>
<section anchor="side-effects-frr" title="Fast reroute inefficiency">
<t>
<figure>
<artwork>
D
1 |
| 1
A ------ B
| | ^
10 | | 5 | T
| | |
E--------C
| 1
1 |
S
Figure 2 - RSVP-TE FRR case
</artwork>
</figure>
In Figure 2, an RSVP-TE tunnel T, provisioned on C and terminating on B, is used to protect against a C-B link failure (IGP shortcut activated on C). The primary path of T is C->B, and FRR is activated on T, providing an FRR bypass or detour using the path C->E->A->B.
On C, the nexthop to D is tunnel T thanks to the IGP shortcut.
When the C-B link fails:
<list style="numbers">
<t>C detects the failure and updates the tunnel path using the preprogrammed FRR path; the traffic path from S to D is: S->E->C->E->A->B->A->D.</t>
<t>In parallel, on router C, both the IGP convergence and the TE tunnel convergence (tunnel path recomputation) occur:
<list style="symbols">
<t>T's path is recomputed: C->E->A->B</t>
<t>The IGP path to D is recomputed: C->E->A->D</t>
</list>
</t>
<t>On C, the tail-end of the TE tunnel (router B) is no longer on the SPT to D, so C stops encapsulating the traffic to D in tunnel T and updates its forwarding entry to D to use nexthop E.</t>
</list>
If C updates its forwarding entry to D before router E does, there will be a transient forwarding loop between C and E until E has converged.
</t>
<figure>
<artwork>
Router C timeline Router E timeline
--- + ---- t0 C-B link fails
LoC | ---- t1 C detects failure
--- + ---- t2 C activates FRR
|
T | ---- t3 C updates local LSA/LSP
R |
A | ---- t4 C floods local LSA/LSP
F |
F | ---- t5 C computes SPF --- t0 E receives LSA/LSP
I |
C | ---- t6 C updates RIB/FIB --- t1 E floods LSA/LSP
|
O | --- t2 E computes SPF
K |
--- + * (t6' C updates FIB for D) --- t3 E updates RIB/FIB
|
LoC | ---- t7 Convergence ended on C
|
|
|
|
|
--- + * (Traffic restored to D) * (t3' E updates FIB for D)
|
| --- t4 Convergence ended on E
|
</artwork>
</figure>
<t>
The issue described here is completely independent of the fast-reroute mechanism involved (TE FRR, LFA/rLFA, MRT ...). Fast-reroute works perfectly but, by definition, ensures protection only until the PLR has converged.
When implementing FRR, a service provider wants to guarantee a very limited loss of connectivity. The previous example shows that the benefit of FRR may be completely lost due to a transient forwarding loop appearing when the PLR has converged.
Delaying the FIB update after the IGP convergence makes it possible to keep the fast-reroute path until the neighbors have converged, preserving customer traffic.
</t>
</section>
<section anchor="side-effects-congestion" title="Network congestion">
<t>
<figure>
<artwork>
1
D ------ C
| |
1 | | 5
| |
A -- S ------ B
/ | 1
F E
</artwork>
</figure>
In the figure above, as presented in <xref target="intro"/>, when link S-D fails, a transient forwarding loop may appear between S and B for destination D.
The traffic on the S-B link will constantly increase due to the looping traffic to D. Depending on the TTL of the packets, the traffic rate destined to D, and the bandwidth of the link, the S-B link may be congested within a few hundred milliseconds and will stay overloaded until the loop is solved.
</t>
<t>
The congestion introduced by transient forwarding loops is problematic as it impacts traffic that is not directly concerned by the failing network component.
In our example, the congestion of the S-B link will impact customer traffic that is not directly concerned by the failure: e.g. A to B, F to B, E to B.
Classes of service may be implemented to mitigate the congestion, but some traffic not directly concerned by the failure would still be dropped, as a router is not able to distinguish looped traffic from normal traffic.
</t>
</section>
</section>
<section anchor="overview" title="Overview of the solution">
<t>
This document defines a two-step convergence initiated by the router
detecting the failure and advertising the topological changes in the
IGP. This introduces a delay between the convergence of the local
router and the network wide convergence. This delay is
positive in case of "down" events and negative in case of "up"
events.
</t>
<t>
This ordered convergence is similar to the ordered FIB approach
defined in <xref target="RFC6976"/>, but limited to a one-hop
distance. As a consequence, it is simpler and becomes a local-only feature not requiring interoperability, at the cost of only covering the transient forwarding loops involving this local router. The proposed mechanism also reuses some concepts described
in <xref target="I-D.ietf-rtgwg-microloop-analysis"/>, with some limitations.
</t>
</section>
<section anchor="specification" title="Specification">
<section anchor="definition" title="Definitions">
<t>
This document will refer to the following existing IGP timers:
<list style="symbols">
<t>LSP_GEN_TIMER: used to batch multiple local events into one single local LSP update. It is often associated with a damping mechanism that slows down reactions by incrementing the timer when multiple consecutive events are detected.</t>
<t>SPF_TIMER: used to batch multiple events into one single computation. It is often associated with a damping mechanism that slows down reactions by incrementing the timer when the IGP is unstable.</t>
<t>IGP_LDP_SYNC_TIMER: defined in <xref target="RFC5443"/> to give LDP some time to establish the session and learn the MPLS labels before the link is used.</t>
</list>
</t>
<t>
This document introduces the following two new timers:
<list style="symbols">
<t>ULOOP_DELAY_DOWN_TIMER: slows down the local node convergence in case of link down events.</t>
<t>ULOOP_DELAY_UP_TIMER: slows down the network-wide IGP convergence in case of link up events.</t>
</list>
</t>
</section>
<section anchor="description-current" title="Current IGP reactions">
<t>
Upon a change of status on an adjacency/link, the existing behavior of the router advertising the event is the following:
<list style="numbers">
<t>UP/Down event is notified to IGP.</t>
<t>IGP processes the notification and postpones the reaction in LSP_GEN_TIMER msec.</t>
<t>Upon LSP_GEN_TIMER expiration, IGP updates its LSP/LSA and floods it.</t>
<t>SPF is scheduled in SPF_TIMER msec.</t>
<t>Upon SPF_TIMER expiration, SPF is computed and RIB/FIB are updated.</t>
</list>
</t>
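<t>
The five steps above can be sketched as an event-driven sequence. The following fragment is a simplified illustration; the class, method names, and timer values are hypothetical, not part of any implementation:
</t>
<figure>
<artwork>
```python
# Illustrative sketch of the standard IGP reaction to a link event.
LSP_GEN_TIMER = 0.05  # seconds; batches local events into one LSP/LSA update
SPF_TIMER = 0.10      # seconds; batches events into one SPF computation

class IgpNode:
    def __init__(self):
        self.log = []

    def on_link_event(self, event):
        # Steps 1-2: the event is notified to the IGP, which postpones
        # its reaction by LSP_GEN_TIMER (modeled synchronously here).
        self.log.append("schedule_lsp_update")
        self.on_lsp_gen_timer_expiry()

    def on_lsp_gen_timer_expiry(self):
        # Step 3: update the local LSP/LSA and flood it.
        self.log.append("update_and_flood_lsp")
        # Step 4: schedule SPF in SPF_TIMER msec.
        self.log.append("schedule_spf")
        self.on_spf_timer_expiry()

    def on_spf_timer_expiry(self):
        # Step 5: compute SPF and update RIB/FIB.
        self.log.append("compute_spf_update_fib")

node = IgpNode()
node.on_link_event("link-down")
```
</artwork>
</figure>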
</section>
<section anchor="description-local-events" title="Local events">
<t>
The mechanisms described in this document assume that there has been a single failure as seen by the IGP area/level.
If this assumption is violated (e.g. multiple links or nodes failed), then standard IP convergence MUST be applied.
There are three types of single failures: local link, local node, and remote failure. </t>
<t>Example:
<figure>
<artwork>
+--- E ----+--------+
| | |
A ---- B -------- C ------ D
</artwork>
</figure>
</t>
<t>
Let B be the computing router when the link B-C fails. B updates its local LSP/LSA describing the link B->C as down, C does the same, and both start flooding their updated LSP/LSAs.
During the SPF_TIMER period, B and C learn all the LSPs/LSAs to consider.
B sees that C is flooding as down a link where B is the other end and that B and C are describing the same single event.
Since B receives no other changes, B can determine that this is a local link failure.
</t>
<t>
[Editor's Note: Detection of a failed broadcast link involves additional complexity and will be described in a future version.]
</t>
<t>
If a router determines that the event is a local link failure, then it may use the mechanism described in this document.
</t>
<t>
Distinguishing a local node failure from a remote failure or multiple link failures requires additional logic; fully describing it is future work.
To give a sense of the work necessary: if node C fails, routers B, E, and D update and flood updated LSPs/LSAs.
B would need to determine the changes in the LSPs/LSAs from E and D and see that they all relate to node C, which is also the far-end of the locally failed link.
Once this detection is accurately done, the same mechanism of delaying local convergence can be applied.
</t>
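<t>
The local link failure determination described above can be sketched as follows. The data model (each updated LSP/LSA reduced to a pair of originating router and reported down link) is an assumption for illustration only:
</t>
<figure>
<artwork>
```python
# Illustrative sketch: decide whether a set of updated LSPs/LSAs
# describes a single local link failure, as seen by router 'me'.
# Each update is modeled as (originating_router, (end_a, end_b)),
# i.e. the link that the originator now reports as down.

def is_local_link_failure(me, updates):
    # A single link failure is reported by exactly the two ends of
    # the link, both describing the same single event.
    if len(updates) != 2:
        return False
    links = {tuple(sorted(link)) for _, link in updates}
    if len(links) != 1:
        return False  # the updates describe different events
    link = links.pop()
    reporters = {router for router, _ in updates}
    # Both ends of the link must be the reporters, and 'me' is one end.
    return reporters == set(link) and me in link
```
</artwork>
</figure>
<t>
For the example topology, B sees its own update and C's update for link B-C, so the check holds on B; any additional or unrelated change makes the check fail and standard convergence applies.
</t>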
</section>
<section anchor="description-new" title="Local delay">
<section anchor="description-updown" title="Link down event">
<t>
Upon an adjacency/link down event, this document introduces a change
in step 5 in order to delay the local convergence compared to the
network-wide convergence: the node SHOULD delay its forwarding entry
updates by ULOOP_DELAY_DOWN_TIMER. Such a delay SHOULD only be
introduced if all the LSDB modifications processed report only local
down events. Note that determining that all topological changes are
only local down events requires analyzing all modified LSPs/LSAs, as
a local link or node failure will typically be notified by multiple
nodes. If a subsequent LSP/LSA is received/updated and a new SPF
computation is triggered before the expiration of
ULOOP_DELAY_DOWN_TIMER, then the same evaluation SHOULD be performed.
</t>
<t> As a result of this addition, routers local to the failure will
converge more slowly than remote routers. Hence it SHOULD only be done for non-urgent convergence, such as administrative de-activation (maintenance) or when the traffic is fast rerouted.
</t>
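<t>
The modified step 5 can be sketched as follows; the helper names and timer value are illustrative assumptions:
</t>
<figure>
<artwork>
```python
# Illustrative sketch of the modified step 5 for link down events:
# delay the FIB update by ULOOP_DELAY_DOWN_TIMER when every LSDB
# modification processed in this SPF run reports only local down events.

ULOOP_DELAY_DOWN_TIMER = 1.0  # seconds, illustrative value

def fib_update_delay(modified_lsas, is_local_down_event):
    # is_local_down_event must be checked against each modified
    # LSP/LSA, since a local link or node failure is typically
    # reported by several nodes.
    if modified_lsas and all(is_local_down_event(l) for l in modified_lsas):
        return ULOOP_DELAY_DOWN_TIMER  # slow down local convergence
    return 0.0  # otherwise, standard convergence applies
```
</artwork>
</figure>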
</section>
<section anchor="description-downup" title="Link up event">
<t> Upon an adjacency/link up event, this document introduces the
following change in step 3 where the node SHOULD:
<list style="symbols">
<t> First, build an LSP/LSA with the new adjacency, setting the
metric to MAX_METRIC. The node SHOULD flood it but not compute the SPF
at this time. This step is required to ensure the two-way connectivity check succeeds on all nodes when computing SPF.
</t>
<t> Then build the LSP/LSA with the target metric, delaying the
flooding of this LSP/LSA by SPF_TIMER + ULOOP_DELAY_UP_TIMER.
MAX_METRIC is equal to MaxLinkMetric (0xFFFF) for OSPF and 2^24-2
(0xFFFFFE) for IS-IS.
</t>
<t> Then continue with the next steps (SPF computation) without waiting
for the expiration of the above timer. In other words, only the
flooding of the LSA/LSP is delayed, not the local SPF computation.
</t>
</list>
</t>
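<t>
The two-phase advertisement above can be sketched as follows. The MAX_METRIC values are those stated in this section; the function and field names are illustrative assumptions:
</t>
<figure>
<artwork>
```python
# Illustrative sketch of the modified step 3 for link up events:
# advertise the new adjacency twice, first at MAX_METRIC, then at
# the target metric after SPF_TIMER + ULOOP_DELAY_UP_TIMER.

MAX_METRIC_OSPF = 0xFFFF     # MaxLinkMetric for OSPF
MAX_METRIC_ISIS = 2**24 - 2  # 0xFFFFFE for IS-IS

SPF_TIMER = 0.1             # seconds, illustrative value
ULOOP_DELAY_UP_TIMER = 1.0  # seconds, illustrative value

def link_up_advertisements(target_metric, protocol="isis"):
    max_metric = MAX_METRIC_ISIS if protocol == "isis" else MAX_METRIC_OSPF
    return [
        # Phase 1: flood immediately at MAX_METRIC so that every node
        # passes the two-way connectivity check when computing SPF.
        {"metric": max_metric, "flood_delay": 0.0},
        # Phase 2: flood the target metric later; only the flooding is
        # delayed, the local SPF computation proceeds without waiting.
        {"metric": target_metric,
         "flood_delay": SPF_TIMER + ULOOP_DELAY_UP_TIMER},
    ]
```
</artwork>
</figure>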
<t> As a result of this addition, routers local to the failure will
converge faster than remote routers.
</t>
<t>
If this mechanism is used in cooperation with "LDP IGP
Synchronization" as defined in <xref target="RFC5443"/> then the mechanism defined
in RFC 5443 is applied first, followed by the mechanism defined in
this document. More precisely, the procedure defined in this
document is applied once the LDP session is considered "fully
operational" as per <xref target="RFC5443"/>.
</t>
</section>
</section>
</section>
<section anchor="use-case" title="Applicability">
<t>As previously stated, the mechanism only avoids the forwarding loops on the links between the node local to the failure and its neighbor. Forwarding loops may still occur on other links.</t>
<section anchor="use-case-working" title="Applicable case: local loops">
<t>
<figure>
<artwork>
A ------ B ----- E
| / |
| / |
G---D------------C F All the links have a metric of 1
Figure 3
</artwork>
</figure>
</t>
<t>
Let us consider the traffic from G to F. The primary path is
G->D->C->E->F. When link CE fails, if C updates its forwarding entry
for F before D, a transient loop occurs.
This is sub-optimal: C has FRR enabled, and converging first breaks the FRR forwarding while all upstream routers are still forwarding the traffic to C.
</t>
<t>
By implementing the mechanism defined in this document on C, when the
CE link fails, C delays the update of its forwarding entry to F in
order to give D time to converge. FRR keeps protecting the traffic during this period. When the timer expires on
C, the forwarding entry to F is updated. There is no transient
forwarding loop on the link CD.
</t>
</section>
<section anchor="use-case-nonworking" title="Non-applicable case: remote loops">
<t>
<figure>
<artwork>
A ------ B ----- E --- H
| |
| |
G---D--------C ------F --- J ---- K
All the links have a metric of 1 except BE=15
Figure 4
</artwork>
</figure>
</t>
<t>
Let us consider the traffic from G to K. The primary path is
G->D->C->F->J->K. When the CF link fails, if C updates its forwarding
entry to K before D, a transient loop occurs between C and D.</t>
<t>
By implementing the mechanism defined in this document on C, when the
link CF fails, C delays the update of its forwarding entry to K,
giving D time to converge. When the timer expires on C, the
forwarding entry to K is updated. There is no transient forwarding
loop between C and D. However, a transient forwarding loop may still
occur between D and A. In this scenario, this mechanism is not enough
to address all the possible forwarding loops. However, it does not
create additional traffic loss. Besides, in some cases -such as when
the nodes update their FIB in the order C, A, D,
because router A is quicker than D to converge- the
mechanism may still avoid a forwarding loop that would otherwise have occurred.
</t>
</section>
</section>
<section anchor="applicability" title="Simulations">
<t>
Simulations have been run on multiple service provider topologies. So far, only link down events have been tested.
</t>
<texttable anchor="simu-result-topology" title="Number of Repair/Dst that may loop">
<ttcol align="center">Topology</ttcol>
<ttcol align="center">Gain</ttcol>
<c>T1</c><c>71%</c>
<c>T2</c><c>81%</c>
<c>T3</c><c>62%</c>
<c>T4</c><c>50%</c>
<c>T5</c><c>70%</c>
<c>T6</c><c>70%</c>
<c>T7</c><c>59%</c>
<c>T8</c><c>77%</c>
</texttable>
<t>
We evaluated the efficiency of the mechanism on eight different
service provider topologies (different network sizes and designs). The
benefit is displayed in the table above. The benefit is evaluated as
follows:
<list style="symbols">
<t>We consider a tuple (link A-B, destination D, PLR S, backup nexthop N) as a loop if, upon link A-B failure, the flow from a router S upstream of A (A could also be the PLR) to D may loop due to the convergence time difference between S and one of its neighbors N.</t>
<t>We evaluate the number of potential loop tuples in normal conditions.</t>
<t>We evaluate the number of potential loop tuples using the same topological input but taking into account that S converges after N.</t>
<t>The gain is the percentage of loops (remote and local) that we succeed in suppressing.</t>
</list>
On topology T1, 71% of the transient forwarding loops created by the failure of any link are prevented by implementing the local delay. The analysis shows that all local loops are solved and only remote loops remain.
</t>
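<t>
The gain figures in the table are computed as the fraction of potential loop tuples suppressed by the mechanism, which can be sketched as follows (the absolute counts in the example are illustrative, not simulation data):
</t>
<figure>
<artwork>
```python
# Illustrative sketch of the gain computation: count potential loop
# tuples (link, destination, PLR, backup nexthop) with and without
# the local delay, and report the fraction suppressed.

def gain(loops_without_delay, loops_with_delay):
    if loops_without_delay == 0:
        return 0.0
    suppressed = loops_without_delay - loops_with_delay
    return 100.0 * suppressed / loops_without_delay

# e.g. if 100 tuples may loop in normal conditions and 29 remain when
# S converges after N, the gain is 71%, as for topology T1.
```
</artwork>
</figure>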
</section>
<section anchor="Deployment" title="Deployment considerations">
<t>
Transient forwarding loops have the following drawbacks:
<list style="symbols">
<t>They limit FRR efficiency: even if FRR is activated within 50 msec, as soon as the PLR has converged, traffic may be affected by a transient loop.</t>
<t>They may impact traffic not directly concerned by the failure (due to link congestion).</t>
</list>
This local delay proposal is a transient forwarding loop avoidance
mechanism (like OFIB). Even if it only addresses local transient loops, its efficiency-versus-complexity tradeoff
makes it a good solution. It is also incrementally deployable with incremental benefits, which makes it an attractive option both for vendors to implement and for Service Providers to deploy.
Delaying convergence time is not an issue if we consider that the
traffic is protected during the convergence.
</t>
</section>
<section anchor="comparison" title="Comparison with other solutions">
<t>
As stated in <xref target="overview"/>, our solution reuses some concepts already introduced by other IETF proposals but tries to find a tradeoff between efficiency and simplicity.
This section compares the behaviors of these solutions.
</t>
<section anchor="plsn" title="PLSN">
<t>
PLSN (<xref target="I-D.ietf-rtgwg-microloop-analysis"/>) describes a mechanism where each node in the network tries to avoid transient forwarding loops upon a topology change by always keeping traffic on a loop-free path for a defined duration (locked path to a safe neighbor).
The locked path may be the new primary nexthop, another neighbor, or the old primary nexthop, depending on how the safety condition is satisfied.
</t>
<t>PLSN does not solve all transient forwarding loops (see <xref target="I-D.ietf-rtgwg-microloop-analysis"/> Section 4 for more details).</t>
<t>Our solution reuses some concepts of PLSN but in a simpler fashion:
<list style="symbols">
<t>PLSN has three different behaviors: keep using the old nexthop, use the new primary nexthop if it is safe, or use another safe nexthop; our solution has only one: keep using the current nexthop (old primary, or already activated FRR path).</t>
<t>PLSN may cause some damage when using a safe nexthop which is not the new primary nexthop, in case the new safe nexthop does not provide enough bandwidth (see <xref target="I-D.ietf-rtgwg-lfa-manageability"/>). Our solution may avoid this issue, as the service provider may control the FRR path being used, preventing network congestion.</t>
<t>PLSN applies to all nodes in a network (remote or local changes), while our mechanism applies only on the nodes connected to the topology change.</t>
</list>
</t>
</section>
<section anchor="ofib" title="OFIB">
<t>
OFIB (<xref target="RFC6976"/>) describes a mechanism where the convergence of the network upon a topology change is ordered to prevent transient forwarding loops.
Each router in the network must deduce the failure type from the LSAs/LSPs received and compute/apply a specific FIB update timer based on the failure type and its rank in the network, considering the failure point as the root.
</t>
<t>
This mechanism permits solving all the transient forwarding loops in a network, at the price of introducing complexity in the convergence process that may require strong monitoring by the service provider.
</t>
<t>
Our solution reuses the OFIB concept but limits it to the first hop that experiences the topology change. As demonstrated, our proposal solves all the local transient forwarding loops, which represent a high percentage of all the loops.
Moreover, limiting the mechanism to one hop keeps the network-wide convergence behavior.
</t>
</section>
</section>
<section anchor="Security" title="Security Considerations">
<t>
This document does not introduce changes in terms of IGP security. The
operation is internal to the router. The local delay does not
increase the attack surface, as an attacker could only trigger this
mechanism if he already has the ability to disable or enable an IGP
link. The local delay does not increase the negative consequences:
if an attacker has the ability to disable or enable an IGP link, it
can already harm the network by creating instability and harm the
traffic by creating packet loss and forwarding loops for
the traffic crossing that link.
</t>
</section>
<section anchor="Acknowledgements" title="Acknowledgements">
<t>
We wish to thank the authors of <xref target="RFC6976"/> for introducing the concept of ordered convergence: Mike Shand, Stewart Bryant, Stefano Previdi, and Olivier Bonaventure.
</t>
</section>
<section anchor="IANA" title="IANA Considerations">
<t>
This document has no actions for IANA.
</t>
</section>
</middle>
<back>
<references title="Normative References">
&RFC2119;
&RFC5715;
&RFC5443;
</references>
<references title="Informative References">
&OFIB;
&REMOTE-LFA;
&RFC6571;
&RFC3630;
&PLSN;
&LFA-MANAGEABILITY;
</references>
</back>
</rfc>