<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
<?rfc toc="yes"?>
<?rfc tocompact="yes"?>
<?rfc tocdepth="3"?>
<?rfc tocindent="yes"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="no"?>
<?rfc strict='yes'?>
<?rfc iprnotified='no'?>
<rfc category="exp" docName="draft-templin-intarea-vet-36"
ipr="trust200902" obsoletes="5558">
<front>
<title abbrev="VET">Virtual Enterprise Traversal (VET)</title>
<author fullname="Fred L. Templin" initials="F." role="editor"
surname="Templin">
<organization>Boeing Research &amp; Technology</organization>
<address>
<postal>
<street>P.O. Box 3707 MC 7L-49</street>
<city>Seattle</city>
<region>WA</region>
<code>98124</code>
<country>USA</country>
</postal>
<email>fltemplin@acm.org</email>
</address>
</author>
<date day="25" month="February" year="2013"/>
<keyword>I-D</keyword>
<keyword>Internet-Draft</keyword>
<abstract>
<t>Enterprise networks connect hosts and routers over various link
types, and often also connect to the global Internet either directly or
via a provider network. Enterprise network nodes require a means to
automatically provision addresses/prefixes and support internetworking
operation in a wide variety of use cases including Small Office / Home
Office (SOHO) networks, Mobile Ad hoc Networks (MANETs), ISP networks,
multi-organizational corporate networks and the interdomain core of the
global Internet itself. This document specifies a Virtual Enterprise
Traversal (VET) abstraction for autoconfiguration and operation of nodes
in dynamic enterprise networks.</t>
</abstract>
</front>
<middle>
<section anchor="intro" title="Introduction">
<t>Enterprise networks <xref target="RFC4852"/> connect hosts and
routers over various link types (see <xref target="RFC4861"/>, Section
2.2). The term "enterprise network" in this context extends to a wide
variety of use cases and deployment scenarios. For example, an
"enterprise" can be as small as a Small Office / Home Office (SOHO)
network, as complex as a multi-organizational corporation, or as large
as the global Internet itself. Internet Service Provider (ISP) networks
are another example use case that fits well with the VET enterprise
network model. Mobile Ad hoc Networks (MANETs) <xref target="RFC2501"/>
can also be considered a particularly challenging example of an
enterprise network, in that their topologies may change dynamically
over time and they may employ little or no active management by a
centralized network administrative authority. These specialized
characteristics of MANETs require careful consideration, but the same
principles apply equally to other enterprise network scenarios.</t>
<t>In many cases, enterprise networks must present a stable
manifestation to the outside world (e.g., the Internet Default Free
Zone) while their internal topologies may be changing dynamically. This
is often the case when portions of the enterprise network are mobile,
partitioned for security purposes, employ different IP protocol
versions, etc. and is most often addressed through encapsulation (also
known as tunneling). This document therefore focuses on provisions for
accommodating dynamic enterprise networks while presenting an outward
appearance of stability and uniformity.</t>
<t>This document specifies a Virtual Enterprise Traversal (VET)
abstraction for autoconfiguration and internetworking operation, where
addresses of different scopes may be assigned on various types of
interfaces with diverse properties. Both IPv4 <xref
target="RFC0791"/><xref target="RFC0792"/> and IPv6 <xref
target="RFC2460"/><xref target="RFC4443"/> are discussed within this
context (other network layer protocols are also considered). The use of
standard DHCP <xref target="RFC2131"/> <xref target="RFC3315"/> is
assumed unless otherwise specified.</t>
<t><figure anchor="era" title="Enterprise Router (ER) Architecture">
<artwork><![CDATA[ Provider-Edge Interfaces
x x x
| | |
+--------------------+---+--------+----------+ E
| | | | | n
| I | | .... | | t
| n +---+---+--------+---+ | e
| t | +--------+ /| | r
| e I x----+ | Host | I /*+------+--< p I
| r n | |Function| n|**| | r n
| n t | +--------+ t|**| | i t
| a e x----+ V e|**+------+--< s e
| l r . | E r|**| . | e r
| f . | T f|**| . | f
| V a . | +--------+ a|**| . | I a
| i c . | | Router | c|**| . | n c
| r e x----+ |Function| e \*+------+--< t e
| t s | +--------+ \| | e s
| u +---+---+--------+---+ | r
| a | | .... | | i
| l | | | | o
+--------------------+---+--------+----------+ r
| | |
x x x
Enterprise-Edge Interfaces]]></artwork>
</figure></t>
<t><xref target="era"/> above depicts the architectural model for an
Enterprise Router (ER). As shown in the figure, an ER may have a variety
of interface types including enterprise-edge, enterprise-interior,
provider-edge, internal-virtual, as well as VET interfaces used for
encapsulating inner network layer protocol packets for transmission over
an underlying IPv4 or IPv6 network. The different types of interfaces
are defined, and the autoconfiguration mechanisms used for each type are
specified. This architecture applies equally to MANET routers, in which
enterprise-interior interfaces typically correspond to the wireless
multihop radio interfaces associated with MANETs. The autoconfiguration
of provider-edge interfaces is out of scope for this document, since it
must be coordinated in a manner specific to the service provider's
network.</t>
<t>The VET framework builds on a Non-Broadcast Multiple Access (NBMA)
<xref target="RFC2491"/> virtual interface model in a manner similar to
other automatic tunneling technologies <xref target="RFC2529"/><xref
target="RFC5214"/>. VET interfaces support the encapsulation of inner
network layer protocol packets over IP networks (i.e., either IPv4 or
IPv6), and provide an NBMA interface abstraction for coordination
between tunnel endpoint "neighbors".</t>
<t>VET and its associated technologies (including the Subnetwork
Encapsulation and Adaptation Layer (SEAL) <xref
target="I-D.templin-intarea-seal"/> and Asymmetric Extended Route
Optimization (AERO) <xref target="RFC6706"/>) are functional building
blocks for a new architecture known as the Interior Routing Overlay
Network (IRON) <xref target="I-D.templin-ironbis"/>. Many of the VET
principles can be traced to the deliberations of the ROAD group in
January 1992, and also to still earlier initiatives including the
Catenet model for internetworking <xref target="CATENET"/> <xref
target="IEN48"/> <xref target="RFC2775"/> and NIMROD <xref
target="RFC1753"/>. The high-level architectural aspects of the ROAD
group deliberations are captured in a "New Scheme for Internet Routing
and Addressing (ENCAPS) for IPNG" <xref target="RFC1955"/>.</t>
<t>VET is related to the present-day activities of the IETF INTAREA,
AUTOCONF, DHC, IPv6, MANET, RENUM and V6OPS working groups, as well as
the IRTF Routing Research Group (RRG).</t>
</section>
<section title="Differences from RFC 5558">
<t>This document is based on <xref target="RFC5558"/> but makes
significant changes relative to that earlier work. The most important
difference is that this document breaks the linkage between VET and
earlier NBMA tunneling mechanisms such as 6over4 and ISATAP; the
document therefore no longer has backwards-compatible dependencies on
those technologies.</t>
<t>The terminology section adds several new terms and renames and/or
clarifies existing ones. Important new terms include "Client Prefix
(CP)" and "VET link", while other terms, including VET Border Router
and VET Border Gateway, have been renamed for greater clarity. RFC 2119
terminology has also been added.</t>
<t>"Enterprise Network Characteristics" now also considers cases in
which an enterprise network may contain many internal partitions, which
is an area that was left underspecified in RFC5558. These partitions may
be necessary for such uses as load balancing, organizational separation,
etc. The section now also discusses both unidirectional and
bidirectional neighbor relationships.</t>
<t>The "Enterprise Router (ER) Autoconfiguration" section now provides a
discussion on DHCP relaying considerations, including replay detection.
These considerations are important for instances in which DHCP relaying
may be excessive (e.g., Mobile Ad-Hoc Networks (MANETs)).</t>
<t>The "VET Border Router Autoconfiguration" section now draws a
distinction between what is meant by "VET link" and "VET interface", and
explains the cases in which link local addresses can and cannot be used.
Provider Aggregated (PA) prefix autoconfiguration now also discusses
both stateful and stateless autoconfiguration. The subsection on
"ISP-Independent EID Prefix Autoconfiguration" now also introduces the
capability of registering Client Prefixes (CPs) with Virtual Service
Providers (VSPs).</t>
<t>The "VET Border Gateway (VBG) Autoconfiguration" section now explains
the manner in which VBGs can act as "half gateways" in the IRON
Client/Server/Relay architecture. The "VET Host Autoconfiguration"
section now explains cases in which prefixes may be provided to hosts,
i.e., if there is assurance that the link will not partition.</t>
<t>Under "Internetworking Operation", "Routing Protocol Participation"
now discusses the case of receiving on-demand redirection messages as a
form of routing. The section further discusses PI prefix and CP prefix
routing considerations. "Default Route Configuration", "Address
Selection" and "Next-Hop Determination" are newly rewritten sections
that completely replace significant portions of this major section. "VET
Interface Encapsulation/Decapsulation" now gives important details on
encapsulation procedures and header formats that were not present in
RFC5558. The new section on "Neighbor Coordination" (including
discussions of unidirectional and bidirectional neighbor relationships
as well as redirection) is also key to understanding the new operational
model. The remaining sections of "Internetworking Operation" have
received rewrites of varying degrees, with most of the specification
intact from RFC5558. Finally, the document adds a new appendix on
Anycast Services.</t>
</section>
<section anchor="terminology" title="Terminology">
<t>The mechanisms within this document build upon the fundamental
principles of IP encapsulation. The term "inner" refers to the innermost
{address, protocol, header, packet, etc.} *before* encapsulation, and
the term "outer" refers to the outermost {address, protocol, header,
packet, etc.} *after* encapsulation. VET also accommodates "mid-layer"
encapsulations such as SEAL <xref target="I-D.templin-intarea-seal"/>
and IPsec <xref target="RFC4301"/>.</t>
<t>The terminology in the normative references applies; the following
terms are defined within the scope of this document:</t>
<t><list style="hanging">
<t hangText="Virtual Enterprise Traversal (VET)"><vspace/>an
abstraction that uses encapsulation to create virtual overlays for
transporting inner network layer packets over outer IPv4 and IPv6
enterprise networks.</t>
<t hangText="enterprise network"><vspace/>the same as defined in
<xref target="RFC4852"/>. An enterprise network is further
understood to refer to a cooperative networked collective of devices
within a structured IP routing and addressing plan and with a
commonality of business, social, political, etc., interests.
Minimally, the only commonality of interest in some enterprise
network scenarios may be the cooperative provisioning of
connectivity itself.</t>
<t hangText="subnetwork"><vspace/>the same as defined in <xref
target="RFC3819"/>.</t>
<t hangText="site"><vspace/>a logical and/or physical grouping of
interfaces that connect a topological area less than or equal to an
enterprise network in scope. From a network organizational
standpoint, a site within an enterprise network can be considered as
an enterprise network unto itself.</t>
<t hangText="Mobile Ad hoc Network (MANET)"><vspace/>a connected
topology of mobile or fixed routers that maintain a routing
structure among themselves over links that often have dynamic
connectivity properties. The characteristics of MANETs are described
in <xref target="RFC2501"/>, Section 3, and a wide variety of MANETs
share common properties with enterprise networks.</t>
<t hangText="enterprise/site/MANET"><vspace/>throughout the
remainder of this document, the term "enterprise network" is used to
collectively refer to any of {enterprise, site, MANET}, i.e., the
VET mechanisms and operational principles can be applied to
enterprises, sites, and MANETs of any size or shape.</t>
<t hangText="VET link"><vspace/>a virtual link that uses automatic
tunneling to create an overlay network that spans an enterprise
network routing region. VET links can be segmented (e.g., by
filtering gateways) into multiple distinct segments that can be
joined together by bridges or IP routers the same as for any link.
Bridging would view the multiple (bridged) segments as a single VET
link, whereas IP routing would view the multiple segments as
multiple distinct VET links. VET links can further be partitioned
into multiple logical areas, where each area is identified by a
distinct set of border nodes.</t>
<t>VET links configured over non-multicast enterprise networks
support only Non-Broadcast, Multiple Access (NBMA) services; VET
links configured over enterprise networks that support multicast can
support both unicast and native multicast services. All nodes
connected to the same VET link appear as neighbors from the
standpoint of the inner network layer.</t>
<t hangText="Enterprise Router (ER)"><vspace/>As depicted in <xref
target="era"/>, an Enterprise Router (ER) is a fixed or mobile
router that comprises a router function, a host function, one or
more enterprise-interior interfaces, and zero or more internal
virtual, enterprise-edge, provider-edge, and VET interfaces. At a
minimum, an ER forwards outer IP packets over one or more sets of
enterprise-interior interfaces, where each set connects to a
distinct enterprise network.</t>
<t hangText="VET Border Router (VBR)"><vspace/>an ER that connects
end user networks (EUNs) to VET links and/or connects multiple VET
links together. A VBR is a tunnel endpoint router, and it configures
a separate VET interface for each distinct VET link. All VBRs are
also ERs.</t>
<t hangText="VET Border Gateway (VBG)"><vspace/>a VBR that connects
VET links to provider networks. A VBG may alternately act as a
"half-gateway", and forward the packets it receives from neighbors
on the VET link to another VBG on the same VET link. All VBGs are
also VBRs.</t>
<t hangText="VET host"><vspace/>any node (host or router) that configures a
VET interface for host-operation only. Note that a node may
configure some of its VET interfaces as host interfaces and others
as router interfaces.</t>
<t hangText="VET node"><vspace/>any node (host or router) that
configures and uses a VET interface.</t>
<t hangText="enterprise-interior interface"><vspace/>an ER's
attachment to a link within an enterprise network. Packets sent over
enterprise-interior interfaces may be forwarded over multiple
additional enterprise-interior interfaces before they reach either
their final destination or a border router/gateway.
Enterprise-interior interfaces connect laterally within the IP
network hierarchy.</t>
<t hangText="enterprise-edge interface"><vspace/>a VBR's attachment
to a link (e.g., an Ethernet, a wireless personal area network,
etc.) on an arbitrarily complex EUN that the VBR connects to a VET
link and/or a provider network. Enterprise-edge interfaces connect
to lower levels within the IP network hierarchy.</t>
<t hangText="provider-edge interface"><vspace/>a VBR's attachment to
the Internet or to a provider network via which the Internet can be
reached. Provider-edge interfaces connect to higher levels within
the IP network hierarchy.</t>
<t hangText="internal-virtual interface"><vspace/>an interface that
is internal to a VET node and does not in itself directly attach to
a tangible link, e.g., a loopback interface, a tunnel virtual
interface, etc.</t>
<t hangText="VET interface"><vspace/>a VET node's attachment to a
VET link. VET nodes configure each VET interface over a set of
underlying enterprise-interior interfaces that connect to a routing
region spanned by a single VET link. When there are multiple
distinct VET links (each with their own distinct set of underlying
interfaces), the VET node configures a separate VET interface for
each link.</t>
<t>The VET interface encapsulates each inner packet in any mid-layer
headers followed by an outer IP header, then forwards the packet on
an underlying interface such that the Time to Live (TTL)/Hop Limit
in the inner header is not decremented as the packet traverses the
link. The VET interface therefore presents an automatic tunneling
abstraction that represents the VET link as a single hop to the
inner network layer.</t>
<t hangText="Provider Aggregated (PA) prefix"><vspace/>a network
layer protocol prefix that is delegated to a VET node by a provider
network.</t>
<t hangText="Provider Independent (PI) prefix"><vspace/>a network
layer protocol prefix that is delegated to a VET node by an
independent registration authority. The VET node then becomes solely
responsible for representing the PI prefix into the global Internet
routing system on its own behalf.</t>
<t hangText="Client Prefix (CP)"><vspace/>a network layer protocol
prefix that is delegated to a VET node by a Virtual Service Provider
(VSP) that may operate independently of the node's provider
networks. The term "Client Prefix (CP)" is the same as used in IRON
<xref target="I-D.templin-ironbis"/>.</t>
<t hangText="Routing Locator (RLOC)"><vspace/>a public-scope or
enterprise-local-scope IP address that can be reached via the
enterprise-interior and/or interdomain routing systems. Public-scope
RLOCs are delegated to specific enterprise networks and routable
within both the enterprise-interior and interdomain routing regions.
Enterprise-local-scope RLOCs (e.g., IPv6 Unique Local Addresses
<xref target="RFC4193"/>, IPv4 private addresses <xref
target="RFC1918"/>, etc.) are self-generated by individual
enterprise networks and routable only within the enterprise-interior
routing region.</t>
<t>ERs use RLOCs for operating the enterprise-interior routing
protocol and for next-hop determination in forwarding packets
addressed to other RLOCs. End systems can use RLOCs as addresses for
end-to-end communications between peers within the same enterprise
network. VET interfaces treat RLOCs as *outer* IP addresses during
encapsulation.</t>
<t hangText="Endpoint Interface iDentifier (EID)"><vspace/>a
public-scope network layer address that is routable within
enterprise-edge and/or VET overlay networks. In a pure mapping
system, EID prefixes are not routable within the interdomain routing
system. In a hybrid routing/mapping system, EID prefixes may be
represented within the same interdomain routing instances that
distribute RLOC prefixes. In either case, EID prefixes are separate
and distinct from any RLOC prefix space, but they are mapped to RLOC
addresses to support packet forwarding over VET interfaces.</t>
<t>VBRs participate in any EID-based routing instances and use EID
addresses for next-hop determination. End systems can use EIDs as
addresses for end-to-end communications between peers either within
the same enterprise network or within different enterprise networks.
VET interfaces treat EIDs as *inner* network layer addresses during
encapsulation.</t>
<t>Note that an EID can also be used as an *outer* network layer
address if there are nested encapsulations. In that case, the EID
would appear as an RLOC to the innermost encapsulation.</t>
</list></t>
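<t>As a non-normative illustration of the inner/outer relationship
defined above, the following Python sketch models VET encapsulation
with EIDs as inner addresses and RLOCs as outer addresses. The Header
and Packet types and all field names are invented for this sketch; it
shows only the addressing roles, not any wire format:</t>
<figure><artwork><![CDATA[
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Header:
    src: str          # EID in an inner header, RLOC in an outer header
    dst: str
    hop_limit: int = 64

@dataclass
class Packet:
    header: Header
    inner: Optional["Packet"] = None  # present after encapsulation

def vet_encapsulate(inner: Packet, src_rloc: str, dst_rloc: str) -> Packet:
    # The outer header carries RLOCs; the inner header (with its EIDs
    # and hop limit) is carried unmodified, so the VET link appears as
    # a single hop to the inner network layer.
    return Packet(header=Header(src=src_rloc, dst=dst_rloc), inner=inner)

def vet_decapsulate(pkt: Packet) -> Packet:
    # Strip the outer (RLOC) header; the inner hop limit was not
    # decremented anywhere along the VET link.
    return pkt.inner
```
]]></artwork></figure>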
<t>The following additional acronyms are used throughout the
document:</t>
<t>CGA - Cryptographically Generated Address<vspace/> DHCP(v4, v6) -
Dynamic Host Configuration Protocol<vspace/> ECMP - Equal Cost Multi
Path<vspace/> EUN - End User Network<vspace/> FIB - Forwarding
Information Base<vspace/> ICMP - either ICMPv4 or ICMPv6<vspace/> ICV -
Integrity Check Vector<vspace/> IP - either IPv4 or IPv6<vspace/> ISATAP
- Intra-Site Automatic Tunnel Addressing Protocol<vspace/> NBMA -
Non-Broadcast, Multiple Access<vspace/> ND - Neighbor Discovery<vspace/>
PIO - Prefix Information Option<vspace/> PRL - Potential Router
List<vspace/> PRLNAME - Identifying name for the PRL<vspace/> RIB -
Routing Information Base<vspace/> RIO - Route Information
Option<vspace/> SCMP - SEAL Control Message Protocol<vspace/> SEAL -
Subnetwork Encapsulation and Adaptation Layer<vspace/> SLAAC - IPv6
StateLess Address AutoConfiguration<vspace/> SNS/SNA - SCMP Neighbor
Solicitation/Advertisement<vspace/> SPD - SCMP Predirect<vspace/> SRD -
SCMP Redirect<vspace/> SRS/SRA - SCMP Router
Solicitation/Advertisement</t>
<t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in <xref target="RFC2119"/>.
When used in lower case (e.g., must, must not, etc.), these words MUST
NOT be interpreted as described in <xref target="RFC2119"/>, but are
rather interpreted as they would be in common English.</t>
</section>
<section anchor="discuss" title="Enterprise Network Characteristics">
<t>Enterprise networks consist of links that are connected by Enterprise
Routers (ERs) as depicted in <xref target="era"/>. ERs typically
participate in a routing protocol over enterprise-interior interfaces to
discover routes that may include multiple Layer 2 or Layer 3 forwarding
hops. VET Border Routers (VBRs) are ERs that connect End User Networks
(EUNs) to VET links that span enterprise networks. VET Border Gateways
(VBGs) are VBRs that connect VET links to provider networks.</t>
<t>Conceptually, an ER embodies both a host function and a router
function, and supports communications according to the weak end-system
model <xref target="RFC1122"/>. The router function engages in the
enterprise-interior routing protocol on its enterprise-interior
interfaces, connects any of the ER's EUNs to its VET links, and may also
connect the VET links to provider networks (see <xref target="era"/>).
The host function typically supports network management applications,
but may also support diverse applications typically associated with
general-purpose computing platforms.</t>
<t>An enterprise network may be as simple as a small collection of ERs
and their attached EUNs; an enterprise network may also contain other
enterprise networks and/or be a subnetwork of a larger enterprise
network. An enterprise network may further encompass a set of branch
offices and/or nomadic hosts connected to a home office over one or
several service providers, e.g., through Virtual Private Network (VPN)
tunnels. Finally, an enterprise network may contain many internal
partitions that are logical or physical groupings of nodes for the
purpose of load balancing, organizational separation, etc. In that case,
each internal partition resembles an individual segment of a bridged
LAN.</t>
<t>Enterprise networks that comprise link types with sufficiently
similar properties (e.g., Layer 2 (L2) address formats, maximum
transmission units (MTUs), etc.) can configure a subnetwork routing
service such that the network layer sees the underlying network as an
ordinary shared link the same as for a (bridged) campus LAN (this is
often the case with large cellular operator networks). In that case, a
single network layer hop is sufficient to traverse the underlying
network. Enterprise networks that comprise link types with diverse
properties and/or configure multiple IP subnets must also provide an
enterprise-interior routing service that operates as an IP layer
mechanism. In that case, multiple network layer hops may be necessary to
traverse the underlying network.</t>
<t>In addition to other interface types, VET nodes configure VET
interfaces that view all other nodes on the VET link as neighbors on a
virtual NBMA link. VET nodes configure a separate VET interface for each
distinct VET link to which they connect, and discover neighbors on the
link that can be used for forwarding packets to off-link destinations.
VET interface neighbor relationships may be either unidirectional or
bidirectional.</t>
<t>A unidirectional neighbor relationship is typically established and
maintained as a result of network layer control protocol messaging in a
manner that parallels IPv6 neighbor discovery <xref target="RFC4861"/>.
A bidirectional neighbor relationship is typically established and
maintained as a result of a short transaction between the neighbors
(see <xref target="tesync"/>).</t>
<t>For each distinct VET link, a trust basis must be established and
consistently applied. For example, for VET links configured over
enterprise networks in which VBRs establish symmetric security
associations, mechanisms such as IPsec <xref target="RFC4301"/> can be
used to assure authentication and confidentiality. In other enterprise
network scenarios, VET links may require asymmetric securing mechanisms
such as SEcure Neighbor Discovery (SEND) <xref target="RFC3971"/>. VET
links configured over still other enterprise networks may find it
sufficient to employ only the services provided by SEAL <xref
target="I-D.templin-intarea-seal"/> (including anti-replay, packet
header integrity, and data origin authentication) and defer strong
security services to higher layer functions.</t>
<t>Finally, for VET links configured over enterprise networks with a
centralized management structure (e.g., a corporate campus network, an
ISP network, etc.), a hybrid routing/mapping service can be deployed
using a synchronized set of VBGs. In that case, the VBGs can provide a
mapping service (similar to the "default mapper" described in <xref
target="I-D.jen-apt"/>) used for short-term packet forwarding until
route-optimized paths can be established. For VET links configured over
enterprise networks with a distributed management structure (e.g.,
disconnected MANETs), interdomain coordination between the VET nodes
themselves without the assistance of VBGs may be required. Recognizing
that various use cases may entail a continuum between a fully
centralized and fully distributed approach, the following sections
present the mechanisms of Virtual Enterprise Traversal as they apply to
a wide variety of scenarios.</t>
</section>
<section anchor="spec" title="Autoconfiguration">
<t>ERs, VBRs, VBGs, and VET hosts configure themselves for operation as
specified in the following subsections.</t>
<section anchor="eir" title="Enterprise Router (ER) Autoconfiguration">
<t>ERs configure enterprise-interior interfaces and engage in any
routing protocols over those interfaces.</t>
<t>When an ER joins an enterprise network, it first configures an IPv6
link-local address on each enterprise-interior interface that requires
an IPv6 link-local capability and configures an IPv4 link-local
address on each enterprise-interior interface that requires an IPv4
link-local capability. IPv6 link-local address generation mechanisms
include Cryptographically Generated Addresses (CGAs) <xref
target="RFC3972"/>, IPv6 Privacy Addresses <xref target="RFC4941"/>,
StateLess Address AutoConfiguration (SLAAC) using EUI-64 interface
identifiers <xref target="RFC4291"/> <xref target="RFC4862"/>, etc.
The mechanisms specified in <xref target="RFC3927"/> provide an IPv4
link-local address generation capability.</t>
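<t>As one concrete (non-normative) example of the IPv4 case, candidate
selection per <xref target="RFC3927"/> can be sketched as follows. The
function name and seeding choice are illustrative; RFC 3927 only
recommends deriving the candidate from a stable per-interface quantity
such as the MAC address, and reserves the first and last /24 of
169.254/16:</t>
<figure><artwork><![CDATA[
```python
import hashlib
import ipaddress

def ipv4_linklocal(mac: bytes, attempt: int = 0) -> ipaddress.IPv4Address:
    """Deterministic candidate IPv4 link-local address in 169.254/16.
    Seeding from the MAC keeps the choice stable across reboots; the
    reserved ranges 169.254.0.x and 169.254.255.x are skipped."""
    seed = hashlib.sha256(mac + attempt.to_bytes(4, "big")).digest()
    # Map the hash into host values 256..65278, avoiding both
    # reserved /24 ranges.
    host = int.from_bytes(seed[:2], "big") % (0xFFFF - 512) + 256
    return ipaddress.IPv4Address(0xA9FE0000 + host)  # 169.254.0.0 base
```
]]></artwork></figure>
<t>A conflicting candidate would be regenerated with a new 'attempt'
value after duplicate detection, per the RFC 3927 procedure.</t>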
<t>Next, the ER configures one or more RLOCs and engages in any
routing protocols on its enterprise-interior interfaces. The ER can
configure RLOCs via administrative configuration, pseudo-random
self-generation from a suitably large address pool, SLAAC, DHCP
autoconfiguration, or through an alternate autoconfiguration
mechanism.</t>
<t>Pseudo-random self-generation of IPv6 RLOCs can be from a large
public or local-use IPv6 address range (e.g., IPv6 Unique Local
Addresses <xref target="RFC4193"/>). Pseudo-random self-generation of
IPv4 RLOCs can be from a large public or local-use IPv4 address range
(e.g., <xref target="RFC1918"/>). When self-generation is used alone,
the ER continuously monitors the RLOCs for uniqueness, e.g., by
monitoring the enterprise-interior routing protocol. (Note however
that anycast RLOCs may be assigned to multiple enterprise-interior
interfaces; hence, monitoring for uniqueness applies only to RLOCs
that are provisioned as unicast.)</t>
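<t>For the IPv6 local-use case, the pseudo-random /48 prefix
generation of <xref target="RFC4193"/>, Section 3.2.2, can be sketched
as follows (non-normative; the system clock is used here in place of
the 64-bit NTP-format timestamp the RFC specifies):</t>
<figure><artwork><![CDATA[
```python
import hashlib
import ipaddress
import time

def generate_ula_prefix(eui64: bytes) -> ipaddress.IPv6Network:
    """Pseudo-random /48 Unique Local Address prefix: fd00::/8 plus a
    40-bit Global ID taken from the low-order bits of a SHA-1 digest
    of a timestamp concatenated with an EUI-64 identifier."""
    key = time.time_ns().to_bytes(8, "big") + eui64
    global_id = int.from_bytes(hashlib.sha1(key).digest()[-5:], "big")
    prefix_int = (0xFD << 120) | (global_id << 80)
    return ipaddress.IPv6Network((prefix_int, 48))
```
]]></artwork></figure>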
<t>SLAAC autoconfiguration of RLOCs proceeds through the receipt of
IPv6 Router Advertisements (RAs) followed by stateless configuration of
addresses based on any included Prefix Information Options (PIOs)
<xref target="RFC4861"/><xref target="RFC4862"/>.</t>
<t>DHCP autoconfiguration of RLOCs uses standard DHCP procedures;
however, ERs acting as DHCP clients SHOULD also use DHCP Authentication
<xref target="RFC3118"/> <xref target="RFC3315"/> as discussed further
below. In typical enterprise network scenarios (i.e., those with
stable links), it may be sufficient to configure one or a few DHCP
relays on each link that does not include a DHCP server. In more
extreme scenarios (e.g., MANETs that include links with dynamic
connectivity properties), DHCP operation may require any ERs that have
already configured RLOCs to act as DHCP relays to ensure that client
DHCP requests eventually reach a DHCP server. This may result in
considerable DHCP message relaying until a server is located, but the
DHCP Authentication Replay Detection option <xref target="RFC4030"/>
provides relays with a means for avoiding message duplication.</t>
<t>In all enterprise network scenarios, the amount of DHCP relaying
required can be significantly reduced if each relay has a way of
contacting a DHCP server directly. In particular, if the relay can
discover the unicast addresses for one or more servers (e.g., by
discovering the unicast RLOC addresses of VBGs as described in <xref
target="ebr1.5"/>) it can forward DHCP requests directly to the
unicast address(es) of the server(s). If the relay does not know the
unicast address of a server, it can forward DHCP requests to a
site-scoped DHCP server multicast address if the enterprise network
supports site-scoped multicast services. For DHCPv6, relays can
forward requests to the site-scoped IPv6 multicast group address
'All_DHCP_Servers' <xref target="RFC3315"/>. For DHCPv4, relays can
forward requests to the site-scoped IPv4 multicast group address
'All_DHCPv4_Servers', which SHOULD be set to a well-known site-scoped
IPv4 multicast group address for the enterprise network. DHCPv4
servers that delegate RLOCs SHOULD therefore join the
'All_DHCPv4_Servers' multicast group and service any DHCPv4 messages
received for that group.</t>
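<t>The relay forwarding preference described above can be summarized
as follows (non-normative sketch; the function and return encoding are
invented, while 'ff05::1:3' is the site-scoped All_DHCP_Servers
multicast address defined in <xref target="RFC3315"/>):</t>
<figure><artwork><![CDATA[
```python
# Site-scoped All_DHCP_Servers multicast address (RFC 3315).
ALL_DHCP_SERVERS_V6 = "ff05::1:3"

def dhcpv6_relay_target(known_server_rlocs, site_multicast_available):
    """Choose where a relay forwards a client's request, in preference
    order: direct unicast to a known server, then site-scoped server
    multicast, then continued hop-by-hop relaying toward a server."""
    if known_server_rlocs:
        return ("unicast", known_server_rlocs[0])
    if site_multicast_available:
        return ("multicast", ALL_DHCP_SERVERS_V6)
    return ("relay", None)
```
]]></artwork></figure>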
<t>A combined approach using both DHCP and self-generation is also
possible when the ER configures both a DHCP client and relay that are
connected, e.g., via a pair of back-to-back connected Ethernet
interfaces, a tun/tap interface, a loopback interface, inter-process
communication, etc. The ER first self-generates an RLOC taken from a
temporary addressing range used only for the bootstrapping purpose of
procuring an actual RLOC taken from a delegated addressing range. The
ER then engages in the enterprise-interior routing protocol and
performs a DHCP exchange as above using the temporary RLOC as the
address of its relay function. When the DHCP server delegates an
actual RLOC address/prefix, the ER abandons the temporary RLOC and
re-engages in the enterprise-interior routing protocol using an RLOC
taken from the delegation.</t>
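<t>The bootstrapping sequence of the combined approach might be
modeled as follows (non-normative; all names are hypothetical, the
DHCP exchange is stubbed out, and the temporary range shown is chosen
purely as an illustration of a non-delegated pool):</t>
<figure><artwork><![CDATA[
```python
import random

def self_generate_temp_rloc() -> str:
    # Pseudo-random pick from a range used only while procuring a
    # delegated RLOC (198.18.0.0/16 is used here for illustration).
    return "198.18.%d.%d" % (random.randrange(256),
                             random.randrange(1, 255))

def bootstrap_rloc(dhcp_exchange) -> str:
    temp = self_generate_temp_rloc()
    # The relay function uses the temporary RLOC as its address while
    # the co-located client performs the DHCP exchange.
    delegated = dhcp_exchange(relay_address=temp)
    # The temporary RLOC is then abandoned in favor of the delegation,
    # and the ER re-engages in the routing protocol with the new RLOC.
    return delegated
```
]]></artwork></figure>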
<t>Alternatively (or in addition to the above), the ER can request
RLOC prefix delegations via an automated prefix delegation exchange
over an enterprise-interior interface and can assign the prefix(es) on
enterprise-edge interfaces. Note that in some cases, the same
enterprise-edge interfaces may assign both RLOC and EID addresses if
there is a means for source address selection. In other cases (e.g.,
for separation of security domains), RLOCs and EIDs are assigned on
separate sets of enterprise-edge interfaces.</t>
<t>In some enterprise network scenarios (e.g., MANETs that include
links with dynamic connectivity properties), assignment of RLOCs on
enterprise-interior interfaces as singleton addresses (i.e., as
addresses with /32 prefix lengths for IPv4, or as addresses with /128
prefix lengths for IPv6) MAY be necessary to avoid multi-link subnet
issues <xref target="RFC4903"/>.</t>
</section>
<section anchor="ebr" title="VET Border Router (VBR) Autoconfiguration">
<t>VBRs are ERs that configure and use one or more VET interfaces. In
addition to the ER autoconfiguration procedures specified in <xref
target="eir"/>, VBRs perform the following autoconfiguration
operations.</t>
<section anchor="ebr1" title="VET Interface Initialization">
<t>VBRs configure a separate VET interface for each VET link, where
each VET link spans a distinct set of underlying links belonging to
the same enterprise network. All nodes on the VET link appear as
single-hop neighbors from the standpoint of the inner network layer
protocol through the use of encapsulation.</t>
<t>The VBR binds each VET interface to one or more underlying
interfaces, and uses the underlying interface addresses as RLOCs to
serve as the outer source addresses for encapsulated packets. The
VBR then assigns a link-local address to each VET interface if
possible (*). When IPv6 and IPv4 are used as the inner/outer
protocols (respectively), the VBR can autoconfigure an IPv6
link-local address on the VET interface using a modified EUI-64
interface identifier based on an IPv4 RLOC address (see Section
2.2.1 of <xref target="RFC5342"/>). Link-local address configuration
for other inner/outer protocol combinations is through
administrative configuration, random self-generation (e.g., <xref
target="RFC4941"/>, etc.) or through an unspecified alternate
method.</t>
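<t>For the IPv6-in-IPv4 case, the construction above can be sketched as follows. This is an illustrative reading of the IANA EUI-64 format 00-00-5E-FE-w-x-y-z from Section 2.2.1 of <xref target="RFC5342"/>, with the u/l bit set in the modified EUI-64 form when the IPv4 RLOC is globally unique.</t>
<figure><artwork><![CDATA[
```python
# Sketch: form an IPv6 link-local address from an IPv4 RLOC using the
# IANA EUI-64 format 00-00-5E-FE-w-x-y-z. The modified EUI-64 form
# flips the u/l bit (02-00-5E-FE-...) for globally unique IPv4 RLOCs.
import ipaddress

def ipv4_rloc_to_link_local(v4_addr, globally_unique=True):
    w, x, y, z = ipaddress.IPv4Address(v4_addr).packed
    first = 0x02 if globally_unique else 0x00
    iid = bytes([first, 0x00, 0x5E, 0xFE, w, x, y, z])
    # fe80::/64 prefix followed by the 64-bit interface identifier.
    return ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + iid)
```
]]></artwork></figure>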
<t>(*) In some applications, assignment of link-local addresses on a
VET interface may be impractical due to an indefinite mapping of the
inner link-local address to an outer RLOC address. For example, if
there are VET link neighbors located behind Network Address
Translators (NATs) any inner link-local address to outer RLOC
address mapping may be subject to change due to changes in NAT
state. In that case, inner network layer protocol services such as
the IPv6 Neighbor Discovery (ND) protocol <xref target="RFC4861"/>
that depend on link-local addressing may not be able to function in
the normal manner over the VET link.</t>
</section>
<section anchor="ebr1.5" title="Potential Router List (PRL) Discovery">
<t>After initializing the VET interface, the VBR next discovers a
Potential Router List (PRL) for the VET link that includes the RLOC
addresses of VBGs. The VBR discovers the PRL through administrative
configuration, as part of an arrangement with a Virtual Service
Provider (VSP) (see: <xref target="ebr4"/>), through information
conveyed in the enterprise-interior routing protocol, via a
multicast beacon, via an anycast VBG discovery message exchange, or
through some other means specific to the enterprise network.</t>
<t>If no such enterprise-specific information is available, the VBR
can instead resolve an identifying name for the PRL ('PRLNAME')
formed as 'hostname.domainname', where 'hostname' is an
enterprise-specific name string and 'domainname' is an
enterprise-specific Domain Name System (DNS) suffix <xref
target="RFC1035"/>. The VBR can discover 'domainname' through the
DHCP Domain Name option <xref target="RFC2132"/>, administrative
configuration, etc. The VBR can discover 'hostname' via link-layer
information (e.g., an IEEE 802.11 Service Set Identifier (SSID)),
administrative configuration, etc.</t>
<t>In the absence of other information, the VBR sets 'hostname' to
"linkupnetworks" and sets 'domainname' to an enterprise-specific DNS
suffix (e.g., "example.com"). Isolated enterprise networks that do
not connect to the outside world may have no enterprise-specific DNS
suffix, in which case the 'PRLNAME' consists only of the 'hostname'
component.</t>
<t>After discovering 'PRLNAME', the VBR resolves the name into a
list of RLOC addresses through a name service lookup. For centrally
managed enterprise networks, the VBR resolves 'PRLNAME' using an
enterprise-local name service (e.g., the DNS). For enterprises with
no centralized management structure, the VBR resolves 'PRLNAME'
using a distributed name service query such as Link-Local Multicast
Name Resolution (LLMNR) <xref target="RFC4795"/> over the VET
interface. In that case, all VBGs in the PRL respond to the query,
and the VBR accepts the union of all responses.</t>
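<t>The name-based PRL discovery above can be sketched as follows. The defaults mirror the conventions described in this section, and socket.getaddrinfo stands in for whichever name service (enterprise-local DNS, LLMNR, etc.) the enterprise actually provides.</t>
<figure><artwork><![CDATA[
```python
# Sketch of PRL discovery by name: form 'PRLNAME' from the
# enterprise-specific parts, then resolve it to a list of RLOC
# addresses via the local resolver.
import socket

def form_prlname(hostname=None, domainname=None):
    hostname = hostname or "linkupnetworks"   # default per this section
    # Isolated enterprises may have no DNS suffix; use the hostname alone.
    return f"{hostname}.{domainname}" if domainname else hostname

def resolve_prl(prlname):
    """Return the set of RLOC addresses the name service reports."""
    infos = socket.getaddrinfo(prlname, None)
    return sorted({info[4][0] for info in infos})
```
]]></artwork></figure>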
</section>
<section anchor="ebr3"
title="Provider-Aggregated (PA) EID Prefix Autoconfiguration">
<t>VBRs that connect their enterprise networks to a provider network
can obtain Provider-Aggregated (PA) EID prefixes. For IPv4, VBRs
acquire IPv4 PA EID prefixes through administrative configuration,
an automated IPv4 prefix delegation exchange, etc.</t>
<t>For IPv6, VBRs acquire IPv6 PA EID prefixes through
administrative configuration or through DHCPv6 Prefix Delegation
exchanges with a VBG acting as a DHCP relay/server. In particular,
the VBR (acting as a requesting router) can use DHCPv6 prefix
delegation <xref target="RFC3633"/> over the VET interface to obtain
prefixes from the VBG (acting as a delegating router). The VBR
obtains prefixes using either a 2-message or 4-message DHCPv6
exchange <xref target="RFC3315"/>. When the VBR acts as a DHCPv6
client, it maps the IPv6 "All_DHCP_Relay_Agents_and_Servers"
link-scoped multicast address to the VBG's outer RLOC address.</t>
<t>To perform the 2-message exchange, the VBR's DHCPv6 client
function can send a Solicit message with an IA_PD option either
directly or via the VBR's own DHCPv6 relay function (see <xref
target="eir"/>). The VBR's VET interface then forwards the message
using VET encapsulation (see Section 6.4) to a VBG which either
services the request or relays it further. The forwarded Solicit
message will elicit a Reply message from the server containing
prefix delegations. The VBR can also propose a specific prefix to
the DHCPv6 server per Section 7 of <xref target="RFC3633"/>. The
server will check the proposed prefix for consistency and
uniqueness, then return it in the Reply message if it was able to
perform the delegation.</t>
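<t>The 2-message exchange above begins with a Solicit message carrying an IA_PD option. The following is a minimal wire-format sketch (RFC 3315 message header, RFC 3633 option code 25); a real client would also include Client Identifier and Elapsed Time options and run the specified retransmission logic.</t>
<figure><artwork><![CDATA[
```python
# Sketch of a DHCPv6 Solicit carrying an empty IA_PD option: msg-type,
# 3-octet transaction-id, then option code 25 with IAID, T1 and T2.
import os
import struct

SOLICIT = 1
OPTION_IA_PD = 25

def build_solicit_ia_pd(iaid, t1=0, t2=0):
    txid = os.urandom(3)                       # random transaction-id
    ia_pd = struct.pack("!III", iaid, t1, t2)  # IAID, T1, T2, no sub-options
    option = struct.pack("!HH", OPTION_IA_PD, len(ia_pd)) + ia_pd
    return bytes([SOLICIT]) + txid + option
```
]]></artwork></figure>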
<t>After the VBR receives IPv4 and/or IPv6 prefix delegations, it
can provision the prefixes on enterprise-edge interfaces as well as
on other VET interfaces configured over child enterprise networks
for which it acts as a VBG. The VBR can also provision the prefixes
on enterprise-interior interfaces to service directly-attached hosts
on the enterprise-interior link.</t>
<t>The prefix delegations remain active as long as the VBR continues
to renew them via the delegating VBG before lease lifetimes expire.
The lease lifetime also keeps the delegation state active even if
communications between the VBR and delegating VBG are disrupted for
a period of time (e.g., due to an enterprise network partition,
power failure, etc.). Note however that if the VBR abandons or
otherwise loses continuity with the prefixes, it may be obliged to
perform network-wide renumbering if it subsequently receives a new
and different set of prefixes.</t>
<t>Prefix delegation for non-IP protocols is out of scope.</t>
</section>
<section anchor="ebr4"
title="Provider-Independent EID Prefix Autoconfiguration">
<t>VBRs can acquire Provider-Independent (PI) prefixes to facilitate
multihoming, mobility and traffic engineering without requiring
site-wide renumbering events due to a change in ISP connections.</t>
<t>VBRs that connect major enterprise networks (e.g., large
corporations, academic campuses, ISP networks, etc.) to the global
Internet can acquire short PI prefixes (e.g., an IPv6 /32, an IPv4
/16, etc.) through a registration authority such as the Internet
Assigned Numbers Authority (IANA) or a major regional Internet
registry. The VBR then advertises the PI prefixes into the global
Internet on behalf of its enterprise network without the
assistance of an ISP.</t>
<t>VBRs that connect enterprise networks to a provider network can
acquire longer Client Prefixes (CPs) (e.g., an IPv6 /56, an IPv4
/24, etc.) through arrangements with a Virtual Service Provider
(VSP) that may or may not be associated with a specific ISP. The VBR
then coordinates its CPs with a VSP independently of any of its
directly attached ISPs. (In many cases, the "VSP" may in fact be a
major enterprise network that delegates CPs from its PI
prefixes.)</t>
<t>After a VBR receives prefix delegations, it can sub-delegate
portions of the prefixes on enterprise-edge interfaces, on child VET
interfaces for which it is configured as a VBG and on
enterprise-interior interfaces to service directly-attached hosts on
the enterprise-interior link. The VBR can also sub-delegate portions
of its prefixes to requesting routers connected to child enterprise
networks. These requesting routers consider their sub-delegated
prefixes as PA, and consider the delegating routers as their points
of connection to a provider network.</t>
</section>
</section>
<section anchor="ebg" title="VET Border Gateway (VBG) Autoconfiguration">
<t>VBGs are VBRs that connect VET links configured over child
enterprise networks to provider networks via provider-edge interfaces
and/or via VET links configured over parent enterprise networks. A VBG
may also act as a "half-gateway", in that it may need to forward the
packets it receives from neighbors on the VET link via another VBG
associated with the same VET link. This model is seen in the IRON
<xref target="I-D.templin-ironbis"/> Client/Server/Relay architecture,
in which a Server "half-gateway" is a VBG that forwards packets with
enterprise-external destinations via a Relay "half-gateway" that
connects the VET link to the provider network.</t>
<t>VBGs autoconfigure their provider-edge interfaces in a manner that
is specific to the provider connections, and they autoconfigure their
VET interfaces that were configured over parent VET links using the
VBR autoconfiguration procedures specified in <xref target="ebr"/>.
For each of its VET interfaces connected to child VET links, the VBG
initializes the interface the same as for an ordinary VBR (see <xref
target="ebr1"/>). It then arranges to add one or more of its RLOCs
associated with the child VET link to the PRL.</t>
<t>VBGs configure a DHCP relay/server on VET interfaces connected to
child VET links that require DHCP services. VBGs may also engage in an
unspecified anycast VBG discovery message exchange if they are
configured to do so. Finally, VBGs respond to distributed name service
queries for 'PRLNAME' on VET interfaces connected to VET links that
span child enterprise networks with a distributed management
structure.</t>
</section>
<section anchor="host" title="VET Host Autoconfiguration">
<t>Nodes that cannot be attached via a VBR's enterprise-edge interface
(e.g., nomadic laptops that connect to a home office via a Virtual
Private Network (VPN)) can instead be configured for operation as a
simple host on the VET link. Each VET host performs the same
enterprise-interior interface RLOC configuration procedures as
specified for ERs in <xref target="eir"/>. The VET host next performs
the same VET interface initialization and PRL discovery procedures as
specified for VBRs in <xref target="ebr"/>, except that it configures
its VET interfaces as host interfaces (and not router interfaces).
Note also that a node may be configured as a host on some VET
interfaces and as a VBR/VBG on other VET interfaces.</t>
<t>A VET host may receive non-link-local addresses and/or prefixes to
assign to the VET interface via administrative configuration, DHCP
exchanges and/or through SLAAC information conveyed in RAs. If
prefixes are provided, however, there must be assurance that either 1)
the VET link will not partition, or 2) each VET host interface
connected to the VET link will configure a unique set of prefixes. VET
hosts therefore depend on DHCP and/or RA exchanges to provide only
addresses/prefixes that are appropriate for assignment to the VET
interface according to these specific cases, and depend on the VBGs
within the enterprise keeping track of which addresses/prefixes were
assigned to which hosts.</t>
<t>When the VET host solicits a DHCP-assigned EID address/prefix over
a (non-multicast) VET interface, it maps the DHCP relay/server
multicast inner destination address to the outer RLOC address of a VBG
that it has selected as a default router. The VET host then assigns
any resulting DHCP-delegated addresses/prefixes to the VET interface
for use as the source address of inner packets. The host will
subsequently send all packets destined to EID correspondents via a
default router on the VET link, and may discover more-specific routes
based on any redirection messages it receives.</t>
</section>
</section>
<section title="Internetworking Operation">
<t>Following the autoconfiguration procedures specified in <xref
target="spec"/>, ERs, VBRs, VBGs, and VET hosts engage in normal
internetworking operations as discussed in the following sections.</t>
<section anchor="mnr7.5" title="Routing Protocol Participation">
<t>ERs engage in any RLOC-based routing protocols over
enterprise-interior interfaces to exchange routing information for
forwarding IP packets with RLOC addresses. VBRs and VBGs can
additionally engage in any EID-based routing protocols over VET,
enterprise-edge and provider-edge interfaces to exchange routing
information for forwarding inner network layer packets with EID
addresses. Note that any EID-based routing instances are separate and
distinct from any RLOC-based routing instances.</t>
<t>VBR/VBG routing protocol participation on non-multicast VET
interfaces uses the NBMA interface model, e.g., in the same manner as
for OSPF over NBMA interfaces <xref target="RFC5340"/>. (VBR/VBG
routing protocol participation on multicast-capable VET interfaces can
alternatively use the standard multicast interface model, but this may
result in excessive multicast control message overhead.)</t>
<t>VBRs can use the list of VBGs in the PRL (see <xref
target="ebr1"/>) as an initial list of neighbors for EID-based routing
protocol participation. VBRs can alternatively use the list of VBGs as
potential default routers instead of engaging in an EID-based routing
protocol instance. In that case, when the VBR forwards a packet via a
VBG it may receive a redirection message indicating a different VET
node as a better next hop.</t>
<section anchor="mnr7.75" title="PI Prefix Routing Considerations">
<t>VBRs that connect large enterprise networks to the global
Internet advertise their EID PI prefixes directly into the Internet
default-free RIB via the Border Gateway Protocol (BGP) <xref
target="RFC4271"/> on their own behalf, in the same manner as a
major service provider network. VBRs that connect large enterprise
networks to provider networks can instead advertise their EID PI
prefixes into their providers' routing system(s) if the provider
networks are configured to accept them.</t>
</section>
<section anchor="mnr7.8"
title="Client Prefix (CP) Routing Considerations">
<t>VBRs that obtain CPs from a VSP can register them with a serving
VBG in the VSP's network (e.g., through a vendor-specific short TCP
transaction). The VSP network then acts as a virtual "home"
enterprise network that connects its customer enterprise networks to
the Internet routing system. The customer enterprise networks in
turn appear as mobile components of the VSP's network, while the
customer network uses its ISP connections as transits. (In many
cases, the "VSP" may itself be a major enterprise network that
delegates CPs from its PI prefixes to child enterprise
networks.)</t>
</section>
</section>
<section anchor="defrte"
title="Default Route Configuration and Selection">
<t>Configuration of default routes in the presence of VET interfaces
must be carefully coordinated according to the inner and outer network
protocols. If the inner and outer protocols are different (e.g., IPv6
in IPv4) then default routes of the inner protocol version can be
configured with next-hops corresponding to default routers on a VET
interface while default routes of the outer protocol version can be
configured with next-hops corresponding to default routers on an
underlying interface.</t>
<t>If the inner and outer protocols are the same (e.g., IPv4 in IPv4),
care must be taken in setting the default route to avoid ambiguity.
For example, if default routes are configured on the VET interface
then more-specific routes could be configured on underlying interfaces
to avoid looping. Alternatively, multiple default routes can be
configured with some having next-hops corresponding to (EID-based)
default routers on VET interfaces and others having next-hops
corresponding to (RLOC-based) default routers on underlying
interfaces. In that case, special next-hop determination rules must be
used (see Section 6.4).</t>
</section>
<section title="Address Selection">
<t>When permitted by policy and supported by enterprise-interior
routing, VET nodes can avoid encapsulation through communications that
directly invoke the outer IP protocol using RLOC addresses instead of
EID addresses for end-to-end communications. For example, an
enterprise network that provides native IPv4 intra-enterprise services
can provide continued support for native IPv4 communications even when
encapsulated IPv6 services are available for inter-enterprise
communications.</t>
<t>In other enterprise network scenarios, the use of EID-based
communications (i.e., instead of RLOC-based communications) may be
necessary and/or beneficial to support address scaling, transparent
NAT traversal, security domain separation, site multihoming, traffic
engineering, etc.</t>
<t>VET nodes can use source address selection rules (e.g., based on
name service information) to determine whether to use EID-based or
RLOC-based addressing. The remainder of this section discusses
internetworking operation for EID-based communications using the VET
interface abstraction.</t>
</section>
<section anchor="nexthop" title="Next Hop Determination">
<t>VET nodes perform normal next-hop determination via longest prefix
match, and send packets according to the most-specific matching entry
in the FIB. If the FIB entry has multiple next-hop addresses, the VET
node selects the next-hop with the best metric value. If multiple next
hops have the same metric value, the VET node MAY use Equal Cost Multi
Path (ECMP) to forward different flows via different next-hop
addresses, where flows are determined, e.g., by computing a hash of
the inner packet's source address, destination address and flow label
fields. Note that it is not important that all VET nodes use the same
hashing algorithm nor that they perform ECMP at all; however, each VET
node SHOULD apply ECMP in a consistent fashion.</t>
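<t>The per-flow next-hop choice described above can be sketched as follows. The specific hash function is an arbitrary local choice, since only uniformity and per-node consistency matter.</t>
<figure><artwork><![CDATA[
```python
# Sketch of consistent ECMP next-hop selection: hash the inner packet's
# source address, destination address and flow label, then index into
# the list of equal-metric next hops.
import hashlib

def select_next_hop(next_hops, src, dst, flow_label=0):
    key = f"{src}|{dst}|{flow_label}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]
```
]]></artwork></figure>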
<t>If the VET node has multiple default routes of the same inner and
outer protocol versions, with some corresponding to EID-based default
routers and others corresponding to RLOC-based default routers, it
must perform source address based selection of a default route. In
particular, if the packet's source address is taken from an EID prefix
the VET node selects a default route configured over the VET
interface; otherwise, it selects a default route configured over an
underlying interface.</t>
<t>As a last resort when there is no matching entry in the FIB (i.e.,
not even a default route), VET nodes can discover neighbors within the
enterprise network through on-demand name service queries for the
packet's destination address. For example, for the IPv6 destination
address '2001:DB8:1:2::1' and 'PRLNAME' "linkupnetworks.example.com"
the VET node can perform a name service lookup for the domain
name:<vspace blankLines="0"/>
'1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.0.0.1.0.0.0.8.b.d.0.1.0.0.2.ip6.linkupnetworks.example.com'.</t>
<t>The name service employs wildcard matching (e.g., <xref
target="RFC4592"/>) to determine the most-specific matching entry. For
example, if the most-specific prefix that covers the IPv6 destination
address is '2001:DB8:1::/48' the matching entry is:</t>
<t>'*.1.0.0.0.8.b.d.0.1.0.0.2.ip6.linkupnetworks.example.com'.</t>
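<t>Forming the lookup name shown above can be sketched as follows: the IPv6 destination address is reversed nibble by nibble and suffixed with "ip6." plus the 'PRLNAME'.</t>
<figure><artwork><![CDATA[
```python
# Sketch: build the last-resort lookup name from an IPv6 destination
# address by reversing its nibbles and appending "ip6." + 'PRLNAME'.
import ipaddress

def lookup_name(v6_addr, prlname="linkupnetworks.example.com"):
    nibbles = ipaddress.IPv6Address(v6_addr).exploded.replace(":", "")
    return ".".join(reversed(nibbles)) + ".ip6." + prlname
```
]]></artwork></figure>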
<t>If the name-service lookup succeeds, it will return RLOC addresses
(e.g., in DNS A records) that correspond to neighbors to which the VET
node can forward packets. Note that this implies that, in enterprise
networks in which a last resort address resolution service is
necessary, the enterprise administrator MUST publish name service
resource records that satisfy the address mapping requirements
described above.</t>
<t>Name-service lookups in enterprise networks with a centralized
management structure use an infrastructure-based service, e.g., an
enterprise-local DNS. Name-service lookups in enterprise networks with
a distributed management structure and/or that lack an
infrastructure-based name service instead use a distributed name
service such as LLMNR over the VET interface. When a distributed name
service is used, the VBR that performs the lookup sends a multicast
query and accepts the union of all replies it receives from neighbors
on the VET interface. When a VET node receives the query, it responds
if and only if it aggregates an IP prefix that covers the prefix in the
query.</t>
</section>
<section anchor="operation"
title="VET Interface Encapsulation/Decapsulation">
<t>VET interfaces encapsulate inner network layer packets in a SEAL
header followed by an outer transport-layer header such as UDP (if
necessary) followed by an outer IP header. Following all
encapsulations, the VET interface submits the encapsulated packet to
the outer IP forwarding engine for transmission on an underlying
interface. The following sections provide further details on
encapsulation.</t>
<section anchor="osi" title="Inner Network Layer Protocol">
<t>The inner network layer protocol sees the VET interface as an
ordinary network interface, and views the outer network layer
protocol as an ordinary L2 transport. The inner and outer network
layer protocol types are mutually independent and can be used in any
combination. Inner network layer protocol types include IPv6 <xref
target="RFC2460"/> and IPv4 <xref target="RFC0791"/>, but they may
also include non-IP protocols such as OSI/CLNP <xref
target="RFC0994"/><xref target="RFC1070"/><xref
target="RFC4548"/>.</t>
</section>
<section anchor="seal" title="SEAL Encapsulation">
<t>VET interfaces that use SEAL encapsulate the inner packet in a
SEAL header as specified in <xref
target="I-D.templin-intarea-seal"/>. SEAL encapsulation must be
applied uniformly between all neighbors on the VET link. Note that
when a VET node sends a SEAL-encapsulated packet to a neighbor that
does not use SEAL encapsulation, it may receive an ICMP "port
unreachable" or "protocol unreachable" message. If so, the VET node
SHOULD treat the message as a hint that the prospective neighbor is
unreachable via the VET link.</t>
<t>The VET interface sets the 'NEXTHDR' value in the SEAL header to
the IP protocol number associated with the inner network layer
protocol. The VET interface sets the other fields in the
SEAL header as specified in <xref
target="I-D.templin-intarea-seal"/>.</t>
</section>
<section anchor="UDP" title="UDP Encapsulation">
<t>Following SEAL encapsulation, VET interfaces that use UDP
encapsulation add an outer UDP header. The outer UDP header MUST be
included by all neighbors on the VET link. Note that
when a VET node sends a UDP-encapsulated packet to a neighbor that
does not recognize the UDP port number, it may receive an ICMP "port
unreachable" message. If so, the VET node SHOULD treat the message
as a hint that the prospective neighbor is unreachable via the VET
link.</t>
<t>VET interfaces use UDP encapsulation on VET links that may
traverse NATs and/or traffic conditioning network gear (e.g., Equal
Cost MultiPath (ECMP) routers, Link Aggregation Gateways (LAGs),
etc.) that only recognize well-known network layer protocols. When
UDP encapsulation is used, the VET interface encapsulates the
mid-layer packet in an outer UDP header then sets the UDP port
number to the port number reserved for SEAL <xref
target="I-D.templin-intarea-seal"/>.</t>
<t>The VET interface maintains per-neighbor local and remote UDP
port numbers. For bidirectional neighbors, the VET interface sets
the local UDP port number to the value reserved for SEAL and sets
the remote UDP port number to the observed UDP source port number in
packets that it receives from the neighbor. In cases in which one of
the bidirectional neighbors is behind a NAT, this implies that the
one behind the NAT initiates the neighbor relationship. If both
neighbors have a way of knowing that there are no NATs in the path,
then they may select and set port numbers as for unidirectional
neighbors.</t>
<t>For unidirectional neighbors, the VET interface sets the remote
UDP port number to the value reserved for SEAL, and additionally
selects a small set of dynamic port number values for use as local
UDP port numbers. The VET interface then selects one of this set of
local port numbers for the UDP source port for each inner packet it
sends, where the port number can be determined e.g., by a hash
calculated over the inner network layer addresses and inner
transport layer port numbers. The VET interface uses a hash function
of its own choosing when selecting a dynamic port number value, but
it should choose a function that provides uniform distribution
between the set of values, and it should be consistent in the manner
in which the hash is applied. This procedure is RECOMMENDED in order
to support adequate load balancing, e.g., when Link Aggregation
based on UDP port numbers occurs within the path.</t>
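<t>The source-port selection above can be sketched as follows. The size of the dynamic port set and the hash function are local choices, illustrated here with assumed values.</t>
<figure><artwork><![CDATA[
```python
# Sketch of local UDP source-port selection for unidirectional
# neighbors: keep a small fixed set of dynamic-range ports, then pick
# one per inner flow with a uniform hash so that Link Aggregation gear
# sees stable, well-spread flows.
import hashlib

LOCAL_PORTS = [49152 + i for i in range(8)]  # assumed dynamic-range set

def source_port(inner_src, inner_dst, sport, dport):
    key = f"{inner_src}|{inner_dst}|{sport}|{dport}".encode()
    digest = hashlib.sha256(key).digest()
    return LOCAL_PORTS[int.from_bytes(digest[:4], "big") % len(LOCAL_PORTS)]
```
]]></artwork></figure>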
<t>Finally, when the SEAL header Integrity Check Vector (ICV) is
included the VET interface SHOULD set the UDP checksum field to zero
regardless of the IP protocol version (see <xref
target="I-D.ietf-6man-udpzero"/><xref
target="I-D.ietf-6man-udpchecksums"/>).</t>
</section>
<section anchor="encaps" title="Outer IP Header Encapsulation">
<t>Following any mid-layer and/or UDP encapsulations, the VET
interface next adds an outer IP header. Outer IP header construction
is the same as specified for ordinary IP encapsulation (e.g., <xref
target="RFC1070"/>, <xref target="RFC2003"/>, <xref
target="RFC2473"/>, <xref target="RFC4213"/>, etc.)
except that the "TTL/Hop Limit", "Type of Service/Traffic Class" and
"Congestion Experienced" values in the inner network layer header
are copied into the corresponding fields in the outer IP header. The
VET interface also sets the IP protocol number to the appropriate
value for the first protocol layer within the encapsulation (e.g.,
UDP, SEAL, IPsec, etc.). When IPv6 is used as the outer IP protocol,
the VET interface sets the flow label value in the outer IPv6 header
the same as described in <xref target="RFC6438"/>.</t>
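<t>The field-copying rule for initial encapsulation can be sketched as follows, with headers modeled as plain dictionaries purely for illustration.</t>
<figure><artwork><![CDATA[
```python
# Sketch of outer IP header construction for initial encapsulation:
# the outer header inherits TTL/Hop Limit, Type of Service/Traffic
# Class and the ECN "Congestion Experienced" state from the inner
# header, and carries the protocol number of the first encapsulation
# layer (e.g., UDP, SEAL, IPsec).
def build_outer_header(inner, outer_proto_number):
    return {
        "ttl": inner["ttl"],             # TTL/Hop Limit copied from inner
        "tos": inner["tos"],             # ToS/Traffic Class copied from inner
        "ecn": inner["ecn"],             # Congestion Experienced copied
        "protocol": outer_proto_number,  # first layer inside the outer header
    }
```
]]></artwork></figure>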
</section>
<section anchor="decaps" title="Decapsulation and Re-Encapsulation">
<t>When a VET node receives an encapsulated packet, it retains the
outer headers, processes the SEAL header (if present) as specified
in <xref target="I-D.templin-intarea-seal"/>, then performs next hop
determination on the packet's inner destination address. If the
inner packet will be forwarded out a different interface than it
arrived on, the VET node copies the "Congestion Experienced" value
in the outer IP header into the corresponding field in the inner
network layer header. The VET node then forwards the packet to the
next inner network layer hop, or delivers the packet locally if the
inner packet is addressed to itself.</t>
<t>If the inner packet will be forwarded out the same VET interface
that it arrived on, however, the VET node copies the "TTL/Hop
Limit", "Type of Service/Traffic Class" and "Congestion Experienced"
values in the outer IP header of the received packet into the
corresponding fields in the outer IP header of the packet to be
forwarded (i.e., the values are transferred between outer headers
and *not* copied from the inner network layer header). This is true
even if the outer IP protocol version of the received packet is
different than the outer IP protocol version of the packet to be
forwarded, i.e., the same as for bridging dissimilar L2 media
segments. This re-encapsulation procedure is necessary to support
diagnostic functions (e.g., 'traceroute'), and to ensure that the
TTL/Hop Limit eventually decrements to 0 in case of transient
routing loops.</t>
</section>
</section>
<section anchor="v6brdisc"
title="Neighbor Coordination on VET Interfaces that use SEAL">
<t>VET interfaces that use SEAL use the SEAL Control Message Protocol
(SCMP) as specified in Section 4.6 of <xref
target="I-D.templin-intarea-seal"/> to coordinate reachability,
routing information, and mappings between the inner and outer network
layer protocols. SCMP parallels the IPv6 ND <xref target="RFC4861"/>
and ICMPv6 <xref target="RFC4443"/> protocols, but operates from
within the tunnel and supports operation for any combinations of inner
and outer network layer protocols.</t>
<t>When a VET interface prepares a neighbor coordination SCMP message,
the message is formatted the same as described for the corresponding
IPv6 ND message, except that the message is preceded by a SEAL header
the same as for SCMP error messages. The interface sets the SEAL
header flags, NEXTHDR, LINK_ID, Identification, and ICV fields the
same as for SCMP error messages.</t>
<t>The VET interface next fills out the SCMP message header fields the
same as for SCMP error messages, calculates the SCMP message Checksum,
encapsulates the message in the requisite outer headers, then
calculates the SEAL header ICV value if it is configured to do so and
places the result in the ICV field. The VET interface finally sends
the message to the neighbor, which will verify the ICV and Checksum
before accepting the message.</t>
<t>VET and SEAL are specifically designed for encapsulation of inner
network layer payloads over outer IPv4 and IPv6 networks as a link
layer. VET interfaces therefore require a new Source/Target Link-Layer
Address Option (S/TLLAO) format that encapsulates IPv4 addresses as
shown in <xref target="v4llao"/> and IPv6 addresses as shown in <xref
target="v6llao"/>:</t>
<t><figure anchor="v4llao" title="SCMP S/TLLAO Option for IPv4 RLOCs">
<artwork><![CDATA[ 0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Type = 2 | Length = 1 | Reserved |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| IPv4 address |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+]]></artwork>
</figure></t>
<t><figure anchor="v6llao" title="SCMP S/TLLAO Option for IPv6 RLOCs">
<artwork><![CDATA[ 0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Type = 2 | Length = 3 | Reserved |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Reserved |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| IPv6 address (bytes 0 thru 3) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| IPv6 address (bytes 4 thru 7) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| IPv6 address (bytes 8 thru 11) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| IPv6 address (bytes 12 thru 15) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+]]></artwork>
</figure></t>
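<t>As an illustration, the option layouts above can be packed as in the
following Python sketch (the helper names are hypothetical; the Length
values count units of 8 octets, as for IPv6 Neighbor Discovery
options):</t>
<t><figure><artwork><![CDATA[
```python
import socket
import struct

def stllao_ipv4(addr):
    # Type=2, Length=1: the option is 8 octets in all --
    # a 4-octet header (Type, Length, Reserved) plus the
    # 4-octet IPv4 RLOC, matching the first figure above.
    return (struct.pack("!BBH", 2, 1, 0) +
            socket.inet_pton(socket.AF_INET, addr))

def stllao_ipv6(addr):
    # Type=2, Length=3: 24 octets in all -- a 4-octet header,
    # 4 reserved octets, then the 16-octet IPv6 RLOC, matching
    # the second figure above.
    return (struct.pack("!BBH", 2, 3, 0) + b"\x00" * 4 +
            socket.inet_pton(socket.AF_INET6, addr))
```
]]></artwork></figure></t>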
<t>The following subsections discuss VET interface neighbor
coordination using SCMP.</t>
<section anchor="ebgdisc" title="Router Discovery">
<t>VET hosts and VBRs can send SCMP Router Solicitation (SRS)
messages to one or more VBGs in the PRL to receive solicited SCMP
Router Advertisements (SRAs).</t>
<t>When a VBG receives an SRS message on a VET interface, it
prepares a solicited SRA message. The SRA includes Router Lifetimes,
Default Router Preferences, PIOs and any other options/parameters
that the VBG is configured to include.</t>
<t>The VBG finally includes one or more SLLAOs formatted as
specified above that encode the IPv6 and/or IPv4 RLOC unicast
addresses of its own enterprise-interior interfaces or the
enterprise-interior interfaces of other nearby VBGs.</t>
</section>
<section title="Neighbor Unreachability Detection">
<t>VET nodes perform Neighbor Unreachability Detection (NUD) by
monitoring hints of forward progress. The VET node can periodically
set the 'A' bit in the header of SEAL data packets to elicit SCMP
responses from the neighbor. The VET node can also send SCMP
Neighbor Solicitation (SNS) messages to the neighbor to elicit SCMP
Neighbor Advertisement (SNA) messages.</t>
<t>Responsiveness to routing changes is directly related to the
delay in detecting that a neighbor has gone unreachable. In order to
provide responsiveness comparable to dynamic routing protocols, a
reasonably short neighbor reachable time (e.g., 5 seconds) SHOULD be
used.</t>
<t>Additionally, a VET node may receive outer IP ICMP "Destination
Unreachable; net / host unreachable" messages from an ER on the path
indicating that the path to a neighbor may be failing. If the node
receives excessive ICMP unreachable errors through multiple RLOCs
associated with the same FIB entry, it SHOULD delete the FIB entry
and allow subsequent packets to flow through a different route
(e.g., a default route with a VBG as the next hop).</t>
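<t>The error-driven route withdrawal described above can be sketched as
follows (Python, illustrative only; the threshold value and the data
structures are assumptions rather than part of this specification):</t>
<t><figure><artwork><![CDATA[
```python
# Illustrative sketch: delete a FIB entry after excessive ICMP
# unreachable errors arrive through multiple of its RLOCs.
ICMP_ERROR_THRESHOLD = 3   # hypothetical per-RLOC limit

class Fib:
    def __init__(self):
        self.entries = {}    # prefix -> set of RLOCs
        self.errors = {}     # (prefix, rloc) -> error count

    def add(self, prefix, rlocs):
        self.entries[prefix] = set(rlocs)

    def icmp_unreachable(self, prefix, rloc):
        # Record an ICMP error; delete the FIB entry only when
        # several RLOCs of the same entry have each exceeded the
        # threshold, so packets fall back to a default route.
        if prefix not in self.entries or rloc not in self.entries[prefix]:
            return
        key = (prefix, rloc)
        self.errors[key] = self.errors.get(key, 0) + 1
        failing = [r for r in self.entries[prefix]
                   if self.errors.get((prefix, r), 0) >= ICMP_ERROR_THRESHOLD]
        if len(failing) > 1:
            del self.entries[prefix]
```
]]></artwork></figure></t>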
</section>
<section anchor="ebrdisc" title="Redirection">
<t>The VET node connected to the source EUN (i.e., the source VET
node) can set R=1 in the SEAL header of a data packet to be
forwarded as an indication that redirection messages will be
accepted from the VET node connected to the destination EUN (i.e.,
the target VET node). Each VBG on the VET interface chain to the
target preserves the state of the R bit when it re-encapsulates and
forwards the packet.</t>
<t>When the VET node that acts as server to the target VET node
receives the packet, it sends an SCMP "Predirect" (SPD) message
forward to the target VET node. The target VET node in turn creates
an SCMP "Redirect" (SRD) message to send back to the source VET
node. The SPD and SRD message bodies are formed as specified in AERO
<xref target="RFC6706"/>, while the encapsulation headers and
message header are prepared as for SCMP encapsulation instead of
AERO encapsulation.</t>
<t>Before sending the SRD message, the target VET node also creates
a 128-bit secret key value (T_Key) that it will use to validate the
SEAL header ICV in future packets it will receive from the
(redirected) source VET node. The target encrypts T_Key with the
secret key it uses to validate the ICV in SEAL packets received from
the previous VET interface hop (P_Key(N)). It then writes the
encrypted value in the "Target" field of the SRD message, i.e.,
instead of an IPv6 address. The target VET node then encapsulates
the SRD message in a SEAL header as specified above, calculates the
SEAL ICVs and returns the message to the previous hop VBG on the
chain toward the source.</t>
<t>When the target returns the SRD message, each intermediate VBG in
the chain toward the source relays the message by examining the
source address of the inner packet within the RHO to determine the
previous hop toward the source. Each intermediate VBG in the chain
verifies the SRD message SEAL ICV and Checksum, and decrypts the
T_Key value in the SRD message "Target" field using its own secret
key (P_Key(i)). The VBG then re-encrypts T_Key using the key
corresponding to the next hop toward the source (P_Key(i-1)), then
re-calculates the SEAL ICV and sends the SRD message to the previous
hop. This relaying process is otherwise the same as for SCMP error
message relaying specified in Section 4.6 of <xref
target="I-D.templin-intarea-seal"/>.</t>
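<t>The hop-by-hop handling of the encrypted T_Key can be sketched as
follows (Python; the XOR keystream stands in for an unspecified
symmetric cipher, and all names are illustrative):</t>
<t><figure><artwork><![CDATA[
```python
import hashlib

def xor_stream(key, data):
    # Toy symmetric transform: XOR with a SHA-256-derived keystream.
    # A stand-in for whatever cipher actually protects T_Key; applying
    # it twice with the same key recovers the plaintext.
    stream = hashlib.sha256(key).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def relay_srd(target_field, p_key_i, p_key_prev):
    # One intermediate-VBG relay step: decrypt T_Key with this hop's
    # own secret key P_Key(i), then re-encrypt it with the key
    # corresponding to the next hop toward the source, P_Key(i-1).
    t_key = xor_stream(p_key_i, target_field)
    return xor_stream(p_key_prev, t_key)
```
]]></artwork></figure></t>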
<t>When the source VET node receives the SRD message, it discovers
both the target's delegated prefix and candidate link layer
addresses for this new (unidirectional) target VET node. The source
VET node then installs the prefix included in the Redirect message
in a forwarding table entry with the target as the next hop. The
source node also caches the T_Key value, and uses it to calculate
the ICVs it will include in the SEAL header/trailer of subsequent
packets it sends to the target.</t>
<t>The source can subsequently send packets destined to an address
covered by the destination prefix using SEAL encapsulation via the
target as the next hop. The target can then use the ICVs in the SEAL
data packets for data origin authentication (similar to the source
address validation described in <xref
target="I-D.ietf-savi-framework"/>), but it need not also check the
outer source addresses/port numbers of the packets. Therefore, the
outer addresses may change over time even if the inner source
address stays the same.</t>
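<t>A minimal sketch of this ICV-only data origin check follows,
assuming an HMAC-based ICV truncated to 8 octets (the actual SEAL
algorithm and ICV length may differ):</t>
<t><figure><artwork><![CDATA[
```python
import hashlib
import hmac

def seal_icv(t_key, packet):
    # Compute a SEAL-style ICV over the packet using the shared T_Key.
    # HMAC-SHA-256 truncated to 8 octets is an assumption here.
    return hmac.new(t_key, packet, hashlib.sha256).digest()[:8]

def accept(t_key, packet, icv):
    # Data origin authentication checks only the ICV, never the outer
    # source address/port, so the outer addresses may change over time.
    return hmac.compare_digest(seal_icv(t_key, packet), icv)
```
]]></artwork></figure></t>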
<t>Following redirection, if the source is subsequently unable to
reach the target via the route-optimized path, it deletes the
destination prefix forwarding table entry and installs a new
forwarding table entry for the destination prefix with a default
router as the next hop. The source VET node thereafter sets R=0 in
the SEAL headers of data packets that it sends toward the
destination prefix, but it may attempt redirection again at a later
time by again setting R=1.</t>
<t>Finally, the source and target VET nodes set an expiration timer
on the destination forwarding table entry so that stale entries are
deleted in a timely fashion as specified in AERO <xref
target="RFC6706"/>. The source MAY further engage the target in a
bidirectional neighbor synchronization exchange as described in
<xref target="tesync"/> if it is configured to do so.</t>
</section>
<section anchor="tesync"
title="Bidirectional Neighbor Synchronization">
<t>The tunnel neighbor relationship between a pair of VET interface
tunnel neighbors can be either unidirectional or bidirectional. A
unidirectional relationship (see <xref target="ebrdisc"/>) can be
established when the source VET node 'A' will tunnel data packets
directly to a target VET node 'B', but 'B' will not tunnel data
packets directly to 'A'. A bidirectional relationship is necessary,
e.g., when a pair of VET nodes require a client/server or
peer-to-peer binding.</t>
<t>In order to establish a bidirectional tunnel neighbor
relationship, the initiator (call it "A") performs a reliable
exchange (e.g., a short TCP transaction, a DHCP client/server
exchange, etc.) with the responder (call it "B"). The details of the
transaction are out of scope for this document, and indeed need not
be standardized as long as both the initiator and responder observe
the same specifications (typically manifested by a small piece of
software provisioned to a client VET node from a service provider).
Note that a short transaction instead of a persistent connection is
advised if the outer network layer protocol addresses may change,
e.g., due to a mobility event, due to loss of state in network
middleboxes, etc.</t>
<t>During the transaction, "A" and "B" first authenticate themselves
to each other, then exchange information regarding the inner network
layer prefixes that will be used for conveying inner packets that
will be forwarded over the tunnel. In this process, the initiator
and responder register one or more link identifiers (LINK_IDs) with
one another to provide "handles" for outer IP connection
addresses.</t>
<t>Following this bidirectional tunnel neighbor establishment, the
neighbors monitor the soft state for liveness, e.g., using Neighbor
Unreachability Detection hints of forward progress. When one of the
neighbors wishes to terminate the relationship, it performs another
short transaction to request the termination, then both neighbors
delete their respective tunnel soft state.</t>
<t>Once a bidirectional neighbor relationship has been established,
the initiator and responder can further engage in a dynamic routing
protocol (e.g., OSPF <xref target="RFC5340"/>, etc.) to exchange
inner network layer prefix information if they are configured to do
so.</t>
</section>
</section>
<section title="Neighbor Coordination on VET Interfaces using IPsec">
<t>VET interfaces that use IPsec encapsulation <xref
target="RFC4301"/> use the Internet Key Exchange protocol, version 2
(IKEv2) <xref target="RFC4306"/> to manage security association setup
and maintenance. IKEv2 provides a logical equivalent of the SCMP in
terms of VET interface neighbor coordination; for example, IKEv2 also
provides mechanisms for redirection <xref target="RFC5685"/> and
mobility <xref target="RFC4555"/>.</t>
<t>IPsec additionally provides an extended Identification field and
ICV; these features allow IPsec to utilize outer IP fragmentation and
reassembly with less risk of exposure to data corruption due to
reassembly misassociations.</t>
</section>
<section anchor="mob" title="Mobility and Multihoming Considerations">
<t>VBRs that travel between distinct enterprise networks must either
abandon their PA prefixes that are relative to the "old" network and
obtain PA prefixes relative to the "new" network, or somehow
coordinate with a "home" network to retain ownership of the prefixes.
In the first instance, the VBR would be required to coordinate a
network renumbering event on its attached networks using the new PA
prefixes <xref target="RFC4192"/><xref target="RFC5887"/>. In the
second instance, an adjunct mobility management mechanism is
required.</t>
<t>VBRs can retain their CPs as they travel between distinct network
points of attachment as long as they continue to refresh their
CP-to-RLOC address mappings with their serving VBG in a bidirectional
neighbor exchange (see <xref target="tesync"/>). (When the VBR moves far from
its serving VBG, it can also select a new VBG in order to maintain
optimal routing.) In this way, VBRs can update their CP-to-RLOC
mappings in real time and without requiring an adjunct mobility
management mechanism.</t>
<t>VBRs that have true PI prefixes can withdraw the prefixes from
former Internet points of attachment and re-advertise them at new
points of attachment as they move. However, this method has been shown
to produce excessive routing churn in the global Internet BGP tables,
and should be avoided for any mobility scenarios that may occur along
short timescales. The alternative is to employ a system in which the
true PI prefixes are not injected into the Internet routing system,
but rather managed through some separate global mapping database. This
latter method is employed by the LISP proposal <xref
target="RFC6830"/>.</t>
<t>The VBGs of a multihomed enterprise network participate in a
private inner network layer routing protocol instance (e.g., via an
interior BGP instance) to accommodate network partitions/merges as
well as intra-enterprise mobility events.</t>
</section>
<section anchor="smf" title="Multicast">
<section anchor="smf2"
title="Multicast over (Non)Multicast Enterprise Networks">
<t>Whether or not the underlying enterprise network supports a
native multicasting service, the VET node can act as an inner
network layer IGMP/MLD proxy <xref target="RFC4605"/> on behalf of
its attached EUNs and convey its multicast group memberships over
the VET interface to a VBG acting as a multicast router. The VET
node's inner network layer multicast transmissions will therefore be
encapsulated in outer headers with the unicast address of the VBG as
the destination.</t>
</section>
<section anchor="smf1"
title="Multicast Over Multicast-Capable Enterprise Networks">
<t>In multicast-capable enterprise networks, ERs provide an
enterprise-wide multicasting service (e.g., Simplified Multicast
Forwarding (SMF) <xref target="RFC6621"/>, Protocol Independent
Multicast (PIM) routing, Distance Vector Multicast Routing Protocol
(DVMRP) routing, etc.) over their enterprise-interior interfaces
such that outer IP multicast messages of site-scope or greater scope
will be propagated across the enterprise network. For such
deployments, VET nodes can optionally provide a native inner
multicast/broadcast capability over their VET interfaces through
mapping of the inner multicast address space to the outer multicast
address space. In that case, operation of link- or greater-scoped
inner multicasting services (e.g., a link-scoped neighbor discovery
protocol) over the VET interface is available, but SHOULD be used
sparingly to minimize enterprise-wide flooding.</t>
<t>VET nodes encapsulate inner multicast messages sent over the VET
interface in any mid-layer headers followed by an outer IP header
with a site-scoped outer IP multicast address as the destination.
For the case of IPv6 and IPv4 as the inner/outer protocols
(respectively), <xref target="RFC2529"/> provides mappings from the
IPv6 multicast address space to a site-scoped IPv4 multicast address
space (for other encapsulations, mappings are established through
administrative configuration or through an unspecified alternate
static mapping). Note that VET links will use mid-layer
encapsulations as the means for distinguishing VET nodes from legacy
RFC2529 nodes.</t>
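<t>The RFC2529-style mapping can be sketched as follows (Python, shown
only for illustration; consult <xref target="RFC2529"/> for the
authoritative mapping):</t>
<t><figure><artwork><![CDATA[
```python
import socket

def map_mcast(ipv6_group):
    # Map an IPv6 multicast group to a site-scoped IPv4 multicast
    # address in the style of RFC 2529: 239.192.Y.Z, where Y and Z
    # are the last two octets of the IPv6 group address.
    b = socket.inet_pton(socket.AF_INET6, ipv6_group)
    return "239.192.%d.%d" % (b[14], b[15])
```
]]></artwork></figure></t>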
<t>Multicast mapping for inner multicast groups over outer IP
multicast groups can be accommodated, e.g., through VET interface
snooping of inner multicast group membership and routing protocol
control messages. To support inner-to-outer multicast address
mapping, the VET interface acts as a virtual outer IP multicast host
connected to its underlying interfaces. When the VET interface
detects an inner multicast group join or leave, it forwards
corresponding outer IP multicast group membership reports on an
underlying interface over which the VET interface is configured. If
the VET node is configured as an outer IP multicast router on the
underlying interfaces, the VET interface forwards locally
looped-back group membership reports to the outer IP multicast
routing process. If the VET node is configured as a simple outer IP
multicast host, the VET interface instead forwards actual group
membership reports (e.g., IGMP messages) directly over an underlying
interface.</t>
<t>Since inner multicast groups are mapped to site-scoped outer IP
multicast groups, the site administrator MUST ensure that the
site-scoped outer IP multicast messages received on the underlying
interfaces for one VET interface do not "leak out" to the underlying
interfaces of another VET interface. This is accommodated through
normal site-scoped outer IP multicast group filtering at enterprise
network boundaries.</t>
</section>
</section>
<section anchor="service" title="Service Discovery">
<t>VET nodes can perform enterprise-wide service discovery using a
suitable name-to-address resolution service. Examples of
flooding-based services include the use of LLMNR <xref
target="RFC4795"/> over the VET interface or multicast DNS (mDNS)
<xref target="I-D.cheshire-dnsext-multicastdns"/> over an underlying
interface. More scalable and efficient service discovery mechanisms
(e.g., anycast) are for further study.</t>
</section>
<section anchor="part" title="VET Link Partitioning">
<t>A VET link can be partitioned into multiple distinct logical
groupings. In that case, each partition configures its own distinct
'PRLNAME' (e.g., 'linkupnetworks.zone1.example.com',
'linkupnetworks.zone2.example.com', etc.).</t>
<t>VBGs that are configured to support partitioning MAY further create
multiple IP subnets within a partition, e.g., by sending SRAs with
PIOs containing different IP prefixes to different groups of VET
hosts. VBGs can identify subnets, e.g., by examining RLOC prefixes,
observing the enterprise-interior interfaces over which SRSs are
received, etc.</t>
<t>In the limiting case, VBGs can advertise a unique set of IP
prefixes to each VET host such that each host belongs to a different
subnet (or set of subnets) on the VET interface.</t>
</section>
<section anchor="state" title="VBG Prefix State Recovery">
<t>VBGs retain explicit state that tracks the inner network layer
prefixes delegated to VBRs connected to the VET link, e.g., so that
packets are delivered to the correct VBRs. When a VBG loses some or
all of its state (e.g., due to a power failure), client VBRs MUST
refresh the VBG's state so that packets can be forwarded over correct
routes.</t>
</section>
<section anchor="isatap" title="Legacy ISATAP Services">
<t>VBGs can support legacy ISATAP services according to the
specifications in <xref target="RFC5214"/>. In particular, VBGs can
configure legacy ISATAP interfaces and VET interfaces over the same
sets of underlying interfaces as long as the PRLs and IPv6 prefixes
associated with the ISATAP/VET interfaces are distinct.</t>
</section>
</section>
<section anchor="iana" title="IANA Considerations">
<t>There are no IANA considerations for this document.</t>
</section>
<section anchor="secure" title="Security Considerations">
<t>Security considerations for MANETs are found in <xref
target="RFC2501"/>.</t>
<t>The security considerations found in <xref target="RFC2529"/><xref
target="RFC5214"/><xref target="RFC6324"/> also apply to VET.</t>
<t>SEND <xref target="RFC3971"/> and/or IPsec <xref target="RFC4301"/>
can be used in environments where attacks on the neighbor coordination
protocol are possible. SEAL <xref target="I-D.templin-intarea-seal"/>
supports path MTU discovery, and provides per-packet authenticating
information for data origin authentication, anti-replay and message
header integrity.</t>
<t>Rogue neighbor coordination messages with spoofed RLOC source
addresses can consume network resources and cause VET nodes to perform
extra work. Nonetheless, VET nodes SHOULD NOT "blacklist" such RLOCs, as
that may result in a denial of service to the RLOCs' legitimate
owners.</t>
<t>VBRs and VBGs observe the recommendations for network ingress
filtering <xref target="RFC2827"/>.</t>
</section>
<section title="Related Work">
<t>Brian Carpenter and Cyndi Jung introduced the concept of intra-site
automatic tunneling in <xref target="RFC2529"/>; this concept was later
called "Virtual Ethernet" and was investigated by Quang Nguyen under the
guidance of Dr. Lixia Zhang. Subsequent works by these authors and their
colleagues have motivated a number of foundational concepts on which
this work is based.</t>
<t>Telcordia has proposed DHCP-related solutions for MANETs through the
CECOM MOSAIC program.</t>
<t>The Naval Research Lab (NRL) Information Technology Division uses
DHCP in their MANET research testbeds.</t>
<t>Security concerns pertaining to tunneling mechanisms are discussed in
<xref target="RFC6169"/>.</t>
<t>Default router and prefix information options for DHCPv6 are
discussed in <xref target="I-D.droms-dhc-dhcpv6-default-router"/>.</t>
<t>An automated IPv4 prefix delegation mechanism is proposed in <xref
target="RFC6656"/>.</t>
<t>RLOC prefix delegation for enterprise-edge interfaces is discussed in
<xref target="I-D.clausen-manet-autoconf-recommendations"/>.</t>
<t>MANET link types are discussed in <xref
target="I-D.clausen-manet-linktype"/>.</t>
<t>The LISP proposal <xref target="RFC6830"/> examines
encapsulation/decapsulation issues and other aspects of tunneling.</t>
<t>Various proposals within the IETF have suggested similar
mechanisms.</t>
</section>
<section anchor="ack" title="Acknowledgements">
<t>The following individuals gave direct and/or indirect input that was
essential to the work: Jari Arkko, Teco Boot, Emmanuel Bacelli, Fred
Baker, James Bound, Scott Brim, Brian Carpenter, Thomas Clausen, Claudiu
Danilov, Chris Dearlove, Remi Despres, Gert Doering, Ralph Droms, Washam
Fan, Dino Farinacci, Vince Fuller, Thomas Goff, David Green, Joel
Halpern, Bob Hinden, Sascha Hlusiak, Sapumal Jayatissa, Dan Jen, Darrel
Lewis, Tony Li, Joe Macker, David Meyer, Gabi Nakibly, Thomas Narten,
Pekka Nikander, Dave Oran, Alexandru Petrescu, Mark Smith, John Spence,
Jinmei Tatuya, Dave Thaler, Mark Townsley, Ole Troan, Michaela
Vanderveen, Robin Whittle, James Woodyatt, Lixia Zhang, and others in
the IETF AUTOCONF and MANET working groups. Many others have provided
guidance over the course of many years.</t>
<t>Discussions with colleagues following the publication of RFC5558 have
provided useful insights that have resulted in significant improvements
to this, the Second Edition of VET.</t>
</section>
<section title="Contributors">
<t>The following individuals have contributed to this document:</t>
<t>Eric Fleischman (eric.fleischman@boeing.com)<vspace/> Thomas
Henderson (thomas.r.henderson@boeing.com)<vspace/> Steven Russert
(steven.w.russert@boeing.com)<vspace/> Seung Yi
(seung.yi@boeing.com)</t>
<t>Ian Chakeres (ian.chakeres@gmail.com) contributed to earlier versions
of the document.</t>
<t>Jim Bound's foundational work on enterprise networks provided
significant guidance for this effort. We mourn his loss and honor his
contributions.</t>
</section>
</middle>
<back>
<references title="Normative References">
<?rfc include="reference.RFC.0791"?>
<?rfc include="reference.RFC.0792"?>
<?rfc include="reference.RFC.2119"?>
<?rfc include="reference.RFC.2131"?>
<?rfc include="reference.RFC.2460"?>
<?rfc include="reference.RFC.4861"?>
<?rfc include="reference.RFC.4862"?>
<?rfc include="reference.RFC.3315"?>
<?rfc include="reference.RFC.3118"?>
<?rfc include="reference.RFC.3633"?>
<?rfc include="reference.RFC.6706"?>
<?rfc include="reference.RFC.6438"?>
<?rfc include="reference.RFC.4291"?>
<?rfc include="reference.RFC.5342"?>
<?rfc include="reference.RFC.3971"?>
<?rfc include="reference.RFC.3972"?>
<?rfc include="reference.RFC.4443"?>
<?rfc include="reference.RFC.2827"?>
<?rfc include="reference.I-D.templin-intarea-seal"?>
</references>
<references title="Informative References">
<?rfc include="reference.RFC.1122"?>
<?rfc include="reference.RFC.3819"?>
<?rfc include="reference.RFC.1955"?>
<?rfc include="reference.RFC.1753"?>
<?rfc include="reference.RFC.2003"?>
<?rfc include="reference.RFC.2132"?>
<?rfc include="reference.RFC.2473"?>
<?rfc include="reference.RFC.2775"?>
<?rfc include="reference.RFC.2501"?>
<?rfc include="reference.RFC.1918"?>
<?rfc include="reference.RFC.4852"?>
<?rfc include="reference.RFC.2529"?>
<?rfc include="reference.RFC.4192"?>
<?rfc include="reference.RFC.4193"?>
<?rfc include="reference.RFC.4213"?>
<?rfc include="reference.RFC.1035"?>
<?rfc include="reference.RFC.3927"?>
<?rfc include="reference.RFC.4271"?>
<?rfc include="reference.RFC.4301"?>
<?rfc include="reference.RFC.4795"?>
<?rfc include="reference.RFC.1070"?>
<?rfc include="reference.RFC.4903"?>
<?rfc include="reference.RFC.2491"?>
<?rfc include="reference.RFC.5340"?>
<?rfc include="reference.RFC.0994"?>
<?rfc include="reference.RFC.3947"?>
<?rfc include="reference.RFC.3948"?>
<?rfc include="reference.RFC.5214"?>
<?rfc include="reference.RFC.4306"?>
<?rfc include="reference.RFC.4555"?>
<?rfc include="reference.RFC.4592"?>
<?rfc include="reference.RFC.5685"?>
<?rfc include="reference.RFC.4548"?>
<?rfc include="reference.RFC.4605"?>
<?rfc include="reference.RFC.6621"?>
<?rfc include="reference.RFC.4941"?>
<?rfc include="reference.RFC.5887"?>
<?rfc include="reference.I-D.ietf-savi-framework"?>
<?rfc include="reference.I-D.cheshire-dnsext-multicastdns"?>
<?rfc include="reference.RFC.6656"?>
<?rfc include="reference.RFC.6169"?>
<?rfc include="reference.I-D.clausen-manet-linktype"?>
<?rfc include="reference.I-D.clausen-manet-autoconf-recommendations"?>
<?rfc include="reference.I-D.droms-dhc-dhcpv6-default-router"?>
<?rfc include="reference.RFC.4030"?>
<?rfc include="reference.RFC.5558"?>
<?rfc include="reference.I-D.ietf-6man-udpchecksums"?>
<?rfc include="reference.I-D.jen-apt"?>
<?rfc include="reference.I-D.ietf-6man-udpzero"?>
<?rfc include="reference.RFC.6324"?>
<?rfc include="reference.RFC.6830"?>
<?rfc include="reference.I-D.ietf-grow-va"?>
<?rfc include="reference.I-D.templin-ironbis"?>
<reference anchor="IEN48">
<front>
<title>The Catenet Model for Internetworking</title>
<author fullname="Vinton Cerf" initials="V." surname="Cerf">
<organization/>
</author>
<date month="July" year="1978"/>
</front>
</reference>
<reference anchor="CATENET">
<front>
<title>A Proposal for Interconnecting Packet Switching
Networks</title>
<author fullname="L. Pouzin" initials="L." surname="Pouzin">
<organization/>
</author>
<date month="May" year="1974"/>
</front>
</reference>
<reference anchor="RASADV">
<front>
<title>Remote Access Server Advertisement (RASADV) Protocol
Specification</title>
<author fullname="Microsoft" initials="" surname="Microsoft">
<organization/>
</author>
<date month="October" year="2008"/>
</front>
</reference>
</references>
<section title="Duplicate Address Detection (DAD) Considerations">
<t>A priori uniqueness determination (also known as "pre-service DAD")
for an RLOC assigned on an enterprise-interior interface would require
either flooding the entire enterprise network or somehow discovering a
link in the network on which a node that configures a duplicate address
is attached and performing a localized DAD exchange on that link. But,
the control message overhead for such an enterprise-wide DAD would be
substantial and prone to false-negatives due to packet loss and
intermittent connectivity. An alternative to pre-service DAD is to
autoconfigure pseudo-random RLOCs on enterprise-interior interfaces and
employ a passive in-service DAD (e.g., one that monitors routing
protocol messages for duplicate assignments).</t>
<t>Pseudo-random IPv6 RLOCs can be generated with mechanisms such as
CGAs, IPv6 privacy addresses, etc. with very small probability of
collision. Pseudo-random IPv4 RLOCs can be generated through random
assignment from a suitably large IPv4 prefix space.</t>
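<t>The collision risk for pseudo-random self-generation can be
estimated with the usual birthday approximation, as in this
illustrative Python sketch:</t>
<t><figure><artwork><![CDATA[
```python
import math

def collision_prob(n_nodes, space_bits):
    # Birthday approximation of the probability that any two of
    # n_nodes independently self-generated RLOCs collide when drawn
    # uniformly at random from a space of 2**space_bits addresses.
    space = 2.0 ** space_bits
    return 1.0 - math.exp(-n_nodes * (n_nodes - 1) / (2.0 * space))
```
]]></artwork></figure></t>
<t>For example, a few hundred nodes drawing IPv4 RLOCs at random from
a /8 (roughly 2^24 addresses) collide with probability well below one
percent, while collisions in a 64-bit IPv6 interface identifier space
are negligible.</t>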
<t>Consistent operational practices can assure uniqueness for
VBG-aggregated addresses/prefixes, while statistical properties for
pseudo-random address self-generation can assure uniqueness for the
RLOCs assigned on an ER's enterprise-interior interfaces. Still, an RLOC
delegation authority should be used when available, while a passive
in-service DAD mechanism should be used to detect RLOC duplications when
there is no RLOC delegation authority.</t>
</section>
<section title="Anycast Services">
<t>Some of the IPv4 addresses that appear in the Potential Router List
may be anycast addresses, i.e., they may be configured on the VET
interfaces of multiple VBRs/VBGs. In that case, each VET router
interface that configures the same anycast address must exhibit
equivalent outward behavior.</t>
<t>Use of an anycast address as the IP destination address of tunneled
packets can have subtle interactions with tunnel path MTU and neighbor
discovery. For example, if the initial fragments of a fragmented
tunneled packet with an anycast IP destination address are routed to
different egress tunnel endpoints than the remaining fragments, the
multiple endpoints will be left with incomplete reassembly buffers. This
issue can be mitigated by ensuring that each egress tunnel endpoint
implements a proactive reassembly buffer garbage collection strategy.
Additionally, ingress tunnel endpoints that send packets with an anycast
IP destination address must use the minimum path MTU for all egress
tunnel endpoints that configure the same anycast address as the tunnel
MTU. Finally, ingress tunnel endpoints SHOULD treat ICMP unreachable
messages from a router within the tunnel as at most a weak indication of
neighbor unreachability, since the failures may only be transient and a
different path to an alternate anycast router may quickly be selected
through reconvergence of the underlying routing protocol.</t>
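<t>The proactive garbage collection strategy suggested above can be
sketched as follows (Python; the timeout value and data structures are
assumptions, not requirements):</t>
<t><figure><artwork><![CDATA[
```python
import time

REASSEMBLY_TIMEOUT = 15.0   # seconds; an assumed value

class ReassemblyCache:
    # Proactive garbage collection of incomplete reassembly buffers,
    # e.g., at an anycast egress tunnel endpoint that received only
    # some fragments of a packet routed to multiple endpoints.
    def __init__(self, now=time.monotonic):
        self.now = now
        self.buffers = {}   # (src, dst, ident) -> (first_seen, fragments)

    def add_fragment(self, key, frag):
        first_seen, frags = self.buffers.get(key, (self.now(), []))
        frags.append(frag)
        self.buffers[key] = (first_seen, frags)

    def gc(self):
        # Drop buffers that have lingered past the timeout; returns
        # the number of buffers discarded.
        cutoff = self.now() - REASSEMBLY_TIMEOUT
        stale = [k for k, (t, _) in self.buffers.items() if t < cutoff]
        for k in stale:
            del self.buffers[k]
        return len(stale)
```
]]></artwork></figure></t>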
<t>Use of an anycast address as the IP source address of tunneled
packets can lead to more serious issues. For example, when the IP source
address of a tunneled packet is anycast, ICMP messages produced by
routers within the tunnel might be delivered to different ingress tunnel
endpoints than the ones that produced the packets. In that case,
functions such as path MTU discovery and neighbor unreachability
detection may experience non-deterministic behavior that can lead to
communications failures. Additionally, the fragments of multiple
tunneled packets produced by multiple ingress tunnel endpoints may be
delivered to the same reassembly buffer at a single egress tunnel
endpoint. In that case, data corruption may result due to fragment
misassociation during reassembly.</t>
<t>In view of these considerations, VBGs that configure an anycast
address SHOULD also configure one or more unicast addresses from the
Potential Router List; they SHOULD further accept tunneled packets
destined to any of their anycast or unicast addresses, but SHOULD send
tunneled packets using a unicast address as the source address.</t>
</section>
</back>
</rfc>