Network Working Group                                    F. Templin, Ed.
Internet-Draft                            Boeing Research & Technology
Obsoletes: RFC5558 (if approved)                       February 25, 2013
Intended status: Experimental
Expires: August 29, 2013
Virtual Enterprise Traversal (VET)
draft-templin-intarea-vet-36.txt
Enterprise networks connect hosts and routers over various link types, and often also connect to the global Internet either directly or via a provider network. Enterprise network nodes require a means to automatically provision addresses/prefixes and support internetworking operation in a wide variety of use cases including Small Office / Home Office (SOHO) networks, Mobile Ad hoc Networks (MANETs), ISP networks, multi-organizational corporate networks and the interdomain core of the global Internet itself. This document specifies a Virtual Enterprise Traversal (VET) abstraction for autoconfiguration and operation of nodes in dynamic enterprise networks.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on August 29, 2013.
Copyright (c) 2013 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
Enterprise networks [RFC4852] connect hosts and routers over various link types (see [RFC4861], Section 2.2). The term "enterprise network" in this context extends to a wide variety of use cases and deployment scenarios. For example, an "enterprise" can be as small as a Small Office / Home Office (SOHO) network, as complex as a multi-organizational corporation, or as large as the global Internet itself. Internet Service Provider (ISP) networks are another example use case that fits well with the VET enterprise network model. Mobile Ad hoc Networks (MANETs) [RFC2501] are a particularly challenging example of an enterprise network, in that their topologies may change dynamically over time and they may employ little or no active management by a centralized network administrative authority. These specialized characteristics of MANETs require careful consideration, but the same principles apply equally to other enterprise network scenarios.
In many cases, enterprise networks must present a stable manifestation to the outside world (e.g., the Internet Default Free Zone) while their internal topologies may be changing dynamically. This is often the case when portions of the enterprise network are mobile, partitioned for security purposes, employ different IP protocol versions, etc. and is most often addressed through encapsulation (also known as tunneling). This document therefore focuses on provisions for accommodating dynamic enterprise networks while presenting an outward appearance of stability and uniformity.
This document specifies a Virtual Enterprise Traversal (VET) abstraction for autoconfiguration and internetworking operation, where addresses of different scopes may be assigned on various types of interfaces with diverse properties. Both IPv4 [RFC0791][RFC0792] and IPv6 [RFC2460][RFC4443] are discussed within this context (other network layer protocols are also considered). The use of standard DHCP [RFC2131] [RFC3315] is assumed unless otherwise specified.
[Diagram: an Enterprise Router with provider-edge interfaces across the top, enterprise-interior interfaces along one side, and enterprise-edge interfaces across the bottom; internally, a Host function and a Router function attach via internal-virtual interfaces to a VET interface.]
Figure 1: Enterprise Router (ER) Architecture
Figure 1 above depicts the architectural model for an Enterprise Router (ER). As shown in the figure, an ER may have a variety of interface types including enterprise-edge, enterprise-interior, provider-edge, internal-virtual, as well as VET interfaces used for encapsulating inner network layer protocol packets for transmission over an underlying IPv4 or IPv6 network. The different types of interfaces are defined, and the autoconfiguration mechanisms used for each type are specified. This architecture applies equally to MANET routers, in which enterprise-interior interfaces typically correspond to the wireless multihop radio interfaces associated with MANETs. Out of scope for this document is the autoconfiguration of provider interfaces, which must be coordinated in a manner specific to the service provider's network.
The VET framework builds on a Non-Broadcast Multiple Access (NBMA) [RFC2491] virtual interface model in a manner similar to other automatic tunneling technologies [RFC2529][RFC5214]. VET interfaces support the encapsulation of inner network layer protocol packets over IP networks (i.e., either IPv4 or IPv6), and provide an NBMA interface abstraction for coordination between tunnel endpoint "neighbors".
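As a concrete illustration of the encapsulation model described above, the sketch below prepends a minimal outer IPv4 header (protocol 41, i.e., IPv6-in-IPv4) to an inner packet. This is a simplified illustration only: field choices such as the fixed TTL are arbitrary, and a real VET interface would also apply any mid-layer (e.g., SEAL) encapsulation and select outer addresses from its neighbor state.

```python
import socket
import struct

def ipv4_header(src: str, dst: str, payload_len: int, proto: int = 41) -> bytes:
    """Minimal 20-byte outer IPv4 header (no options); proto 41 is IPv6-in-IPv4."""
    hdr = struct.pack("!BBHHHBBH4s4s",
                      0x45, 0,             # version/IHL, TOS
                      20 + payload_len,    # total length
                      0, 0,                # identification, flags/fragment offset
                      64, proto, 0,        # TTL, protocol, checksum placeholder
                      socket.inet_aton(src), socket.inet_aton(dst))
    s = sum(struct.unpack("!10H", hdr))    # RFC 791 one's-complement checksum
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return hdr[:10] + struct.pack("!H", ~s & 0xFFFF) + hdr[12:]

def encapsulate(inner: bytes, src_rloc: str, dst_rloc: str) -> bytes:
    """The inner packet rides as the payload of the outer IP header."""
    return ipv4_header(src_rloc, dst_rloc, len(inner)) + inner
```

The outer source and destination here would be the RLOCs of the two tunnel endpoint "neighbors" on the NBMA VET link.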
VET and its associated technologies (including the Subnetwork Encapsulation and Adaptation Layer (SEAL) [I-D.templin-intarea-seal] and Asymmetric Extended Route Optimization (AERO) [RFC6706]) are functional building blocks for a new architecture known as the Interior Routing Overlay Network (IRON) [I-D.templin-ironbis]. Many of the VET principles can be traced to the deliberations of the ROAD group in January 1992, and also to still earlier initiatives including the Catenet model for internetworking [CATENET] [IEN48] [RFC2775] and NIMROD [RFC1753]. The high-level architectural aspects of the ROAD group deliberations are captured in a "New Scheme for Internet Routing and Addressing (ENCAPS) for IPNG" [RFC1955].
VET is related to the present-day activities of the IETF INTAREA, AUTOCONF, DHC, IPv6, MANET, RENUM and V6OPS working groups, as well as the IRTF RRG working group.
This document is based on [RFC5558] but makes significant changes over that earlier work. The most important difference is that this document breaks the linkage between VET and earlier NBMA tunneling mechanisms such as 6over4 and ISATAP. The document therefore no longer has backwards-compatible dependencies on these technologies.
The terminology section has seen some new terms added and some existing terms renamed and/or clarified. Important new terms including "Client Prefix (CP)" and "VET link" have been added, while other terms including VET Border Router and VET Border Gateway have been renamed for greater clarity. RFC2119 terminology has also been added.
"Enterprise Network Characteristics" now also considers cases in which an enterprise network may contain many internal partitions, which is an area that was left underspecified in RFC5558. These partitions may be necessary for such uses as load balancing, organizational separation, etc. The section now also discusses both unidirectional and bidirectional neighbor relationships.
The "Enterprise Router (ER) Autoconfiguration" section now provides a discussion on DHCP relaying considerations, including replay detection. These considerations are important for instances in which DHCP relaying may be excessive (e.g., Mobile Ad-Hoc Networks (MANETs)).
The "VET Border Router Autoconfiguration" section now draws a distinction between what is meant by "VET link" and "VET interface", and explains the cases in which link local addresses can and cannot be used. Provider Aggregated (PA) prefix autoconfiguration now also discusses both stateful and stateless autoconfiguration. The subsection on "ISP-Independent EID Prefix Autoconfiguration" now also introduces the capability of registering Client Prefixes (CPs) with Virtual Service Providers (VSPs).
The "VET Border Gateway (VBG) Autoconfiguration" section now explains the manner in which VBGs can act as "half gateways" in the IRON Client/Server/Relay architecture. The "VET Host Autoconfiguration" section now explains cases in which prefixes may be provided to hosts, i.e., if there is assurance that the link will not partition.
Under "Internetworking Operation", "Routing Protocol Participation" now discusses the case of receiving on-demand redirection messages as a form of routing. The section further discusses PI prefix and CP prefix routing considerations. "Default Route Configuration", "Address Selection" and "Next-Hop Determination" are newly rewritten sections that completely replace significant portions of this major section. "VET Interface Encapsulation/Decapsulation" now gives important details on encapsulation procedures and header formats that were not present in RFC5558. The new section on "Neighbor Coordination" (including discussions of unidirectional and bidirectional neighbor relationships as well as redirection) is also key to understanding the new operational model. The remaining sections of "Internetworking Operation" have received minor and/or substantial rewrites with most of the specification intact from RFC5558. The document finally adds a new appendix on Anycast Services.
The mechanisms within this document build upon the fundamental principles of IP encapsulation. The term "inner" refers to the innermost {address, protocol, header, packet, etc.} *before* encapsulation, and the term "outer" refers to the outermost {address, protocol, header, packet, etc.} *after* encapsulation. VET also accommodates "mid-layer" encapsulations such as SEAL [I-D.templin-intarea-seal] and IPsec [RFC4301].
The terminology in the normative references applies; the following terms are defined within the scope of this document:
The following additional acronyms are used throughout the document:
CGA - Cryptographically Generated Address
DHCP(v4, v6) - Dynamic Host Configuration Protocol
ECMP - Equal Cost Multi Path
EUN - End User Network
FIB - Forwarding Information Base
ICMP - either ICMPv4 or ICMPv6
ICV - Integrity Check Vector
IP - either IPv4 or IPv6
ISATAP - Intra-Site Automatic Tunnel Addressing Protocol
NBMA - Non-Broadcast, Multiple Access
ND - Neighbor Discovery
PIO - Prefix Information Option
PRL - Potential Router List
PRLNAME - Identifying name for the PRL
RIB - Routing Information Base
RIO - Route Information Option
SCMP - SEAL Control Message Protocol
SEAL - Subnetwork Encapsulation and Adaptation Layer
SLAAC - IPv6 StateLess Address AutoConfiguration
SNS/SNA - SCMP Neighbor Solicitation/Advertisement
SPD - SCMP Predirect
SRD - SCMP Redirect
SRS/SRA - SCMP Router Solicitation/Advertisement
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119]. When used in lower case (e.g., must, must not, etc.), these words are not to be interpreted as RFC 2119 keywords, but rather carry their ordinary English meanings.
Enterprise networks consist of links that are connected by Enterprise Routers (ERs) as depicted in Figure 1. ERs typically participate in a routing protocol over enterprise-interior interfaces to discover routes that may include multiple Layer 2 or Layer 3 forwarding hops. VET Border Routers (VBRs) are ERs that connect End User Networks (EUNs) to VET links that span enterprise networks. VET Border Gateways (VBGs) are VBRs that connect VET links to provider networks.
Conceptually, an ER embodies both a host function and router function, and supports communications according to the weak end-system model [RFC1122]. The router function engages in the enterprise-interior routing protocol on its enterprise-interior interfaces, connects any of the ER's EUNs to its VET links, and may also connect the VET links to provider networks (see Figure 1). The host function typically supports network management applications, but may also support diverse applications typically associated with general-purpose computing platforms.
An enterprise network may be as simple as a small collection of ERs and their attached EUNs; an enterprise network may also contain other enterprise networks and/or be a subnetwork of a larger enterprise network. An enterprise network may further encompass a set of branch offices and/or nomadic hosts connected to a home office over one or several service providers, e.g., through Virtual Private Network (VPN) tunnels. Finally, an enterprise network may contain many internal partitions that are logical or physical groupings of nodes for the purpose of load balancing, organizational separation, etc. In that case, each internal partition resembles an individual segment of a bridged LAN.
Enterprise networks that comprise link types with sufficiently similar properties (e.g., Layer 2 (L2) address formats, maximum transmission units (MTUs), etc.) can configure a subnetwork routing service such that the network layer sees the underlying network as an ordinary shared link the same as for a (bridged) campus LAN (this is often the case with large cellular operator networks). In that case, a single network layer hop is sufficient to traverse the underlying network. Enterprise networks that comprise link types with diverse properties and/or configure multiple IP subnets must also provide an enterprise-interior routing service that operates as an IP layer mechanism. In that case, multiple network layer hops may be necessary to traverse the underlying network.
In addition to other interface types, VET nodes configure VET interfaces that view all other nodes on the VET link as neighbors on a virtual NBMA link. VET nodes configure a separate VET interface for each distinct VET link to which they connect, and discover neighbors on the link that can be used for forwarding packets to off-link destinations. VET interface neighbor relationships may be either unidirectional or bidirectional.
A unidirectional neighbor relationship is typically established and maintained as a result of network layer control protocol messaging in a manner that parallels IPv6 neighbor discovery [RFC4861]. A bidirectional neighbor relationship is typically established and maintained as a result of a short transaction between the neighbors (see Section 6.6.4).
For each distinct VET link, a trust basis must be established and consistently applied. For example, for VET links configured over enterprise networks in which VBRs establish symmetric security associations, mechanisms such as IPsec [RFC4301] can be used to assure authentication and confidentiality. In other enterprise network scenarios, VET links may require asymmetric securing mechanisms such as SEcure Neighbor Discovery (SEND) [RFC3971]. VET links configured over still other enterprise networks may find it sufficient to employ only the services provided by SEAL [I-D.templin-intarea-seal] (including anti-replay, packet header integrity, and data origin authentication) and defer strong security services to higher layer functions.
Finally, for VET links configured over enterprise networks with a centralized management structure (e.g., a corporate campus network, an ISP network, etc.), a hybrid routing/mapping service can be deployed using a synchronized set of VBGs. In that case, the VBGs can provide a mapping service (similar to the "default mapper" described in [I-D.jen-apt]) used for short-term packet forwarding until route-optimized paths can be established. For VET links configured over enterprise networks with a distributed management structure (e.g., disconnected MANETs), interdomain coordination between the VET nodes themselves without the assistance of VBGs may be required. Recognizing that various use cases may entail a continuum between a fully centralized and fully distributed approach, the following sections present the mechanisms of Virtual Enterprise Traversal as they apply to a wide variety of scenarios.
ERs, VBRs, VBGs, and VET hosts configure themselves for operation as specified in the following subsections.
ERs configure enterprise-interior interfaces and engage in any routing protocols over those interfaces.
When an ER joins an enterprise network, it first configures an IPv6 link-local address on each enterprise-interior interface that requires an IPv6 link-local capability and configures an IPv4 link-local address on each enterprise-interior interface that requires an IPv4 link-local capability. IPv6 link-local address generation mechanisms include Cryptographically Generated Addresses (CGAs) [RFC3972], IPv6 Privacy Addresses [RFC4941], StateLess Address AutoConfiguration (SLAAC) using EUI-64 interface identifiers [RFC4291] [RFC4862], etc. The mechanisms specified in [RFC3927] provide an IPv4 link-local address generation capability.
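One of the SLAAC mechanisms mentioned above (an IPv6 link-local address with a modified EUI-64 interface identifier per RFC 4291, Appendix A) can be sketched as follows; the MAC address used in the example is illustrative:

```python
import ipaddress

def eui64_link_local(mac: str) -> ipaddress.IPv6Address:
    """IPv6 link-local address from a 48-bit MAC via modified EUI-64
    (RFC 4291 Appendix A): insert 0xFFFE in the middle and flip the
    universal/local bit of the first octet."""
    b = bytes(int(x, 16) for x in mac.split(":"))
    eui = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]
    return ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui)
```

For example, the MAC 00:11:22:33:44:55 yields fe80::211:22ff:fe33:4455. CGAs [RFC3972] and privacy addresses [RFC4941] replace this deterministic identifier with cryptographic or random material.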
Next, the ER configures one or more RLOCs and engages in any routing protocols on its enterprise-interior interfaces. The ER can configure RLOCs via administrative configuration, pseudo-random self-generation from a suitably large address pool, SLAAC, DHCP autoconfiguration, or through an alternate autoconfiguration mechanism.
Pseudo-random self-generation of IPv6 RLOCs can be from a large public or local-use IPv6 address range (e.g., IPv6 Unique Local Addresses [RFC4193]). Pseudo-random self-generation of IPv4 RLOCs can be from a large public or local-use IPv4 address range (e.g., [RFC1918]). When self-generation is used alone, the ER continuously monitors the RLOCs for uniqueness, e.g., by monitoring the enterprise-interior routing protocol. (Note however that anycast RLOCs may be assigned to multiple enterprise-interior interfaces; hence, monitoring for uniqueness applies only to RLOCs that are provisioned as unicast.)
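A minimal sketch of pseudo-random self-generation from the ULA space follows. Note that RFC 4193 also defines a specific hash-based algorithm for the Global ID; drawing it directly from a secure random source, as here, is a simplification:

```python
import os
import ipaddress

def random_ula_prefix() -> ipaddress.IPv6Network:
    """Pseudo-random /48 from the fd00::/8 Unique Local Address block
    (RFC 4193): prefix byte 0xfd followed by a random 40-bit Global ID."""
    global_id = os.urandom(5)
    return ipaddress.IPv6Network((b"\xfd" + global_id + b"\x00" * 10, 48))
```

As the text notes, an ER using self-generated unicast RLOCs must still monitor the routing protocol for collisions, since randomness alone does not guarantee uniqueness.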
SLAAC autoconfiguration of RLOCs can be through the receipt of IPv6 Router Advertisements (RAs) followed by the stateless configuration of addresses based on any included Prefix Information Options (PIOs) [RFC4861][RFC4862].
DHCP autoconfiguration of RLOCs uses standard DHCP procedures; however, ERs acting as DHCP clients SHOULD also use DHCP Authentication [RFC3118] [RFC3315], as discussed further below. In typical enterprise network scenarios (i.e., those with stable links), it may be sufficient to configure one or a few DHCP relays on each link that does not include a DHCP server. In more extreme scenarios (e.g., MANETs that include links with dynamic connectivity properties), DHCP operation may require any ERs that have already configured RLOCs to act as DHCP relays to ensure that client DHCP requests eventually reach a DHCP server. This may result in considerable DHCP message relaying until a server is located, but the DHCP Authentication Replay Detection option [RFC4030] provides relays with a means for avoiding message duplication.
In all enterprise network scenarios, the amount of DHCP relaying required can be significantly reduced if each relay has a way of contacting a DHCP server directly. In particular, if the relay can discover the unicast addresses for one or more servers (e.g., by discovering the unicast RLOC addresses of VBGs as described in Section 5.2.2) it can forward DHCP requests directly to the unicast address(es) of the server(s). If the relay does not know the unicast address of a server, it can forward DHCP requests to a site-scoped DHCP server multicast address if the enterprise network supports site-scoped multicast services. For DHCPv6, relays can forward requests to the site-scoped IPv6 multicast group address 'All_DHCP_Servers' [RFC3315]. For DHCPv4, relays can forward requests to the site-scoped IPv4 multicast group address 'All_DHCPv4_Servers', which SHOULD be set to a well-known site-scoped IPv4 multicast group address for the enterprise network. DHCPv4 servers that delegate RLOCs SHOULD therefore join the 'All_DHCPv4_Servers' multicast group and service any DHCPv4 messages received for that group.
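Whichever destination the relay selects, it forwards the client's message inside a DHCPv6 Relay-forward message. A sketch of that wrapping, following the RFC 3315 Section 20.1 layout (message type, hop count, link-address, peer-address, then the client message carried in a Relay Message option), is shown below; error handling and the other relay options a deployment might add are omitted:

```python
import struct

RELAY_FORW = 12        # DHCPv6 Relay-forward message type (RFC 3315)
OPTION_RELAY_MSG = 9   # Relay Message option code

def relay_forward(client_msg: bytes, link_addr: bytes, peer_addr: bytes,
                  hop_count: int = 0) -> bytes:
    """Wrap a client's DHCPv6 message for forwarding toward a server or the
    next relay; link_addr and peer_addr are 16-byte IPv6 addresses."""
    opt = struct.pack("!HH", OPTION_RELAY_MSG, len(client_msg)) + client_msg
    return bytes([RELAY_FORW, hop_count]) + link_addr + peer_addr + opt
```

The outer destination of this message is then either a known server unicast RLOC or the site-scoped 'All_DHCP_Servers' group, as described above.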
A combined approach using both DHCP and self-generation is also possible when the ER configures both a DHCP client and relay that are connected, e.g., via a pair of back-to-back connected Ethernet interfaces, a tun/tap interface, a loopback interface, inter-process communication, etc. The ER first self-generates an RLOC taken from a temporary addressing range used only for the bootstrapping purpose of procuring an actual RLOC taken from a delegated addressing range. The ER then engages in the enterprise-interior routing protocol and performs a DHCP exchange as above using the temporary RLOC as the address of its relay function. When the DHCP server delegates an actual RLOC address/prefix, the ER abandons the temporary RLOC and re-engages in the enterprise-interior routing protocol using an RLOC taken from the delegation.
Alternatively (or in addition to the above), the ER can request RLOC prefix delegations via an automated prefix delegation exchange over an enterprise-interior interface and can assign the prefix(es) on enterprise-edge interfaces. Note that in some cases, the same enterprise-edge interfaces may assign both RLOC and EID addresses if there is a means for source address selection. In other cases (e.g., for separation of security domains), RLOCs and EIDs are assigned on separate sets of enterprise-edge interfaces.
In some enterprise network scenarios (e.g., MANETs that include links with dynamic connectivity properties), assignment of RLOCs on enterprise-interior interfaces as singleton addresses (i.e., as addresses with /32 prefix lengths for IPv4, or as addresses with /128 prefix lengths for IPv6) MAY be necessary to avoid multi-link subnet issues [RFC4903].
VBRs are ERs that configure and use one or more VET interfaces. In addition to the ER autoconfiguration procedures specified in Section 5.1, VBRs perform the following autoconfiguration operations.
VBRs configure a separate VET interface for each VET link, where each VET link spans a distinct set of underlying links belonging to the same enterprise network. All nodes on the VET link appear as single-hop neighbors from the standpoint of the inner network layer protocol through the use of encapsulation.
The VBR binds each VET interface to one or more underlying interfaces, and uses the underlying interface addresses as RLOCs to serve as the outer source addresses for encapsulated packets. The VBR then assigns a link-local address to each VET interface if possible (*). When IPv6 and IPv4 are used as the inner/outer protocols (respectively), the VBR can autoconfigure an IPv6 link-local address on the VET interface using a modified EUI-64 interface identifier based on an IPv4 RLOC address (see Section 2.2.1 of [RFC5342]). Link-local address configuration for other inner/outer protocol combinations is through administrative configuration, random self-generation (e.g., [RFC4941], etc.) or through an unspecified alternate method.
(*) In some applications, assignment of link-local addresses on a VET interface may be impractical due to an indefinite mapping of the inner link-local address to an outer RLOC address. For example, if there are VET link neighbors located behind Network Address Translators (NATs) any inner link-local address to outer RLOC address mapping may be subject to change due to changes in NAT state. In that case, inner network layer protocol services such as the IPv6 Neighbor Discovery (ND) protocol [RFC4861] that depend on link-local addressing may not be able to function in the normal manner over the VET link.
After initializing the VET interface, the VBR next discovers a Potential Router List (PRL) for the VET link that includes the RLOC addresses of VBGs. The VBR discovers the PRL through administrative configuration, as part of an arrangement with a Virtual Service Provider (VSP) (see: Section 5.2.4), through information conveyed in the enterprise-interior routing protocol, via a multicast beacon, via an anycast VBG discovery message exchange, or through some other means specific to the enterprise network.
If no such enterprise-specific information is available, the VBR can instead resolve an identifying name for the PRL ('PRLNAME') formed as 'hostname.domainname', where 'hostname' is an enterprise-specific name string and 'domainname' is an enterprise-specific Domain Name System (DNS) suffix [RFC1035]. The VBR can discover 'domainname' through the DHCP Domain Name option [RFC2132], administrative configuration, etc. The VBR can discover 'hostname' via link-layer information (e.g., an IEEE 802.11 Service Set Identifier (SSID)), administrative configuration, etc.
In the absence of other information, the VBR sets 'hostname' to "linkupnetworks" and sets 'domainname' to an enterprise-specific DNS suffix (e.g., "example.com"). Isolated enterprise networks that do not connect to the outside world may have no enterprise-specific DNS suffix, in which case the 'PRLNAME' consists only of the 'hostname' component.
After discovering 'PRLNAME', the VBR resolves the name into a list of RLOC addresses through a name service lookup. For centrally managed enterprise networks, the VBR resolves 'PRLNAME' using an enterprise-local name service (e.g., the DNS). For enterprises with no centralized management structure, the VBR resolves 'PRLNAME' using a distributed name service query such as Link-Local Multicast Name Resolution (LLMNR) [RFC4795] over the VET interface. In that case, all VBGs in the PRL respond to the query, and the VBR accepts the union of all responses.
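For the centrally managed case, the name-service lookup reduces to an ordinary resolver query. A minimal sketch follows; the default PRLNAME shown is the document's example convention ("linkupnetworks" plus an enterprise suffix), not a live name, and the LLMNR fallback is only indicated by a comment:

```python
import socket

def discover_prl(prlname: str = "linkupnetworks.example.com"):
    """Resolve the PRLNAME into the set of VBG RLOC addresses via a
    name-service lookup, returning a sorted, de-duplicated list."""
    try:
        infos = socket.getaddrinfo(prlname, None)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror:
        return []   # no central name service: fall back to, e.g., LLMNR
```

In the distributed case the VBR would instead issue an LLMNR query over the VET interface and accept the union of all VBG responses, as described above.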
VBRs that connect their enterprise networks to a provider network can obtain Provider-Aggregated (PA) EID prefixes. For IPv4, VBRs acquire IPv4 PA EID prefixes through administrative configuration, an automated IPv4 prefix delegation exchange, etc.
For IPv6, VBRs acquire IPv6 PA EID prefixes through administrative configuration or through DHCPv6 Prefix Delegation exchanges with a VBG acting as a DHCP relay/server. In particular, the VBR (acting as a requesting router) can use DHCPv6 prefix delegation [RFC3633] over the VET interface to obtain prefixes from the VBG (acting as a delegating router). The VBR obtains prefixes using either a 2-message or 4-message DHCPv6 exchange [RFC3315]. When the VBR acts as a DHCPv6 client, it maps the IPv6 "All_DHCP_Relay_Agents_and_Servers" link-scoped multicast address to the VBG's outer RLOC address.
To perform the 2-message exchange, the VBR's DHCPv6 client function can send a Solicit message with an IA_PD option either directly or via the VBR's own DHCPv6 relay function (see Section 5.1). The VBR's VET interface then forwards the message using VET encapsulation (see Section 6.4) to a VBG which either services the request or relays it further. The forwarded Solicit message will elicit a Reply message from the server containing prefix delegations. The VBR can also propose a specific prefix to the DHCPv6 server per Section 7 of [RFC3633]. The server will check the proposed prefix for consistency and uniqueness, then return it in the Reply message if it was able to perform the delegation.
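The on-the-wire shape of the Solicit message that begins the 2-message exchange can be sketched as follows, per RFC 3315/3633: a one-byte message type, a 3-byte transaction id, and an IA_PD option carrying {IAID, T1, T2}. This is deliberately minimal; a real client also includes Client Identifier and Elapsed Time options, and would add an IAPREFIX sub-option to propose a specific prefix:

```python
import os
import struct

SOLICIT = 1        # DHCPv6 message type (RFC 3315)
OPTION_IA_PD = 25  # Identity Association for Prefix Delegation (RFC 3633)

def solicit_with_ia_pd(iaid: int = 1, t1: int = 0, t2: int = 0) -> bytes:
    """Minimal DHCPv6 Solicit carrying an empty IA_PD option: option header
    (code, length) followed by IAID, T1, and T2, with no sub-options."""
    txid = os.urandom(3)
    ia_pd = struct.pack("!HHIII", OPTION_IA_PD, 12, iaid, t1, t2)
    return bytes([SOLICIT]) + txid + ia_pd
```

On a VET interface, this message is sent with the "All_DHCP_Relay_Agents_and_Servers" inner destination mapped to the VBG's outer RLOC, as described above.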
After the VBR receives IPv4 and/or IPv6 prefix delegations, it can provision the prefixes on enterprise-edge interfaces as well as on other VET interfaces configured over child enterprise networks for which it acts as a VBG. The VBR can also provision the prefixes on enterprise-interior interfaces to service directly-attached hosts on the enterprise-interior link.
The prefix delegations remain active as long as the VBR continues to renew them via the delegating VBG before lease lifetimes expire. The lease lifetime also keeps the delegation state active even if communications between the VBR and delegating VBG are disrupted for a period of time (e.g., due to an enterprise network partition, power failure, etc.). Note however that if the VBR abandons or otherwise loses continuity with the prefixes, it may be obliged to perform network-wide renumbering if it subsequently receives a new and different set of prefixes.
Prefix delegation for non-IP protocols is out of scope.
VBRs can acquire Provider-Independent (PI) prefixes to facilitate multihoming, mobility and traffic engineering without requiring site-wide renumbering events due to a change in ISP connections.
VBRs that connect major enterprise networks (e.g., large corporations, academic campuses, ISP networks, etc.) to the global Internet can acquire short PI prefixes (e.g., an IPv6 /32, an IPv4 /16, etc.) through a registration authority such as the Internet Assigned Numbers Authority (IANA) or a major regional Internet registry. The VBR then advertises the PI prefixes into the global Internet on behalf of its enterprise network without the assistance of an ISP.
VBRs that connect enterprise networks to a provider network can acquire longer Client Prefixes (CPs) (e.g., an IPv6 /56, an IPv4 /24, etc.) through arrangements with a Virtual Service Provider (VSP) that may or may not be associated with a specific ISP. The VBR then coordinates its CPs with a VSP independently of any of its directly attached ISPs. (In many cases, the "VSP" may in fact be a major enterprise network that delegates CPs from its PI prefixes.)
After a VBR receives prefix delegations, it can sub-delegate portions of the prefixes on enterprise-edge interfaces, on child VET interfaces for which it is configured as a VBG and on enterprise-interior interfaces to service directly-attached hosts on the enterprise-interior link. The VBR can also sub-delegate portions of its prefixes to requesting routers connected to child enterprise networks. These requesting routers consider their sub-delegated prefixes as PA, and consider the delegating routers as their points of connection to a provider network.
VBGs are VBRs that connect VET links configured over child enterprise networks to provider networks via provider-edge interfaces and/or via VET links configured over parent enterprise networks. A VBG may also act as a "half-gateway", in that it may need to forward the packets it receives from neighbors on the VET link via another VBG associated with the same VET link. This model is seen in the IRON [I-D.templin-ironbis] Client/Server/Relay architecture, in which a Server "half-gateway" is a VBG that forwards packets with enterprise-external destinations via a Relay "half-gateway" that connects the VET link to the provider network.
VBGs autoconfigure their provider-edge interfaces in a manner that is specific to the provider connections, and they autoconfigure their VET interfaces that were configured over parent VET links using the VBR autoconfiguration procedures specified in Section 5.2. For each of its VET interfaces connected to child VET links, the VBG initializes the interface the same as for an ordinary VBR (see Section 5.2.1). It then arranges to add one or more of its RLOCs associated with the child VET link to the PRL.
VBGs configure a DHCP relay/server on VET interfaces connected to child VET links that require DHCP services. VBGs may also engage in an unspecified anycast VBG discovery message exchange if they are configured to do so. Finally, VBGs respond to distributed name service queries for 'PRLNAME' on VET interfaces connected to VET links that span child enterprise networks with a distributed management structure.
Nodes that cannot be attached via a VBR's enterprise-edge interface (e.g., nomadic laptops that connect to a home office via a Virtual Private Network (VPN)) can instead be configured for operation as a simple host on the VET link. Each VET host performs the same enterprise interior interface RLOC configuration procedures as specified for ERs in Section 5.1. The VET host next performs the same VET interface initialization and PRL discovery procedures as specified for VBRs in Section 5.2, except that it configures its VET interfaces as host interfaces (and not router interfaces). Note also that a node may be configured as a host on some VET interfaces and as a VBR/VBG on other VET interfaces.
A VET host may receive non-link-local addresses and/or prefixes to assign to the VET interface via administrative configuration, DHCP exchanges and/or through SLAAC information conveyed in RAs. If prefixes are provided, however, there must be assurance that either 1) the VET link will not partition, or 2) each VET host interface connected to the VET link will configure a unique set of prefixes. VET hosts therefore depend on DHCP and/or RA exchanges to provide only addresses/prefixes that are appropriate for assignment to the VET interface according to these specific cases, and depend on the VBGs within the enterprise keeping track of which addresses/prefixes were assigned to which hosts.
When the VET host solicits a DHCP-assigned EID address/prefix over a (non-multicast) VET interface, it maps the DHCP relay/server multicast inner destination address to the outer RLOC address of a VBG that it has selected as a default router. The VET host then assigns any resulting DHCP-delegated addresses/prefixes to the VET interface for use as the source address of inner packets. The host will subsequently send all packets destined to EID correspondents via a default router on the VET link, and may discover more-specific routes based on any redirection messages it receives.
Following the autoconfiguration procedures specified in Section 5, ERs, VBRs, VBGs, and VET hosts engage in normal internetworking operations as discussed in the following sections.
ERs engage in any RLOC-based routing protocols over enterprise-interior interfaces to exchange routing information for forwarding IP packets with RLOC addresses. VBRs and VBGs can additionally engage in any EID-based routing protocols over VET, enterprise-edge and provider-edge interfaces to exchange routing information for forwarding inner network layer packets with EID addresses. Note that any EID-based routing instances are separate and distinct from any RLOC-based routing instances.
VBR/VBG routing protocol participation on non-multicast VET interfaces uses the NBMA interface model, e.g., in the same manner as for OSPF over NBMA interfaces [RFC5340]. (VBR/VBG routing protocol participation on multicast-capable VET interfaces can alternatively use the standard multicast interface model, but this may result in excessive multicast control message overhead.)
VBRs can use the list of VBGs in the PRL (see Section 5.2.1) as an initial list of neighbors for EID-based routing protocol participation. VBRs can alternatively use the list of VBGs as potential default routers instead of engaging in an EID-based routing protocol instance. In that case, when the VBR forwards a packet via a VBG it may receive a redirection message indicating a different VET node as a better next hop.
VBRs that connect large enterprise networks to the global Internet advertise their EID PI prefixes directly into the Internet default-free RIB via the Border Gateway Protocol (BGP) [RFC4271] on their own behalf the same as for a major service provider network. VBRs that connect large enterprise networks to provider networks can instead advertise their EID PI prefixes into their providers' routing system(s) if the provider networks are configured to accept them.
VBRs that obtain CPs from a VSP can register them with a serving VBG in the VSP's network (e.g., through a vendor-specific short TCP transaction). The VSP network then acts as a virtual "home" enterprise network that connects its customer enterprise networks to the Internet routing system. The customer enterprise networks in turn appear as mobile components of the VSP's network, while the customer network uses its ISP connections as transits. (In many cases, the "VSP" may itself be a major enterprise network that delegates CPs from its PI prefixes to child enterprise networks.)
Configuration of default routes in the presence of VET interfaces must be carefully coordinated according to the inner and outer network protocols. If the inner and outer protocols are different (e.g., IPv6 in IPv4) then default routes of the inner protocol version can be configured with next-hops corresponding to default routers on a VET interface while default routes of the outer protocol version can be configured with next-hops corresponding to default routers on an underlying interface.
If the inner and outer protocols are the same (e.g., IPv4 in IPv4), care must be taken in setting the default route to avoid ambiguity. For example, if default routes are configured on the VET interface then more-specific routes could be configured on underlying interfaces to avoid looping. Alternatively, multiple default routes can be configured with some having next-hops corresponding to (EID-based) default routers on VET interfaces and others having next-hops corresponding to (RLOC-based) default routers on underlying interfaces. In that case, special next-hop determination rules must be used (see Section 6.4).
When permitted by policy and supported by enterprise-interior routing, VET nodes can avoid encapsulation through communications that directly invoke the outer IP protocol using RLOC addresses instead of EID addresses for end-to-end communications. For example, an enterprise network that provides native IPv4 intra-enterprise services can provide continued support for native IPv4 communications even when encapsulated IPv6 services are available for inter-enterprise communications.
In other enterprise network scenarios, the use of EID-based communications (i.e., instead of RLOC-based communications) may be necessary and/or beneficial to support address scaling, transparent NAT traversal, security domain separation, site multihoming, traffic engineering, etc.
VET nodes can use source address selection rules (e.g., based on name service information) to determine whether to use EID-based or RLOC-based addressing. The remainder of this section discusses internetworking operation for EID-based communications using the VET interface abstraction.
VET nodes perform normal next-hop determination via longest prefix match, and send packets according to the most-specific matching entry in the FIB. If the FIB entry has multiple next-hop addresses, the VET node selects the next-hop with the best metric value. If multiple next hops have the same metric value, the VET node MAY use Equal Cost Multi Path (ECMP) to forward different flows via different next-hop addresses, where flows are determined, e.g., by computing a hash of the inner packet's source address, destination address and flow label fields. Note that it is not important that all VET nodes use the same hashing algorithm nor that they perform ECMP at all; however, each VET node SHOULD apply ECMP in a consistent fashion.
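The flow-based ECMP selection described above can be sketched as follows (illustrative Python; the function names, the choice of SHA-256 and the field encoding are assumptions of this sketch, not part of this specification, which only requires that each node apply its chosen hash consistently):

```python
import hashlib

def select_next_hop(next_hops, src, dst, flow_label):
    """Pick one of several equal-metric next hops for a flow by hashing
    the inner packet's source address, destination address and flow
    label. The same flow always maps to the same next hop."""
    key = f"{src}|{dst}|{flow_label}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

hops = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]
hop = select_next_hop(hops, "2001:db8::1", "2001:db8::2", 0x12345)
```

Because the hash is computed only over inner header fields, packets of one flow stay on one path while different flows spread across the equal-cost next hops.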
If the VET node has multiple default routes of the same inner and outer protocol versions, with some corresponding to EID-based default routers and others corresponding to RLOC-based default routers, it must perform source-address-based selection of a default route. In particular, if the packet's source address is taken from an EID prefix the VET node selects a default route configured over the VET interface; otherwise, it selects a default route configured over an underlying interface.
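The source-address-based default route selection can be illustrated as follows (hypothetical sketch; the interface names and data structures are assumptions for illustration only):

```python
import ipaddress

def select_default_route(src_addr, eid_prefixes, vet_default, rloc_default):
    """Choose between an EID-based default route (via the VET interface)
    and an RLOC-based default route (via an underlying interface) based
    on whether the packet's source address falls within an EID prefix."""
    src = ipaddress.ip_address(src_addr)
    for prefix in eid_prefixes:
        if src in ipaddress.ip_network(prefix):
            return vet_default   # source is an EID; forward over the VET interface
    return rloc_default          # source is an RLOC; forward over an underlying interface

route = select_default_route("2001:db8:1::1", ["2001:db8::/32"], "vet0", "eth0")
```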
As a last resort when there is no matching entry in the FIB (i.e., not even default), VET nodes can discover neighbors within the enterprise network through on-demand name service queries for the packet's destination address. For example, for the IPv6 destination address '2001:DB8:1:2::1' and 'PRLNAME' "linkupnetworks.example.com" the VET node can perform a name service lookup for the domain name:
'1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.0.0.1.0.0.0.8.b.d.0.1.0.0.2.ip6.linkupnetworks.example.com'.
The name service employs wildcard matching (e.g., [RFC4592]) to determine the most-specific matching entry. For example, if the most-specific prefix that covers the IPv6 destination address is '2001:DB8:1::/48' the matching entry is:
'*.1.0.0.0.8.b.d.0.1.0.0.2.ip6.linkupnetworks.example.com'.
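The construction of these query names follows the familiar reverse-nibble convention. A sketch of both the per-address name and the wildcard entry (illustrative Python; function names are assumptions of this sketch):

```python
import ipaddress

def vet_lookup_name(addr, prlname):
    """Build the name-service query name for an IPv6 destination: the
    nibbles of the fully-expanded address in reverse order, followed by
    'ip6.' and the 'PRLNAME'."""
    nibbles = ipaddress.IPv6Address(addr).exploded.replace(":", "")
    return ".".join(reversed(nibbles)) + ".ip6." + prlname

def vet_wildcard_name(prefix, prlname):
    """Build the wildcard entry covering a prefix whose length is a
    multiple of 4 bits (e.g., a /48)."""
    net = ipaddress.IPv6Network(prefix)
    nibbles = net.network_address.exploded.replace(":", "")
    kept = nibbles[: net.prefixlen // 4]
    return "*." + ".".join(reversed(kept)) + ".ip6." + prlname
```

Applied to the examples above, vet_lookup_name("2001:DB8:1:2::1", "linkupnetworks.example.com") and vet_wildcard_name("2001:DB8:1::/48", "linkupnetworks.example.com") reproduce the two names shown in the text.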
If the name-service lookup succeeds, it will return RLOC addresses (e.g., in DNS A records) that correspond to neighbors to which the VET node can forward packets. Note that this implies that, in enterprise networks in which a last resort address resolution service is necessary, the enterprise administrator MUST publish name service resource records that satisfy the address mapping requirements described above.
Name-service lookups in enterprise networks with a centralized management structure use an infrastructure-based service, e.g., an enterprise-local DNS. Name-service lookups in enterprise networks with a distributed management structure and/or that lack an infrastructure-based name service instead use a distributed name service such as LLMNR over the VET interface. When a distributed name service is used, the VBR that performs the lookup sends a multicast query and accepts the union of all replies it receives from neighbors on the VET interface. When a VET node receives the query, it responds IFF it aggregates an IP prefix that covers the prefix in the query.
VET interfaces encapsulate inner network layer packets in a SEAL header followed by an outer transport-layer header such as UDP (if necessary) followed by an outer IP header. Following all encapsulations, the VET interface submits the encapsulated packet to the outer IP forwarding engine for transmission on an underlying interface. The following sections provide further details on encapsulation.
The inner network layer protocol sees the VET interface as an ordinary network interface, and views the outer network layer protocol as an ordinary L2 transport. The inner- and outer network layer protocol types are mutually independent and can be used in any combination. Inner network layer protocol types include IPv6 [RFC2460] and IPv4 [RFC0791], but they may also include non-IP protocols such as OSI/CLNP [RFC0994][RFC1070][RFC4548].
VET interfaces that use SEAL encapsulate the inner packet in a SEAL header as specified in [I-D.templin-intarea-seal]. SEAL encapsulation must be applied uniformly between all neighbors on the VET link. Note that when a VET node sends a SEAL-encapsulated packet to a neighbor that does not use SEAL encapsulation, it may receive an ICMP "port unreachable" or "protocol unreachable" message. If so, the VET node SHOULD treat the message as a hint that the prospective neighbor is unreachable via the VET link.
The VET interface sets the 'NEXTHDR' value in the SEAL header to the IP protocol number associated with the protocol number of the inner network layer. The VET interface sets the other fields in the SEAL header as specified in [I-D.templin-intarea-seal].
Following SEAL encapsulation, VET interfaces that use UDP encapsulation add an outer UDP header. An outer UDP header MUST be included uniformly by all neighbors on the VET link. Note that when a VET node sends a UDP-encapsulated packet to a neighbor that does not recognize the UDP port number, it may receive an ICMP "port unreachable" message. If so, the VET node SHOULD treat the message as a hint that the prospective neighbor is unreachable via the VET link.
VET interfaces use UDP encapsulation on VET links that may traverse NATs and/or traffic conditioning network gear (e.g., Equal Cost MultiPath (ECMP) routers, Link Aggregation Gateways (LAGs), etc.) that only recognize well-known network layer protocols. When UDP encapsulation is used, the VET interface encapsulates the mid-layer packet in an outer UDP header then sets the UDP port number to the port number reserved for SEAL [I-D.templin-intarea-seal].
The VET interface maintains per-neighbor local and remote UDP port numbers. For bidirectional neighbors, the VET interface sets the local UDP port number to the value reserved for SEAL and sets the remote UDP port number to the observed UDP source port number in packets that it receives from the neighbor. In cases in which one of the bidirectional neighbors is behind a NAT, this implies that the one behind the NAT initiates the neighbor relationship. If both neighbors have a way of knowing that there are no NATs in the path, then they may select and set port numbers as for unidirectional neighbors.
For unidirectional neighbors, the VET interface sets the remote UDP port number to the value reserved for SEAL, and additionally selects a small set of dynamic port number values for use as local UDP port numbers. The VET interface then selects one of these local port numbers as the UDP source port for each inner packet it sends, where the port number can be determined, e.g., by a hash calculated over the inner network layer addresses and inner transport layer port numbers. The VET interface uses a hash function of its own choosing when selecting a dynamic port number value, but it should choose a function that provides uniform distribution across the set of values, and it should be consistent in the manner in which the hash is applied. This procedure is RECOMMENDED in order to support adequate load balancing, e.g., when Link Aggregation based on UDP port numbers occurs within the path.
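The per-flow source port selection can be sketched as follows (illustrative Python; the set of dynamic ports, the function name and the choice of SHA-256 are assumptions of this sketch, since the specification leaves the hash function to the implementation):

```python
import hashlib

# Illustrative set of dynamic (ephemeral-range) local UDP port numbers.
LOCAL_PORTS = [49152, 49153, 49154, 49155]

def udp_source_port(inner_src, inner_dst, inner_sport, inner_dport):
    """Select a UDP source port for a unidirectional neighbor by hashing
    the inner addresses and transport port numbers, so all packets of a
    flow carry the same outer source port."""
    key = f"{inner_src}|{inner_dst}|{inner_sport}|{inner_dport}".encode()
    digest = hashlib.sha256(key).digest()
    return LOCAL_PORTS[int.from_bytes(digest[:4], "big") % len(LOCAL_PORTS)]
```

Keeping the port stable per flow, while varying it across flows, lets UDP-port-based Link Aggregation in the path spread traffic without reordering packets within a flow.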
Finally, when the SEAL header Integrity Check Vector (ICV) is included the VET interface SHOULD set the UDP checksum field to zero regardless of the IP protocol version (see [I-D.ietf-6man-udpzero][I-D.ietf-6man-udpchecksums]).
Following any mid-layer and/or UDP encapsulations, the VET interface next adds an outer IP header. Outer IP header construction is the same as specified for ordinary IP encapsulation (e.g., [RFC2003], [RFC2473], [RFC4213], etc.) except that the "TTL/Hop Limit", "Type of Service/Traffic Class" and "Congestion Experienced" values in the inner network layer header are copied into the corresponding fields in the outer IP header. The VET interface also sets the IP protocol number to the appropriate value for the first protocol layer within the encapsulation (e.g., UDP, SEAL, IPsec, etc.). When IPv6 is used as the outer IP protocol, the VET interface sets the flow label value in the outer IPv6 header the same as described in [RFC6438].
When a VET node receives an encapsulated packet, it retains the outer headers, processes the SEAL header (if present) as specified in [I-D.templin-intarea-seal], then performs next hop determination on the packet's inner destination address. If the inner packet will be forwarded out a different interface than it arrived on, the VET node copies the "Congestion Experienced" value in the outer IP header into the corresponding field in the inner network layer header. The VET node then forwards the packet to the next inner network layer hop, or delivers the packet locally if the inner packet is addressed to itself.
If the inner packet will be forwarded out the same VET interface that it arrived on, however, the VET node copies the "TTL/Hop Limit", "Type of Service/Traffic Class" and "Congestion Experienced" values in the outer IP header of the received packet into the corresponding fields in the outer IP header of the packet to be forwarded (i.e., the values are transferred between outer headers and *not* copied from the inner network layer header). This is true even if the outer IP protocol version of the received packet is different than the outer IP protocol version of the packet to be forwarded, i.e., the same as for bridging dissimilar L2 media segments. This re-encapsulation procedure is necessary to support diagnostic functions (e.g., 'traceroute'), and to ensure that the TTL/Hop Limit eventually decrements to 0 in case of transient routing loops.
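The distinction between initial encapsulation (copy from the inner header) and re-encapsulation (copy between outer headers) can be summarized in a small sketch (illustrative Python; the dict-based "headers" and function names are stand-ins for real packet headers, not part of this specification):

```python
def encapsulate(inner_hdr):
    """Initial encapsulation: TTL/Hop Limit, Traffic Class and
    Congestion Experienced are copied from the *inner* header into the
    new outer header."""
    return {"ttl": inner_hdr["ttl"], "tc": inner_hdr["tc"],
            "ce": inner_hdr["ce"]}

def re_encapsulate(recv_outer_hdr):
    """Re-encapsulation (packet forwarded out the same VET interface it
    arrived on): the fields are copied from the *received outer* header,
    so decrements accumulated across tunnel segments are preserved and
    the TTL/Hop Limit eventually reaches 0 in a routing loop."""
    return {"ttl": recv_outer_hdr["ttl"], "tc": recv_outer_hdr["tc"],
            "ce": recv_outer_hdr["ce"]}
```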
VET interfaces that use SEAL use the SEAL Control Message Protocol (SCMP) as specified in Section 4.6 of [I-D.templin-intarea-seal] to coordinate reachability, routing information, and mappings between the inner and outer network layer protocols. SCMP parallels the IPv6 ND [RFC4861] and ICMPv6 [RFC4443] protocols, but operates from within the tunnel and supports operation for any combinations of inner and outer network layer protocols.
When a VET interface prepares a neighbor coordination SCMP message, the message is formatted the same as described for the corresponding IPv6 ND message, except that the message is preceded by a SEAL header the same as for SCMP error messages. The interface sets the SEAL header flags, NEXTHDR, LINK_ID, Identification, and ICV fields the same as for SCMP error messages.
The VET interface next fills out the SCMP message header fields the same as for SCMP error messages, calculates the SCMP message Checksum, encapsulates the message in the requisite outer headers, then calculates the SEAL header ICV value if it is configured to do so and places the result in the ICV field. The VET interface finally sends the message to the neighbor, which will verify the ICV and Checksum before accepting the message.
VET and SEAL are specifically designed for encapsulation of inner network layer payloads over outer IPv4 and IPv6 networks as a link layer. VET interfaces therefore require a new Source/Target Link-Layer Address Option (S/TLLAO) format that encapsulates IPv4 addresses as shown in Figure 2 and IPv6 addresses as shown in Figure 3:
     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |    Type = 2   |   Length = 1  |           Reserved            |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                         IPv4 address                          |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 2: SCMP S/TLLAO Option for IPv4 RLOCs
     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |    Type = 2   |   Length = 3  |           Reserved            |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                           Reserved                            |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                  IPv6 address (bytes 0 thru 3)                |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                  IPv6 address (bytes 4 thru 7)                |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                  IPv6 address (bytes 8 thru 11)               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                  IPv6 address (bytes 12 thru 15)              |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 3: SCMP S/TLLAO Option for IPv6 RLOCs
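The two option formats above can be packed as follows (illustrative Python; the Length field is expressed in units of 8 octets, so Length = 1 yields an 8-octet option and Length = 3 a 24-octet option, consistent with Figures 2 and 3; the function names are assumptions of this sketch):

```python
import struct
import ipaddress

def tllao_ipv4(addr):
    """Pack the 8-octet S/TLLAO for an IPv4 RLOC:
    Type = 2, Length = 1, 16-bit Reserved, then the 4-byte address."""
    return struct.pack("!BBH", 2, 1, 0) + ipaddress.IPv4Address(addr).packed

def tllao_ipv6(addr):
    """Pack the 24-octet S/TLLAO for an IPv6 RLOC:
    Type = 2, Length = 3, 16-bit Reserved, a 32-bit Reserved word,
    then the 16-byte address."""
    return (struct.pack("!BBH", 2, 3, 0) + b"\x00" * 4 +
            ipaddress.IPv6Address(addr).packed)
```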
The following subsections discuss VET interface neighbor coordination using SCMP.
VET hosts and VBRs can send SCMP Router Solicitation (SRS) messages to one or more VBGs in the PRL to receive solicited SCMP Router Advertisements (SRAs).
When a VBG receives an SRS message on a VET interface, it prepares a solicited SRA message. The SRA includes Router Lifetimes, Default Router Preferences, PIOs and any other options/parameters that the VBG is configured to include.
The VBG finally includes one or more SLLAOs formatted as specified above that encode the IPv6 and/or IPv4 RLOC unicast addresses of its own enterprise-interior interfaces or the enterprise-interior interfaces of other nearby VBGs.
VET nodes perform Neighbor Unreachability Detection (NUD) by monitoring hints of forward progress. The VET node can periodically set the 'A' bit in the header of SEAL data packets to elicit SCMP responses from the neighbor. The VET node can also send SCMP Neighbor Solicitation (SNS) messages to the neighbor to elicit SCMP Neighbor Advertisement (SNA) messages.
Responsiveness to routing changes is directly related to the delay in detecting that a neighbor has gone unreachable. In order to provide responsiveness comparable to dynamic routing protocols, a reasonably short neighbor reachable time (e.g., 5 seconds) SHOULD be used.
Additionally, a VET node may receive outer IP ICMP "Destination Unreachable; net / host unreachable" messages from an ER on the path indicating that the path to a neighbor may be failing. If the node receives excessive ICMP unreachable errors through multiple RLOCs associated with the same FIB entry, it SHOULD delete the FIB entry and allow subsequent packets to flow through a different route (e.g., a default route with a VBG as the next hop).
The VET node connected to the source EUN (i.e., the source VET node) can set R=1 in the SEAL header of a data packet to be forwarded as an indication that redirection messages will be accepted from the VET node connected to the destination EUN (i.e., the target VET node). Each VBG on the VET interface chain to the target preserves the state of the R bit when it re-encapsulates and forwards the packet.
When the VET node that acts as server to the target VET node receives the packet, it sends an SCMP "Predirect" (SPD) message forward to the target VET node. The target VET node in turn creates an SCMP "Redirect" (SRD) message to send back to the source VET node. The SPD and SRD message bodies are formed as specified in AERO [RFC6706], while the encapsulation headers and message header are prepared as for SCMP encapsulation instead of AERO encapsulation.
Before sending the SRD message, the target VET node also creates a 128-bit secret key value (T_Key) that it will use to validate the SEAL header ICV in future packets it will receive from the (redirected) source VET node. The target encrypts T_Key with the secret key it uses to validate the ICV in SEAL packets received from the previous VET interface hop (P_Key(N)). It then writes the encrypted value in the "Target" field of the SRD message, i.e., instead of an IPv6 address. The target VET node then encapsulates the SRD message in a SEAL header as specified above, calculates the SEAL ICVs and returns the message to the previous hop VBG on the chain toward the source.
When the target returns the SRD message, each intermediate VBG in the chain toward the source relays the message by examining the source address of the inner packet within the RHO to determine the previous hop toward the source. Each intermediate VBG in the chain verifies the SRD message SEAL ICV and Checksum, and decrypts the T_Key value in the SRD message "Target" field using its own secret key (P_Key(i)). The VBG then re-encrypts T_Key using the key corresponding to the next hop toward the source (P_Key(i-1)), then re-calculates the SEAL ICV and sends the SRD message to the previous hop. This relaying process is otherwise the same as for SCMP error message relaying specified in Section 4.6 of [I-D.templin-intarea-seal].
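The hop-by-hop re-encryption of T_Key can be sketched as follows. This is a toy illustration only: the XOR-with-hash-derived-keystream "cipher" stands in for whatever real cipher an implementation would use, and the function names are assumptions of this sketch, not part of this specification.

```python
import hashlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy stand-in for a real cipher (illustration only): XOR the data
    with a hash-derived keystream. A real implementation would use a
    proper authenticated cipher."""
    stream = hashlib.sha256(key).digest()[: len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

def relay_target_field(field: bytes, p_key_i: bytes, p_key_prev: bytes) -> bytes:
    """Each intermediate VBG decrypts T_Key from the SRD 'Target' field
    using its own secret key P_Key(i), then re-encrypts it with the key
    shared with the previous hop toward the source, P_Key(i-1)."""
    t_key = toy_decrypt(p_key_i, field)
    return toy_encrypt(p_key_prev, t_key)
```

At the end of the chain, the source VET node decrypts the field with the key it shares with its first-hop VBG and recovers the 128-bit T_Key, which no on-path observer outside the chain ever sees in the clear.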
When the source VET node receives the SRD message, it discovers both the target's delegated prefix and candidate link layer addresses for this new (unidirectional) target VET node. The source VET node then installs the prefix included in the Redirect message in a forwarding table entry with the target as the next hop. The source node also caches the T_Key value, and uses it to calculate the ICVs it will include in the SEAL header/trailer of subsequent packets it sends to the target.
The source can subsequently send packets destined to an address covered by the destination prefix using SEAL encapsulation via the target as the next hop. The target can then use the ICVs in the SEAL data packets for data origin authentication (similar to the source address validation described in [I-D.ietf-savi-framework]), but it need not also check the outer source addresses/port numbers of the packets. Therefore, the outer addresses may change over time even if the inner source address stays the same.
Following redirection, if the source is subsequently unable to reach the target via the route-optimized path, it deletes the destination prefix forwarding table entry and installs a new forwarding table entry for the destination prefix with a default router as the next hop. The source VET node thereafter sets R=0 in the SEAL headers of data packets that it sends toward the destination prefix, but it may attempt redirection again at a later time by again setting R=1.
Finally, the source and target VET nodes set an expiration timer on the destination forwarding table entry so that stale entries are deleted in a timely fashion as specified in AERO [RFC6706]. The source MAY further engage the target in a bidirectional neighbor synchronization exchange as described in Section 6.6.4 if it is configured to do so.
The tunnel neighbor relationship between a pair of VET interface tunnel neighbors can be either unidirectional or bidirectional. A unidirectional relationship (see Section 6.6.3) can be established when the source VET node 'A' will tunnel data packets directly to a target VET node 'B', but 'B' will not tunnel data packets directly to 'A'. A bidirectional relationship is necessary, e.g., when a pair of VET nodes require a client/server or peer-to-peer binding.
In order to establish a bidirectional tunnel neighbor relationship, the initiator (call it "A") performs a reliable exchange (e.g., a short TCP transaction, a DHCP client/server exchange, etc.) with the responder (call it "B"). The details of the transaction are out of scope for this document, and indeed need not be standardized as long as both the initiator and responder observe the same specifications (typically manifested by a small piece of software provisioned to a client VET node from a service provider). Note that a short transaction instead of a persistent connection is advised if the outer network layer protocol addresses may change, e.g., due to a mobility event, due to loss of state in network middleboxes, etc.
During the transaction, "A" and "B" first authenticate themselves to each other, then exchange information regarding the inner network layer prefixes that will be used for conveying inner packets that will be forwarded over the tunnel. In this process, the initiator and responder register one or more link identifiers (LINK_IDs) with one another to provide "handles" for outer IP connection addresses.
Following this bidirectional tunnel neighbor establishment, the neighbors monitor the soft state for liveness, e.g., using Neighbor Unreachability Detection hints of forward progress. When one of the neighbors wishes to terminate the relationship, it performs another short transaction to request the termination, then both neighbors delete their respective tunnel soft state.
Once a bidirectional neighbor relationship has been established, the initiator and responder can further engage in a dynamic routing protocol (e.g., OSPF [RFC5340], etc.) to exchange inner network layer prefix information if they are configured to do so.
VET interfaces that use IPsec encapsulation [RFC4301] use the Internet Key Exchange protocol, version 2 (IKEv2) [RFC4306] to manage security association setup and maintenance. IKEv2 provides a logical equivalent of the SCMP in terms of VET interface neighbor coordinations; for example, IKEv2 also provides mechanisms for redirection [RFC5685] and mobility [RFC4555].
IPsec additionally provides an extended Identification field and ICV; these features allow IPsec to utilize outer IP fragmentation and reassembly with less risk of exposure to data corruption due to reassembly misassociations.
VBRs that travel between distinct enterprise networks must either abandon their PA prefixes that are relative to the "old" network and obtain PA prefixes relative to the "new" network, or somehow coordinate with a "home" network to retain ownership of the prefixes. In the first instance, the VBR would be required to coordinate a network renumbering event on its attached networks using the new PA prefixes [RFC4192][RFC5887]. In the second instance, an adjunct mobility management mechanism is required.
VBRs can retain their CPs as they travel between distinct network points of attachment as long as they continue to refresh their CP-to-RLOC address mappings with their serving VBG in a bidirectional neighbor exchange (see Section 6.6.4). (When the VBR moves far from its serving VBG, it can also select a new VBG in order to maintain optimal routing.) In this way, VBRs can update their CP-to-RLOC mappings in real time and without requiring an adjunct mobility management mechanism.
VBRs that have true PI prefixes can withdraw the prefixes from former Internet points of attachment and re-advertise them at new points of attachment as they move. However, this method has been shown to produce excessive routing churn in the global Internet BGP tables, and should be avoided for any mobility scenarios that may occur along short timescales. The alternative is to employ a system in which the true PI prefixes are not injected into the Internet routing system, but rather managed through some separate global mapping database. This latter method is employed by the LISP proposal [RFC6830].
The VBGs of a multihomed enterprise network participate in a private inner network layer routing protocol instance (e.g., via an interior BGP instance) to accommodate network partitions/merges as well as intra-enterprise mobility events.
Whether or not the underlying enterprise network supports a native multicasting service, the VET node can act as an inner network layer IGMP/MLD proxy [RFC4605] on behalf of its attached EUNs and convey its multicast group memberships over the VET interface to a VBG acting as a multicast router. The VET node's inner network layer multicast transmissions will therefore be encapsulated in outer headers with the unicast address of the VBG as the destination.
In multicast-capable enterprise networks, ERs provide an enterprise-wide multicasting service (e.g., Simplified Multicast Forwarding (SMF) [RFC6621], Protocol Independent Multicast (PIM) routing, Distance Vector Multicast Routing Protocol (DVMRP) routing, etc.) over their enterprise-interior interfaces such that outer IP multicast messages of site-scope or greater scope will be propagated across the enterprise network. For such deployments, VET nodes can optionally provide a native inner multicast/broadcast capability over their VET interfaces through mapping of the inner multicast address space to the outer multicast address space. In that case, operation of link- or greater-scoped inner multicasting services (e.g., a link-scoped neighbor discovery protocol) over the VET interface is available, but SHOULD be used sparingly to minimize enterprise-wide flooding.
VET nodes encapsulate inner multicast messages sent over the VET interface in any mid-layer headers followed by an outer IP header with a site-scoped outer IP multicast address as the destination. For the case of IPv6 and IPv4 as the inner/outer protocols (respectively), [RFC2529] provides mappings from the IPv6 multicast address space to a site-scoped IPv4 multicast address space (for other encapsulations, mappings are established through administrative configuration or through an unspecified alternate static mapping). Note that VET links will use mid-layer encapsulations as the means for distinguishing VET nodes from legacy RFC2529 nodes.
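For the IPv6-over-IPv4 case, the [RFC2529] mapping places the inner group into the organization-local IPv4 multicast range 239.192.0.0/16, carrying the low-order bits of the IPv6 group address. A sketch of that mapping (illustrative Python; the function name is an assumption of this sketch):

```python
import ipaddress

def map_ipv6_mcast_to_ipv4(ipv6_group):
    """Map an inner IPv6 multicast group to a site-scoped IPv4 group
    239.192.Y.Z, where Y and Z are the last two octets of the IPv6
    multicast address (per the [RFC2529] mapping, as summarized here)."""
    packed = ipaddress.IPv6Address(ipv6_group).packed
    return f"239.192.{packed[14]}.{packed[15]}"

group = map_ipv6_mcast_to_ipv4("ff02::1")  # all-nodes -> "239.192.0.1"
```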
Multicast mapping for inner multicast groups over outer IP multicast groups can be accommodated, e.g., through VET interface snooping of inner multicast group membership and routing protocol control messages. To support inner-to-outer multicast address mapping, the VET interface acts as a virtual outer IP multicast host connected to its underlying interfaces. When the VET interface detects that an inner multicast group joins or leaves, it forwards corresponding outer IP multicast group membership reports on an underlying interface over which the VET interface is configured. If the VET node is configured as an outer IP multicast router on the underlying interfaces, the VET interface forwards locally looped-back group membership reports to the outer IP multicast routing process. If the VET node is configured as a simple outer IP multicast host, the VET interface instead forwards actual group membership reports (e.g., IGMP messages) directly over an underlying interface.
Since inner multicast groups are mapped to site-scoped outer IP multicast groups, the site administrator MUST ensure that the site-scoped outer IP multicast messages received on the underlying interfaces for one VET interface do not "leak out" to the underlying interfaces of another VET interface. This is accommodated through normal site-scoped outer IP multicast group filtering at enterprise network boundaries.
VET nodes can perform enterprise-wide service discovery using a suitable name-to-address resolution service. Examples of flooding-based services include the use of LLMNR [RFC4795] over the VET interface or multicast DNS (mDNS) [I-D.cheshire-dnsext-multicastdns] over an underlying interface. More scalable and efficient service discovery mechanisms (e.g., anycast) are for further study.
A VET link can be partitioned into multiple distinct logical groupings. In that case, each partition configures its own distinct 'PRLNAME' (e.g., 'linkupnetworks.zone1.example.com', 'linkupnetworks.zone2.example.com', etc.).
VBGs that are configured to support partitioning MAY further create multiple IP subnets within a partition, e.g., by sending SRAs with PIOs containing different IP prefixes to different groups of VET hosts. VBGs can identify subnets, e.g., by examining RLOC prefixes, observing the enterprise-interior interfaces over which SRSs are received, etc.
In the limiting case, VBGs can advertise a unique set of IP prefixes to each VET host such that each host belongs to a different subnet (or set of subnets) on the VET interface.
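As a non-normative sketch of the limiting case above, a VBG might carve a distinct /64 out of a covering delegated prefix for each VET host; the prefix and host names below are hypothetical:

```python
import ipaddress

def per_host_prefixes(covering, hosts, new_prefix=64):
    """Non-normative sketch: assign each VET host its own subnet
    carved from a covering prefix, as a VBG advertising a unique
    set of IP prefixes per host might do."""
    net = ipaddress.ip_network(covering)
    # pair each host with the next available subnet of the given length
    return dict(zip(hosts, net.subnets(new_prefix=new_prefix)))
```

For example, `per_host_prefixes("2001:db8::/48", ["hostA", "hostB"])` places each host on its own /64 subnet of the VET interface.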
VBGs retain explicit state that tracks the inner network layer prefixes delegated to VBRs connected to the VET link, e.g., so that packets are delivered to the correct VBRs. When a VBG loses some or all of its state (e.g., due to a power failure), client VBRs MUST refresh the VBG's state so that packets can be forwarded over correct routes.
VBGs can support legacy ISATAP services according to the specifications in [RFC5214]. In particular, VBGs can configure legacy ISATAP interfaces and VET interfaces over the same sets of underlying interfaces as long as the PRLs and IPv6 prefixes associated with the ISATAP/VET interfaces are distinct.
There are no IANA considerations for this document.
Security considerations for MANETs are found in [RFC2501].
The security considerations found in [RFC2529][RFC5214][RFC6324] also apply to VET.
SEND [RFC3971] and/or IPsec [RFC4301] can be used in environments where attacks on the neighbor coordination protocol are possible. SEAL [I-D.templin-intarea-seal] supports path MTU discovery, and provides per-packet authenticating information for data origin authentication, anti-replay and message header integrity.
Rogue neighbor coordination messages with spoofed RLOC source addresses can consume network resources and cause VET nodes to perform extra work. Nonetheless, VET nodes SHOULD NOT "blacklist" such RLOCs, as that may result in a denial of service to the RLOCs' legitimate owners.
VBRs and VBGs observe the recommendations for network ingress filtering [RFC2827].
Brian Carpenter and Cyndi Jung introduced the concept of intra-site automatic tunneling in [RFC2529]; this concept was later termed "Virtual Ethernet" and investigated by Quang Nguyen under the guidance of Dr. Lixia Zhang. Subsequent works by these authors and their colleagues have motivated a number of foundational concepts on which this work is based.
Telcordia has proposed DHCP-related solutions for MANETs through the CECOM MOSAIC program.
The Naval Research Lab (NRL) Information Technology Division uses DHCP in their MANET research testbeds.
Security concerns pertaining to tunneling mechanisms are discussed in [RFC6169].
Default router and prefix information options for DHCPv6 are discussed in [I-D.droms-dhc-dhcpv6-default-router].
An automated IPv4 prefix delegation mechanism is proposed in [RFC6656].
RLOC prefix delegation for enterprise-edge interfaces is discussed in [I-D.clausen-manet-autoconf-recommendations].
MANET link types are discussed in [I-D.clausen-manet-linktype].
The LISP proposal [RFC6830] examines encapsulation/decapsulation issues and other aspects of tunneling.
Various proposals within the IETF have suggested similar mechanisms.
The following individuals gave direct and/or indirect input that was essential to the work: Jari Arkko, Teco Boot, Emmanuel Bacelli, Fred Baker, James Bound, Scott Brim, Brian Carpenter, Thomas Clausen, Claudiu Danilov, Chris Dearlove, Remi Despres, Gert Doering, Ralph Droms, Washam Fan, Dino Farinacci, Vince Fuller, Thomas Goff, David Green, Joel Halpern, Bob Hinden, Sascha Hlusiak, Sapumal Jayatissa, Dan Jen, Darrel Lewis, Tony Li, Joe Macker, David Meyer, Gabi Nakibly, Thomas Narten, Pekka Nikander, Dave Oran, Alexandru Petrescu, Mark Smith, John Spence, Jinmei Tatuya, Dave Thaler, Mark Townsley, Ole Troan, Michaela Vanderveen, Robin Whittle, James Woodyatt, Lixia Zhang, and others in the IETF AUTOCONF and MANET working groups. Many others have provided guidance over the course of many years.
Discussions with colleagues following the publication of RFC5558 have provided useful insights that have resulted in significant improvements to this, the Second Edition of VET.
The following individuals have contributed to this document:
Eric Fleischman (eric.fleischman@boeing.com)
Thomas Henderson (thomas.r.henderson@boeing.com)
Steven Russert (steven.w.russert@boeing.com)
Seung Yi (seung.yi@boeing.com)
Ian Chakeres (ian.chakeres@gmail.com) contributed to earlier versions of the document.
Jim Bound's foundational work on enterprise networks provided significant guidance for this effort. We mourn his loss and honor his contributions.
A priori uniqueness determination (also known as "pre-service DAD") for an RLOC assigned on an enterprise-interior interface would require either flooding the entire enterprise network or somehow discovering the link on which a node that has configured a duplicate address is attached and performing a localized DAD exchange on that link. But the control message overhead for such an enterprise-wide DAD would be substantial and prone to false negatives due to packet loss and intermittent connectivity. An alternative to pre-service DAD is to autoconfigure pseudo-random RLOCs on enterprise-interior interfaces and employ a passive in-service DAD (e.g., one that monitors routing protocol messages for duplicate assignments).
Pseudo-random IPv6 RLOCs can be generated with mechanisms such as CGAs, IPv6 privacy addresses, etc. with very small probability of collision. Pseudo-random IPv4 RLOCs can be generated through random assignment from a suitably large IPv4 prefix space.
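The IPv4 case above can be sketched as follows (illustrative only; the prefix is hypothetical, and passive in-service DAD is still needed to detect the rare duplicate):

```python
import secrets
import ipaddress

def random_rloc(prefix):
    """Sketch of pseudo-random RLOC self-generation: pick a uniformly
    random host address from a suitably large IPv4 prefix. Collision
    probability shrinks as the prefix grows."""
    net = ipaddress.ip_network(prefix)
    # skip the network and broadcast addresses at the prefix edges
    offset = secrets.randbelow(net.num_addresses - 2) + 1
    return net.network_address + offset
```

For example, `random_rloc("10.0.0.0/8")` draws from roughly 2^24 candidate addresses, so even hundreds of self-configuring nodes collide with only small probability.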
Consistent operational practices can assure uniqueness for VBG-aggregated addresses/prefixes, while statistical properties for pseudo-random address self-generation can assure uniqueness for the RLOCs assigned on an ER's enterprise-interior interfaces. Still, an RLOC delegation authority should be used when available, while a passive in-service DAD mechanism should be used to detect RLOC duplications when there is no RLOC delegation authority.
Some of the IPv4 addresses that appear in the Potential Router List may be anycast addresses, i.e., they may be configured on the VET interfaces of multiple VBRs/VBGs. In that case, each VET router interface that configures the same anycast address must exhibit equivalent outward behavior.
Use of an anycast address as the IP destination address of tunneled packets can have subtle interactions with tunnel path MTU and neighbor discovery. For example, if the initial fragments of a fragmented tunneled packet with an anycast IP destination address are routed to different egress tunnel endpoints than the remaining fragments, the multiple endpoints will be left with incomplete reassembly buffers. This issue can be mitigated by ensuring that each egress tunnel endpoint implements a proactive reassembly buffer garbage collection strategy. Additionally, ingress tunnel endpoints that send packets with an anycast IP destination address must use the minimum path MTU for all egress tunnel endpoints that configure the same anycast address as the tunnel MTU. Finally, ingress tunnel endpoints SHOULD treat ICMP unreachable messages from a router within the tunnel as at most a weak indication of neighbor unreachability, since the failures may only be transient and a different path to an alternate anycast router may quickly be selected through reconvergence of the underlying routing protocol.
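The minimum-MTU rule above can be sketched as follows (an illustrative fragment; the function and parameter names are assumptions, not part of this specification):

```python
def anycast_tunnel_mtu(path_mtus, encaps_overhead=0):
    """Sketch: an ingress tunnel endpoint cannot know which egress
    endpoint will receive a packet sent to an anycast destination,
    so it must bound the tunnel MTU by the minimum path MTU across
    all egress endpoints that configure the same anycast address,
    less any encapsulation overhead."""
    if not path_mtus:
        raise ValueError("no egress tunnel endpoints known")
    return min(path_mtus) - encaps_overhead
```

For example, with egress path MTUs of 1500, 1480, and 9000 octets and 20 octets of outer IPv4 encapsulation, the tunnel MTU would be bounded at 1460 octets.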
Use of an anycast address as the IP source address of tunneled packets can lead to more serious issues. For example, when the IP source address of a tunneled packet is anycast, ICMP messages produced by routers within the tunnel might be delivered to different ingress tunnel endpoints than the ones that produced the packets. In that case, functions such as path MTU discovery and neighbor unreachability detection may experience non-deterministic behavior that can lead to communications failures. Additionally, the fragments of multiple tunneled packets produced by multiple ingress tunnel endpoints may be delivered to the same reassembly buffer at a single egress tunnel endpoint. In that case, data corruption may result due to fragment misassociation during reassembly.
In view of these considerations, VBGs that configure an anycast address SHOULD also configure one or more unicast addresses from the Potential Router List; they SHOULD further accept tunneled packets destined to any of their anycast or unicast addresses, but SHOULD send tunneled packets using a unicast address as the source address.