Network Working Group                                              X. Xu
Internet-Draft                                                    Huawei
Intended status: Informational                                 R. Raszuk
Expires: April 25, 2015                                    Mirantis Inc.
                                                                S. Hares
                                                 Hickory Hill Consulting
                                                                  Y. Fan
                                                           China Telecom
                                                            C. Jacquenet
                                                                  Orange
                                                                T. Boyes
                                                            Bloomberg LP
                                                                  B. Fee
                                                        Extreme Networks
                                                        October 22, 2014

        Virtual Subnet: A L3VPN-based Subnet Extension Solution
                  draft-ietf-l3vpn-virtual-subnet-02
This document describes a Layer 3 Virtual Private Network (L3VPN)-based subnet extension solution referred to as Virtual Subnet, which can be used for building Layer 3 network virtualization overlays within and/or across data centers.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on April 25, 2015.
Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
For business continuity purposes, Virtual Machine (VM) migration across data centers is commonly used in situations such as data center maintenance, migration, consolidation, expansion, and disaster avoidance. It is generally admitted that IP renumbering of servers (i.e., VMs) after migration is complex and costly, and that it risks extending the business downtime during the migration process. To allow a VM to be migrated from one data center to another without IP renumbering, the subnet on which the VM resides needs to be extended across those data centers.
To achieve subnet extension across multiple Infrastructure-as-a-Service (IaaS) cloud data centers in a scalable way, the following requirements and challenges must be considered:
This document describes an L3VPN-based subnet extension solution referred to as Virtual Subnet (VS), which can be used for data center interconnection while addressing the requirements and challenges mentioned above. In addition, since VS is mainly built on proven technologies such as BGP/MPLS IP VPN [RFC4364] and ARP/ND proxy [RFC0925][RFC1027][RFC4389], service providers offering public IaaS cloud services can rely on their existing BGP/MPLS IP VPN infrastructures and the corresponding operational experience to realize data center interconnection.
Although Virtual Subnet is described in this document as an approach for data center interconnection, it could be used within data centers as well.
Note that the approach described in this document is not intended to achieve an exact emulation of L2 connectivity; it can therefore only support a restricted L2 connectivity service model, with the limitations stated in Section 4. The discussion of the environments in which this service model is suitable is outside the scope of this document.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].
This memo makes use of the terms defined in [RFC4364].
                         +--------------------+
   +-----------------+   |                    |   +-----------------+
   |VPN_A:1.1.1.1/24 |   |                    |   |VPN_A:1.1.1.1/24 |
   |              \  |   |                    |   |  /              |
   | +------+      \++---+-+                +-+---++/      +------+ |
   | |Host A+-------+ PE-1 |                | PE-2 +-------+Host B| |
   | +------+\      ++-+-+-+                +-+-+-++      /+------+ |
   | 1.1.1.2/24      | | |                    | | |     1.1.1.3/24  |
   |                 | | |                    | | |                 |
   |     DC West     | | |  IP/MPLS Backbone  | | |     DC East     |
   +-----------------+ | |                    | | +-----------------+
                       | +--------------------+ |
                       |                        |
              VRF_A :  V               VRF_A :  V
   +------------+---------+--------+   +------------+---------+--------+
   |   Prefix   | Nexthop |Protocol|   |   Prefix   | Nexthop |Protocol|
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.1/32 |127.0.0.1| Direct |   | 1.1.1.1/32 |127.0.0.1| Direct |
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.2/32 | 1.1.1.2 | Direct |   | 1.1.1.2/32 |  PE-1   |  IBGP  |
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.3/32 |  PE-2   |  IBGP  |   | 1.1.1.3/32 | 1.1.1.3 | Direct |
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.0/24 | 1.1.1.1 | Direct |   | 1.1.1.0/24 | 1.1.1.1 | Direct |
   +------------+---------+--------+   +------------+---------+--------+

              Figure 1: Intra-subnet Unicast Example
Now assume host A sends an ARP request for host B before communicating with host B. Upon receiving the ARP request, PE-1 acting as an ARP proxy returns its own MAC address as a response. Host A then sends IP packets for host B to PE-1. PE-1 tunnels such packets towards PE-2 which in turn forwards them to host B. Thus, hosts A and B can communicate with each other as if they were located within the same subnet.
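   The ARP proxy behavior in this example can be illustrated with a
   short, non-normative sketch (in Python).  The routing table below
   mirrors PE-1's VRF_A of Figure 1, while the MAC address value and
   the function name are illustrative assumptions rather than elements
   of this solution:

   # Non-normative sketch: PE-1 answering an ARP request on behalf
   # of a remote host of the same virtual subnet.

   PE1_MAC = "00:00:5e:00:53:01"      # illustrative PE-1 MAC address

   VRF_A = {                          # prefix -> (nexthop, protocol)
       "1.1.1.1/32": ("127.0.0.1", "Direct"),
       "1.1.1.2/32": ("1.1.1.2", "Direct"),
       "1.1.1.3/32": ("PE-2", "IBGP"),
       "1.1.1.0/24": ("1.1.1.1", "Direct"),
   }

   def proxy_arp_reply(target_ip, vrf):
       """Return the MAC to place in the ARP reply, or None."""
       route = vrf.get(target_ip + "/32")
       if route is None:
           return None                # no host route: do not reply
       nexthop, protocol = route
       if protocol == "IBGP":         # target sits behind a remote PE
           return PE1_MAC             # reply with the PE's own MAC
       return None                    # local host answers by itself

   # Host A asks "who has 1.1.1.3?"; PE-1 replies with its own MAC,
   # so host A sends IP packets for host B to PE-1, which tunnels
   # them towards PE-2 according to the host route 1.1.1.3/32.
   print(proxy_arp_reply("1.1.1.3", VRF_A))   # -> 00:00:5e:00:53:01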
                         +--------------------+
   +-----------------+   |                    |   +-----------------+
   |VPN_A:1.1.1.1/24 |   |                    |   |VPN_A:1.1.1.1/24 |
   |              \  |   |                    |   |  /              |
   | +------+      \++---+-+                +-+---++/      +------+ |
   | |Host A+-------+ PE-1 |                | PE-2 +-+-----+Host B| |
   | +------+\      ++-+-+-+                +-+-+-++ |    /+------+ |
   | 1.1.1.2/24      | | |                    | | |  |  1.1.1.3/24  |
   | GW=1.1.1.4      | | |                    | | |  |  GW=1.1.1.4  |
   |                 | | |                    | | |  |   +------+   |
   |                 | | |                    | | |  +---+  GW  |   |
   |                 | | |                    | | |      +------+   |
   |                 | | |                    | | |     1.1.1.4/24  |
   |     DC West     | | |  IP/MPLS Backbone  | | |     DC East     |
   +-----------------+ | |                    | | +-----------------+
                       | +--------------------+ |
                       |                        |
              VRF_A :  V               VRF_A :  V
   +------------+---------+--------+   +------------+---------+--------+
   |   Prefix   | Nexthop |Protocol|   |   Prefix   | Nexthop |Protocol|
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.1/32 |127.0.0.1| Direct |   | 1.1.1.1/32 |127.0.0.1| Direct |
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.2/32 | 1.1.1.2 | Direct |   | 1.1.1.2/32 |  PE-1   |  IBGP  |
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.3/32 |  PE-2   |  IBGP  |   | 1.1.1.3/32 | 1.1.1.3 | Direct |
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.4/32 |  PE-2   |  IBGP  |   | 1.1.1.4/32 | 1.1.1.4 | Direct |
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.0/24 | 1.1.1.1 | Direct |   | 1.1.1.0/24 | 1.1.1.1 | Direct |
   +------------+---------+--------+   +------------+---------+--------+
   |  0.0.0.0/0 |  PE-2   |  IBGP  |   |  0.0.0.0/0 | 1.1.1.4 | Static |
   +------------+---------+--------+   +------------+---------+--------+

            Figure 2: Inter-subnet Unicast Example (1)
As shown in Figure 2, the data center gateway (GW, 1.1.1.4) of the virtual subnet is located in DC East. PE-2 is configured with a static default route pointing to GW, and this default route is advertised to PE-1 according to normal [RFC4364] operation. Assume host A sends an ARP request for its default gateway (i.e., 1.1.1.4) prior to communicating with a destination host outside of its subnet. Upon receiving this ARP request, PE-1, acting as an ARP proxy, returns its own MAC address as a response. Host A then sends packets destined for hosts outside of its subnet to PE-1. PE-1 tunnels such packets towards PE-2 according to the default route learnt from PE-2, and PE-2 in turn forwards them to GW.
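   The forwarding decision made by PE-1 in this example is an ordinary
   longest-prefix-match lookup in VRF_A.  The following non-normative
   Python sketch runs that lookup against PE-1's table of Figure 2;
   the helper function and the off-subnet destination address used in
   the example are illustrative assumptions:

   import ipaddress

   # PE-1's VRF_A from Figure 2 (prefix -> next hop).
   PE1_VRF_A = {
       "1.1.1.1/32": "127.0.0.1",
       "1.1.1.2/32": "1.1.1.2",
       "1.1.1.3/32": "PE-2",
       "1.1.1.4/32": "PE-2",
       "1.1.1.0/24": "1.1.1.1",
       "0.0.0.0/0":  "PE-2",        # default route learnt from PE-2
   }

   def lookup(dest, vrf):
       """Return the next hop of the longest matching prefix."""
       addr = ipaddress.ip_address(dest)
       best = max((ipaddress.ip_network(p) for p in vrf
                   if addr in ipaddress.ip_network(p)),
                  key=lambda n: n.prefixlen)
       return vrf[str(best)]

   print(lookup("1.1.1.3", PE1_VRF_A))    # intra-subnet: host route
   print(lookup("192.0.2.7", PE1_VRF_A))  # off-subnet: default route
                                          # via PE-2, then on to GW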
                         +--------------------+
   +-----------------+   |                    |   +-----------------+
   |VPN_A:1.1.1.1/24 |   |                    |   |VPN_A:1.1.1.1/24 |
   |              \  |   |                    |   |  /              |
   | +------+      \++---+-+                +-+---++/      +------+ |
   | |Host A+--+----+ PE-1 |                | PE-2 +-+-----+Host B| |
   | +------+\ |    ++-+-+-+                +-+-+-++ |    /+------+ |
   | 1.1.1.2/24|     | | |                    | | |  |  1.1.1.3/24  |
   | GW=1.1.1.4|     | | |                    | | |  |  GW=1.1.1.4  |
   | +------+  |     | | |                    | | |  |   +------+   |
   | | GW-1 +--+     | | |                    | | |  +---+ GW-2 |   |
   | +------+        | | |                    | | |      +------+   |
   | 1.1.1.4/24      | | |                    | | |     1.1.1.4/24  |
   |     DC West     | | |  IP/MPLS Backbone  | | |     DC East     |
   +-----------------+ | |                    | | +-----------------+
                       | +--------------------+ |
                       |                        |
              VRF_A :  V               VRF_A :  V
   +------------+---------+--------+   +------------+---------+--------+
   |   Prefix   | Nexthop |Protocol|   |   Prefix   | Nexthop |Protocol|
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.1/32 |127.0.0.1| Direct |   | 1.1.1.1/32 |127.0.0.1| Direct |
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.2/32 | 1.1.1.2 | Direct |   | 1.1.1.2/32 |  PE-1   |  IBGP  |
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.3/32 |  PE-2   |  IBGP  |   | 1.1.1.3/32 | 1.1.1.3 | Direct |
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.4/32 | 1.1.1.4 | Direct |   | 1.1.1.4/32 | 1.1.1.4 | Direct |
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.0/24 | 1.1.1.1 | Direct |   | 1.1.1.0/24 | 1.1.1.1 | Direct |
   +------------+---------+--------+   +------------+---------+--------+
   |  0.0.0.0/0 | 1.1.1.4 | Static |   |  0.0.0.0/0 | 1.1.1.4 | Static |
   +------------+---------+--------+   +------------+---------+--------+

            Figure 3: Inter-subnet Unicast Example (2)
                                +------+
                         +------+ PE-3 +------+
   +-----------------+   |      +------+      |   +-----------------+
   |VPN_A:1.1.1.1/24 |   |                    |   |VPN_A:1.1.1.1/24 |
   |              \  |   |                    |   |  /              |
   | +------+      \++---+-+                +-+---++/      +------+ |
   | |Host A+-------+ PE-1 |                | PE-2 +-------+Host B| |
   | +------+\      ++-+-+-+                +-+-+-++      /+------+ |
   | 1.1.1.2/24      | | |                    | | |     1.1.1.3/24  |
   | GW=1.1.1.1      | | |                    | | |     GW=1.1.1.1  |
   |     DC West     | | |  IP/MPLS Backbone  | | |     DC East     |
   +-----------------+ | |                    | | +-----------------+
                       | +--------------------+ |
                       |                        |
              VRF_A :  V               VRF_A :  V
   +------------+---------+--------+   +------------+---------+--------+
   |   Prefix   | Nexthop |Protocol|   |   Prefix   | Nexthop |Protocol|
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.1/32 |127.0.0.1| Direct |   | 1.1.1.1/32 |127.0.0.1| Direct |
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.2/32 | 1.1.1.2 | Direct |   | 1.1.1.2/32 |  PE-1   |  IBGP  |
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.3/32 |  PE-2   |  IBGP  |   | 1.1.1.3/32 | 1.1.1.3 | Direct |
   +------------+---------+--------+   +------------+---------+--------+
   | 1.1.1.0/24 | 1.1.1.1 | Direct |   | 1.1.1.0/24 | 1.1.1.1 | Direct |
   +------------+---------+--------+   +------------+---------+--------+
   |  0.0.0.0/0 |  PE-3   |  IBGP  |   |  0.0.0.0/0 |  PE-3   |  IBGP  |
   +------------+---------+--------+   +------------+---------+--------+

            Figure 4: Inter-subnet Unicast Example (3)
To support IP multicast between CE hosts of the same virtual subnet, Multicast VPN (MVPN) technologies [RFC6513] could be used directly, without any change. For example, PE routers attached to a given VPN join a default provider multicast distribution tree that is dedicated to that VPN. Ingress PE routers, upon receiving multicast packets from their local CE hosts, forward them towards remote PE routers through the corresponding default provider multicast distribution tree.
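   As a non-normative illustration, the Python sketch below models
   this default-tree behavior: every PE attached to the VPN joins one
   shared tree, and an ingress PE replicates a multicast packet
   received from a local CE host to all other PEs on that tree.  The
   tree membership and the function name are illustrative assumptions:

   # Non-normative sketch: forwarding intra-subnet multicast over a
   # default provider multicast distribution tree (one per VPN).

   DEFAULT_MDT = {
       "VPN_A": ["PE-1", "PE-2"],   # all PEs of VPN_A join the tree
   }

   def forward_multicast(vpn, ingress_pe, packet):
       """Replicate a packet from a local CE to all remote PEs."""
       return [(pe, packet) for pe in DEFAULT_MDT[vpn]
               if pe != ingress_pe]

   # A multicast packet received by PE-1 from a local CE host of
   # VPN_A is sent onto the default tree and delivered to PE-2.
   print(forward_multicast("VPN_A", "PE-1", "pkt-to-239.1.1.1"))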
PE routers SHOULD be able to discover their local CE hosts and keep the list of these hosts up to date in a timely manner, so as to ensure the availability and accuracy of the host routes originated from them. PE routers can accomplish local CE host discovery by traditional host discovery mechanisms based on the ARP or ND protocols. Furthermore, the Link Layer Discovery Protocol (LLDP), the VSI Discovery and Configuration Protocol (VDP), or even interaction with the data center orchestration system could also be considered as a means to dynamically discover local CE hosts.
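   The following non-normative Python sketch illustrates how
   discovered local CE hosts could be turned into /32 host routes and
   aged out when they disappear.  The event handling, the
   advertise/withdraw callables, and the aging timer value are
   illustrative assumptions rather than mechanisms defined by this
   document:

   import time

   # Non-normative sketch: deriving /32 host routes from locally
   # discovered CE hosts (via ARP/ND snooping, LLDP, VDP, etc.).

   HOST_AGE_LIMIT = 300          # illustrative aging timer (seconds)

   local_hosts = {}              # ip -> last time the host was seen

   def host_seen(ip, advertise):
       """Record a discovered CE host and originate its host route."""
       if ip not in local_hosts:
           advertise(ip + "/32")  # e.g., inject into BGP as a VPN route
       local_hosts[ip] = time.monotonic()

   def age_out(withdraw):
       """Withdraw host routes for CE hosts no longer seen."""
       now = time.monotonic()
       for ip in [h for h, seen in local_hosts.items()
                  if now - seen > HOST_AGE_LIMIT]:
           del local_hosts[ip]
           withdraw(ip + "/32")

   # Example: ARP snooping reports host 1.1.1.2 behind this PE.
   host_seen("1.1.1.2", advertise=lambda r: print("advertise", r))
   age_out(withdraw=lambda r: print("withdraw", r))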
Acting as ARP or ND proxies, PE routers SHOULD only respond to an ARP request or Neighbor Solicitation (NS) message for a target host when they have a best route for that target host in the associated VRF and the outgoing interface of that best route is different from the one over which the ARP request or NS message was received. In the scenario where a given VPN site (i.e., a data center) is multi-homed to more than one PE router via an Ethernet switch or an Ethernet network, the Virtual Router Redundancy Protocol (VRRP) [RFC5798] is usually enabled on these PE routers. In this case, only the PE router elected as the VRRP Master is allowed to perform the ARP/ND proxy function.
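   A non-normative sketch of this response rule is given below,
   assuming a simplified representation in which each best route
   carries its outgoing interface; the data structures, interface
   names, and the VRRP state flag are illustrative assumptions:

   # Non-normative sketch of the ARP/ND proxy response rule.
   # Each entry maps a host prefix to its outgoing interface.

   VRF_BEST_ROUTES = {
       "1.1.1.2/32": "eth1",     # local CE host behind interface eth1
       "1.1.1.3/32": "mpls0",    # remote CE host across the backbone
   }

   def should_proxy_reply(target_ip, in_interface, vrrp_master=True):
       """Reply to an ARP request / NS received on in_interface?"""
       if not vrrp_master:            # only the VRRP Master proxies
           return False
       out_if = VRF_BEST_ROUTES.get(target_ip + "/32")
       if out_if is None:             # no best route for the target
           return False
       return out_if != in_interface  # never reply back out the same
                                      # interface the request came from

   print(should_proxy_reply("1.1.1.3", "eth1"))   # True: remote host
   print(should_proxy_reply("1.1.1.2", "eth1"))   # False: local host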
During the VM migration process, the PE router to which the moving VM is now attached would create a host route for that CE host upon receiving a notification message of VM attachment (e.g., a gratuitous ARP or an unsolicited NA message). The PE router to which the moving VM was previously attached would withdraw the corresponding host route upon receiving a notification message of VM detachment (e.g., a VDP message about VM detachment). Meanwhile, the latter PE router could optionally broadcast a gratuitous ARP or send an unsolicited NA message on behalf of that CE host, with the source MAC address being one of its own. In this way, the ARP/ND entries for the CE host that moved, as cached on local CE hosts, would be updated accordingly.

In the case where there is no explicit VM detachment notification mechanism, the PE router could instead detect the detachment event as follows: upon learning a route update for a local CE host from a remote PE router for the first time, the PE router immediately checks whether that local CE host is still attached to it by some means (e.g., ARP/ND ping and/or ICMP ping).

It is important to ensure that the same MAC and IP addresses are associated with the active default gateway in each data center, as the VM would most likely continue to send packets to the same default gateway address after migrating from one data center to another. One possible way to achieve this goal is to configure the same VRRP group in each location, so that the active default gateways in the different data centers share the same virtual MAC and virtual IP addresses.
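   The reaction of a PE router to these mobility events can be
   sketched as follows (non-normative Python).  The probe, advertise,
   and withdraw callables stand for ARP/ND or ICMP probing and for BGP
   route origination or withdrawal, and are illustrative assumptions:

   # Non-normative sketch of a PE router's reaction to VM mobility.

   local_hosts = {"1.1.1.2"}        # hosts currently believed local

   def on_gratuitous_arp(ip, advertise):
       """A VM announced itself locally: originate its host route."""
       local_hosts.add(ip)
       advertise(ip + "/32")

   def on_remote_host_route(ip, probe, withdraw):
       """A remote PE advertised a host route for a 'local' host."""
       if ip in local_hosts and not probe(ip):   # ARP/ND or ICMP ping
           local_hosts.discard(ip)               # the VM has moved away
           withdraw(ip + "/32")

   # Example: 1.1.1.2 migrates to the data center behind a remote PE;
   # the local probe fails, so the stale host route is withdrawn.
   on_remote_host_route("1.1.1.2",
                        probe=lambda ip: False,
                        withdraw=lambda r: print("withdraw", r))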
In a VS environment, the MAC learning domain associated with a virtual subnet that has been extended across multiple data centers is partitioned into segments, and each segment is confined to a single data center. Therefore, data center switches only need to learn local MAC addresses, rather than both local and remote MAC addresses.
When default gateway functions are implemented on PE routers, as shown in Figure 4, the ARP/ND cache table on each PE router only needs to contain the ARP/ND entries of local CE hosts. As a result, the ARP/ND cache table size would not grow as the number of data centers to be connected increases.
In VS, the flooding domain associated with a virtual subnet that has been extended across multiple data centers is partitioned into segments, and each segment is confined to a single data center. Therefore, the performance impact on networks and servers imposed by the flooding of ARP/ND broadcast/multicast and unknown unicast traffic is alleviated.
Take the scenario shown in Figure 4 as an example. To optimize the forwarding path for traffic between cloud users and cloud data centers, the PE routers located at the cloud data centers (i.e., PE-1 and PE-2), which also act as default gateways, propagate host routes for their local CE hosts to the remote PE routers attached to cloud user sites (i.e., PE-3). As a result, traffic from cloud user sites to a given server on the virtual subnet that has been extended across data centers is forwarded directly to the data center where that server resides, since it is forwarded according to the host route for that server rather than the subnet route. Furthermore, for traffic from the cloud data centers to the cloud user sites, each PE router acting as a default gateway forwards the traffic according to the best-match route in the corresponding VRF. As a result, traffic from the data centers to the cloud user sites is forwarded along an optimal path as well.
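   The effect of propagating host routes can be illustrated with the
   following non-normative Python sketch, which assumes a simplified
   view of PE-3's VRF once PE-1 and PE-2 have advertised the host
   routes of their local CE hosts; the table contents and the helper
   function are illustrative assumptions:

   import ipaddress

   # Assumed view of PE-3's VRF_A once host routes are received.
   PE3_VRF_A = {
       "1.1.1.2/32": "PE-1",    # host A lives in DC West
       "1.1.1.3/32": "PE-2",    # host B lives in DC East
       "1.1.1.0/24": "PE-1",    # subnet route (either DC may advertise)
   }

   def lookup(dest, vrf):
       """Return the next hop of the longest matching prefix."""
       addr = ipaddress.ip_address(dest)
       best = max((ipaddress.ip_network(p) for p in vrf
                   if addr in ipaddress.ip_network(p)),
                  key=lambda n: n.prefixlen)
       return vrf[str(best)]

   # The /32 host routes win over the /24 subnet route, so traffic
   # from the cloud user site goes straight to the right data center.
   print(lookup("1.1.1.2", PE3_VRF_A))   # -> PE-1 (DC West)
   print(lookup("1.1.1.3", PE3_VRF_A))   # -> PE-2 (DC East)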
Although most traffic within and across data centers is IP traffic, there may still be a few legacy clustering applications that rely on non-IP communications (e.g., heartbeat messages between cluster nodes). Since Virtual Subnet is strictly based on L3 forwarding, such non-IP communications cannot be supported in the Virtual Subnet solution. To support this non-IP traffic (if present) in an environment where the Virtual Subnet solution has been deployed, an approach following the idea of "route all IP traffic, bridge non-IP traffic" could be considered: all IP traffic, both intra-subnet and inter-subnet, would be processed by the Virtual Subnet forwarding process, while non-IP traffic would be handed over to a particular Layer 2 VPN forwarding process. Such a unified L2/L3 VPN approach requires ingress PE routers to classify the traffic received from CE hosts before distributing it to the corresponding L2 or L3 VPN forwarding process. Note that more and more cluster vendors are offering clustering applications based on Layer 3 interconnection.
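   As a non-normative illustration, the following Python sketch shows
   the kind of Ethertype-based classification an ingress PE could
   apply to frames received from local CE hosts; the Ethertype
   constants are standard values, while the handler names are
   illustrative assumptions:

   # Non-normative sketch of "route IP, bridge non-IP" classification.

   ETH_IPV4 = 0x0800
   ETH_IPV6 = 0x86DD
   ETH_ARP  = 0x0806

   def classify(ethertype):
       """Return which forwarding process should handle the frame."""
       if ethertype in (ETH_IPV4, ETH_IPV6):
           return "L3VPN"       # routed by the Virtual Subnet process
       if ethertype == ETH_ARP:
           return "ARP-PROXY"   # handled by the PE's ARP proxy function
       return "L2VPN"           # non-IP traffic, e.g. cluster heartbeats

   print(classify(0x0800))      # -> L3VPN
   print(classify(0x88B5))      # -> L2VPN (example non-IP protocol)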
As explained above, intra-subnet traffic is forwarded at Layer 3 in the Virtual Subnet solution. Therefore, IP broadcast and link-local multicast traffic cannot be supported by the Virtual Subnet solution. To support IP broadcast and link-local multicast traffic in an environment where the Virtual Subnet solution has been deployed, the unified L2/L3 overlay approach described in Section 4.1 could be considered as well: IP broadcast and link-local multicast traffic would be handed over to the L2VPN forwarding process, while routable IP traffic would be processed by the Virtual Subnet process.
As explained above, intra-subnet traffic is forwarded at Layer 3 in the Virtual Subnet context. Since Virtual Subnet does not require any change to the TTL handling mechanism of BGP/MPLS IP VPN, when a traceroute is performed from one CE host towards another CE host of the same subnet that is attached to a different site, the traceroute output reflects the fact that these two hosts, although belonging to the same subnet, are actually connected via a virtual subnet emulated by ARP proxy rather than via a normal LAN. In addition, other applications that generate intra-subnet traffic with the TTL set to 1 may not work in the Virtual Subnet context, unless special TTL processing for this case has been implemented (e.g., if the source and destination addresses of a packet whose TTL is set to 1 belong to the same extended subnet, both ingress and egress PE routers MUST NOT decrement the TTL of that packet; furthermore, the TTL of that packet SHOULD NOT be copied into the TTL of the transport tunnel, and vice versa).
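   The special TTL processing suggested above can be sketched as
   follows (non-normative Python); the extended subnet matches the
   examples used in this document, and the function name and return
   strings are illustrative assumptions:

   import ipaddress

   # Non-normative sketch of special TTL handling for intra-subnet
   # packets whose TTL is set to 1.

   EXTENDED_SUBNET = ipaddress.ip_network("1.1.1.0/24")

   def pe_ttl_action(src, dst, ttl):
       """Decide how a PE treats the TTL of a received packet."""
       same_subnet = (ipaddress.ip_address(src) in EXTENDED_SUBNET and
                      ipaddress.ip_address(dst) in EXTENDED_SUBNET)
       if ttl == 1 and same_subnet:
           # Neither ingress nor egress PE decrements the TTL, and the
           # TTL is not copied to/from the transport tunnel.
           return "forward, keep TTL"
       return "normal TTL processing"

   print(pe_ttl_action("1.1.1.2", "1.1.1.3", ttl=1))    # TTL kept
   print(pe_ttl_action("1.1.1.2", "192.0.2.7", ttl=1))  # normal handling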
Thanks to Dino Farinacci, Himanshu Shah, Nabil Bitar, Giles Heron, Ronald Bonica, Monique Morrow, Rajiv Asati, Eric Osborne, Thomas Morin, Martin Vigoureux, Pedro Roque Marque, Joe Touch and Wim Henderickx for their valuable comments and suggestions on this document.
There is no requirement for any IANA action.
This document doesn’t introduce additional security risk to BGP/MPLS IP VPN, nor does it provide any additional security feature for BGP/MPLS IP VPN.
[RFC6820]  Narten, T., Karir, M., and I. Foo, "Address Resolution Problems in Large Data Center Networks", RFC 6820, January 2013.