Internet-Draft | EVPN L3MH | July 2021 |
MacKenzie, et al. | Expires 12 January 2022 |
This document describes a mechanism that brings the higher network availability and load-balancing benefits of EVPN Multi-Chassis Link Aggregation Group (MC-LAG) multi-homing to various L3 services delivered by EVPN.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119] and RFC 8174 [RFC8174].¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 12 January 2022.¶
Copyright (c) 2021 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.¶
Delivering resilient L3VPN service to a CE requires multiple service PEs to run an MC-LAG mechanism, which previously required a proprietary Inter-Chassis Link (ICL) control plane between them.¶
This proposed extension to [RFC7432] brings EVPN-based MC-LAG all-active multi-homing load-balancing to various services (L2 and L3) delivered by EVPN. Although this solution is also applicable to some L2 service use cases (for example, Centralized Gateway), this document focuses on the L3VPN [RFC4364] use case for its examples.¶
EVPN MC-LAG is completely transparent to the CE device and provides link- and node-level redundancy with load-balancing, using the existing BGP control plane already required by the L3 services.¶
For example, the L3VPN service can be MPLS, VXLAN, or SRv6 based, and does not require EVPN signaling to remote neighbors. The EVPN signaling is limited to the redundant service PEs sharing an Ethernet Segment Identifier (ESI), and is used to synchronize ARP/ND, multicast Join/Leave, and IGP routes, replacing the need for an ICL.¶
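The following informative sketch (Python; all names are hypothetical and non-normative) summarizes which EVPN route type carries each category of synchronized state between the ES peer PEs, per Sections 2.4 through 2.6:¶

   # Informative sketch only: EVPN route types used to synchronize
   # state between the multi-homed peer PEs sharing an ES.
   SYNC_ROUTE_TYPES = {
       "arp_nd_adjacency": ("RT-2",),          # Section 2.4
       "igmp_join_leave":  ("RT-7", "RT-8"),   # Section 2.5
       "igp_prefix":       ("RT-5",),          # Section 2.6
   }

   def route_types_for(state_kind: str) -> tuple:
       """Return the EVPN route type(s) used to sync a state kind."""
       return SYNC_ROUTE_TYPES[state_kind]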
Figure 1 shows an MC-LAG multi-homing topology where PE1 and PE2 are part of the same redundancy group providing multi-homing to CE1 via interfaces I1 and I2. Interfaces I1 and I2 are Bundle-Ethernet interfaces running the LACP protocol. The CE device can be a layer-2 or layer-3 device connecting to the redundant PEs over a single LACP LAG port. In the case of a layer-3 CE device, this document addresses an IGP adjacency between the PEs and the CE; further study is needed to support BGP PE-CE protocols. The core, shown as IP or MPLS enabled, provides a wide range of L3 services. MC-LAG multi-homing functionality is decoupled from those services in the core and focuses on providing multi-homing to the CE.¶
To deliver resilient layer-3 services and provide traffic load-balancing towards the access, the two service PEs will advertise layer-3 reachability towards the layer-3 core, and both will be eligible to receive traffic and forward it towards the access.¶
The layer-2 hashing performed by the CE over its LAG port means that it is possible for only one service PE to populate its ARP/ND cache. Take for example PE1 and PE2 from Figure 1. If CE1's ARP/ND responses happen to always hash over I1 towards PE1, then PE2's ARP/ND table will be empty. Since unicast traffic from remote PEs can be received by either service PE, traffic that reaches PE2 will not find an ARP entry matching the host IP address and will be dropped until ARP/ND resolves the adjacency.¶
If the CE's hash implementation always sends the ARP/ND responses towards PE1, the resolution on PE2 will never happen and traffic load-balanced to PE2 will black-hole.¶
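As an informative illustration of the problem, the following Python sketch models a typical LAG hash over the L2 header (the hash inputs and function are assumptions for illustration, not part of any specification):¶

   import zlib

   def lag_member(src_mac: str, dst_mac: str, num_links: int) -> int:
       """Pick a LAG member link from a deterministic L2-header hash."""
       return zlib.crc32((src_mac + dst_mac).encode()) % num_links

   # CE1's ARP/ND replies carry the same MAC pair every time, so they
   # always select the same member link (e.g. I1 towards PE1), and the
   # other PE (PE2) never populates its ARP/ND cache.
   link = lag_member("aa:00:00:00:00:01", "aa:00:00:00:00:02", 2)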
The route sync solution is described in Section 2.4.¶
Similar to the unicast behavior above, multicast IGMP Join messages from the CE over the LAG link may always hash to a single PE.¶
When PIM runs on both redundant layer-3 PEs servicing multicast for the same access segment, only one of the PEs is elected PIM Designated Router (DR) using the PIM DR election algorithm [RFC7761]. The PIM DR is responsible for tracking local multicast listeners and forwarding traffic to those listeners. The PIM DR is also responsible for sending local Join/Prune messages towards the RP or source.¶
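For reference, a minimal sketch of the [RFC7761] DR election (informative Python; it assumes all neighbors advertise the DR Priority option):¶

   import ipaddress

   def elect_pim_dr(neighbors):
       """neighbors: iterable of (dr_priority, ip) tuples for all PIM
       routers on the segment.  Highest DR Priority wins; ties are
       broken by the highest IP address [RFC7761]."""
       return max(neighbors,
                  key=lambda n: (n[0], int(ipaddress.ip_address(n[1]))))

   # Example: with equal priorities, PE2 (higher address) becomes DR.
   dr = elect_pim_dr([(1, "192.0.2.1"), (1, "192.0.2.2")])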
For example, if in Figure 1 PE2 is the elected PIM DR, but CE IGMP Join messages are hashed to I1 towards PE1, then multicast traffic will not be attracted to this service pair, as PE2 will not send a PIM Join on behalf of the CE.¶
In order to ensure that the PIM DR always has all the multicast routes and is able to forward PIM Join/Prune messages towards the RP, BGP-EVPN multicast route sync will be leveraged to synchronize multicast routes learned on the peer PE to the DR.¶
When a fail-over occurs, the multicast state will already be pre-programmed on the newly elected DR service PE, which then assumes responsibility for the routing and forwarding of all the traffic.¶
The multicast route sync solution is described in Section 2.5.¶
A layer-3 CE device/router that connects to the redundant PEs may establish an IGP adjacency on the bundle port. In this case, the adjacency will be formed with only one of the PEs, and the customer's IGP routes will only be present on that PE.¶
This defeats the load-balancing benefit of the redundant PEs for this use case, as only one PE will be aware of the customer routes and advertise them to the core.¶
Figure 2 provides an example of this use case, where the CE forms an IGP adjacency with PE1 (for example, IS-IS or OSPF) and advertises its H1 and R1 routes into the IP-VRF of PE1. PE1 may then redistribute these IGP routes into the core as an L3 service. Remote PEs will only be aware of the service from PE1 and cannot load-balance through PE2.¶
Further study is required to support the case of BGP PE-CE protocols.¶
A solution to this is described in Section 2.6.¶
In the case where the L3 service is an L3VPN such as [RFC4364], the CE device may well be a layer-2 switch supporting multiple subnets through the use of VLANs. In addition, each VLAN may be associated with a different customer VRF.¶
When ARP/ND routes are synchronized between the PEs for ARP proxy support using RT-2, a similar problem is encountered as described by Section 1.1 of [I-D.sajassi-bess-evpn-ac-aware-bundling]. The PE receiving RT-2 is unable to determine which sub-interface the ARP/ND entry is associated with.¶
When IGMP routes are synchronized between the PEs using RT-7 and RT-8, a similar problem is encountered as described by Section 1.2 of [I-D.sajassi-bess-evpn-ac-aware-bundling]. The PE receiving RT-7 and RT-8 is unable to determine which sub-interface the IGMP join is associated with.¶
This document proposes to use the solution defined in Section 4 of [I-D.sajassi-bess-evpn-ac-aware-bundling] to solve both of these cases. All route sync messages (RT-2, RT-5, RT-7, RT-8) will carry an Attachment Circuit Identifier Extended Community to signal which sub-interface the routes were learned on.¶
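An informative sketch (Python; the structure and field names are hypothetical, and the on-the-wire encoding is defined in [I-D.sajassi-bess-evpn-ac-aware-bundling]) of tagging a sync route with the AC-ID EC:¶

   from dataclasses import dataclass, field

   @dataclass
   class SyncRoute:
       route_type: str        # "RT-2", "RT-5", "RT-7" or "RT-8"
       nlri: dict
       ext_communities: list = field(default_factory=list)

   def attach_ac_id(route: SyncRoute, ac_id: int) -> None:
       """Tag a route-sync message with the Attachment Circuit ID EC
       so the receiving peer PE can map it to the right sub-interface."""
       route.ext_communities.append(
           {"type": "attachment-circuit-id", "value": ac_id})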
Consider the Figure 3 topology, where two AC-aware bundling service interfaces are supported. On the first bundle interface, BE1, PE1 and PE2 share a LAG interface with switch 1 (SW1) and have two separate (but overlapping) customer 1 and customer 2 subnets. CUST1 Subnet 1 resolves over sub-interface VLAN 1 (.1), and CUST2 Subnet 1 resolves over sub-interface VLAN 2 (.2).¶
On the second bundle interface, BE2, both PEs share a LAG interface with Customer Edge device 1 (CE1) and have only a single customer (CUST1) subnet on the native VLAN.¶
The main interface BE1 on PE1 and PE2 is shared by customers 1 and 2, and is represented by ESI-1.¶
The main interface BE2 on PE1 and PE2 is used only by customer 1, and is represented by ESI-2.¶
Focusing on CUST1 for now, there are two visible cases.¶
Case 1: For CE1, if its ARP responses hash towards PE2, then PE1 will be unaware of its presence. For PE2 to synchronize this information to PE1, in addition to the CE1 IP address (10.0.1.2) and MAC address (m1), two additional unique identifiers are needed:¶
1. IP-VRF: the CUST1 VRF is represented by EVI ID 1.
2. Interface: the BE2 interface is represented by ESI-2.¶
Case 2: For Host 1 (H1), if its ARP responses hash towards PE2, then PE1 will be unaware of its presence. For PE2 to synchronize this information to PE1, in addition to the H1 IP address (10.0.0.2) and MAC address (m2), three additional unique identifiers are required:¶
1. IP-VRF: the CUST1 VRF is represented by EVI ID 1.
2. Main interface: the BE1 interface is represented by ESI-1.
3. Sub-interface: Subnet/VLAN 1 is represented by Attachment Circuit ID 1.¶
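The two cases therefore reduce to the following synchronization keys, shown here as an informative sketch using the Figure 3 values (field names are illustrative):¶

   # Case 1 (CE1 on BE2): IP/MAC plus two scoping identifiers.
   case1 = {"ip": "10.0.1.2", "mac": "m1",
            "evi": 1,        # IP-VRF: CUST1
            "esi": "ESI-2"}  # main interface BE2 (native VLAN)

   # Case 2 (H1 behind SW1 on BE1): IP/MAC plus three identifiers.
   case2 = {"ip": "10.0.0.2", "mac": "m2",
            "evi": 1,        # IP-VRF: CUST1
            "esi": "ESI-1",  # main interface BE1
            "ac_id": 1}      # sub-interface BE1.1 (Subnet/VLAN 1)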
A separate EVPN instance will be configured for each layer-3 VRF and marked for route-sync only. Each L3-VRF will have a unique associated EVI ID. The multi-homed peer PEs MUST have the same configured EVI to layer-3 VRF mapping. This mapping also extends to the GRT, where a unique EVI ID can be assigned to support non-VPN layer-3 services. Mis-configuration detection across peering PEs is left for further study.¶
When an EVPN instance is created as route-sync only, a MAC-VRF table is created to store all advertised routes. Local MAC learning may be disabled as this feature does not require MAC-only RT-2 advertisements.¶
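As an informative configuration-model sketch (Python; the structure is hypothetical), the per-VRF route-sync EVI and the requirement that peers agree on the mapping could look like:¶

   # Hypothetical model: one route-sync-only EVI per layer-3 VRF,
   # plus a unique EVI for the GRT to cover non-VPN L3 services.
   LOCAL_EVI_TO_VRF = {1: "CUST1", 2: "CUST2", 100: "GRT"}

   def check_peer_mapping(peer_evi_to_vrf: dict) -> None:
       """The multi-homed peer PEs MUST have identical EVI to L3-VRF
       mappings; richer mis-configuration detection is left for
       further study."""
       if peer_evi_to_vrf != LOCAL_EVI_TO_VRF:
           raise ValueError("EVI to layer-3 VRF mapping mismatch")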
This EVI is applicable to the multi-homed peer PEs only.¶
The EVPN instance will be responsible for populating the layer-3 VRF tables from routes remotely synced from the peer PE.¶
In the Figure 3 example, route-syncs from VRF CUST1 will carry the EVI-RT BGP Extended Community (EC) with EVI 1, and those from VRF CUST2 with EVI 2.¶
The ESI represents the L3 LAG interface between the PEs and the CEs. This ESI is signaled using RT-4 with the ES-Import Route Target as described in Section 8.1.1 of [RFC7432], so that the service PE peers can discover each other's common ES.¶
In the Figure 3 example, route-syncs from interface BE1 carry the ES-Import RT EC with ESI 1.¶
The Attachment Circuit ID represents the sub-interface subnet on the L3 LAG interface between the PEs and the CEs. The AC-ID is signaled in RT-2, RT-5, RT-7, and RT-8 by attaching the Attachment Circuit ID Extended Community as described in Section 6.1 of [I-D.sajassi-bess-evpn-ac-aware-bundling].¶
In the Figure 3 example, route-syncs from sub-interface BE1.1 (VLAN 1) carry the Attachment-Circuit-ID EC with ID 1.¶
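Putting the three identifiers together for the Figure 3 example, a sync route for a host learned on BE1.1 in VRF CUST1 would carry all of the following (informative sketch; the encoding shown is illustrative):¶

   sync_route_be1_vlan1 = {
       "route_type": "RT-2",
       "communities": [
           {"type": "evi-rt",                "value": 1},       # VRF CUST1
           {"type": "es-import-rt",          "value": "ESI-1"}, # BE1
           {"type": "attachment-circuit-id", "value": 1},       # BE1.1
       ],
   }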
This document proposes solving the issue described in Section 1.1 using RT-2 IP/MAC route sync as described in Section 10 of [RFC7432] with a modification described below.¶
Local ARP/ND learning will trigger an RT-2 route sync to any peer PE. There is no need for local MAC learning or MAC sync over the L3 interface, only adjacencies. The MAC-only RT-2 route SHOULD NOT be advertised to the peer PE.¶
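An informative sketch (Python; hypothetical names) of the advertisement rule, including the suppression of MAC-only routes:¶

   def on_local_arp_nd_learn(ip, mac, evi, esi, ac_id=None):
       """Build an RT-2 IP/MAC sync route towards the peer PE(s)."""
       if ip is None:
           return None  # MAC-only RT-2 SHOULD NOT be advertised
       route = {"route_type": "RT-2", "ip": ip, "mac": mac, "esi": esi,
                "communities": [{"type": "evi-rt", "value": evi}]}
       if ac_id is not None:  # sub-interface case (Section 1.4)
           route["communities"].append(
               {"type": "attachment-circuit-id", "value": ac_id})
       return route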
Section 9.1 of [RFC7432] describes different mechanisms to learn adjacency routes locally.¶
When consuming a remote layer-3 RT-2 sync route:¶
This document proposes solving the issue described in Section 1.2 using RT-7 and RT-8 route sync as described by [I-D.ietf-bess-evpn-igmp-mld-proxy].¶
A local IGMP Join or Leave will trigger an RT-7/RT-8 route sync to any peer PE.¶
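An informative sketch (Python; hypothetical names) of the trigger:¶

   def on_local_igmp_event(event, group, source, evi, esi, ac_id=None):
       """Sync a local IGMP Join (RT-7) or Leave (RT-8) to the peer PE
       so that the PIM DR always holds the full multicast state."""
       route = {"route_type": "RT-7" if event == "join" else "RT-8",
                "group": group, "source": source, "esi": esi,
                "communities": [{"type": "evi-rt", "value": evi}]}
       if ac_id is not None:  # sub-interface case (Section 1.4)
           route["communities"].append(
               {"type": "attachment-circuit-id", "value": ac_id})
       return route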
When consuming a remote multicast RT-7 or RT-8 sync route:¶
Section 3 of [I-D.ietf-bess-evpn-prefix-advertisement] provides a mechanism to synchronize layer-3 customer subnets between the PEs in order to solve the problem described in Section 1.3.¶
Using Figure 2 as an example, if PE1 forms the IGP adjacency with the CE, it will be the only PE with knowledge of the customer subnet R1. BGP on PE1 will then advertise R1 to remote PEs using L3VPN signaling.¶
Although PE2 has the same ES connection to the CE, and could provide load-balancing to remote PEs, it is not aware of the customer subnet R1 because it has not formed an IGP adjacency with the CE.¶
This can be solved by PE1 signaling R1 to PE2 using an RT-5 sync route. BGP on PE2 can then advertise this customer subnet R1 towards the core as if it were locally learned through the IGP, providing load-balancing from the remote PEs.¶
The route-type 5 (RT-5) will carry the ESI as well as the gateway address GW (prefix next-hop address).¶
The same mapping mechanism will be used as for the ARP/ND and IGMP route sync: the EVI determines the L3-VRF, the ESI carried with the RT-5 provides the main interface, and the gateway address provides the next hop.¶
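An informative sketch (Python; hypothetical names) of how a consuming PE would install a synced prefix:¶

   def consume_rt5_sync(route, evi_to_vrf, esi_to_interface):
       """Install a synced customer prefix as if it had been learned
       locally through the IGP: the EVI selects the L3-VRF, the ESI
       selects the main interface, and the gateway (GW) address
       becomes the next hop."""
       return {"vrf":       evi_to_vrf[route["evi"]],
               "prefix":    route["prefix"],
               "next_hop":  route["gateway"],
               "interface": esi_to_interface[route["esi"]]}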
Another possible signaling of the VLAN/sub-interface between service PE peers is to use the Ethernet Tag (ETAG) ID value in RT-2, RT-5, RT-7, and RT-8, as opposed to the Attachment Circuit Extended Community.¶
This will not work with VLAN-aware bundling mode, but as that is a layer-2 mode, this should not prevent the ETAG's use for L3 services.¶
This document proposes extending the use of Extended Communities already defined in other drafts for the route types RT-2, RT-5, RT-7, and RT-8.¶
The use of EVPN MC-LAG all-active multi-homing brings the following benefits to L3 BGP services:¶
Replaces the legacy ICCP-based MC-LAG solution, and offers the following additional benefits:¶
The same Security Considerations described in [RFC7432] are valid for this document.¶
There are no IANA considerations.¶