Network Working Group L. Dunbar
Internet Draft Futurewei
Intended status: Informational B. Sarikaya
Expires: December 17, 2020 Denpel Informatique
B. Khasnabish
Independent
T. Herbert
Intel
S. Dikshit
Aruba-HPE
June 17, 2020
Virtual Machine Mobility Solutions for L2 and L3 Overlay Networks
draft-ietf-nvo3-vmm-16
Abstract
This document describes virtual machine (VM) mobility
solutions commonly used in data centers built with overlay
networks. It describes the solutions and the impact of moving
VMs, or applications, from one rack to another connected by
the overlay network.
For Layer 2, the solutions are based on using an NVA (Network
Virtualization Authority) to NVE (Network Virtualization
Edge) protocol to update ARP (Address Resolution Protocol)
tables or neighbor cache entries after a VM moves from an old
NVE to a new NVE. For Layer 3, the solutions are based on
address and connection migration after the move.
Status of this Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79. This document may not be
modified, and derivative works of it may not be created,
except to publish it as an RFC and to translate it into
languages other than English.
Internet-Drafts are working documents of the Internet
Engineering Task Force (IETF), its areas, and its working
groups. Note that other groups may also distribute working
documents as Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of
six months and may be updated, replaced, or obsoleted by
other documents at any time. It is inappropriate to use
Internet-Drafts as reference material or to cite them other
than as "work in progress."
The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt
The list of Internet-Draft Shadow Directories can be accessed
at http://www.ietf.org/shadow.html
This Internet-Draft will expire on December 17, 2020.
Copyright Notice
Copyright (c) 2020 IETF Trust and the persons identified as
the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date
of publication of this document. Please review these
documents carefully, as they describe your rights and
restrictions with respect to this document. Code Components
extracted from this document must include Simplified BSD
License text as described in Section 4.e of the Trust Legal
Provisions and are provided without warranty as described in
the Simplified BSD License.
Table of Contents
1. Introduction
2. Conventions used in this document
3. Requirements
4. Overview of the VM Mobility Solutions
   4.1. Inter-VN and External Communication
   4.2. VM Migration in a Layer 2 Network
   4.3. VM Migration in a Layer 3 Network
   4.4. Address and Connection Management in VM Migration
5. Handling Packets in Flight
6. Moving Local State of VM
7. Handling of Hot, Warm and Cold VM Mobility
8. Other Options
9. VM Lifecycle Management
10. Security Considerations
11. IANA Considerations
12. Acknowledgments
13. References
   13.1. Normative References
   13.2. Informative References
1. Introduction
This document describes overlay-based data center network
solutions that support multi-tenancy and VM mobility. The
ability to move VMs dynamically from one server to another
makes dynamic load balancing and work distribution possible.
Therefore, dynamic VM mobility is highly desirable for
large-scale multi-tenant DCs.
This document is strictly within the DCVPN, as defined by
the NVO3 framework [RFC7365]. The intent is to describe
Layer 2 and Layer 3 network behavior when VMs are moved
from one NVE to another. This document assumes that a VM's
move is initiated by the VM management system, i.e., a
planned move. How and when to move VMs is out of the scope
of this document; [RFC7666] already describes the MIB for
VMs controlled by a hypervisor. The impact of VM mobility
on higher-layer protocols and applications is also outside
the scope of this document.
Many large DCs (Data Centers), especially Cloud DCs, host
tasks (or workloads) for multiple tenants. A tenant can be
an organization or a department of an organization. There
are communications among tasks belonging to one tenant and
communications among tasks belonging to different tenants
or with external entities.
Server virtualization, which is used in almost all of
today's data centers, enables many VMs to run on a single
physical computer or server, sharing the processor, memory,
and storage. Network connectivity among VMs is provided by
the Network Virtualization Edge (NVE) [RFC8014]. It is
highly desirable [RFC7364] to allow VMs to be moved
dynamically (live, hot, or cold move) from one server to
another for dynamic load balancing or optimized work
distribution.
There are many challenges and requirements related to VM
mobility in large data centers, including dynamic
attachment and detachment of VMs to/from Network
Virtualization Edges (NVEs). In addition, retaining IP
addresses after a move is a key requirement [RFC7364]; it
is needed in order to maintain existing transport-layer
connections.
In traditional Layer 3 based networks, retaining IP
addresses after a move is generally not recommended because
frequent moves cause fragmented IP addresses, which
introduce complexity in IP address management.
In view of the many VM mobility schemes that exist today,
there is a desire to document comprehensive VM mobility
solutions that cover both IPv4 and IPv6. Large data center
networks can be organized as one large Layer 2 network
geographically distributed across several buildings or
cities, or as Layer 3 networks with a large number of host
routes that cannot be aggregated because hosts move
frequently from one location to another without changing
their IP addresses. The connectivity between Layer 2
boundaries can be achieved by the NVE functioning as a
Layer 3 gateway router across bridging domains.
2. Conventions used in this document
This document uses the terminology defined in [RFC7364].
In addition, we make the following definitions:
VM: Virtual Machine
Task: A task is a program instantiated or running on a
VM or a container. Tasks running in VMs or
containers can be migrated from one server to
another. This document uses task, workload, and VM
interchangeably.
Hot VM Mobility: A given VM could be moved from one server
to another in a running state without terminating
the VM.
Warm VM Mobility: In the case of warm VM mobility, the VM
states are mirrored to the secondary server (or
domain) at predefined regular intervals. This
reduces overhead and complexity, but it may also
lead to a situation in which the two servers do
not contain exactly the same data (state
information).
Cold VM Mobility: A given VM could be moved from one
server to another in a stopped or suspended state.
Old NVE: The NVE to which the VM was attached, and to
which packets for the VM were forwarded, before
migration.
New NVE: The NVE to which the VM is attached after
migration.
Packets in flight: Packets sent by correspondents that
still hold an ARP or neighbor cache entry from
before the VM or task migration, and that are
therefore received by the old NVE.
End user client: A user of a VM in a diskless system or in
a system that does not use configuration files.
Cloud DC: A third-party data center that hosts
applications, tasks, or workloads owned by
different organizations or tenants.
3. Requirements
This section states requirements on data center network VM
mobility.
- The data center network should support both IPv4 and IPv6
VM mobility.
- VM mobility should not require changing a VM's IP
address(es) after the move.
- "Hot Migration" requires the transport service continuity
across the move, while in "Cold Migration" the transport
service is restarted, i.e. the task is stopped on the old
NVE, is moved to the new NVE and then restarted. Not all DCs
support "Hot Migration. DCs that only support Cold Migration
should make their customers aware of the potential service
interruption during a Cold Migration.
- VM mobility solutions/procedures should minimize triangular
routing except for handling packets in flight.
- VM mobility solutions/procedures should not need to use
tunneling except for handling packets in flight.
4. Overview of the VM Mobility Solutions
4.1. Inter-VN and External Communication
Inter-VN (Virtual Network) communication refers to
communication among tenants (or hosts) belonging to
different VNs. Those tenants can be attached to NVEs
co-located in the same data center or located in different
data centers. When a VM communicates with an external
entity, the VM is effectively communicating with a peer in
a different network or with a globally reachable host.
This document assumes that inter-VN communication and
communication with external entities are via the NVO3
Gateway functionality described in Section 5.3 of RFC 8014
[RFC8014]. NVO3 Gateways relay traffic onto and off of a
virtual network, enabling communication both across
different VNs and with external entities.
NVO3 Gateway functionality enforces appropriate policies to
control communication among VNs and with external entities
(e.g., hosts).
Moving a VM to a new NVE may move the VM away from the NVO3
Gateway(s) used by the VM's traffic; e.g., some traffic may
be better handled by an NVO3 Gateway that is closer to the
new NVE than the NVO3 Gateway used before the move. If NVO3
Gateway changes are not possible for some reason, the VM's
traffic can continue to use the prior NVO3 Gateway(s),
which may have some drawbacks, e.g., longer network paths.
4.2. VM Migration in a Layer 2 Network
In a Layer 2 based approach, a VM moving to another NVE
does not change its IP address. However, because the VM is
now under a new NVE, previously communicating NVEs may
continue sending their packets to the old NVE. Therefore,
the previously communicating NVEs need to promptly update
their IPv4 Address Resolution Protocol (ARP) caches
[RFC826] or IPv6 neighbor caches [RFC4861]. If the VM being
moved communicates with external entities, the NVO3 Gateway
also needs to be notified of the new NVE to which the VM
has moved.
In IPv4, immediately after the move, the VM should send a
gratuitous ARP request message containing its IPv4 address
and Layer 2 MAC address under its new NVE. Upon receiving
this message, the new NVE can update its ARP cache. The new
NVE should send a notification of the newly attached VM to
the central directory [RFC7067] embedded in the NVA, to
update the mapping of the moving VM's IPv4 address and MAC
address along with the new NVE address. An NVE-to-NVA
protocol is used for this purpose [RFC8014]. The old NVE,
upon the VM moving away, should send an ARP scan to all its
attached VMs to refresh its ARP cache.
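As an illustration, the following is a minimal sketch of such
a gratuitous ARP announcement, assuming the Python "scapy"
library; the addresses and interface name are illustrative
placeholders, not values defined by this document.
   # Illustrative sketch: gratuitous ARP announcement after a
   # VM move. Assumes the Python "scapy" library; addresses are
   # documentation values (RFC 5737 / RFC 7042 ranges).
   from scapy.all import ARP, Ether, sendp

   VM_IP = "192.0.2.10"          # the VM's (unchanged) IPv4 address
   VM_MAC = "00:00:5e:00:53:0a"  # the VM's MAC address
   IFACE = "eth0"                # interface toward the new NVE

   # A gratuitous ARP is an ARP request whose sender and target
   # protocol addresses are both the VM's own address, broadcast
   # at Layer 2 so that listeners refresh their ARP caches.
   garp = (Ether(src=VM_MAC, dst="ff:ff:ff:ff:ff:ff") /
           ARP(op=1,                      # ARP request
               hwsrc=VM_MAC, psrc=VM_IP,  # sender = the VM itself
               hwdst="00:00:00:00:00:00", pdst=VM_IP))
   sendp(garp, iface=IFACE, verbose=False)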
Reverse ARP (RARP), which enables a host to discover its
IPv4 address when it boots from a local server [RFC903], is
not used by a VM if the VM already knows its IPv4 address
(the most common scenario). Next, we describe a case where
RARP is used.
There are some vendor deployments (diskless systems or
systems without configuration files) wherein the VM's user,
i.e., the end-user client, asks for the same MAC address
upon migration. This can be achieved by the client sending
a RARP request message that carries the MAC address and
asks for an IP address allocation. The server, in this case
the new NVE, needs to communicate with the NVA, just as in
the gratuitous ARP case, to ensure that the same IPv4
address is assigned to the VM. The NVA uses the MAC address
as the key to search its ARP cache for the corresponding IP
address and informs the new NVE, which in turn sends the
RARP reply message. This completes IP address assignment to
the migrating VM.
Other NVEs with attached VMs communicating with this VM, or
the NVO3 Gateway serving external entities communicating
with it, may still have the old ARP entry. To avoid stale
ARP entries being used, the old NVE, upon discovering that
a VM is detached, should send a notification to all other
NVEs and its NVO3 Gateway to time out their ARP cache
entries for the VM [RFC8171]. When an NVE (including the
old NVE) receives a packet or ARP request destined for a VM
(its MAC or IP address) that is not in the NVE's ARP cache,
the NVE should query the NVA's directory service for the
NVE address associated with the VM. This is how the old NVE
learns where to tunnel in-flight packets so that they reach
the new NVE without packet loss.
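This cache-miss behavior can be summarized in the following
sketch; the nva_lookup() and tunnel_to() helpers are
hypothetical abstractions, since this document does not
define a concrete NVE-to-NVA protocol encoding.
   # Illustrative sketch of an NVE's ARP-cache-miss handling.
   # The nva_lookup() and tunnel_to() helpers are hypothetical.
   arp_cache = {}  # vm_address -> (vm_mac, nve_address)

   def handle_frame(vm_address, frame, nva_lookup, tunnel_to):
       entry = arp_cache.get(vm_address)
       if entry is None:
           # Cache miss: query the NVA's directory service for
           # the NVE currently hosting this VM (e.g., after a move).
           entry = nva_lookup(vm_address)
           if entry is None:
               return          # unknown VM: drop the frame
           arp_cache[vm_address] = entry
       vm_mac, nve_address = entry
       # Tunnel the frame to the NVE the directory points at; on
       # the old NVE, this is how in-flight packets reach the new NVE.
       tunnel_to(nve_address, frame)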
When the VM's address is IPv6, the operation is similar:
In IPv6, immediately after the move, the VM sends an
unsolicited Neighbor Advertisement message containing its
IPv6 address and Layer 2 MAC address to its new NVE. This
message is sent to the IPv6 solicited-node multicast
address corresponding to the target address, which is the
VM's IPv6 address. The NVE receiving this message should
send a request to update the VM's neighbor cache entry in
the central directory of the NVA. The NVA's neighbor cache
entry should include the VM's IPv6 address, the VM's MAC
address, and the NVE's IPv6 address. An NVE-to-NVA protocol
is used for this purpose [RFC8014].
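A minimal sketch of this unsolicited Neighbor Advertisement,
again assuming the Python "scapy" library and using
illustrative documentation-prefix values, follows.
   # Illustrative sketch: unsolicited Neighbor Advertisement
   # after a VM move. Assumes the Python "scapy" library; the
   # addresses are documentation values (RFC 3849 / RFC 7042).
   import ipaddress
   from scapy.all import IPv6, ICMPv6ND_NA, ICMPv6NDOptDstLLAddr, send

   VM_IP6 = "2001:db8::10"       # the VM's (unchanged) IPv6 address
   VM_MAC = "00:00:5e:00:53:0a"  # the VM's MAC address

   def solicited_node_mcast(addr):
       # ff02::1:ffXX:XXXX, low 24 bits of the address (RFC 4291)
       low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
       base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
       return str(ipaddress.IPv6Address(base | low24))

   # Unsolicited NA: Solicited flag clear, Override flag set, and
   # the target link-layer address option carrying the VM's MAC.
   # The hop limit must be 255 for Neighbor Discovery (RFC 4861).
   na = (IPv6(src=VM_IP6, dst=solicited_node_mcast(VM_IP6), hlim=255) /
         ICMPv6ND_NA(tgt=VM_IP6, R=0, S=0, O=1) /
         ICMPv6NDOptDstLLAddr(lladdr=VM_MAC))
   send(na, verbose=False)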
To prevent other NVEs from communicating with this VM using
the old neighbor cache entry, the old NVE (upon discovering
that the VM has been moved) or the VM management system
that initiates the move should send a notification to all
NVEs to time out their ND cache entries for the VM being
moved. When an ND cache entry for the VM times out, the
corresponding NVE should query the NVA for an update.
4.3. VM Migration in a Layer 3 Network
Traditional Layer 3 based data center networks usually have
all hosts (tasks) within one subnet attached to one NVE.
With this design, the NVE becomes the default route for all
hosts (tasks) within the subnet. However, this design
requires the IP address of a host (task) to change after a
move so that it complies with the prefixes under the new
NVE.
A Layer 3 VM migration solution allows IP addresses to stay
the same after a move to a different location.
Identifier-Locator Addressing (ILA) [Herbert-ILA] is one
such solution.
Because broadcast is not available in Layer 3 based
networks, the multicast of IPv6 Neighbor Solicitations and
IPv4 ARP would need to be emulated. The scalability of such
multicast (IPv6 ND and IPv4 ARP) can become problematic
because the hosts belonging to one subnet (or one VLAN) can
span many NVEs. Sending broadcast traffic to all NVEs can
cause unnecessary traffic in the DCN if the hosts belonging
to one subnet are attached to only a small number of NVEs.
It is preferable to have a directory [RFC7067] or an NVA
that informs each NVE of the other NVEs to which a specific
subnet is attached, and that receives periodic reports from
each NVE of all the subnets attached to or detached from
it, as described in [RFC8171].
Hot VM migration in Layer 3 involves coordination among
many entities, such as the VM management system and the
NVA. Cold task migration, which is a common practice in
many data centers, involves the following steps, sketched
in the code below:
- Stop running the task.
- Package the runtime state of the task.
- Send the runtime state of the task to the new NVE where
the task is to run.
- Instantiate the task's state on the new machine.
- Start the task, continuing from the point at which it was
stopped.
[RFC7666] provides a more detailed description of the state
machine of VMs controlled by a hypervisor.
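The steps above can be summarized in the following sketch;
the task, state-transfer, and NVE interfaces are hypothetical
abstractions, and a real system would use hypervisor-specific
mechanisms.
   # Illustrative sketch of the cold task migration steps listed
   # above. The task, transfer, and NVE interfaces are hypothetical.
   def cold_migrate(task, new_nve, transfer):
       task.stop()                            # stop running the task
       state = task.package_runtime_state()   # package runtime state
       transfer(state, new_nve)               # send state to new NVE
       new_task = new_nve.instantiate(state)  # instantiate the state
       new_task.start()                       # resume from stop point
       return new_task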
4.4. Address and Connection Management in VM Migration
Since the VM attached to the new NVE needs to be assigned
the same address that it had under the old NVE, extra
processing or configuration is needed, such as:
- Configure the IPv4/IPv6 address on the target VM/NVE.
- Suspend use of the address on the old NVE. This includes
the old NVE querying the NVA upon receiving packets
destined for the VM that has moved away. If the NVA has no
record of a new NVE for the VM, the old NVE can only drop
the packets; see the VM state machine described in
[RFC7666].
- Trigger the NVA to push the new NVE-VM mapping to other
NVEs whose attached VMs are communicating with the VM being
moved.
Connection management for the applications running on the
VM being moved involves reestablishing existing TCP
connections at the new location.
The simplest course of action is to drop all TCP
connections to the applications running on the VM during a
migration. If migrations are relatively rare events in a
data center, the impact is relatively small when TCP
connections are automatically closed in the network stack
during a migration event. If the applications running are
known to handle this gracefully (i.e., to reopen dropped
connections), then this approach may be viable.
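For applications that reopen dropped connections, the client
side can amount to a simple retry loop, as in the following
sketch; it assumes an idempotent request that is safe to
retry, and the host and port are illustrative placeholders.
   # Illustrative sketch of application-level reconnection under
   # the "drop all TCP connections during migration" approach.
   # Assumes the request is idempotent and safe to retry.
   import socket
   import time

   def request_with_retry(host, port, payload, retries=5, delay=1.0):
       for _ in range(retries):
           try:
               with socket.create_connection((host, port), timeout=5) as s:
                   s.sendall(payload)
                   return s.recv(4096)
           except OSError:
               # Connection refused/reset while the VM migrates;
               # the IP address is retained across the move, so
               # retry the same address after a short delay.
               time.sleep(delay)
       raise ConnectionError("service did not return after migration")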
A more involved approach to connection migration entails a
proxy to the application (or the application itself)
pausing the connection, packaging the connection state and
sending it to the target, instantiating the connection
state in the peer stack, and restarting the connection.
From the time the connection is paused to the time it is
running again in the new stack, packets received for the
connection could be silently dropped. For some period of
time, the old stack will need to keep a record of the
migrated connection. If it receives a packet, it can either
silently drop the packet or forward it to the new location,
as described in Section 5.
5. Handling Packets in Flight
The old NVE may receive packets from the VM's ongoing
communications. These packets should not be lost; they
should be sent to the new NVE to be delivered to the VM.
The steps involved in handling packets in flight are as
follows:
Preparation Step: It takes some time, possibly a few
seconds, for a VM to move from its old NVE to a new NVE.
During this period, a tunnel needs to be established so
that the old NVE can forward packets to the new NVE. The
old NVE obtains the new NVE's address from its NVA,
assuming that the NVA is notified when a VM is moved from
one NVE to another. Which entity manages the VM move, and
how the NVA is notified of the move, are out of the scope
of this document. The old NVE can store the new NVE address
for the VM with a timer; when the timer expires, the entry
for the VM's new NVE can be deleted.
Tunnel Establishment - IPv6: In-flight packets are tunneled
to the new NVE using an encapsulation protocol such as
VXLAN in IPv6.
Tunnel Establishment - IPv4: In-flight packets are tunneled
to the new NVE using an encapsulation protocol such as
VXLAN in IPv4.
Tunneling Packets - IPv6: IPv6 packets received for the
migrating VM are encapsulated in an IPv6 header at the old
NVE. The new NVE decapsulates the packets and sends the
IPv6 packets to the migrating VM.
Tunneling Packets - IPv4: IPv4 packets received for the
migrating VM are encapsulated in an IPv4 header at the old
NVE. The new NVE decapsulates the packets and sends the
IPv4 packets to the migrating VM.
Stop Tunneling Packets: Tunneling stops when the timer for
storing the new NVE address for the VM expires. The timer
should be long enough for all other NVEs that need to
communicate with the VM to have their NVE-VM cache entries
updated.
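The timer-driven behavior above is sketched below for the
IPv4 case, assuming the Python "scapy" library; the VNI,
addresses, and hold time are illustrative placeholders, and
the IPv6 case is analogous with an IPv6 outer header.
   # Illustrative sketch of the old NVE's in-flight packet
   # handling (IPv4 outer header; the IPv6 case is analogous).
   # Assumes the Python "scapy" library; VNI and hold time are
   # placeholders, not values defined by this document.
   import time
   from scapy.all import IP, UDP, VXLAN, send

   HOLD_TIME = 30.0  # long enough for peers to refresh caches
   forwarding = {}   # vm_address -> (new_nve_ip, expiry_time)

   def learn_move(vm_address, new_nve_ip):
       forwarding[vm_address] = (new_nve_ip, time.time() + HOLD_TIME)

   def forward_in_flight(vm_address, inner_frame, old_nve_ip, vni):
       entry = forwarding.get(vm_address)
       if entry is None or time.time() > entry[1]:
           forwarding.pop(vm_address, None)
           return False  # timer expired: stop tunneling
       new_nve_ip, _ = entry
       pkt = (IP(src=old_nve_ip, dst=new_nve_ip) /
              UDP(sport=49152, dport=4789) /  # 4789 = VXLAN port
              VXLAN(vni=vni) /                # Instance flag set by default
              inner_frame)
       send(pkt, verbose=False)
       return True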
6. Moving Local State of VM
In addition to the VM-mobility-related signaling (VM
Mobility Registration Request/Reply), the VM state needs to
be transferred to the new NVE. The state includes the VM's
memory and file system if the VM cannot access its memory
and file system after moving to the new NVE.
The mechanism for transferring VM state and the file system
is out of the scope of this document; refer to [RFC7666]
for detailed information.
7. Handling of Hot, Warm and Cold VM Mobility
Both Cold and Warm VM mobility (or migration) refer to the
complete shutdown of the VM at the old NVE before
restarting the VM at the new NVE. Therefore, all transport
services to the VM need to be restarted.
In this document, all VM mobility is initiated by the VM
management system. In the case of Cold VM mobility, the
exchange of states between the old NVE and the new NVE
occurs after the VM attached to the old NVE is completely
shut down, and there is a time delay before the new VM is
launched. The cold mobility option can be used for
non-mission-critical applications and services that can
tolerate interruptions of TCP connections.
For Hot VM Mobility, a VM moving to a new NVE does not
change its IP address, and the service running on the VM is
not interrupted. The VM needs to send a gratuitous ARP
message or an unsolicited Neighbor Advertisement message
upstream after each move.
In the case of Warm VM mobility, the functional components
of the new NVE receive the running status of the VM at
frequent intervals. Consequently, it takes less time to
launch the VM under the new NVE, and other NVEs that
communicate with the VM can be notified promptly about the
VM migration. The duration of the interval determines the
effectiveness (or benefit) of Warm VM mobility: the longer
the interval, the less effective Warm VM mobility becomes.
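The interval-based mirroring can be pictured with the
following sketch; the snapshot and replication interfaces are
hypothetical abstractions, not mechanisms defined by this
document.
   # Illustrative sketch of warm-mobility state mirroring at a
   # fixed interval. The snapshot/replicate interfaces are
   # hypothetical.
   import time

   def mirror_loop(vm, secondary, interval, keep_running):
       while keep_running():
           snapshot = vm.snapshot_state()  # capture VM running state
           secondary.replicate(snapshot)   # mirror to the secondary
           time.sleep(interval)            # shorter interval means
                                           # less divergence between
                                           # the two copies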
In the case of Cold VM mobility, the VM on the old NVE is
completely shut down and then launched on the new NVE. To
minimize the chance of previously communicating NVEs
sending packets to the old NVE, the NVA should push the
updated ARP/neighbor cache entry to all previously
communicating NVEs when the VM is started on the new NVE.
Alternatively, all NVEs can periodically pull the updated
ARP/neighbor cache entries from the NVA to shorten the time
span during which packets are sent to the old NVE.
Upon starting at the new NVE, the VM should send an ARP or
Neighbor Discovery message.
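The push model described above can be pictured with the
following sketch; the peers_of() and push() interfaces are
hypothetical abstractions, since this document does not
define the NVE-to-NVA protocol encoding.
   # Illustrative sketch of the NVA pushing an updated
   # ARP/neighbor cache entry after a cold move. The peers_of()
   # and push() interfaces are hypothetical.
   def on_vm_started(nva, vm_address, vm_mac, new_nve_address):
       entry = {"vm": vm_address, "mac": vm_mac, "nve": new_nve_address}
       for nve in nva.peers_of(vm_address):  # NVEs that talked to the VM
           nve.push(entry)  # replaces their stale old-NVE entry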
8. Other Options
Hot, Warm and Cold mobility are planned activities that are
managed by the VM management system.
For unexpected events, such as overload or failure, a VM
might need to move to a new NVE without any service
interruption; this is called Hot VM Failover in this
document. In such cases, there are redundant primary and
secondary VMs whose states are continuously synchronized
using methods that are outside the scope of this document.
If the VM under the primary NVE fails, there is no need to
actively move the VM to the secondary NVE, because the VM
under the secondary NVE can immediately pick up and
continue processing the applications/services.
Hot VM Failover is transparent to the peers that
communicate with this VM. This can be achieved via
distributed load balancing, with the active VM and the
standby VM sharing the same TCP port and the same IP
address. In the absence of a failure, the new VM can take
over providing service while the sender (peer) continues to
receive acknowledgments from the old VM. If the situation
changes (e.g., the loading condition of the primary
responding VM), the secondary responding VM may start
providing service to the sender (peers). When a failure
occurs, the sender (peer) may have to retry the request, so
this structure is limited to requests that can be safely
retried.
If the load balancing functionality is not used, Hot VM
Failover can still be made transparent to the sender
(peers), without relying on request retry, by using the
techniques described in Section 4. This approach does not
depend on the primary VM or its associated NVE doing
anything after the failure; this restriction is necessary
because a failure that affects the primary VM may also
cause its associated NVE to fail. For example, a physical
server failure can cause both the VM and its NVE to fail.
The Hot VM Failover option is the costliest mechanism, and
hence it is utilized only for mission-critical applications
and services.
9. VM Lifecycle Management
VM lifecycle management is a complicated task that is
beyond the scope of this document. Not only does it involve
monitoring server utilization and balancing the
distribution of workloads, but it also needs to support
seamless migration of VMs from one server to another.
10. Security Considerations
Security threats for the data and control planes of overlay
networks are discussed in [RFC8014]. ARP (IPv4) and ND
(IPv6) are not secure, especially if they can be sent
gratuitously across tenant boundaries in a multi-tenant
environment.
In overlay data center networks, ARP and ND messages can be
used to mount address-spoofing attacks from untrusted VMs
and/or other untrusted sources. Examples of untrusted VMs
are VMs instantiated with third-party applications not
written by the tenant owning the VMs. Untrusted VMs can
send false ARP (IPv4) and ND (IPv6) messages, causing
significant overload in NVEs, NVO3 Gateways, and NVAs. An
attacker can intercept, modify, or even drop in-transit
ARP/ND messages intended for other VNs, and can initiate
DDoS attacks against other VMs attached to the same NVE. A
simple black-hole attack can be mounted by sending a false
ARP/ND message indicating that the victim's IP address has
moved to the attacker's VM; the same technique can also be
used to mount man-in-the-middle attacks. Additional effort
is required to ensure that intercepted traffic can
eventually be delivered to the impacted VMs.
The locator-identifier mechanism given as an example (ILA)
does not include a secure binding; it does not discuss how
to securely bind a new locator to the identifier.
Because of these threats, the VM management system needs to
apply stronger security mechanisms when adding a VM to an
NVE. Some tenants may have requirements that prohibit their
VMs from being attached to NVEs shared with other tenants.
Some data centers deploy additional functionality in their
NVO3 Gateways to mitigate the ARP/ND threats; this may
include periodically sending each Gateway's ARP/ND cache
contents to the NVA or another central control system. The
objective is to identify ARP/ND cache entries that are
inconsistent with the locations of VMs and their IP
addresses as indicated by the VM management system.
11. IANA Considerations
This document makes no request to IANA.
12. Acknowledgments
The authors are grateful to Bob Briscoe, David Black, Dave R.
Worley, Qiang Zu, and Andrew Malis for their helpful comments.
13. References
13.1. Normative References
[RFC826] Plummer, D., "An Ethernet Address Resolution
Protocol: Or Converting Network Protocol Addresses
to 48.bit Ethernet Address for Transmission on
Ethernet Hardware", STD 37, RFC 826, November 1982.
[RFC903] Finlayson, R., Mann, T., Mogul, J., and M.
Theimer, "A Reverse Address Resolution Protocol",
STD 38, RFC 903, June 1984.
[RFC4861] Narten, T., Nordmark, E., Simpson, W., and H.
Soliman, "Neighbor Discovery for IP version 6
(IPv6)", RFC 4861, DOI 10.17487/RFC4861, September
2007, <https://www.rfc-editor.org/info/rfc4861>.
[RFC7067] Dunbar, L., Eastlake 3rd, D., Perlman, R., and I.
Gashinsky, "Directory Assistance Problem and High-Level
Design Proposal", RFC 7067, DOI 10.17487/RFC7067,
November 2013, <https://www.rfc-editor.org/info/rfc7067>.
[RFC7364] Narten, T., Ed., Gray, E., Ed., Black, D., Fang,
L., Kreeger, L., and M. Napierala, "Problem
Statement: Overlays for Network Virtualization",
RFC 7364, DOI 10.17487/RFC7364, October 2014,
<https://www.rfc-editor.org/info/rfc7364>.
[RFC7365] Lasserre, M., Balus, F., Morin, T., Bitar, N.,
and Y. Rekhter, "Framework for Data Center (DC)
Network Virtualization", RFC 7365, DOI
10.17487/RFC7365, October 2014,
<https://www.rfc-editor.org/info/rfc7365>.
[RFC8014] Black, D., Hudson, J., Kreeger, L., Lasserre, M.,
and T. Narten, "An Architecture for Data-Center
Network Virtualization over Layer 3 (NVO3)", RFC
8014, DOI 10.17487/RFC8014, December 2016,
<https://www.rfc-editor.org/info/rfc8014>.
[RFC8171] Eastlake 3rd, D., Dunbar, L., Perlman, R., and Y.
Li, "Transparent Interconnection of Lots of Links
(TRILL): Edge Directory Assistance Mechanisms", RFC
8171, DOI 10.17487/RFC8171, June 2017,
<https://www.rfc-editor.org/info/rfc8171>.
13.2. Informative References
[Herbert-ILA] Herbert, T. and P. Lapukhov, "Identifier-
locator addressing for IPv6", draft-herbert-
intarea-ila-01 (work in progress), September 2018.
[RFC7666] Asai, H., MacFaden, M., Schoenwaelder, J., Shima,
K., and T. Tsou, "Management Information Base for
Virtual Machines Controlled by a Hypervisor", RFC
7666, DOI 10.17487/RFC7666, October 2015,
<https://www.rfc-editor.org/info/rfc7666>.
Authors' Addresses
Linda Dunbar
Futurewei
Email: ldunbar@futurewei.com
Behcet Sarikaya
Denpel Informatique
Email: sarikaya@ieee.org
Bhumip Khasnabish
Info.: https://about.me/bhumip
Email: vumip1@gmail.com
Tom Herbert
Intel
Email: tom@herbertland.com
Saumya Dikshit
Aruba-HPE
Bangalore, India
Email: saumya.dikshit@hpe.com