Network Working Group                                           F. Baker
Internet-Draft                                                 C. Marino
Intended status: Informational                                  I. Wells
Expires: April 20, 2015                                    Cisco Systems
                                                        October 17, 2014
A Model for IPv6 Operation in OpenStack
draft-baker-openstack-ipv6-model-00
This is an overview of a network model for OpenStack, designed to dramatically simplify scalable network deployment and operations.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on April 20, 2015.
Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
OpenStack, and its issues.
OpenStack is a cloud computing orchestration solution developed using an open source community process. It consists of a collection of 'projects', each implementing the creation, control and administration of tenant resources. There are separate OpenStack projects for managing compute, storage and network resources.
Neutron is the project that manages OpenStack networking. It exposes a northbound API to the other OpenStack projects for programmatic control over tenant network connectivity. The southbound interface is implemented as one or more device driver plugins that are built to interact with specific devices in the network. This approach provides the flexibility to deploy OpenStack networking using a range of alternative techniques.
An OpenStack tenant is required to create what OpenStack identifies as a 'Network' connecting their virtual machines. This Network is instantiated via the plugins as either a layer 2 network, a layer 3 network, or as an overlay network. The actual implementation is unknown to the tenant. The technology used to provide these networks is selected by the OpenStack operator based upon the requirements of the cloud deployment.
The tenant is also required to specify a 'Subnet' for each Network, by providing a CIDR prefix used for IPv4 address allocation via DHCP and for IPv6 address allocation via DHCP or SLAAC. This address range may be drawn from the datacenter's own address space (non-overlapping), or may use overlapping RFC 1918 addresses. Tenants may create multiple Networks, each with its own Subnet.
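As an illustration of this workflow, the following sketch uses the python-neutronclient library to create a Network and an IPv6 Subnet. The endpoint, credentials, names, and prefix are illustrative assumptions, not part of this document.

   # Sketch: tenant-side Network/Subnet creation via python-neutronclient.
   from neutronclient.v2_0 import client

   neutron = client.Client(auth_url='http://controller:5000/v2.0',
                           username='demo', password='secret',
                           tenant_name='demo')

   # The backing technology (VLAN, overlay, ...) is chosen by the
   # operator's plugin and is invisible to the tenant.
   net = neutron.create_network({'network': {'name': 'app-net'}})

   # The tenant supplies the Subnet's CIDR prefix; for IPv6, addresses
   # may be handed out via SLAAC or DHCPv6.
   neutron.create_subnet({'subnet': {
       'network_id': net['network']['id'],
       'ip_version': 6,
       'cidr': '2001:db8:1::/64',          # documentation prefix
       'ipv6_address_mode': 'slaac',
   }})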
An OpenStack Subnet is a logical layer 2 network, and packets require layer 3 routing to exit it. This is achieved by attaching the Subnet to a Neutron Router. The Neutron router implements Network Address Translation for external traffic from tenant networks, as well as for providing connectivity to tenant networks from the outside. Using Linux utilities, OpenStack can support overlapping RFC 1918 addresses between tenants.
OpenStack Subnets are typically implemented as VLANs in a datacenter. When tenant scalability requirements grow large, an overlay approach is typically used instead. Because of the difficulties in scaling and administering large layer 2 and/or overlay networks, some OpenStack integrations choose not to provide isolated Subnets and simply offer tenants a layer 3 based network alternative.
OpenStack uses Layer 3 and Layer 2 Linux utilities on hosts to provide protection against IP/MAC spoofing and ARP poisoning.
One of the fundamental requirements of OpenStack Networking (Neutron) is to provide scalable, isolated tenant networks. Today this is achieved via L2 segmentation using either a) standard 802.1Q VLANs or b) an overlay approach based on one of several L2 over L3 encapsulation techniques available today such as 802.1ad, VXLAN, STT or NVGRE.
However, these approaches still struggle to provide scalable, transparent, manageable, high-performing isolated tenant networks. VLANs don't scale beyond 4096 (2^12) networks and have complex trunking requirements when tenants span hosts and racks. IEEE 802.1ad (QinQ) partially solves that, but adds another limit: at most 2^12 tenants, each of which may have 2^12 VLANs. IP encapsulation introduces additional complexity on host computers running hypervisors, and impacts the performance of tenant applications running on virtual machines. Overlay-based isolation techniques may also impair traditional network monitoring and performance management tools. Moreover, when these isolated (L2) networks require external access to other networks or the public Internet, they require even more complex solutions to accommodate overlapping IP prefixes and network address translation (NAT).
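The arithmetic behind these limits follows directly from the field widths; the following lines (Python, purely illustrative) restate it:

   VLAN_ID_BITS = 12

   print(2 ** VLAN_ID_BITS)        # 4096 networks with 802.1Q VLANs
   print(2 ** (2 * VLAN_ID_BITS))  # 16,777,216 combinations with QinQ:
                                   # 2^12 tenants x 2^12 VLANs each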
As more capabilities are built onto these layer 2 based 'virtual' networks, complexity continues to grow.
This draft presents a new Layer 3 based approach to OpenStack networking using IPv6 that can be deployed natively on IPv6 networks. It will be shown that this approach can provide tenant isolation without the limitations of existing L2 based alternatives, as well as deliver high performance networks transparently using a simplified tenant network connectivity model without the overhead of encapsulation or managing overlapping IP addresses and address translations. We note that some large content providers, notably Google and Facebook [FaceBook-IPv6], are going in exactly this direction.
In this section, we attempt to list critical requirements.
As a design approach, we presume an IPv6-only data center in a world that might have IPv4 clients outside of it. This design explicitly does not depend on VLANs, QinQ, VXLAN, MPLS, Segment Routing, IP/IP or GRE tunnels, or anything else. Data center operators remain free to use any of those tools, but they are not required. If we can do everything required for OpenStack networking with IPv6 alone, these other networking technologies may be used as optimizations. If we are unable to satisfy the OpenStack requirements, that also is something we wish to know and understand.
OpenStack is designed to be used by many cloud users or tenants. Scalable, secure, and isolated tenant networks are a requirement for building a multi-tenant cloud datacenter. The OpenStack administrator/operator can design and configure a cloud environment to provide network isolation using the approach described in this document, alone or in combination with any of the above network technologies. However, the underlying technology and its implementation details are completely invisible to the tenant itself.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
A common requirement in network and data center operations is reliability, serviceability, and maintainability of their operations in the presence of an outage. At minimum, this implies multihoming in the sense of having multiple upstream ISPs; in many cases, it also implies multiple and at times duplicate data centers, and tenants stretched or able to be readily moved or recreated across multiple data centers.
Microsoft Azure [Microsoft-Azure] has purchased a 100-acre piece of land for the construction of a single data center. In terms of physical space, that is enough for a data center with about half a million 19" RETMA racks.
With even modest virtual machine density, infrastructure at this scale easily exhausts the 16M available RFC 1918 private addresses (i.e., 10.0.0.0/8), which explains the recent efforts by webscale cloud providers to deploy IPv6 throughout their new datacenters.
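For concreteness, a back-of-the-envelope calculation; the rack count is taken from the Azure example above, and the VM density is an assumption:

   rfc1918_10_8 = 2 ** 24    # 16,777,216 addresses in 10.0.0.0/8

   racks = 500000            # "about half a million" racks, from above
   vms_per_rack = 40         # assumed modest virtual machine density

   needed = racks * vms_per_rack
   print(needed)                   # 20,000,000 addresses required
   print(needed > rfc1918_10_8)    # True: 10/8 is already exhausted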
While it is possible that a single tenant would require a 100 acre data center, it would be unusual. In most such data centers, one would expect a large number of tenants.
Isolation is required between tenants, and at times between components of a single tenant.
A "tenant" is defined as a set of resources under common administrative control. It may be appropriate for tenants to communicate with each other within the context of an application or relationships among their owners or operators. However, unless specified otherwise, tenants are intended to operate as if they were on their own company's premises and be isolated from one another.
There are often security compartments within a corporate network, just as there are security barriers between companies. As a result, there is a recursive isolation requirement: it must be possible to isolate an identified part of a tenant from another part of the same tenant.
To the extent possible (and, for operators, the concept will bring a smile), operation of a multi-location multi-tenant data center, and the design of an application that runs in one, should be simple and uncoupled.
As discussed in [RFC3439], this requires that the operational model required to support a tenant with only two physical machines, or virtual machines in the same physical chassis, should be the same as that required to support a tenant running a million machines in a multiple data center application. Additionally, this same operational model should scale from running a single tenant up to many thousands of tenants.
As described in Section 1.1, currently, an OpenStack tenant is required to specify a Subnet's CIDR prefix for IP address allocation. With this proposal, this is no longer required.
It must be possible to extend the architecture across multiple data centers. These data centers may be operated by distinct entities, with security policies that apply to their interconnection.
In the OpenStack model, the cloud computing user, or tenant, is building something Edward Yourdon might call a "structured design" for the application they are building. In the 1960s, when Yourdon started specifying process and data flow diagrams, these were job steps in a deck of Job Control Language cards; in OpenStack, they are multiple, individual machines, virtual or physical, running parts of a structured application.
In these, one might find a load balancer that receives and distributes requests to request processors, a set of stored-data processing applications, and the storage they depend on. What is important to the OpenStack tenant is that "this" and "that" communicate, possibly using multicast, and do not communicate with "the other". Information about how this communication actually occurs (the placement of routers, switches, IP subnets, prefixes, and so on) is typically unnecessary.
An IPv6 based networking model simplifies the configuration of tenant connectivity requirements. Global reachability eliminates the need for network address translation devices as well as tenant-specified Subnet prefixes (Section 2.8), although tenant-specified ULA prefixes or prefixes from the owner of the tenant's address space are usable with it. With the exception of network security functions, no network devices need to be specified or configured to provide connectivity.
The premises of the routing and addressing models are as follows.
We expect the data center to be composed of some minimal unit of connectivity and maintenance, such as a rack or row, equipped with one or more Top-of-Rack or End-of-Row switches, each configured with at least one subnet prefix, perhaps one per such switch. For the purposes of this note, these will be called Racks and Top-of-Rack switches; when applied to other architectures, the appropriate translation needs to be made.
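The following sketch illustrates the premise of one subnet prefix per Top-of-Rack switch, carving /64s out of a data-center aggregate with Python's ipaddress module; the prefix and the rack numbering are illustrative assumptions:

   import ipaddress

   # Example aggregate (documentation prefix, RFC 3849); a /48 yields
   # 65,536 /64s, one per rack or Top-of-Rack switch.
   dc_prefix = ipaddress.ip_network('2001:db8::/48')

   rack_subnets = dc_prefix.subnets(new_prefix=64)
   for rack_number, subnet in zip(range(3), rack_subnets):
       print('rack', rack_number, '->', subnet)
   # rack 0 -> 2001:db8::/64
   # rack 1 -> 2001:db8:0:1::/64
   # rack 2 -> 2001:db8:0:2::/64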
Figure 1 describes a relatively typical rack design. It is a simple fat-tree architecture, with every device in a pair, so that any failure has an immediate hot backup. There are other common designs, such as those that consider each rack to be in a "row" and in a "column", with a distribution switch in each.
   Distribution
   Switches connecting                    / Layer /
   racks in a pod, and                   /       /
   connecting pods                      /       /
                                     +-+-+   +-+-+
   Mutual backup TOR               +-+TOR+---+TOR+-+
   switches                        | +---+   +---+ |
                                   | +-----------+ |
                                   +-+    host   +-+   Each host has two
                                   | +-----------+ |   Ethernet interfaces
                                   +-+    host   +-+   with separate subnets
                                   | +-----------+ |
                                   |     .   .     |
                                   |     .   .     |
                                   |     .   .     |
   Design premise: complete        | +-----------+ |
   redundancy, with every          +-+    host   +-+
   switch and every cable          | +-----------+ |
   backed up by a doppelganger     +-+    host   +-+
                                     +-----------+
Figure 1: Typical Rack Design
Tenant resources need to be told, by configuration or naming, the addresses of resources they communicate with. This is true regardless of their location or relationship to a given tenant. In environments with well-known addresses, this gets complex and unscalable. This was learned very early with Internet hostnames; a single "hostfile" was maintained by a central entity and updated daily, which quickly became unmanageable. The result was the development of the Domain Name System; the level of indirection between names and addresses made the system more scalable. It also facilitated ongoing maintenance. If a service needed multiple servers, or a server needed to change its address, that was trivially solved by changing the DNS Resource Record; every resource that needed the new address would obtain it the next time it queried the DNS. It has also facilitated the IPv4/IPv6 transition; a resource that has an IPv6 address is given a AAAA record in addition to, or to replace, its IPv4 A record.
Similarly, today's reliance on NAPT technology frequently limits the capabilities of an application. It works reasonably well for a client accessing a client/server application when the protocol does not carry addressing information. If there is an expectation that one resource's private address will be meaningful to a peer, such as when a SIP client presents its address in SDP or an HTTP server presents an address in a redirection, either the resource needs to understand the difference between an "inside" and an "outside" address and know which is which, or it needs a traversal algorithm that changes the addresses. For peer-to-peer applications, this ultimately means providing a network design in which those issues don't apply.
IPv6 provides global addresses, enough of them that there is no real expectation of running out any time soon, making these issues go away. In addition, with the IPv4 address space running out, both globally and within today's large datacenters, there aren't necessarily addresses available for an IPv4 application to use, even as a floating IP address.
Hence, the model we propose is that a resource in a tenant is told the addresses of the other resources with which it communicates. They are IPv6 addresses, and the data center takes care to ensure that inappropriate communications do not take place.
A unicast address in an IP network identifies a topological location, by association with an IP prefix (which might be for a subnet or any aggregate of subnets). It also identifies a single interface located within that subnet, which may or may not be instantiated at the time. We assume that there is a subnet associated with a top-of-rack switch or whatever its counterpart would be in a given network design, and that the physical and virtual machines located in that rack have addresses in that subnet. This is the same prefix that is used by the datacenter administrator.
A common requirement is that tenants have the use of some form of private address space. In an IPv6 network, a Unique Local IPv6 Unicast Address prefix [RFC4193] may be used to accomplish this. In this case, however, the addresses will need to be explicitly assigned to the physical or virtual machines used by the tenant, perhaps using DHCP or YANG, whereas a standard IPv6 address could be allocated using SLAAC or other technologies.
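As a sketch of ULA allocation, the following derives a tenant /48 in the spirit of RFC 4193 (fd00::/8 plus a 40-bit Global ID). RFC 4193 section 3.2.2 specifies a particular SHA-1-based procedure; os.urandom is a simplification here.

   import os
   import ipaddress

   global_id = int.from_bytes(os.urandom(5), 'big')   # 40 random bits

   # fd00::/8 with the Global ID in bits 80..119 gives a /48.
   ula = ipaddress.IPv6Network(((0xfd << 120) | (global_id << 80), 48))
   print(ula)          # e.g. fd3c:9a2e:71ff::/48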
The value of this is that tenants have no routing information or other awareness of the prefix; the address model of Section 2.8 presumes global unicast addresses, with ULAs as a corner case in the data center. A ULA prefix is not intended for use behind a NAPT; resources that need accessibility to or from resources outside the tenant, and especially outside the data center, need global addresses.
Multicast is a capability enjoyed by some groups of resources: they can send a single message and have it delivered to multiple destinations roughly simultaneously. At the link layer, this means sending a message once and having it received by a specified set of recipient resources using hardware capabilities. IP multicast can be implemented on a LAN as specified in [RFC4291], and can also cross multiple subnets directly, using routing protocols such as Protocol Independent Multicast [RFC4601] [RFC4602] [RFC4604] [RFC4605] [RFC4607]. In IPv6, the model would be that when a group of resources is created with a multicast capability, it is allocated one or more source-specific transient group addresses as defined in section 2.7 of [RFC4291].
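A sketch of composing such a group address follows, using the RFC 4291 section 2.7 layout (8 bits of 0xff, 4 flag bits, 4 scope bits, 112-bit group ID) with the transient and prefix-based flags set as for source-specific multicast [RFC4607]; the scope and group ID values are illustrative:

   import ipaddress

   FLAGS_PT = 0x3     # P (prefix-based, as in SSM) and T (transient)
   SCOPE_SITE = 0x5   # site-local scope

   def ssm_group(group_id):
       addr = ((0xff << 120) | (FLAGS_PT << 116) |
               (SCOPE_SITE << 112) | group_id)
       return ipaddress.IPv6Address(addr)

   print(ssm_group(0x8001))   # ff35::8001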
OpenStack IPv4 Neutron uses "floating IPv4 addresses" (global or public IPv4 addresses) to enable remote resources to connect to tenant private network endpoints. Tenant endpoints can connect out to remote resources through an "External Default Gateway". Both of these depend on NAPT (DNAT/SNAT) to ensure that IPv4 endpoints are able to communicate while at the same time ensuring tenant isolation.
If IPv6 is deployed in a data center, there are fundamentally two ways a tenant can interact with IPv4 peers: carry IPv4 within the data center alongside IPv6, or make the data center IPv6-only and handle IPv4 interworking at its edge.
The first model is complex, if for no other reason than that there are two fundamental models in use, one with various encapsulations hiding overlapping address space and one with non-overlapping address space.
To simplify the network, as noted in Section 2.1, we suggest that the data center be internally IPv6-only, and IPv4 be translated to IPv6 at the data center edge. The advantage is that it enables IPv4 access while that remains in use, and as IPv6 takes over, it reduces the impact of vestigial support for IPv4.
The SIIT translation model in [I-D.anderson-v6ops-siit-dc] has IPv4 traffic come to a translator [RFC6145][RFC6146] having a pre-configured translation, resulting in an IPv6 packet indistinguishable from the packet the remote resource might have sent had it been IPv6-capable, with one exception. The IPv6 destination address is that of the endpoint (the same address advertised in a AAAA record), but the source address is an IPv4-Embedded IPv6 Address [RFC6052] with the IPv4 address of the sender embedded in a prefix used by the translator.
Access to external IPv4 resources is provided in the same way: a DNS64 [RFC6147] server is implemented that returns AAAA records containing an IPv4-Embedded IPv6 Address [RFC6052], with the IPv4 address of the remote resource embedded in a prefix used by the translator.
This follows the Framework for IPv4/IPv6 Translation [RFC6144], making the internal IPv4 address a floating IP address attached to an internal IPv6 address, and the external "dial-out" address indistinguishable from a native IPv6 address.
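The address synthesis in both directions is the [RFC6052] embedding: the IPv4 address occupies the low-order 32 bits of a /96 translation prefix. A minimal sketch (the well-known prefix 64:ff9b::/96 is shown for concreteness; a network-specific prefix works the same way):

   import ipaddress

   def embed(v4_literal, prefix='64:ff9b::'):
       # Place the 32-bit IPv4 address in the low bits of the /96.
       v4 = int(ipaddress.IPv4Address(v4_literal))
       return ipaddress.IPv6Address(int(ipaddress.IPv6Address(prefix)) | v4)

   print(embed('192.0.2.33'))   # 64:ff9b::c000:221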
The other possible model, applicable to IPv4-only devices, is to run a legacy OpenStack environment inside IPv6 tunnels. This keeps the data center IPv6-only, and enables IPv4-only applications, notably those whose licenses tie them to IPv4 addresses, to run. However, it adds significant overhead in terms of encapsulation size and network management complexity.
Every rack and physical host requires an IP prefix that is reachable by the OpenStack operator. This will normally be a global IPv6 unicast address. For scalability purposes, as isolation is handled separately, this is normally the same prefix as is used by tenants in the rack.
In this model, a label is used to identify a project/tenant or part of a project/tenant, and to facilitate access control based on the label value. In Section 1, we noted the limitation of IEEE 802.1ad QinQ: in Metro Ethernet networks it assigns a VLAN ID to a customer and a second VLAN ID to a VLAN used by that customer, and is in that sense limited to 2^12 customers. Alternatively, it could be considered to be 2^12 geographies, with 2^12 tenant VLANs in each, meaning that the data center operator has to think about placement of tenant VMs in places their VLANs reach. Labels manage that space with different limits.
Three different types of labels are in view here: the IPv6 Flow Label, a federated identity in the IPv6 Destination Options header, or a federated identity in the IPv6 Hop-by-Hop Options header. These have different capabilities and implications.
The IPv6 flow label may be used to identify a tenant or part of a tenant, and to facilitate access control based on the flow label value. The flow label is a flat 20 bits, facilitating the designation of 2^20 (1,048,576) tenants without regard to their location. 1,048,576 is less than infinity, but compared to current data centers is large, and much simpler to manage.
Note that this usage differs from the current IPv6 Flow Label Specification [RFC6437]. It also differs from the use of a flow label recommended by the IPv6 Specification [RFC2460], and the respective usages of the flow label in the Resource ReSerVation Protocol [RFC2205] and the previous IPv6 Flow Label Specification [RFC3697], and the projected usage in Low-Power and Lossy Networks [RFC5548][RFC5673]. Within a target domain, the usage may be specified by the domain. That is the viewpoint taken in this specification.
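As a sketch of the mechanics: the first 32 bits of an IPv6 header carry the version (4 bits), traffic class (8 bits), and flow label (20 bits), per the layout in [RFC2460]. A 20-bit tenant identifier fits in the label field as follows (the function name and values are illustrative):

   import struct

   def first_header_word(tenant_id, traffic_class=0):
       assert 0 <= tenant_id < 2 ** 20      # must fit in 20 bits
       word = (6 << 28) | (traffic_class << 20) | tenant_id
       return struct.pack('!I', word)

   hdr_start = first_header_word(tenant_id=0xABCD)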
Alternatively, [I-D.baker-openstack-rbac-federated-identity] defines a numeric label usable in network layer Role-based Access Control. The syntax is a sequence of positive integers; the semantics of the integers are defined by the administration(s) using them. A single integer may be used in the same way as the Flow Label in Section 3.3.1.1, but without the 20 bit limitation. A pair of integers might, for example, signify a data center and its tenants, or a company and its departments. Three integers might, again as an example, signify one data center operator among a set of data center operators, the second the clients of those operators, and the third subsets of those clients. In any event, as in Section 3.3.1.1, it identifies a set of machines, physical or virtual, that are authorized to communicate freely among themselves, but may or may not be authorized to communicate with other equipment.
Carried in the Destination Options Extension Header, this option is visible to Neutron in the originating and terminating chassis. This limits overheads in intermediate switches, and enables filtering in the destination system.
Carried in the Hop-by-Hop Extension Header, this option is visible to routers en route in addition to Neutron in the originating and terminating chassis, at the possible cost of some processing overhead. This facilitates filtering at any system.
Neutron today already implements a form of Network Ingress Filtering [RFC2827]. It prevents the VM from emitting traffic with an unauthorized MAC, IPv4, or IPv6 source address.
In addition to this, in this model Neutron prevents the VM from transmitting a network packet with an unauthorized label value. The VM MAY be configured with, and authorized to use, one of a short list of authorized label values, as opposed to simply having its choice overridden; in that case, Neutron verifies the value and overwrites any value not in the list.
When a hypervisor is about to deliver an IPv6 packet to a VM, it checks the label value against a list of values that the VM is permitted to receive. If it contains an unauthorized value, the hypervisor discards the packet rather than deliver it. If the Flow Label is in use, Neutron zeros the label prior to delivery.
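The two checks can be summarized in the following sketch. The data structures and packet model are illustrative; only the decisions (overwrite on transmit, discard and zero on receive) come from the text above.

   class Packet(object):
       def __init__(self, label, payload=b''):
           self.label = label
           self.payload = payload

   def on_transmit(packet, authorized_labels, default_label):
       # Egress: never trust the VM's label; overwrite if unauthorized.
       if packet.label not in authorized_labels:
           packet.label = default_label
       return packet

   def on_receive(packet, permitted_labels, flow_label_in_use=True):
       # Ingress: discard unauthorized labels; hide the label otherwise.
       if packet.label not in permitted_labels:
           return None                # discard rather than deliver
       if flow_label_in_use:
           packet.label = 0           # zero the Flow Label on delivery
       return packet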
The intention is to hide the label value from malware potentially found in the VM, and enable the label to be used as a form of first and last hop security. This provides basic tenant isolation, if the label is assigned as a tenant identifier, and may be used more creatively such as to identify a network management application as separate from a managed resource.
This concept has the weakness that if a packet is not dropped at its source, it is dropped at its destination. It would be preferable for the packet to be dropped in flight, such as at the top-of-rack switch or an aggregation router.
Concepts discussed in IS-IS LSP Extendibility [I-D.baker-ipv6-isis-dst-flowlabel-routing][RFC5120][RFC5308] and OSPFv3 LSA Extendibility [I-D.baker-ipv6-ospf-dst-flowlabel-routing] [I-D.ietf-ospf-ospfv3-lsa-extend][RFC5340] may be used to isolate tenants in the routing of the data center backbone. This is not strictly necessary, if Section 3.3.2 is uniformly and correctly implemented. It does, however, present a second defense against misconfiguration, as the filter becomes ubiquitous in the data center and as scalable as routing.
As noted in Section 3.3.2, Neutron today implements a form of Network Ingress Filtering [RFC2827]. It prevents the VM from emitting traffic with an unauthorized MAC, IPv4, or IPv6 source address.
In IPv6, this is readily handled when the address or addresses used by a VM are selected by the OpenStack operator. It may then configure a per-VM filter with the addresses it has chosen, following logic similar to the Source Address Validation Solution for DHCP [I-D.ietf-savi-dhcp] or SEND [RFC7219]. This is also true of IPv6 Stateless Address Autoconfiguration (SLAAC) [RFC4862] when the MAC address is known and not shared.
However, when SLAAC is in use and either the MAC address is unknown or SLAAC's Privacy Extensions [RFC4941][RFC7217] are in use, Neutron will need to implement the provisions of FCFS SAVI: First-Come, First-Served Source Address Validation [RFC6620] in order to learn the addresses that a VM is using and include them in the per-VM filter.
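A minimal sketch of the first-come, first-served binding that [RFC6620] describes: the first VM interface observed using an address owns it, and traffic from any other interface claiming the same address is dropped. Names and structure here are illustrative.

   class FcfsSavi(object):
       def __init__(self):
           self.binding = {}        # source address -> owning VM port

       def permit(self, port, src_addr):
           owner = self.binding.setdefault(src_addr, port)
           return owner == port     # False => drop as spoofed

   savi = FcfsSavi()
   assert savi.permit('vm1-port', '2001:db8::10')        # learned
   assert not savi.permit('vm2-port', '2001:db8::10')    # blocked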
This design supports these kinds of required layer 2 networks with the additional use of a layer 2 over layer 3 encapsulation and tunneling protocol, such as VXLAN [I-D.mahalingam-dutt-dcops-vxlan]. The important point here is that these overlays are used to address specific tenant network requirements, and NOT deployed to remove the scalability limitations of OpenStack networking.
There are at least three ways VM movement can be accomplished:
The simplest and most reliable is to create a new VM in the desired location and kill the old one.
This is obviously not movement of an existing VM, but preservation of the same number and function of VMs by creation of a new VM and killing the old.
At http://blogs.vmware.com/vsphere/2011/02/vmotion-whats-going-on-under-the-covers.html, VMWare describes its capability, called vMotion, in the following terms:
In a native-address environment, we add three steps:
If the VM is moved within the same subnet (which usually implies the same rack), there is no stitching or renumbering apart from ensuring that the MAC address moves with the VM. When the VM moves to a different subnet, however, we need to restitch routing, at least temporarily. This obviously calls for some definitions.
On messages transmitted by a virtual machine
On messages received for delivery to a virtual machine
This document does not ask IANA to do anything.
In Section 2.6 and Section 3.3, this specification considers inter-tenant and intra-tenant network isolation. It is intended to contribute to the security of a network, much like encapsulation in a maze of tunnels or VLANs might, but without the complexities and overhead of the management of such resources. This does not replace the use of IPsec, SSH, or TLS encryption, or the use of authentication technologies; if these would be appropriate in an on-premises corporate data center, they remain appropriate in a multi-tenant data center regardless of the isolation technology. However, one can think of this as a simple inter-tenant firewall based on the concepts of role-based access control; if it can be readily determined that a sender is not authorized to communicate with a receiver, such a transmission is prevented.
This specification places no personally identifying information in an unencrypted part of a packet, unless a person is explicitly represented by an IPv6 address or label value, which is beyond its scope. That said, if the RBAC Identifier in [I-D.baker-openstack-rbac-federated-identity] is used, the security and privacy considerations of that document are relevant here.
This document grew out of a discussion among the authors and contributors.
   Puneet Konghot
   Cisco Systems
   San Jose, California  95134
   USA

   Email: pkonghot@cisco.com


   Rohit Agarwalla
   Cisco Systems
   San Jose, California  95134
   USA

   Email: roagarwa@cisco.com


   Shannon McFarland
   Cisco Systems
   Boulder, Colorado  80301
   USA

   Email: shmcfarl@cisco.com
   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2460]  Deering, S. and R. Hinden, "Internet Protocol, Version 6
              (IPv6) Specification", RFC 2460, December 1998.