Status of This Memo
This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as “work in progress.”
The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.
The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.
This Internet-Draft will expire on April 29, 2010.
Copyright (c) 2009 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents in effect on the date of publication of this document (http://trustee.ietf.org/license-info). Please review these documents carefully, as they describe your rights and restrictions with respect to this document.
Abstract

Group communication services are most efficiently implemented on the lowest available layer. However, as the deployment status of multicast distribution varies widely throughout the Internet, globally operational group solutions are frequently restricted to a stable, upper layer protocol controlled by the application itself. This document describes a common multicast API that serves the requirements of data distribution and maintenance for multicast and broadcast on a middleware abstraction layer, suitable for transparent underlay and overlay communication. It proposes and discusses mapping mechanisms between different namespaces and forwarding technologies. Additionally, it describes the application of this API at gateways operating between current multicast instances throughout the Internet.
Table of Contents

   1.  Introduction
   2.  Terminology
   3.  Objectives and Reference Scenarios
   4.  Overview
   5.  Hybrid Multicast API
       5.1.  Abstract Data Types
       5.2.  Send/Receive Calls
       5.3.  Service Calls
   6.  Deployment Use Cases
       6.1.  DVMRP
       6.2.  PIM-SM
       6.3.  PIM-SSM
       6.4.  BIDIR-PIM
   7.  IANA Considerations
   8.  Security Considerations
   9.  Acknowledgements
   10. Informative References
       Authors' Addresses
1.  Introduction
Group communication is implemented on different layers (e.g., IP vs. application layer multicast) as well as based on different technologies within the same tier (e.g., IPv4 vs. IPv6). To allow for a reliable deployment of applications and group services, a common API is required that offers calls to transmit and receive multicast data independent of the underlying technology, and that also provides a consistent view of multicast states. This document describes an abstract group communication API and the core functions required for transparent operations. Specific implementation guidelines with respect to operating systems or programming languages are out of scope for this document.
The aim of this common API is twofold: it shall let applications send and receive group data independent of the multicast technology in use, and it shall support group identifiers drawn from heterogeneous namespaces. Multicast technologies may be various P2P-based schemes, IPv4 or IPv6 network layer multicast, or some other application service. Corresponding namespaces may be IP addresses, overlay hashes, other application layer group identifiers, e.g., <sip:*@peanuts.org>, or names defined by the applications.
This document also proposes and discusses mapping mechanisms between different namespaces and forwarding technologies. Additionally, the multicast API provides internal interfaces to access current multicast states at the host. Multiple multicast protocols may run in parallel on a single host; these protocols may interact to provide a gateway function that bridges data between different domains. The application of this API at gateways operating between current multicast instances throughout the Internet is described as well.
2.  Terminology
This document uses the terminology as defined for the multicast protocols [RFC2710] (Deering, S., Fenner, W., and B. Haberman, “Multicast Listener Discovery (MLD) for IPv6,” October 1999.), [RFC3376] (Cain, B., Deering, S., Kouvelas, I., Fenner, B., and A. Thyagarajan, “Internet Group Management Protocol, Version 3,” October 2002.), [RFC3810] (Vida, R. and L. Costa, “Multicast Listener Discovery Version 2 (MLDv2) for IPv6,” June 2004.), [RFC4601] (Fenner, B., Handley, M., Holbrook, H., and I. Kouvelas, “Protocol Independent Multicast - Sparse Mode (PIM-SM): Protocol Specification (Revised),” August 2006.), [RFC4604] (Holbrook, H., Cain, B., and B. Haberman, “Using Internet Group Management Protocol Version 3 (IGMPv3) and Multicast Listener Discovery Protocol Version 2 (MLDv2) for Source-Specific Multicast,” August 2006.). In addition, the following terms will be used.
- Multicast Namespace:
- A Multicast Namespace is a collection of designators for groups that share a common syntax. Typical instances of namespaces are IPv4 or IPv6 multicast addresses, overlay group IDs, group names defined on the application layer, or some human-readable strings.
- Multicast Context:
- A Multicast Context is a domain that accommodates nodes and routers of a common, single multicast forwarding technology and is bound to a single namespace.
- Inter-domain Multicast Gateway:
- An Inter-domain Multicast Gateway (IMG) is an entity that interconnects domains of different multicast contexts. Its objective is to transparently forward data between contexts, e.g., between IP layer and overlay multicast.
3.  Objectives and Reference Scenarios
The default use case addressed in this memo targets applications that jointly communicate in a group by using a common identifier taken from some common namespace. Programmers shall be able to use this identifier transparently in their programs without the need to consider the multicast deployment status in target domains. Aided by gateways and, where available, by a node-specific multicast middleware, applications shall be enabled to establish group communication even if they reside in domains that are not connected by a common multicast service technology.
This draft covers the following two general scenarios:
   [Two ASCII-art reference scenarios omitted: (a) members of the groups
   Foo and G attached to a single multicast technology domain (MCast Tec A);
   (b) members of group G located in two domains running different
   multicast technologies (MCast Tec A and MCast Tec B), interconnected
   by IMGs.]

Reference scenarios for hybrid multicast, interconnecting group members from isolated homogeneous and heterogeneous domains.
It is assumed throughout the document that the domain composition, as well as the node attachment to a specific technology, remain unchanged during a multicast session.
4.  Overview
The extended multicast functions should be implemented by a middleware. This middleware performs two tasks: it offers the common group communication API to the application, and it maps group operations transparently onto the multicast technologies available in the underlay or overlay (cf. Figure 1).
   *-------*     *-------*
   | App 1 |     | App 2 |
   *-------*     *-------*
       |             |
   *---------------------*
   |      Middleware     |
   *---------------------*
       |             |
   *---------*       |
   | Overlay |       |
   *---------*       |
       |             |
       |             |
   *---------------------*
   |       Underlay      |
   *---------------------*
Figure 1: The middleware covers underlay and overlay for the application
The general procedure to initiate multicast communication is the following: the application communicates via a logical group ID, while data distribution is based on a technical ID. A mapping is required between the two IDs, in particular if both identifiers belong to different namespaces. This mapping can be realized by embedding the smaller in the larger namespace or by selecting an arbitrary, unused ID in the target space. The relation between logical and technical ID is stored in a mapping service (e.g., a DHT). The middleware therefore queries the mapping service first and creates a new technical group ID only if no identifier is available for the namespace in use. Depending on its scope, the mapping service ensures a consistent use of the technical ID in a local or global domain.
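As an illustration only, the lookup-or-create step could look as sketched below in C. The helpers mapping_lookup(), mapping_create(), and select_unused_id() stand in for a concrete mapping service client (e.g., a DHT) and are hypothetical names, not part of the API defined in this document.

   /* Sketch: resolve a logical group ID to a technical ID; create and
    * publish a new mapping only if none exists yet.  The three helper
    * functions are hypothetical placeholders for a mapping service.     */
   #include <stdbool.h>

   typedef char LogicalId[64];    /* e.g., "sip:*@peanuts.org"           */
   typedef char TechnicalId[64];  /* e.g., an IPv6 multicast address     */

   bool mapping_lookup(const char *lid, char *tid);      /* query service */
   void mapping_create(const char *lid, const char *tid);/* store mapping */
   void select_unused_id(char *tid);        /* pick an unused technical ID */

   void resolve_group(const LogicalId lid, TechnicalId tid)
   {
       if (!mapping_lookup(lid, tid)) {  /* no technical ID registered yet */
           select_unused_id(tid);        /* choose a free ID in the target
                                            namespace                      */
           mapping_create(lid, tid);     /* make it visible to other nodes */
       }
   }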
Hosts may support several multicast protocols. In this case, they can forward data between the different technologies using the service calls of the API. Such a proxy function can be implemented on each host or on dedicated gateways. These gateways also assist multicast members without middleware support in being integrated into additional namespaces.
5.  Hybrid Multicast API
5.1.  Abstract Data Types
- Namespace
- describes the domain-specific context in which the applications operate.
- Address
- is any kind of address in underlay (e.g. IPv4, IPv6) or overlay (e.g. SIP, hash-based ID).
- Mode
- denotes the layer on which the corresponding call will be effective. This may be left unspecified to leave the decision to the group communication stack.
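For illustration, these abstract data types could be rendered in C as sketched below. The particular enum members and the fixed-size string encoding are assumptions of this sketch, not requirements of the API.

   /* Illustrative (non-normative) C rendering of the abstract data types. */
   typedef enum {
       NS_IPV4,            /* IPv4 multicast addresses                      */
       NS_IPV6,            /* IPv6 multicast addresses                      */
       NS_OVERLAY_HASH,    /* hash-based overlay group IDs                  */
       NS_APPLICATION      /* application-defined group names               */
   } Namespace;

   typedef enum {
       MODE_UNSPEC,        /* leave the decision to the group comm. stack   */
       MODE_UNDERLAY,      /* act on the underlay, e.g., native IP multicast */
       MODE_OVERLAY        /* act on the overlay, e.g., P2P/ALM multicast   */
   } Mode;

   typedef struct {
       Namespace ns;       /* namespace the identifier belongs to           */
       char      id[256];  /* address or name, encoded as a string          */
   } Address;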
5.2.  Send/Receive Calls
- init(in Namespace n)
- This call initializes the group communication stack for the given namespace.
- join(in Address a, in Mode m)
- This operation initiates a group subscription. Depending on the mode, this may result in an IGMP/MLD report.
- leave(in Address a, in Mode m)
- This operation results in an unsubscription for the given address.
- send(in Address a, in Mode m, in Message msg)
- This operation sends a multicast message to the group given by the address.
- receive(in Address a, in Mode m, out Message msg)
- This operation receives a multicast message for the group given by the address.
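A brief usage sketch of these calls, written against the illustrative C types from Section 5.1, is given below. The C prototypes (including the _msg suffix, chosen only to avoid clashing with the POSIX send()) and the group name "peanuts-video" are one possible, non-normative rendering.

   /* Sketch: join a group, send one message, receive one message, leave.
    * Error handling is omitted; the prototypes mirror the abstract calls. */
   typedef struct { char data[1500]; int len; } Message;

   void init(Namespace n);
   void join(const Address *a, Mode m);
   void leave(const Address *a, Mode m);
   void send_msg(const Address *a, Mode m, const Message *msg);
   void receive_msg(const Address *a, Mode m, Message *msg);

   void example(void)
   {
       Address grp = { NS_APPLICATION, "peanuts-video" };  /* logical group */
       Message out = { "hello group", 11 };
       Message in;

       init(NS_APPLICATION);        /* set up the stack for this namespace  */
       join(&grp, MODE_UNSPEC);     /* let the middleware pick the layer    */
       send_msg(&grp, MODE_UNSPEC, &out);
       receive_msg(&grp, MODE_UNSPEC, &in);
       leave(&grp, MODE_UNSPEC);
   }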
5.3.  Service Calls
- groupSet(out Address[] g, in Mode m)
- This operation returns all registered multicast groups. The information can be provided by group management or routing protocols. The return values distinguish between sender and listener states.
- neighborSet(out Address[] a, in Mode m)
- This function can be invoked to get the set of multicast routing neighbors.
- designatedHost(out Bool b, in Address a)
- This function returns true if the host has the role of a designated forwarder or querier. Such information is provided by almost all multicast protocols to prevent packet duplication when multiple multicast instances serve the same subnet.
- updateListener(out Address g, in Mode m)
- This upcall is invoked to inform a group service about a change of listener states for a group. This results from new receiver subscriptions or leaves. The group service may call groupSet() to obtain updated information.
- updateSender(out Address g, in Mode m)
- This upcall should be implemented to inform the application about source state changes. Analogous to the updateListener case, the group service may thereupon call groupSet().
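To illustrate the intended interplay of upcalls and query calls, the following sketch reacts to both notifications by re-reading the group set. The array-based groupSet() signature and the forwarding comments are assumptions of this sketch, not a normative binding.

   /* Sketch: a group service (e.g., an IMG) reacting to state changes.
    * groupSet() is rendered with a caller-provided array; the exact
    * signature is an assumption of this sketch.                           */
   int groupSet(Address *groups, int max, Mode m);  /* returns # of groups  */

   void updateListener(const Address *g, Mode m)
   {
       Address groups[32];
       int n = groupSet(groups, 32, m);  /* refresh listener/sender view    */
       /* ... adjust forwarding for *g, e.g., join or prune the group in a
        *     neighboring multicast context ...                             */
       (void)n;
   }

   void updateSender(const Address *g, Mode m)
   {
       Address groups[32];
       int n = groupSet(groups, 32, m);  /* refresh sender states           */
       /* ... start or stop relaying data from *g into the other context ... */
       (void)n;
   }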
6.  Deployment Use Cases
This section describes the application of the defined API to implement an IMG.
6.1.  DVMRP
The following procedure describes a transparent mapping of a DVMRP-based any source multicast service to another many-to-many multicast technology.
An arbitrary DVMRP [RFC1075] (Waitzman, D., Partridge, C., and S. Deering, “Distance Vector Multicast Routing Protocol,” November 1988.) router will not be informed about new receivers, but will learn about new sources immediately. The concept of DVMRP does not provide any central multicast instance. Thus, the IMG can be placed anywhere inside the multicast region, but requires DVMRP neighbor connectivity. The group communication stack used by the IMG is enhanced by a DVMRP implementation. New sources in the underlay will be advertised by the DVMRP flooding mechanism and received by the IMG. Based on this, the updateSender() call is triggered. The relay agent initiates a corresponding join in the native network and forwards the received source data towards the overlay routing protocol. Depending on the group states, the data will be distributed to overlay peers.
DVMRP establishes source-specific multicast trees. Therefore, a graft message is only visible to DVMRP routers on the path from the new receiver subnet to the source, but in general not to an IMG. To overcome this problem, data of multicast senders will be flooded in the overlay as well as in the underlay. Hence, an IMG has to initiate an all-group join to the overlay using the namespace extension of the API. Each IMG is initially required to forward the received overlay data to the underlay, independent of native multicast receivers. Subsequent prunes may limit unwanted data distribution thereafter.
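The DVMRP case could then be wired up roughly as sketched below, reusing the illustrative types and calls from Section 5. overlay_join_all() and overlay_forward() are hypothetical stand-ins for the overlay side of the IMG.

   /* Sketch: IMG behavior in a DVMRP domain (non-normative).              */
   void overlay_join_all(void);            /* all-group join in the overlay */
   void overlay_forward(const Address *g, const Message *msg);

   /* invoked via updateSender() once DVMRP flooding announces a source    */
   void dvmrp_new_source(const Address *g)
   {
       join(g, MODE_UNDERLAY);     /* pull the native data towards the IMG  */
       /* native data for g is subsequently handed to overlay_forward()     */
   }

   void dvmrp_img_start(void)
   {
       /* grafts stay invisible to the IMG, so receive every overlay group
        * and let later prunes trim unwanted underlay distribution          */
       overlay_join_all();
   }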
6.2.  PIM-SM
The following procedure describes a transparent mapping of a PIM-SM-based any source multicast service to another many-to-many multicast technology.
The Protocol Independent Multicast Sparse Mode (PIM-SM) [RFC4601] (Fenner, B., Handley, M., Holbrook, H., and I. Kouvelas, “Protocol Independent Multicast - Sparse Mode (PIM-SM): Protocol Specification (Revised),” August 2006.) establishes rendezvous points (RP). These entities receive listener and source subscriptions of a domain. To be continuously updated, an IMG has to be co-located with an RP. Whenever PIM register messages are received, the IMG must internally signal a new multicast source using updateSender(). Subsequently, the IMG joins the group, and a shared tree between the RP and the sources will be established, which may change to a source-specific tree after a sufficient amount of data has been delivered. Source traffic will be forwarded to the RP based on the IMG join, even if there are no further receivers in the native multicast domain. Designated routers of a PIM domain send receiver subscriptions towards the PIM-SM RP. The reception of such messages invokes the updateListener() call at the IMG, which initiates a join towards the overlay routing protocol. Overlay multicast data arriving at the IMG will then transparently be forwarded in the underlay network and distributed through the RP instance.
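A corresponding sketch for an IMG co-located with the RP is given below; overlay_join() is again a hypothetical helper for the overlay side, and the handler names are illustrative.

   /* Sketch: IMG co-located with a PIM-SM rendezvous point (non-normative). */
   void overlay_join(const Address *g);

   /* invoked via updateSender() when a PIM register for (S,G) reaches the RP */
   void pim_sm_new_source(const Address *g)
   {
       join(g, MODE_UNDERLAY);   /* keep source traffic flowing to the RP,
                                    even without native receivers            */
       /* arriving native data for g is then relayed into the overlay        */
   }

   /* invoked via updateListener() when a receiver subscription reaches the RP */
   void pim_sm_new_listener(const Address *g)
   {
       overlay_join(g);          /* fetch the group via the overlay; data is
                                    then injected into the underlay at the RP */
   }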
6.3.  PIM-SSM
The following procedure describes a transparent mapping of a PIM-SSM-based source specific multicast service to another one-to-many multicast technology.
PIM Source Specific Multicast (PIM-SSM) is defined as part of PIM-SM and admits source specific joins (S,G) according to the source specific host group model [RFC4604] (Holbrook, H., Cain, B., and B. Haberman, “Using Internet Group Management Protocol Version 3 (IGMPv3) and Multicast Listener Discovery Protocol Version 2 (MLDv2) for Source-Specific Multicast,” August 2006.). A multicast distribution tree can be established without the assistance of a rendezvous point.
Sources are not advertised within a PIM-SSM domain. Consequently, an IMG cannot anticipate the local join inside a sender domain and deliver the multicast data to the overlay instance in advance. If an IMG of a receiver domain initiates a group subscription via the overlay routing protocol, relaying multicast data fails, as the data is not available at the overlay instance. The IMG instance of the receiver domain thus has to locate the IMG instance of the source domain to trigger the corresponding join. In the sense of PIM-SSM, this signaling should not be flooded in underlay and overlay.
One solution could be to intercept the subscription at both source and receiver sites: To monitor multicast receiver subscriptions (updateListener()) in the underlay, the IMG is placed on the path towards the source, e.g., at a domain border router. This router intercepts join messages and extracts the unicast source address S, initializing an IMG-specific join to S via regular unicast. Multicast data arriving at the IMG of the sender domain can then be distributed via the overlay. Discovering the IMG of a multicast sender domain may be implemented analogously to AMT [I‑D.ietf‑mboned‑auto‑multicast] (Thaler, D., Talwar, M., Aggarwal, A., Vicisano, L., and T. Pusateri, “Automatic IP Multicast Without Explicit Tunnels (AMT),” March 2010.) by anycast. Consequently, the source address S of the group (S,G) should be built based on an anycast prefix. The corresponding IMG anycast address for a source domain is then derived from the prefix of S.
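A minimal sketch of such an address derivation follows, assuming (purely for illustration) that the anycast prefix covers the first 96 bits of S and that the IMG uses a well-known suffix within that prefix; neither value is defined by this document.

   /* Sketch: derive the IMG anycast address of a source domain from the
    * unicast source address S of a channel (S,G).  The /96 prefix length
    * and the ::1 suffix are illustrative assumptions only.                */
   #include <netinet/in.h>

   struct in6_addr img_anycast_for_source(const struct in6_addr *s)
   {
       struct in6_addr img = *s;          /* copy the anycast prefix of S    */
       img.s6_addr[12] = 0;               /* replace the host part ...       */
       img.s6_addr[13] = 0;
       img.s6_addr[14] = 0;
       img.s6_addr[15] = 1;               /* ... by a well-known suffix (::1) */
       return img;
   }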
6.4.  BIDIR-PIM
The following procedure describes a transparent mapping of a BIDIR-PIM-based any source multicast service to another many-to-many multicast technology.
Bidirectional PIM [RFC5015] (Handley, M., Kouvelas, I., Speakman, T., and L. Vicisano, “Bidirectional Protocol Independent Multicast (BIDIR-PIM),” October 2007.) is a variant of PIM-SM. In contrast to PIM-SM, the protocol pre-establishes bidirectional shared trees per group, connecting multicast sources and receivers. The rendezvous points are virtualized in BIDIR-PIM as an address that identifies the on-tree directions (up and down). However, routers with the best link towards the (virtualized) rendezvous point address are selected as designated forwarders for a link-local domain and represent the actual distribution tree. The IMG is to be placed at the RP-link, where the rendezvous point address is located. As source data will in either case be transmitted to the rendezvous point address, the BIDIR-PIM instance of the IMG receives the data and can internally signal new senders towards the stack via updateSender(). The first receiver subscription for a new group within a BIDIR-PIM domain needs to be transmitted to the RP to establish the first branching point. Using the updateListener() invocation, an IMG will thereby be informed about group requests from its domain, which are then delegated to the overlay.
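In contrast to the PIM-SM sketch above, an IMG at the RP-link does not need to join natively for new sources, since their data already arrives at the rendezvous point address. A minimal sketch (reusing the same hypothetical overlay helpers and illustrative types) therefore only reacts to the two upcalls:

   /* Sketch: IMG at the BIDIR-PIM RP-link (non-normative).                */

   /* invoked after updateSender() announced a new source whose data
    * already arrives at the RP address                                    */
   void bidir_new_source(const Address *g, const Message *msg)
   {
       overlay_forward(g, msg);   /* relay the native data into the overlay */
   }

   /* invoked via updateListener() once the first receiver subscription
    * for a group reaches the RP                                           */
   void bidir_new_listener(const Address *g)
   {
       overlay_join(g);           /* delegate the group request to the overlay */
   }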
7.  IANA Considerations
This document makes no request of IANA.
8.  Security Considerations
This draft introduces neither additional messages nor novel protocol operations. TODO
9.  Acknowledgements
TODO
10.  Informative References

   [RFC1075]  Waitzman, D., Partridge, C., and S. Deering, "Distance Vector Multicast Routing Protocol", RFC 1075, November 1988.

   [RFC2710]  Deering, S., Fenner, W., and B. Haberman, "Multicast Listener Discovery (MLD) for IPv6", RFC 2710, October 1999.

   [RFC3376]  Cain, B., Deering, S., Kouvelas, I., Fenner, B., and A. Thyagarajan, "Internet Group Management Protocol, Version 3", RFC 3376, October 2002.

   [RFC3810]  Vida, R. and L. Costa, "Multicast Listener Discovery Version 2 (MLDv2) for IPv6", RFC 3810, June 2004.

   [RFC4601]  Fenner, B., Handley, M., Holbrook, H., and I. Kouvelas, "Protocol Independent Multicast - Sparse Mode (PIM-SM): Protocol Specification (Revised)", RFC 4601, August 2006.

   [RFC4604]  Holbrook, H., Cain, B., and B. Haberman, "Using Internet Group Management Protocol Version 3 (IGMPv3) and Multicast Listener Discovery Protocol Version 2 (MLDv2) for Source-Specific Multicast", RFC 4604, August 2006.

   [RFC5015]  Handley, M., Kouvelas, I., Speakman, T., and L. Vicisano, "Bidirectional Protocol Independent Multicast (BIDIR-PIM)", RFC 5015, October 2007.

   [I-D.ietf-mboned-auto-multicast]  Thaler, D., Talwar, M., Aggarwal, A., Vicisano, L., and T. Pusateri, "Automatic IP Multicast Without Explicit Tunnels (AMT)", Work in Progress, March 2010.
Authors' Addresses
   Matthias Waehlisch
   link-lab & FU Berlin
   Hoenower Str. 35
   Berlin 10318
   Germany

   Email: mw@link-lab.net
   URI:   http://www.inf.fu-berlin.de/~waehl


   Thomas C. Schmidt
   HAW Hamburg
   Berliner Tor 7
   Hamburg 20099
   Germany

   Email: schmidt@informatik.haw-hamburg.de
   URI:   http://inet.cpt.haw-hamburg.de/members/schmidt


   Stig Venaas
   cisco Systems
   Tasman Drive
   San Jose, CA 95134
   USA

   Email: stig@cisco.com