Internet Engineering Task Force Mohammad Aazam
Internet-Draft Carleton University and Kyung Hee University
CDNI Working Group Eui-Nam Huh
Intended status: Informational Kyung Hee University
Expires: May 13, 2016 SooWong Kim
Alticast
November 14, 2015
Inter-Cloud Computing Architecture
draft-aazam-cdni-inter-cloud-architecture-03
Abstract
With the rapid rise in digital content, cloud computing has become
the focus of academia and industry. Cloud computing comes with
sophisticated technologies for data storage, management, and
distribution. Ubiquitous access and pay-as-you-go billing are
additional advantages associated with this paradigm. However, the
massive volume of digital content, especially multimedia, has to
be managed in a more effective way. Heterogeneous cloud customers,
accessing different customized services over the more advanced
access networks and devices available today, can at times make it
difficult for a solitary cloud to fulfill all needs. In such
cases, different clouds have to interoperate and federate their
resources. This scenario is called inter-cloud computing or cloud
federation. Since the concept of inter-cloud computing is still
very new, it lacks a standard architecture. This document presents
architectural fundamentals and key concerns associated with
inter-cloud computing.
Status of This Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on May 13, 2016.
Copyright Notice
Copyright (c) 2015 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 1
2. Related Work . . . . . . . . . . . . . . . . . . . . . . . . 3
3. Media Cloud . . . . . . . . . . . . . . . . . . . . . . . . 4
3.1. Media Cloud Storage . . . . . . . . . . . . . . . . . . . 5
3.2. Media Cloud Design Considerations . . . . . . . . . . . 5
4. Media Cloud Architecture . . . . . . . . . . . . . . . . . . 7
5. Media Cloud Inter-Cloud Computing Architecture. . . . . . . . 8
5.1. Inter-Cloud Entities. . . . . . . . . . . . . . . . . . . 9
5.2. Inter-Cloud Topology Elements. . . . . . . . . . . . . . 12
5.3. Inter-Cloud Scenarios . . . . . . . . . . . . . . . . . . 13
5.4. Media Cloud Inter-Cloud Computing Protocols . . . . . . . 14
6. Major Issues in Interoperability. . . . . . . . . . . . . . . 16
7. References . . . . . . . . . . . . . . . . . . . . . . . . . 19
Appendix A. Acknowledgements . . . . . . . . . . . . . . . . . . 22
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 22
1. Introduction
Digital media has convincingly surpassed traditional media, a
trend that is making large and possibly long-term changes to the
content being exchanged over the Internet [1]. Global Internet
video traffic surpassed global peer-to-peer (P2P) traffic in 2010
[2]. Excluding video exchanged through P2P file sharing, Internet
video accounted for 40 percent of consumer Internet traffic at
that time and reached 62 percent in 2015; counting all forms of
video, the share was forecast to be approximately 90 percent by
2015 [2].
To meet the great opportunities and challenges that come with the
media revolution, sophisticated technology and better facilities
with more powerful capabilities have become urgent demands. Cloud
computing has recently emerged and advanced rapidly as a promising
as well as inevitable technology. Generally, it can be seen as the
integration of Software as a Service (SaaS), Platform as a Service
(PaaS), Infrastructure as a Service (IaaS), and Network as a
Service (NaaS) [3], [4], [5]. A cloud computing platform provides
highly scalable, manageable, and schedulable virtual servers,
storage, computing power, virtual networks, and network bandwidth,
according to the user's requirements and affordability. It can
therefore provide a solution package for the media revolution, if
wisely designed for the media cloud and deployed and integrated
with advanced technologies for media processing, transmission, and
storage, keeping in view industrial and commercial trends and
models as well.
An average user generates content very quickly, until local
storage space runs out. Much of that content is used frequently
by the user and therefore needs to be easy to access. Media
management is among the key aspects of cloud computing, since the
cloud makes it possible to store, manage, and share large amounts
of digital media.
Cloud computing is a handy solution for processing content in
distributed environments. It provides ubiquitous access to
content, without the hassle of keeping large storage and computing
devices. Sharing large amounts of media content is another feature
that cloud computing provides. Beyond what social media offers,
traditional cloud computing adds collaboration and editing of
content. Moreover, when content is to be shared, downloading
individual files one by one is not convenient. Cloud computing
addresses this issue, since all the content can be accessed at
once by the other parties with whom it is being shared.
To meet user requirements, there are situations in which two or
more clouds have to interoperate through an intermediary cloud or
gateway. This scenario is known as inter-cloud computing or cloud
federation. Inter-cloud computing involves transcoding and
interoperability issues, which also affect the overall process of
multimedia content delivery.

In this draft, we examine in depth not only the media cloud but
also inter-cloud computing. We present a detailed architecture,
communication patterns, and protocols for different scenarios, and
we highlight some key issues in this regard.
2. Related Work
Media cloud and inter-cloud computing are still in their infancy,
so there is no standard architecture available for data
communication, media storage, compression, and media delivery.
Existing studies mainly focus on presenting architectural
blueprints for this purpose.
[6] presents an industrial overview of the media cloud. The
authors state that the media cloud is the solution to satisfy the
dramatically increasing trends in media content and media
consumption. For media content delivery, QoS is going to be the
main concern; we present details in this regard in [7].
To reduce the delay and jitter of media streaming, better QoS is
required, for which [8] proposes a media-edge cloud (MEC)
architecture. It proposes the use of P2P for inter- and intra-MEC
communication, for the purpose of scalability. The authors
describe an MEC as a cloudlet located at the edge of the cloud.
An MEC is composed of storage space, central processing unit
(CPU), and graphics processing unit (GPU) clusters. The MEC
stores, processes, and transmits media content at the edge, thus
incurring a shorter delay. In turn, the media cloud is composed of
MECs, which can be managed in a centralized or peer-to-peer (P2P)
manner. This architecture presents three major features: (i) MECs
at the edge of the media cloud to reduce delay; (ii) P2P
technology used in both intra- and inter-MEC domains, for
scalability; (iii) a proxy, located at the edge of an MEC or in
the gateway, for caching multimedia content to compensate for
mobile devices, since they have limited computational power and
battery life. A proxy can be adopted to integrate the media cloud
seamlessly and hence address heterogeneity problems.
In [9], the authors present an approach that uses a pair of
proxies, a client proxy at the user's side and a server proxy at
the cloud side, to integrate the cloud seamlessly into wireless
applications. The authors in [10], [11] also present a proxy as a
bridge for sharing the contents of a home cloud with other home
clouds and with outside, public media clouds. This proxy can
perform the additional task of indexing the multimedia content,
allowing the public cloud to build a search database and content
classification. The media cloud can then provide a discovery
service that lets users search for content of interest. [12] also
presents a proxy scheme for transcoding and delivery of media. On
the other hand, [13] and [14] propose the use of P2P for
delivering media streams outside the media cloud. In both cases,
the result is a hybrid architecture that includes P2P as well as
the media cloud. Transcoding and compression of media content
require a lot of resources.
[15] and [16] present architectures in which the MapReduce model
is applied for this purpose, in private and public clouds. [17]
proposes the concepts of a stream-oriented cloud and a
stream-oriented object; the authors introduce the stream-oriented
cloud with a high-level description. In [18], the authors discuss
mobile multimedia broadcasting over cloud computing. [19]
discusses personal rich media information management, searching,
and sharing.
3. Media Cloud
Meeting future consumption requirements is already a challenge.
The extraordinary growth in mobile phone usage, especially
smartphones together with 3G, LTE, LTE Advanced, and Multimedia
Broadcast and Multicast Services (MBMS) networks, along with the
availability of more convenient access networks such as Wi-Fi,
WiMAX, fiber to the home, and broadband networks, has hugely
increased both the production and the communication of multimedia
content. It has been estimated that by 2015, up to 500 billion
hours of content would be available for digital distribution. With
social media, IPTV, Video on Demand, Voice over IP, Time Shifted
Television (TSTV), Pause Live Television (PLTV), Remote Storage
Digital Video Recorder (RSDVR), Network Personal Video Recorder
(nPVR), and other such services more easily available, users now
demand anytime and on-the-go access to content. It was likewise
estimated that by 2015 there would be 1 billion mobile video
customers and 15 billion devices able to receive content over the
Internet [6].
3.1. Media Cloud Storage
Media cloud architecture is not standardized, and the available
studies are not sufficient. Some of the existing work uses simple
storage schemes for multimedia content, while most works rely on
the Hadoop Distributed File System (HDFS) [20]. The issue with
HDFS is that it is designed mainly for batch processing rather
than for interactive user activities. Also, HDFS files are
write-once and can have only one writer at a time, which makes it
very restrictive for applications that require real-time
processing before the actual delivery of data. In addition,
dynamic load balancing cannot be done with HDFS, since it does not
support data rebalancing schemes.

Energy consumption is an important issue in media cloud content
communication, since it requires a lot of processing, quality
display, and playback. [21] discusses this issue. The forecast,
based on the power consumption trend, says that by the year 2021
the world population would require 1175 GW of power to support
media consumption. Storage also plays an important role in this
regard, because an efficient storage technique consumes less
energy.
3.2. Media Cloud Design Considerations
Architecting the media cloud is among the main concerns right
now. As discussed, content-receiving devices are heterogeneous, so
a media cloud must be able to deliver content according to each
device's capabilities and via multiple pathways. A cloud must be
able to deliver content via multiple paths, supporting multiple
tenants and allowing multiple service providers to share the
infrastructure and software components. When there are multiple
tenants, their needs keep changing, and the media cloud should be
able to meet them. The cloud architecture should be able to add or
remove virtual machines and servers quickly and cost-effectively,
and the same holds for storage capacity. Low-latency transcoding,
caching, streaming, and delivery of content are a must for the
media cloud. Disk I/O subsystem speed is going to be crucial in
this regard, and using advanced technologies such as solid state
drives (SSD), serial attached SCSI (SAS) interfaces, and
next-generation processors would become a necessity. Power
utilization is another vital consideration. Due to the huge amount
of processing and communication of large amounts of data, which is
then received by devices that include small, power-constrained
nodes, it is going to be very important to have an effective
power-saving mechanism.
The media cloud involves virtual machines (VMs), which are
created at run time to satisfy user needs; at times many VMs will
not be in use, whether temporarily or permanently, so they should
be monitored and suspended or shut down when required. This will
save a lot of datacenter power. On the receiver's side, the data
received should be only what is required and should be in the most
appropriate format, one that suits the receiving node's
requirements as well as its processing and power utilization
attributes. For a thin client, the device that performs these
activities on its behalf, such as a broker or an access network
gateway, must ensure presentation conformity for the client node.
Security and protection are going to be another issue [22] [23]
[25]. Data, receiver, and VM security will be difficult to manage,
though very important. In a virtual networking and multitenant
environment, isolation of VMs and of clients becomes very
important. Similarly, data must not only be stored protectively
and securely but also transmitted through secure sessions; both
storage security and communication security are required.
Current state-of-the-art devices can produce, store, and deliver
high-quality media content, which can then be shared on social
media and other media forums. Since different types of digital
media content can be produced and disseminated across different
networks, a standard mechanism is required to allow
interoperability between clouds and transcoding of media content
[11]. The purpose of the media cloud is to address this problem
and to allow users to constitute a cloud and manage media content
transparently, even if it is located outside the user's domain.
Different device types, resolutions, and qualities require
generating different versions of the same content. This makes
transcoding one of the most critical tasks to be carried out when
media traverses networks, and it requires a lot of media
processing that is computationally expensive.
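
To give a feel for the kind of processing involved, a minimal,
illustrative sketch (Python, assuming the widely used ffmpeg
command-line tool is installed; file names are placeholders) that
produces one device-specific rendition is shown below.

   # Sketch: produce one device-specific rendition of a source
   # video with ffmpeg (assumed installed; file names are
   # placeholders).
   import subprocess

   def transcode(src, dst, width, height, bitrate="1M"):
       """Re-encode 'src' to H.264 at the given resolution/bitrate."""
       cmd = [
           "ffmpeg", "-y",                 # overwrite output if present
           "-i", src,                      # input file
           "-vf", "scale=%d:%d" % (width, height),  # resize for device
           "-c:v", "libx264",              # H.264/AVC video codec
           "-b:v", bitrate,                # target video bitrate
           "-c:a", "aac",                  # re-encode audio to AAC
           dst,
       ]
       subprocess.run(cmd, check=True)

   # One rendition per device class; a media cloud fans such jobs
   # out to many workers, which is what makes transcoding expensive.
   transcode("master.mp4", "mobile_360p.mp4", 640, 360)
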
The media cloud helps fulfill four major goals: ubiquitous
access, content classification, sharing of large amounts of media,
and a content discovery service. Since media content is produced
in large volumes and very rapidly, it requires efficient access in
addition to being ubiquitously accessible. The media cloud
provides indexing and proper classification of content, which
makes access easier and searching efficient. The media cloud also
provides a content discovery service, with which content stored on
other clouds can be accessed after searching and negotiating
licensing terms and conditions. This makes a huge amount of
multimedia content accessible and creates a cloud of clouds (CoC)
whose members can interoperate with each other.
4. Media Cloud Architecture
Architecting the media cloud and standardizing it are becoming
very important. Media content is increasing rapidly, and with
cloud computing, online resource utilization is becoming
ubiquitous; users no longer tend to keep data with them. Beyond
what standard cloud computing has to do, the media cloud has to
handle different types of multimedia content as well. Handling
multimedia means not only transcoding different media content into
interoperable forms but also communicating multimedia according to
the quality and type of content the user wants. Storage of
multimedia content plays a vital role in this regard, and storage
technology has to be standardized to ensure efficient
coding/decoding and efficient use of storage space.
In a study we conducted on media cloud storage, it was observed
that different cloud storage services use different storage
schemes, which affect the size of data, its presentation, and its
quality. For bulk data, this heterogeneity of storage technologies
matters a lot, since its impact grows when cloud federation takes
place.
Media cloud tasks are divided into layers to make clearer what
the media cloud has to do and to what extent. Figure 1 presents an
overall layered architecture of the media cloud. In Figure 1, at
the virtualization layer, the cloud has to deal with computing
virtualization, memory virtualization, and network virtualization.
The storage layer deals with storage space virtualization and with
the storage technology to be used for media content, such as
Network Attached Storage (NAS), Direct Attached Storage (DAS),
Fibre Channel (FC), Fibre Channel over IP (FCIP), Internet Fibre
Channel Protocol (iFCP), Content Addressed Storage (CAS) or Fixed
Content Storage (FCS), and Internet Small Computer Systems
Interface (iSCSI). It also has to deal with data security,
privacy, and integrity. Data replication, de-duplication, and
other added features are also part of this layer.
Data replication serves the overall protection of stored content,
making sure that if one copy of the data is lost or corrupted, a
replica exists. Data de-duplication, on the other hand, is for the
user; it prevents unnecessary copies of the same content from
being made, and its purpose is to increase storage efficiency.
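
As a rough illustration (not part of any standard), the Python
sketch below shows content-addressed de-duplication: an object is
physically stored only once, no matter how many users upload the
same bytes.

   # Minimal content-addressed de-duplication sketch (illustrative).
   import hashlib

   class DedupStore:
       def __init__(self):
           self.blobs = {}   # digest -> content bytes
           self.names = {}   # user-visible name -> digest

       def put(self, name, data):
           digest = hashlib.sha256(data).hexdigest()
           if digest not in self.blobs:       # store the bytes once
               self.blobs[digest] = data
           self.names[name] = digest          # duplicate is one entry

   store = DedupStore()
   store.put("alice/holiday.mp4", b"...video bytes...")
   store.put("bob/holiday_copy.mp4", b"...video bytes...")
   assert len(store.blobs) == 1               # one physical copy kept
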
Next is the access layer, which deals with network access,
whether over the access network or the wide area network. It also
has to ensure secure communication of the data. The middleware
layer deals with encoding/decoding tasks and interoperability. As
discussed, heterogeneous clients with heterogeneous requirements
access heterogeneous types and formats of data, so transcoding and
interoperability are perhaps the most important part of media
delivery; that is why this layer is so crucial. Section 5
discusses it in more detail.
The application layer provides the user interface (UI). The UI
can take two forms: a web interface or a client application
running on the user's machine. The media cloud provider, or cloud
service provider, has a business interest in all the services it
provides to its customers, and the business layer deals with that
part of the media cloud architecture. Services will come in
different types for different customers and accessing devices.
Quality and quantity of data are also considered when offering
services, and different kinds of packages can be made available to
the user.
5. Media Cloud Inter-Cloud Computing Architecture
Communication of two or more clouds with each other is known as
inter-cloud computing. When many clouds exist with multimedia
content, clouds should be able to communicate with each other,
creating an inter-cloud computing scenario. This is also important
for meeting increasing demands, as users can make diverse requests
that may not be served by one single cloud. To meet such a
requirement, one cloud has to request another cloud or multiple
clouds. A cloud should also be able to discover services available
elsewhere. Inter-cloud computing will create a Cloud of Clouds
(CoC), in which a cloud can deliver data that is not stored
directly in its own datacenters. For this, cloud interoperability
must be standardized, and a standardized form of service level
agreement (SLA) must be part of it. An inter-cloud protocol
supporting 1-to-1, 1-to-many, and many-to-many cloud-to-cloud
communication and messaging must exist. Some of the basics are
presented in [24]. To start with, the entities involved are
defined first.
5.1. Inter-Cloud Entities
Four entities are involved in Inter-cloud communication, as given below.
5.1.1. Cloud Service Provider
A Cloud Service Provider provides network connectivity, cloud
services, and network management services to the Cloud Service
Customer, Cloud Service Partner, and other Cloud Service
Providers. The provider may operate from within the data center,
outside of it, or both. The Cloud Service Provider has the roles
of cloud service administrator, cloud service manager, business
manager, and security and risk manager. They are further discussed
below.
The cloud service administrator has the responsibility of
performing all operational processes and procedures of the cloud
service provider, making sure that the services and associated
infrastructure meet the operational targets.

The cloud service manager ensures that services are available to
customers for use. It also ensures that services function
correctly and adhere to the service level agreement, and it makes
sure that the provider's business support system and operational
support system work smoothly.
The business manager is responsible for business-related matters
of the services being offered. Creating and then keeping track of
business plans, devising service offering strategies, and
maintaining relationships with customers are also among the jobs
the business manager performs.

The security and risk manager makes sure that the provider
appropriately manages the risks associated with the deployment,
delivery, and use of the services being offered. This includes
ensuring that security policies adhere to the service level
agreement.
The sub-roles of the cloud service provider include inter-cloud
provider, deployment manager, and customer support and care
representative.

The inter-cloud provider relies on more than one cloud service
provider to provide services to customers. It allows customers to
access data residing at external cloud service providers by
aggregating, federating, and intermediating the services of
multiple cloud service providers and by adding a layer of
technical functionality that provides a consistent interface and
addresses interoperability issues. The inter-cloud provider role
can be combined with business services or can be independent of
them.
The deployment manager performs the deployment of services into
production. It defines the operational environment for the
services, the initial steps, and the requirements for deploying
and properly operating the services. It also gathers metrics and
ensures that services meet their Service Level Agreements (SLAs).
The customer support and care representative is the main
interface between customers and the provider. Its purpose is to
address customers' queries and issues. The customer support and
care representative monitors customers' requests and performs the
required initial problem analysis.
5.1.2. Cloud Service Customer
The Cloud Service Customer is the entity that uses cloud services
and has a business relationship with the Cloud Service Provider.
The roles of the Cloud Service Customer are: cloud service user,
customer cloud service administrator, customer business manager,
and customer cloud service integrator.
The cloud service user simply uses cloud service(s) according to
its needs. The customer cloud service administrator ensures that
the use of cloud services goes smoothly. It oversees the
administrative tasks and operational processes related to the use
of services and to communication between the customer and the
provider.
The customer business manager has the role of meeting the
customer's business goals by using cloud services in a
cost-effective way. It takes into account the financial and legal
aspects of the use of services, including accountability,
approval, and ownership. It creates a business plan and then keeps
track of it; it then selects service(s) according to the plan and
purchases them. It also requests audit reports from the Auditor,
an independent third party.
The customer cloud service integrator integrates the cloud
services with the customer's internal, non-cloud-based services.
For smooth operations and efficient working, the integrator has a
vital role to play; service interoperability and compatibility are
the main concerns in this task.
5.1.3. Cloud Service Partner
A cloud service partner is a third party that provides auxiliary
roles beyond the scope of the cloud service provider and the cloud
service customer. The cloud service partner has the roles of Cloud
Developer, Auditor, and Cloud Broker.
In a broad sense, the Cloud Developer develops services for other
entities, such as the Cloud Service Customer and the Cloud Service
Provider. In this role, the Cloud Developer performs the tasks of
designing, developing, testing, and maintaining the cloud service.
Among its sub-roles, the Cloud Developer acts as Service
Integrator and Service Component Developer. As Service Integrator,
the Cloud Developer deals with composing a service from other
services, while as Service Component Developer it deals with the
design, development, testing, and maintenance of the individual
components of a service. The Cloud Developer ensures that
development standards are met, whether for a specific user or for
general users, according to the needs of the project.
Since inter-cloud computing is going to be standardized, it
should be noted that the services a Cloud Developer develops must
meet the standard. Heterogeneous clients (devices) are going to
use the services while, on the other hand, many diverse
development environments are available to the developer, so
development must be tightly coupled with a specific development
standard.

Since the service provider and the service customer are separate
entities, service quality, usage behavior, and conformance to the
service level agreement all have to be audited by a third party
acting in the role of Auditor. The Cloud Auditor performs the
audit of the provision and use of cloud services. The audit covers
operations, performance, and security, and it examines whether the
specific criteria of the service level agreement are met. The
Auditor can be a software system or an organization.
The Cloud Broker offers business and relationship services to
Cloud Service Customers so that they can evaluate and select Cloud
Service Providers according to their needs. Negotiating between
provider and customer is among the main roles of the Cloud Broker.
With inter-cloud, or Cloud of Clouds, communication, it will be
very important for the Cloud Broker to handle inter-cloud
interoperability itself, instead of having another node for this
purpose. The basic activities a Cloud Broker performs are:
acquiring and assessing customers, assessing the marketplace,
negotiating and setting up service level agreements, getting
information from cloud service providers on services and
resources, receiving and responding to requests from cloud service
customers, and evaluating the service requests, selecting
appropriate service(s) for the customer, and completing the
requests. Assessment of the marketplace can also be done prior to
customer acquisition, when the Cloud Broker provisions customers
to select a service or provider from a given catalogue.
In any case, the Cloud Broker plays its role only during the
contracting phase between customer and provider, not during the
consumption of services. When negotiating between customer and
provider and assessing both of them, it is important to assess
interoperability issues, because they directly impact the
transcoding process and eventually affect the overall processing
delay. SLA negotiation has to be done in advance, because the
services are provided on the basis of agreed SLAs. Explicit
negotiation makes it easy for the customer to decide whether or
not to use a particular service.
5.1.4. Cloud Service Carrier
The cloud carrier is an intermediary that provides connectivity
and transport of cloud services from cloud providers to cloud
customers. In the role of cloud network provider, it provides
network connectivity and other network-related services and
manages those services. It may operate within the data center,
outside of it, or both.
5.2. Inter-Cloud Topology Elements
Inter-cloud computing involves three basic topology elements,
which are explained in this part of the document.
5.2.1. Inter-Cloud Exchanges
Inter-cloud Exchanges (ICXs) are the entities capable of
introducing the attributes of a cloud environment into inter-cloud
computing. An ICX is a complex and variable system of service
providers and users of services. It involves applications,
platforms, and services that need to be accessible through uniform
interfaces. Brokering tools play an important part in actively
balancing demand and offerings, to guarantee the required SLA at
higher levels of service. Cloud Service Providers exchange
resources among each other, effectively pooling together part of
their infrastructure. ICXs aggregate the supply of and demand for
computing resources, creating an opportunity for brokering
services. In ICXs, proxy mechanisms are required to handle active
sessions when a migration is to be performed. This is done by the
Redirecting Proxy in the Inter-Cloud Exchange, which maps public
IP addresses to private IP addresses; this mapping is important
for providing transparent addressing.
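
A minimal sketch of the kind of table a Redirecting Proxy might
keep (Python; class name and addresses are illustrative only):

   # Illustrative Redirecting Proxy table: the public IP stays
   # stable for the customer while the private IP changes on
   # migration.
   class RedirectingProxy:
       def __init__(self):
           self.table = {}                    # public IP -> private IP

       def register(self, public_ip, private_ip):
           self.table[public_ip] = private_ip

       def migrate(self, public_ip, new_private_ip):
           # The VM moved to another datacenter; clients keep
           # using the same public address.
           self.table[public_ip] = new_private_ip

       def resolve(self, public_ip):
           return self.table[public_ip]

   proxy = RedirectingProxy()
   proxy.register("203.0.113.10", "10.0.1.5")
   proxy.migrate("203.0.113.10", "172.16.4.9")   # after migration
   print(proxy.resolve("203.0.113.10"))          # -> 172.16.4.9
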
5.2.2. Inter-Cloud Root
The Inter-cloud Root contains services such as the Naming
Authority, Directory Services, and Trust Authority. It is
physically not a single entity but a DNS-like, globally
replicated, hierarchical system. It may also act as a broker.
5.2.3. Inter-Cloud Gateway
The Inter-cloud Gateway is a router that implements inter-cloud
protocols and enables inter-cloud interoperability. Its purpose is
to provide the mechanisms for supporting the entire profile of
inter-cloud standards and protocols. Once the initial negotiation
is done, the clouds collaborate with each other directly. The
Inter-Cloud Root and Inter-Cloud Exchanges, on the other hand,
mediate and facilitate the initial inter-cloud negotiation process
among clouds.
5.3. Inter-Cloud Scenarios
Communication between the cloud service customer and the cloud
service provider(s) can take place in two ways: (a) with a broker
and (b) without a broker. The cloud broker is an intermediary that
negotiates between the cloud service customer and one or more
cloud providers by providing attractive value-added services to
users on top of the various cloud service providers. A cloud
broker provides a single interface through which multiple clouds
can be managed and can share resources. The cloud broker operates
outside of the clouds and controls and monitors them. The main
purpose of the broker is to assist the customer in finding the
best provider and service for the customer's needs, with respect
to the specified SLA, and to provide the customer with a uniform
interface for managing and observing the deployed services.
A broker earns its profit by fulfilling the requirements of both
parties. The cloud broker uses a variety of methods, such as a
repository for data sharing and integration across data sharing
services [26], to develop a commendable service environment and to
achieve the best possible deal and service level agreement between
the two parties (i.e., the Cloud Service Provider and the Cloud
Service Customer). The broker typically makes a profit either by
taking remuneration from the completed deal or by varying the
broker's spread, or by some combination of both. The spread is the
difference between the price at which a broker buys from the
seller (provider) and the price at which it sells to the buyer
(customer). To handle commercial services, the broker has a cost
management system.
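
As a small illustration of the earnings model described above
(Python; the figures are invented for the example):

   # Illustrative broker profit: spread plus remuneration on the
   # completed deal (all figures are hypothetical).
   def broker_profit(provider_price, customer_price, commission=0.0):
       spread = customer_price - provider_price   # buy low, sell high
       fee = commission * customer_price          # remuneration on deal
       return spread + fee

   # e.g. provider charges 100, broker resells at 108 with a 2% fee
   print(broker_profit(100.0, 108.0, commission=0.02))   # -> 10.16
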
The Cloud Broker includes application programming interfaces
(APIs) and a standard abstract API, which is used to manage cloud
resources from different cloud providers. The Cloud Broker holds
another abstract API for negotiating cloud service facilities with
the customer. Access to services can also be direct, between the
cloud service customer and the cloud service provider(s); in that
case, interoperability and transcoding are handled by the customer
itself.
5.4. Media Cloud Inter-Cloud Computing Protocols
The generic inter-cloud computing architecture has a Cloud
Service Customer, one or more Cloud Service Provider(s), and an
Inter-cloud Provider. For different types of communication,
different inter-cloud protocols are used [27] [28]. They are
discussed here according to their type and extent of use.
5.4.1. Basic communication
"Extensible Messaging and Presence Protocol (XMPP)
[29][30] for basic communication, transport, and using
Semantic Web [31] techniques such as
Resource Description Framework (RDF) [32] to specify resources."[27].
XMPP is an eXtensible Markup Language (XML) based communications
protocol for message-oriented middleware. XMPP targets near
real-time instant messaging (IM), presence information, and
contact list maintenance. As it is 'extensible', it has also been
used for VoIP signaling, gaming, video, file transfer,
publish-subscribe systems, Internet of Things applications such as
the smart grid, and social networking services. The Resource
Description Framework (RDF) is a metadata data model used as a
general method for the conceptual description or modeling of
information implemented in web resources, using various syntax
notations and data serialization formats.
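
As a rough, non-normative sketch of the idea (Python, standard
library only; the 'ex:' vocabulary and the addresses are
hypothetical), the snippet below builds an XMPP message stanza
carrying an RDF/XML payload that describes an offered resource:

   # Sketch: an XMPP <message> stanza carrying an RDF payload that
   # describes a storage resource offered by another cloud.  Only
   # the 'jabber:client' and RDF namespaces are standard; the
   # 'ex:' vocabulary is hypothetical.
   import xml.etree.ElementTree as ET

   RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
   EX = "http://example.org/intercloud#"   # hypothetical vocabulary

   msg = ET.Element("message", {
       "xmlns": "jabber:client",
       "from": "cloud-a.example.org",
       "to": "exchange.example.net",
       "type": "normal",
   })
   rdf = ET.SubElement(msg, "{%s}RDF" % RDF)
   desc = ET.SubElement(rdf, "{%s}Description" % RDF,
                        {"{%s}about" % RDF: "urn:cloud-a:storage-1"})
   ET.SubElement(desc, "{%s}availableStorageGB" % EX).text = "2048"

   print(ET.tostring(msg, encoding="unicode"))
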
5.4.2. Services framework
On top of base XMPP, one of its extensions, XEP-0244, provides a
services framework for machine-to-machine (M2M) communications,
named IO Data. XEP-0244 is designed for sending messages from one
computer to another, providing a transport for remote service
invocation, and it aims to overcome the problems with SOAP and
REST.
5.4.3. Authentication and encryption
Transport Layer Security (TLS) is used for communication security
over the Internet, and the Simple Authentication and Security
Layer (SASL) is used for authentication. Streams are first secured
with TLS before the authentication is completed through SASL. SASL
authenticates a stream by means of an XMPP-specific profile of the
protocol; SASL adds authentication support to connection-based
protocols in a generalized way. The Security Assertion Markup
Language (SAML) provides authentication services for the cloud
federation scenario, but it is still not fully supported in
XMPP-specific profiles.
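
A minimal illustration of the "secure the stream first, then
authenticate" ordering, using only the Python standard library
(host name and port are placeholders; the SASL exchange itself is
not shown):

   # Sketch: open a TCP connection and secure it with TLS before
   # any SASL authentication exchange takes place.
   import socket
   import ssl

   context = ssl.create_default_context()        # verifies server cert
   raw = socket.create_connection(("xmpp.example.org", 5223))
   tls = context.wrap_socket(raw, server_hostname="xmpp.example.org")

   # A SASL exchange (e.g. SCRAM) would now run over 'tls'; only
   # after it succeeds is the stream considered authenticated.
   print(tls.version())                          # e.g. 'TLSv1.3'
   tls.close()
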
Aazam et al., Expires May 13, 2016 [Page 15]
Internet-Draft Inter-Cloud Computing Architecture November 2015
5.4.4. Identity and access management
SAML is particularly used for authentication and authorization
between an identity provider and a service provider. One
significant use of SAML in this regard is the web browser single
sign-on (SSO) mechanism. SSO provides access control across
multiple independent but related software systems. Its counterpart
is single sign-off, which revokes access to multiple services with
a single action, saving time and effort. The eXtensible Access
Control Markup Language (XACML) is also used for access control.
It evaluates access requests according to rules already defined in
policies. XACML is especially useful in inter-cloud scenarios,
where it provides common terminology and interoperability between
access control implementations from multiple service providers or
vendors. XACML implements Attribute Based Access Control (ABAC),
in which rights are granted to users on the basis of attributes
(user attributes, resource attributes, etc.) associated with the
request. For larger enterprises, XACML can also be implemented
with Role Based Access Control (RBAC), which restricts system
access to authorized users.
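
A toy illustration of the attribute-based decision that XACML
formalizes (Python; the policy and request attributes are invented
for the example):

   # Toy ABAC check in the spirit of an XACML policy: permit only
   # if every attribute rule in the policy matches the request.
   def abac_decide(policy, request):
       for attr, allowed in policy.items():
           if request.get(attr) not in allowed:
               return "Deny"
       return "Permit"

   policy = {                      # invented example policy
       "role":   {"editor", "admin"},
       "tenant": {"cloud-a"},
       "action": {"read", "transcode"},
   }
   request = {"role": "editor", "tenant": "cloud-a", "action": "read"}
   print(abac_decide(policy, request))    # -> Permit
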
5.4.5. Exchange services directory
The Resource Description Framework (RDF) is used for resource
allocation, such as storage and processing, in the inter-cloud
environment, while the SPARQL Protocol and RDF Query Language
(SPARQL) is a query/matching service for RDF. SPARQL can retrieve
and manipulate data in RDF format. When a request is made, a
SPARQL query is invoked over an XMPP connection to the Inter-cloud
Root in order to apply the constraints and preferences to the
computing semantics catalog, where it is determined whether the
service descriptions on another cloud match the requirements of
the first cloud.
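
A small, illustrative sketch of such a catalog lookup (Python with
the rdflib package assumed to be installed; the 'ex:' vocabulary
and the data are hypothetical):

   # Sketch: match a requesting cloud's constraint against service
   # descriptions in an RDF catalog using SPARQL.
   from rdflib import Graph

   catalog = """
   @prefix ex: <http://example.org/intercloud#> .
   ex:cloudB ex:offers ex:storage ; ex:availableGB 4096 .
   ex:cloudC ex:offers ex:storage ; ex:availableGB 512 .
   """

   g = Graph()
   g.parse(data=catalog, format="turtle")

   query = """
   PREFIX ex: <http://example.org/intercloud#>
   SELECT ?cloud WHERE {
     ?cloud ex:offers ex:storage ;
            ex:availableGB ?gb .
     FILTER (?gb >= 1024)
   }
   """
   for row in g.query(query):
       print(row.cloud)    # clouds satisfying the storage constraint
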
5.4.6. Media related communication
H.264/MPEG-4 AVC (Advanced Video Coding), a standard of the
Moving Picture Experts Group (MPEG), is one of the most commonly
used coding schemes for high-quality video recording, compression,
and distribution. Because of its block motion compensation (BMC)
feature, it is also the encoding scheme most widely used by
Internet streaming video services, such as YouTube, Vimeo, and
iTunes, and in web-based software such as Adobe Flash Player and
Microsoft Silverlight. Terrestrial and satellite HDTV broadcasts
also use H.264/MPEG-4 AVC. With block motion compensation, frames
are divided into blocks of pixels, and each block is predicted
from a block of a previously coded frame, making transmission more
efficient. H.264 supports both lossy and lossless compression, so
it is well suited to Internet streaming services, in which the
streaming quality can be decided dynamically based on the
condition of the network or the user's link. Adobe Flash, a
platform for rich Internet applications (RIA), also uses H.264.
Flash is widely used for embedding multimedia content into web
pages; much interactive multimedia content, video, and advertising
is made in Adobe Flash.
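
To make the block motion compensation idea concrete, the toy
sketch below (Python; tiny one-dimensional "frames", not an actual
H.264 encoder) finds, for one block, the best-matching block in
the previous frame and keeps only a motion vector and a residual:

   # Toy block motion compensation: a block of the current frame
   # is predicted from a shifted block of the previous frame, so
   # only the shift (motion vector) and the residual remain.
   def best_match(prev, block, start, search=4):
       """Return the offset with the smallest sum of absolute
       differences (SAD) within the search window."""
       best_mv, best_sad = None, float("inf")
       for mv in range(-search, search + 1):
           s = start + mv
           if s < 0 or s + len(block) > len(prev):
               continue
           ref = prev[s:s + len(block)]
           sad = sum(abs(a - b) for a, b in zip(block, ref))
           if sad < best_sad:
               best_mv, best_sad = mv, sad
       return best_mv

   prev_frame = [10, 10, 50, 60, 70, 80, 10, 10]
   curr_frame = [10, 50, 60, 70, 80, 10, 10, 10]  # content shifted

   block = curr_frame[1:5]                # one 4-sample block
   mv = best_match(prev_frame, block, start=1)
   ref = prev_frame[1 + mv:1 + mv + 4]
   residual = [c - r for c, r in zip(block, ref)]
   print(mv, residual)                    # -> 1 [0, 0, 0, 0]
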
For the delivery of streamed media, the Real Time Streaming
Protocol (RTSP) is used. RTSP is responsible for establishing and
maintaining sessions between two endpoints, while the streaming of
content is performed by the Real-time Transport Protocol (RTP)
together with the RTP Control Protocol (RTCP), which provides
statistics and control information for RTP flows.
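
For illustration only (Python standard library; the server address
and stream path are placeholders), an RTSP session starts with
plain-text requests such as the DESCRIBE below; SETUP and PLAY
would follow, after which RTP and RTCP carry the actual media:

   # Sketch: send an RTSP DESCRIBE request to a streaming server
   # (address and stream path are placeholders).
   import socket

   request = (
       "DESCRIBE rtsp://media.example.com/stream1 RTSP/1.0\r\n"
       "CSeq: 1\r\n"
       "Accept: application/sdp\r\n"
       "\r\n"
   )

   with socket.create_connection(("media.example.com", 554)) as sock:
       sock.sendall(request.encode("ascii"))
       print(sock.recv(4096).decode("ascii", errors="replace"))
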
6. Major Issues in Interoperability
As an emerging technology, the media cloud competes with existing
media technologies and systems, and it has to deal with many
challenges in order to evolve smoothly and effectively.
Interoperability is the key challenge faced by the media cloud;
the various heterogeneities discussed below are what make it the
key issue.
6.1. Heterogeneous media contents and media transcoding
Very diverse types of services are available in the media cloud
arena, making transcoding and content presentation an area of
concern. Services such as Video on Demand (VoD), IPTV, Voice over
IP (VoIP), Time Shifted Television (TSTV), Pause Live Television
(PLTV), Remote Storage Digital Video Recorder (RSDVR), Network
Personal Video Recorder (nPVR), and the growing body of social
media content require a lot of effort in this regard.
6.2. Heterogeneous media storage technologies
Storage is an important part of the media cloud. Multimedia
content requires a lot of space, and it becomes more difficult to
search multimedia on the basis of the actual content a file
contains. Efficiency in storage and searching is an important
property a media cloud should have. The different storage
technologies available are Network Attached Storage (NAS), Direct
Attached Storage (DAS), Fibre Channel (FC), Fibre Channel over IP
(FCIP), Internet Fibre Channel Protocol (iFCP), Content Addressed
Storage (CAS) or Fixed Content Storage (FCS), and the Internet
Small Computer Systems Interface (iSCSI). Communication between
clouds becomes inefficient when service providers use different
storage technologies.
6.3. Heterogeneous access networks
Every access network, such as broadband, Wi-Fi, WiBro, GPRS, 2G,
3G, 4G, LTE, LTE Advanced, and other upcoming standards, has
different attributes, bandwidth, jitter tolerance, and
performance.
6.4. Heterogeneous client devices
When content is available in the cloud, or the media cloud to be
more specific, any device that has access to the Internet can
request a service. Different types of client nodes have different
capabilities and constraints. Besides the types of content a
device can support, the size of its display, buffer memory, power
consumption, processing speed, and other such attributes have to
be considered before fulfilling the request.
6.5. Heterogeneous applications
Requesting applications are also of different types and require
different treatment. Besides device heterogeneity, the application
type also matters; for example, a web browser requesting a service
will have different requirements than a dedicated cloud client
application requesting the same service.
6.6. Heterogeneous QoS requirements and QoS provisioning mechanisms
Depending upon the access network, the condition of the core
network, the requesting device, the user's needs, and the type of
service, heterogeneous QoS requirements can arise. Dynamic QoS
provisioning schemes need to be implemented in this regard. We
have worked on this in detail in our study presented in [7].
6.7. Data/media sanitization
When a client requests storage space from the cloud, it does not
mean that any type of data can then be stored; data has to be
filtered. Some cloud storage service providers do not allow
certain types of data, such as pornographic material, to be
stored. One such service is Microsoft SkyDrive.
6.8. Security and trust model
Outsourced data poses new security risks in terms of the
correctness and privacy of the data in the cloud. In the media
cloud, not only data services but also storage services will be
requested by the user. Storing content that may carry sensitive or
private information poses risks to the customers. Some of the
details are presented in our work in [5].
6.9. Heterogeneous Internet Protocols
The IPv4 address space has been exhausted, and migration towards
IPv6 has formally been expedited. The two versions of IP are not
directly interoperable. Since this migration is going to take some
time, perhaps a decade [33], having the two versions operate with
each other creates overhead. Tunneling is the viable solution at
hand, but it has its own overhead. We have worked extensively on
this and presented our findings in [33].
Beyond the existing heterogeneities and the ones yet to emerge,
the media cloud needs to be able to deal with dramatically
increasing video content. By 2015, 1 million minutes of video
content was expected to cross the network every second [2].
Therefore, it is very important to design the architecture of the
media cloud carefully, so that it can be a successful media cloud
platform and can adapt to the continuously increasing amount of
media content and to new applications and services.
7. References
[1] Mingfeng Tan, Xiao Su, "Media Cloud: When Media Revolution
Meets Rise of Cloud Computing", Proceedings of The 6th IEEE
International Symposium on Service Oriented System Engineering.
[2] Cisco-White-Paper, "Cisco Visual Networking Index Forecast
and Methodology, 2010-2015," June 1, 2011.
[3] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph,
R. H. Katz, A. Konwinski, G. Lee, D. A. Patterson, A. Rabkin,
I. Stoica, and M. Zaharia, "Above the Clouds: A Berkeley
View of Cloud Computing," EECS Department,
University of California, Berkeley UCB/EECS-2009-28,
February 10 2009.
[4] W. Lizhe, T. Jie, M. Kunze, A. C. Castellanos, D. Kramer,
and W. Karl, "Scientific Cloud Computing: Early Definition
and Experience," in High Performance Computing and
Communications, 2008. HPCC '08. 10th IEEE International
Conference on, 2008, pp. 825-830.
[5] Mohammad Aazam, Pham Phuoc Hung, Eui-Nam Huh,
"Cloud of Things: Integrating Internet of Things with
Cloud Computing and the Issues Involved", in the Proceedings
of the 11th IEEE International Bhurban Conference on Applied
Sciences and Technologies, Islamabad, Pakistan,
14-18 January, 2014.
[6] "Moving to the Media Cloud", Viewpoint paper, Intel-HP, November 2010.
[7] Mohammad Aazam, Adeel M. Syed, Eui-Nam Huh, "Redefining Flow Label
in IPv6 and MPLS Headers for End to End QoS in Virtual Networking
for Thin Client", in the proceedings of 19th IEEE Asia Pacific
Conference on Communications, Bali, Indonesia, 29-31 August, 2013
[8] Z. Wenwu, L. Chong, W. Jianfeng, and L. Shipeng, "Multimedia
Cloud Computing," Signal Processing Magazine, IEEE, vol. 28,
pp. 59-69, 2011.
[9] S. Ferretti, V. Ghini, F. Panzieri, and E. Turrini,
"Seamless Support of Multimedia Distributed Applications
Through a Cloud," in Cloud Computing (CLOUD), 2010
IEEE 3rd International Conference on, 2010, pp. 548-549.
[10] D. Diaz-Sanchez, F. Almenares, A. Marin, and D. Proserpio,
"Media Cloud: Sharing contents in the large," in Consumer
Electronics (ICCE), 2011 IEEE International Conference on,
2011, pp. 227-228.
[11] Daniel Diaz-Sanchez, Florina Almenarez, Andres Marin,
Davide Proserpio, and Patricia Arias Cabarcos,"Media Cloud:
An Open Cloud Computing Middleware for Content Management",
IEEE Transactions on Consumer Electronics,
Vol. 57, No. 2, May 2011
[12] H. Zixia, M. Chao, L. E. Li, and T. Woo, "CloudStream:
Delivering high-quality streaming videos through a cloud-based
SVC proxy," in INFOCOM, 2011 Proceedings IEEE, 2011, pp. 201-205.
[13] J. Xin and K. Yu-Kwong, "Cloud Assisted P2P Media Streaming
for Bandwidth Constrained Mobile Subscribers," in Parallel and
Distributed Systems (ICPADS), 2010 IEEE 16th International
Conference on, 2010, pp. 800-805.
[14] I. Trajkovska, J. S. Rodriguez, and A. M. Velasco,
"A novel P2P and cloud computing hybrid architecture for
multimedia streaming with QoS cost functions," in the
Proceedings of the International Conference
on Multimedia, Firenze, Italy, 2010.
[15] R. Pereira, M. Azambuja, K. Breitman, and M. Endler,
"An Architecture for Distributed High Performance Video Processing
in the Cloud," in Cloud Computing (CLOUD), 2010 IEEE 3rd
International Conference on, 2010, pp. 482-489.
[16] R. Pereira and K. Breitman, "A Cloud Based Architecture for
Improving Video Compression Time Efficiency: The Split & Merge
Approach," in Data Compression Conference (DCC), 2011, 2011, pp. 471-471.
[17] J. Feng, P. Wen, J. Liu, and H. Li, "Elastic stream cloud (ESC):
A stream-oriented cloud computing platform for Rich Internet Application,"
in High Performance Computing and Simulation (HPCS), 2010 International
Conference on, 2010, pp. 203-208.
[18] L. Li, L. Xiong, Y. Sun, and W. Liu, "Research on Mobile
Multimedia Broadcasting Service Integration Based on Cloud Computing,"
in the proceedings of the International Conference on Multimedia
Technology (ICMT), 2010, pp. 1-4.
[19] J. Shen, S. Yan, and X.-S. Hua, "The e-recall environment for
cloud based mobile rich media data management," presented at the
Proceedings of the 2010 ACM multimedia workshop on Mobile cloud
media computing, Firenze, Italy, 2010.
[20] D. Borthakur. (2010). Hadoop Distributed File System design.
Available: http://hadoop.apache.org/hdfs/docs/r0.21.0/hdfs_design.html
[21] C. Preist and P. Shabajee, "Energy Use in the Media Cloud: Behaviour
Change, or Technofix?," in Cloud Computing Technology and
Science (CloudCom), 2010 IEEE Second International Conference on, 2010,
pp. 581-586.
[22] Wang, Cong, et al. "Toward secure and dependable storage
services in cloud computing", IEEE Transactions on Services
Computing, 5.2, 220-232, 2012.
[23] Wang, Cong, et al. "Privacy-preserving public auditing for
secure cloud storage." IEEE Transactions on Computers,
Vol. 62, No. 2, February 2013.
[24] Fang Liu, Jin Tong, Jian Mao, Robert Bohn, John Messina,
Lee Badger, and Dawn Leaf, "NIST Cloud Computing Reference
Architecture", September 2011.
[25] Yongdong Wu, Vivy Suhendra, Huaqun Guo,
"A Gateway-based Access Control Scheme for Collaborative Clouds",
in the proceedings of 7th International Conference on Internet
Monitoring and Protection, May 27 - June 1, 2012,
Stuttgart, Germany
[26] Yau, Stephen S., and Yin Yin. "A privacy preserving
repository for data integration across data sharing services.",
IEEE Transactions on Services Computing 1.3, 130-140, 2008.
[27] David Bernstein, Deepak Vij, "Intercloud Directory and
Exchange Protocol Detail using XMPP and RDF", IEEE CLOUD,
July 5-10, 2010, Miami, Florida, USA
[28] Lloret, Jaime, et al. "Architecture and Protocol for
InterCloud Communication." Information Sciences, 2013.
[29] Extensible Messaging and Presence Protocol (XMPP):
Core, and related other RFCs at: http://xmpp.org/rfcs/rfc3920.html.
[30] XMPP Standards Foundation at http://xmpp.org/
[31] W3C Semantic Web Activity, at: http://www.w3.org/2001/sw/
[32] Resource Description Framework (RDF), at: http://www.w3.org/RDF/
[33] Mohammad Aazam, Eui-Nam Huh, "Impact of IPv4-IPv6
Coexistence in Cloud Virtualization Environment", Springer
Annals of Telecommunications, vol. 68, August 2013
Appendix A. Acknowledgements
This draft is supported by the IT R&D program of MKE/KEIT.
[10041891, Development on Community Broadcast Technology based
on MaaS (Media as a Service) providing Smart Convergence Service].
Authors' Addresses
Mohammad Aazam
Department of Systems and Computer Engineering, Carleton University
Ottawa, Canada
Computer Engineering Department, Kyung Hee University
Yongin, South Korea
Phone: +1 613 617 8481
Email: aazam@ieee.org
Eui-Nam Huh
Computer Engineering Department, Kyung Hee University
Yongin, South Korea
Phone: +82 (0)10 45 51 94 18
Email: johnhuh@khu.ac.kr
SooWong Kim
Alticast, Seoul, South Korea
Phone: +82 (0) 2-2007-7807
Email: swkim@alticast.com