Network Working Group                                      H. Tschofenig
Internet-Draft                                                  J. Arkko
Intended status: Informational                                 D. Thaler
Expires: January 15, 2013                                   D. McPherson
                                                           July 16, 2012
Architectural Considerations in Smart Object Networking
draft-tschofenig-smart-object-architecture-01.txt
Following the theme "Everything that can be connected will be connected", engineers and researchers designing smart object networks need to decide how to achieve this in practice. How can different forms of embedded and constrained devices be interconnected? How can they employ and interact with the currently deployed Internet? This memo discusses smart objects and some of the architectural choices involved in designing smart object networks and protocols that they use.
The document is being discussed at https://www.ietf.org/mailman/listinfo/architecture-discuss
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on January 15, 2013.
Copyright (c) 2012 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
In RFC 6574 [RFC6574], we refer to smart objects as devices with constraints on energy, bandwidth, memory, size, cost, etc. This is a fuzzy definition, as there is clearly a continuum in device capabilities and there is no hard line to draw between devices that can be classified as smart objects and those that cannot.
Following the theme "Everything that can be connected will be connected", engineers and researchers designing smart object networks need to address a number of questions. How can different forms of embedded and constrained devices be interconnected? How can they employ and interact with the currently deployed Internet?
These questions have been discussed at length. For instance, when the Internet Architecture Board (IAB) scheduled a workshop on Smart Objects, the IETF community was asked to develop views on how Internet protocols can be utilized by smart objects. The workshop participants contributed their views on the topic and a report was published [RFC6574].
This memo discusses smart objects and some of the architectural choices involved in designing smart object networks and protocols that they use. The main issues that we focus on are interaction with the Internet, the use of Internet protocols for these applications, models of interoperability, and approach to standardization.
In drawing conclusions from the prior IETF work and from the IAB workshop it is useful to look back at the criteria for success of the Internet. Luckily, various publications provide valuable insight into the history. Many of the statements are very much applicable to the discussion on smart objects. RFC 1958 [RFC1958] says:
It goes on to add:
Internet protocols are immediately relevant for any smart object development and deployment. However, building very small, often battery-operated devices is challenging. It is difficult to resist the temptation to build specific solutions tailored to a particular application, or to re-design everything from scratch. Yet, due to network effects, the case for using the Internet Protocol(s) and other generic technology is compelling.
This writeup describes the IAB's view on these issues. The document is being discussed at https://www.ietf.org/mailman/listinfo/architecture-discuss.
The rest of the document is organized as follows. Section 2 discusses the problems associated with vertically integrated industry-specific solutions, and suggests the use of generic technologies and a more flexible architecture as a way to reduce these problems. Section 3 discusses the problems associated with attempting to use options and communication patterns other than those in current widespread use in the Internet. Often middleboxes and assumptions built into existing devices make such usage problematic. Section 4 discusses different levels of interoperability, and the different levels of effort required to achieve them. Finally, Section 6 presents some of the relevant security issues, Section 7 discusses privacy, and Section 8 summarizes the recommendations.
The Internet protocols are relevant for any smart object development and deployment. In the context of one use case of smart objects, the smart grid and smart meters in particular, RFC 6272 "Internet Protocols for the Smart Grid" [RFC6272] identifies a range of IETF protocols that can be utilized.
The wide range of protocols listed in that document illustrates the authors' view that a large fraction of Internet technology can be readily used in these new applications. There are occasional attempts at re-design in some application areas; sometimes only at the marketing level, but often also in ignorance of what has been developed in the past. By and large, however, the industry has understood the value of using Internet communications for various smart object deployments.
Nevertheless, there are several architectural concerns that deserve to be highlighted.
As a result, the following recommendations can be made. First, while there are some cases where specific solutions are needed, the benefits of general-purpose technology are often compelling, whether that means choosing IP over a more specific communication mechanism, a widely deployed link layer (such as wireless LAN) over a more specialized one, web technology over application-specific protocols, and so on.
However, when employing these technologies it is important to embrace them in their entirety, allowing for the architectural flexibility that is built into them. As an example, it rarely makes sense to limit communications to a single link or to specific media. We should also design our applications so that the participating devices can easily interact with multiple other applications.
Despite the applicability of the Internet Protocols for smart objects, picking the right protocols for a specific use case can be tricky. As the Internet has evolved over time, certain protocols and protocol extensions cannot be utilized in all circumstances. The following list illustrates a few of those challenges, and every communication protocol comes with its own challenges. Protocol designers need to be aware of the deployment challenges; it is not enough to just look at the specifications.
Extending protocols to fulfill new uses and to add new functionality may range from very easy to difficult, as [I-D.iab-extension-recs] investigates in great detail. A challenge many protocol designers are facing is to ensure incremental deployability and interoperability with incumbent elements in a number of areas. In various cases, the effort it takes to design incrementally deployable protocols has not been taken seriously enough at the outset.
As these examples illustrate, protocol architects have to take developments in the greater Internet into account, as not all features can be expected to be usable in all environments. For instance, middleboxes [RFC3234] complicate the use of extensions in the basic IP protocols and transport layers.
RFC 1958 [RFC1958] considers this aspect and says "... the community believes that the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network." This statement is challenged more than ever with the perceived need to develop clever intermediaries interacting with dumb end devices but we have to keep in mind what RFC 3724 [RFC3724] has to say about this crucial aspect: "One desirable consequence of the end-to-end principle is protection of innovation. Requiring modification in the network in order to deploy new services is still typically more difficult than modifying end nodes." RFC 4924 [RFC4924] adds that a network that does not filter or transform the data that it carries may be said to be "transparent" or "oblivious" to the content of packets. Networks that provide oblivious transport enable the deployment of new services without requiring changes to the core. It is this flexibility that is perhaps both the Internet's most essential characteristic as well as one of the most important contributors to its success.
New smart object applications are developed every day; in many cases they are created using standardized Internet technology even though various components cannot easily be replaced by third party components. Even where a common underlying technology (such as IP) is used, current smart object networks often have challenges related to interoperability of the entire protocol stack, including application behavior. It is of strategic importance to make a conscious decision about the desired level of interoperability and where the points of interconnection are.
These decisions also relate to the effort required to complete the application, and to the overall complexity of the system. A system may appear complex for a variety of reasons. First, there is legitimate heterogeneity in the networking technologies and applications used. This variation is necessary and useful, as different applications and environments benefit from different networking technologies. The range and other characteristics of cellular, wireless local area networking, and RFID are, for instance, very different from each other. There are literally thousands of different applications, and it is natural that they have differing requirements on what parties need to communicate with each other, what kinds of security solutions are appropriate, and other aspects.
The answer to managing complexity in the face of this lies in layers of communication mechanisms and in keeping the layers independent, e.g., in the form of the hourglass model. If there is a common waist of the hourglass, then all applications can work over all physical networking technologies, ensuring the widest possible coverage of networking applications: "Everything over IP and IP over everything." This model provides some guidance for thinking about the Internet of Things architecture. First of all, it shows how we need a common internetworking infrastructure (IP) to allow heterogeneous link media to work seamlessly with each other, and with the rest of the system. Secondly, there are various transport and middleware communications mechanisms that will likely become useful in the different applications. For instance, today embedded web services (HTTP, CoAP, XML, and JSON) appear to be popular, regardless of what specific link technology they are run over.
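The layer independence that the hourglass model provides can be sketched in a few lines: a sensor reading encoded once at the application layer can be carried unchanged over HTTP, CoAP, or any other transport. The payload fields and the framing below are invented for illustration and are not taken from any particular specification.

```python
import json

# One application-layer encoding of a sensor reading.  The field names
# are illustrative only.
reading = json.dumps({"sensor": "kitchen-temp", "value": 21.5, "unit": "Cel"})

# The same bytes framed as an HTTP request body ...
http_request = (
    "POST /measurements HTTP/1.1\r\n"
    "Content-Type: application/json\r\n"
    "Content-Length: " + str(len(reading)) + "\r\n"
    "\r\n" + reading
)

# ... and carried verbatim as a CoAP payload (CoAP header octets elided).
# Only the framing around the payload changes; the application data, and
# the IP layer underneath, stay the same.
coap_payload = reading.encode("utf-8")
print(http_request.splitlines()[0], len(coap_payload))
```

Nothing in the application data above depends on whether the link below IP is wireless LAN, cellular, or a low-power radio; that is the seamlessness the common waist buys.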
But there can also be undesirable complexity and variation. Creation of alternative standards where one would have sufficed may be harmful. Creating systems and communications mechanisms with unnecessary dependencies between different layers and system components limits our ability to migrate systems to the most economic and efficient platforms, and limits our ability to connect as many objects as possible.
To summarize, complexity and alternative technologies can be very useful as part of an architecture, or can be problematic when they create unnecessary competition and deployment barriers in the marketplace. Such complexity will be addressed by regular technological evolution in the industry through underlying layers of bridging, tunneling, security, etc.
It is also valuable to look back at earlier IETF publications, for example, RFC 1263 [RFC1263] considers different protocol design strategies and makes an interesting observation about the decision to design new protocols from scratch or to design them in a non-backwards compatible way based on existing protocols:
While [RFC1263] was written in 1991 when the standardization process in the Internet community was far more lightweight than today (among other reasons, because fewer stakeholders were interested in participating in the standards process) it is remarkable to read these thoughts since they are even more relevant today. This is particularly true for the smart object environment.
Regardless of how hard we work on optimizing the standards process, designing systems in an open and transparent consensus process where many parties participate takes longer than letting individual stakeholders develop their own proprietary solutions. Therefore, it is important to make architectural decisions that keep a good balance between proprietary developments and standardized components.
While RFC 1263 [RFC1263] certainly provides good food for thought, it also gives recommendations that may not always be appropriate for the smart object space, such as the preference for a so-called evolutionary protocol design where new versions of the protocols are allowed to be non-backwards compatible and all run independently on the same device. RFC 1263 adds:
Even though it is common practice today to run many different software applications with similar functionality (for example, multiple Instant Messaging clients) in parallel, this may not be the preferred approach for smart objects, which may have severe limitations regarding RAM, flash memory, and power.
To deal with exactly this problem, profiles have been suggested in many cases. Saying "no" to a new protocol stack that only differs in minor ways may be appropriate, but could be interpreted as blocking innovation. As RFC 1263 [RFC1263] nicely describes it: "In the long term, we envision protocols being designed on an application by application basis, without the need for central approval." "Central approval" here refers to the approval process that happens in the respective standards developing organization.
So, how can we embrace rapid innovation with distributed developments and at the same time accomplish a high level of interoperability?
Clearly, standardization of every domain-specific profile will not be the solution. Many domain-specific profiles are optimizations that will already be obsoleted by technological developments (e.g., new protocol developments), new security threats, new stakeholders entering the system or changing needs of existing stakeholders, new business models, changed usage patterns, etc. RFC 1263 [RFC1263] states the problem succinctly: "The most important conclusion of this RFC is that protocol change happens and is currently happening at a very respectable clip. We simply propose to explicitly deal with the changes rather than keep trying to hold back the flood."
Even worse, different stakeholders that are part of the Internet milieu have interests that may be adverse to each other, and these parties each vie to favor their particular interests. In [Tussles], Clark, et al. call this process 'the tussle' and ask the important question "How can we, as designers, build systems with desired characteristics and improve the chances that they come out the way we want?". In an attempt to answer that question, the authors of [Tussles] develop a high-level principle, which is not tailored to smart object designs but to Internet protocol development in general:
In order to accomplish this, Clark, et al. suggest that designers:
These are valid guidelines, and many protocols standardized in the IETF have taken exactly this approach, namely to identify building blocks that can be used in a wide variety of deployments. Others then put the building blocks together in a way that suits their needs. There are, however, limits to this approach. Certain building blocks are only useful in a limited set of architectural variants, and producing generic building blocks requires a good understanding of the different architectural variants and often limits the ability to optimize. Sometimes the value of an individual building block is hard for others to understand without the larger context, which requires illustrating at least one deployment variant with its specific architectural setup. That said, it is also critical to consider systemic interdependencies between the set of elements that constitute a system, lest they impose constraints that were not envisioned at the outset.
Since many Internet protocols are used as building blocks by other organizations, or in deployments that may never have been envisioned by their original designers, one can argue that this approach has been fairly successful. It may, however, not lead to the level of interoperability many desire: they want interoperability of the entire system rather than interoperability at a specific protocol level. Consequently, an important architectural question arises, namely "What level of interoperability should Internet protocol engineers aim for?"
In the diagrams below, we illustrate a few interoperability scenarios with different interoperability needs. Note that these are highly simplified versions of what protocol architects are facing, since there are often more parties involved in a sequence of required protocol exchanges, and the entire protocol stack has to be considered - not just a single protocol layer. As such, the required coordination and agreement between the different stakeholders is likely to be far more challenging than illustrated. We do, however, believe that these figures illustrate that the desired level of interoperability needs to be carefully chosen.
Figure 1 shows a typical deployment of many Internet applications: an application service provider (example.com in our illustration) wants to make an HTTP-based protocol interface available to its customers. Example.com allows its customers to upload sensor measurements using a RESTful HTTP design. Customers need to write code for their embedded systems to make use of the HTTP-based protocol interface (and to obtain keying material for authentication and authorization of the uploaded data). These applications work with the servers operated by Example.com and with nobody else; there is no interoperability with third parties (at the application layer, at least). That is, Alice, a customer of Example.com, cannot use her embedded system, which was programmed to use the protocol interface of Example.com, with another service provider without re-writing at least parts of her embedded software. Nevertheless, Example.com re-uses standardized protocol components to speed up the development of its software, which is certainly useful from a time-to-market and cost-efficiency point of view. For example, Example.com relies on HTTP and offers JSON to encode sensor data. Example.com will also have to rely at least on IP for its customers to access the Internet in order to reach its server farm.
                        _______________
                       |  Application  |
                       |    Service    |
                       |   Provider    |
                       |  Example.com  |
                       |_______________|
                          /         \
        Proprietary      /           \
        protocol        /             \
        offered by     /               \
        Example.com   /                 \
             ,-------------.       ,----------.
             | Temperature |       |  Light   |   Sensors operated
             |   Sensor    |       |  Sensor  |   by customers of
             `-------------'       `----------'   Example.com
Figure 1: Proprietary Deployment
Clearly, the above scenario does not provide a lot of interoperability even though standardized Internet protocols are re-used.
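To make the scenario concrete, the following sketch shows what a customer's upload code against such a proprietary interface might look like. The path, JSON field names, and bearer-token scheme are invented for illustration; this document does not specify Example.com's actual API, so the sketch only builds the request rather than sending it.

```python
import json

def build_upload_request(device_id, quantity, value, unit, token):
    """Build (method, path, headers, body) for one measurement upload
    against a hypothetical RESTful interface offered by Example.com."""
    body = json.dumps({
        "device": device_id,
        "quantity": quantity,   # e.g. "temperature"
        "value": value,
        "unit": unit,           # e.g. "Cel"
    })
    headers = {
        "Host": "api.example.com",
        "Content-Type": "application/json",
        # Keying material handed out by Example.com authenticates and
        # authorizes the upload; a bearer token is just one possibility.
        "Authorization": "Bearer " + token,
    }
    return ("POST", "/v1/measurements", headers, body)

method, path, headers, body = build_upload_request(
    "sensor-42", "temperature", 21.5, "Cel", "example-token")
print(method, path, body)
```

Because the interface is proprietary, pointing the same code at another provider requires rewriting at least this layer of the embedded software, which is exactly the lock-in the scenario above illustrates.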
Since example.com is focused on the storage of sensor data rather than on actually processing it, it offers an HTTP-based protocol interface that gives others access to the uploaded sensor data. In our example, b-example.com and c-example.com are two such companies that make use of this functionality in order to provide data visualization and data mining computations. Example.com again uses standardized protocols (such as a RESTful HTTP design combined with OAuth) for offering access, but overall the entire protocol stack is not standardized.
                                              _______________
                                             |  Application  |
                                 ,---------> |    Service    |
                                 |           |   Provider    |
                    Proprietary  |           | b-example.com |
                    protocol     |           |_______________|
           _______________       |
          |  Application  | -----'
          |    Service    |                   _______________
          |   Provider    | -----.    ...    |  Application  |
          |  example.com  |      |           |    Service    |
          |_______________|      `---------> |   Provider    |
                 ^                           | c-example.com |
                 | Proprietary  Proprietary  |_______________|
                 | protocol     protocol
           ,----------.
           |  Light   |
           |  Sensor  |
           `----------'
Figure 2: Backend Interworking
In contrast to the scenario described in Section 4.3, we illustrate a scenario in which two devices developed by independent manufacturers are desired to interwork. This is shown in Figure 3. To pick an example from [RFC6574], consider a light bulb that talks to a light switch, with the requirement that each may be manufactured by a different company, represented as companies A and B.
         Manufacturer A                      Manufacturer B
          ,'''''''''.                         ,'''''''''.
          |  Light  |      Standardized      |  Light   |
          |  Bulb   | <------Protocol------> |  Switch  |
          `.........'                        `..........'
Figure 3: Interoperability between two random devices
In order for this scenario to work, manufacturers A and B, and probably many other manufacturers of light bulbs and light switches, need to get together and agree on the protocol stack they would like to use. Let us assume that they do not want to require any manual configuration by the user, and that these devices should work in a typical home network. This consortium then needs to make decisions about the following protocol design aspects:
This list is not meant to be exhaustive but aims to illustrate that for every usage scenario many design decisions will have to be made in order to accommodate the constrained nature of a specific device in a certain usage scenario. Standardizing such a complete solution to accomplish a full level of interoperability between two devices manufactured by different vendors will take time.
With the descriptions in Section 4.3 and Section 4.4 we present two extreme cases of interoperability. To "design for variation in outcome", as postulated by [Tussles], the design of the system does not need to be cast in stone during the standardization process but may be changed at run-time using software updates.
Many smart objects will need a solid software update mechanism, for many reasons and not only for adding new functionality. Note that adding new functionality may not be possible for certain classes of constrained devices, namely those with severe memory limitations. As such, a certain level of sophistication on the part of the embedded device is assumed in this section.
Software updates are common in operating systems and application programs today. Arguably, the Web today employs a very successful software update mechanism, with code being provided by many different parties (i.e., by websites loaded into the browser or by the Web application). While JavaScript (or the proposed successor, Dart) may not be the right choice of software distribution for smart objects, and other languages such as embedded eLua [eLua] may be more appropriate, the basic idea of offering software distribution mechanisms may present a middle ground between the two extreme interoperability scenarios presented in this section.
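As a sketch of what such an update mechanism might involve, the following fragment has the device compare its running version against an update manifest and install an image only if its digest matches. The manifest layout and version scheme are invented for illustration, and a real mechanism would additionally verify a signature over the manifest itself.

```python
import hashlib

def needs_update(running_version, manifest):
    """Return True if the manifest advertises a newer image."""
    return manifest["version"] > running_version

def verify_image(image_bytes, manifest):
    """Check the downloaded image against the digest in the manifest."""
    return hashlib.sha256(image_bytes).hexdigest() == manifest["sha256"]

# Invented example data standing in for a downloaded image and manifest.
image = b"new-firmware-blob"
manifest = {"version": 2, "sha256": hashlib.sha256(image).hexdigest()}

if needs_update(1, manifest) and verify_image(image, manifest):
    print("installing version", manifest["version"])
```

Even this minimal shape shows why severely memory-constrained devices may be unable to participate: the device must hold both the running image and the downloaded one, plus the code to check it.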
Based on the previous description, we developed suggestions for different audiences.
For engineers in the IETF, we suggest the following.
For researchers we offer the following suggestions:
Section 3.3 of [RFC6574] reminds us about the IETF workstyle regarding security:
In the IETF, security functionality is incorporated into each protocol as appropriate, to deal with threats that are specific to them. It is extremely unlikely that there is a one-size-fits-all security solution given the large number of choices for the 'right' protocol architecture (particularly at the application layer). For this purpose, [RFC6272] offers a survey of IETF security mechanisms instead of suggesting a preferred one.
A more detailed security discussion can be found in the report from the 'Smart Object Security' workshop that was held prior to the IETF meeting in Paris, March 2012.
In 1980, the Organization for Economic Co-operation and Development (OECD) published eight Guidelines on the Protection of Privacy and Trans-Border Flows of Personal Data [OECD], which are often referred to as Fair Information Practices (FIPs). The FIPs, like other privacy principles, are abstract in their nature and have to be applied to a specific context.
From a technical point of view, many smart object designs are not radically different from other application designs. Often, however, the lack of a classical user interface, such as on a PC or a phone, that would allow users to interact with the device in a convenient and familiar way makes it difficult to provide users with information about the data collection and to offer them the ability to express consent. Furthermore, in some verticals (e.g., smart meter deployments) users are not given the choice of voluntarily signing up for the service; instead, deployments are mandated through regulation. These users therefore have no ability to consent; a right that is core to many privacy principles, including the FIPs. In other cases, the design focuses on dealing with privacy at the level of a privacy notice rather than by building privacy into the design of the system, as [I-D.iab-privacy-considerations] asks engineers to do.
The interoperability models described in this document highlight that standardized interfaces are not needed in all cases, nor are they always desirable. Depending on the choice of certain underlying technologies, various privacy problems may be inherited by the upper-layer protocols and are therefore difficult to resolve as an afterthought. Many smart objects leave users little ability to make privacy-improving configuration changes. Technologies exist that can be applied to smart objects as well, to involve users in authorization decisions before data sharing takes place.
As a summary, for an Internet protocol architect, the guidelines described in [I-D.iab-privacy-considerations] are applicable. For those looking at privacy from a deployment point of view, the following additional guidelines are suggested:
Interconnecting smart objects with the Internet creates exciting new use cases and engineers are eager to play with small and constrained devices. With various standardization efforts ongoing and the impression that smart objects require a new Internet Protocol and many new extensions, we would like to provide a cautious warning. We believe that protocol architects are best served by the following high level guidance:
This document does not require actions by IANA.
We would like to thank the participants of the IAB Smart Object workshop for their input to the overall discussion about smart objects.
Furthermore, we would like to thank Jan Holler, Patrick Wetterwald, Atte Lansisalmi, Hannu Flinck, Joel Halpern, Markku Tuohino, and the IAB for their review comments.