Network Working Group                                          N. Rooney
Internet-Draft                                                      GSMA
Expires: August 23, 2018                                 S. Dawkins, Ed.
                                                          Wonder Hamster
                                                       February 19, 2018
IAB Workshop on Managing Radio Networks in an Encrypted World (MaRNEW) Report
draft-iab-marnew-report-01
The MaRNEW workshop aimed to discuss solutions for bandwidth optimisation on mobile networks for encrypted content, as current solutions rely on unencrypted content, which is not indicative of the security needs of today’s Internet users. The workshop gathered IETF attendees, IAB members, and various organisations involved in the telecommunications industry, including original equipment manufacturers and mobile network operators.
The group discussed the current Internet encryption trends and deployment issues identified within the IETF, and the privacy needs of users which should be adhered to. Solutions designed around sharing data from the network to the endpoints and vice versa were then discussed, as well as whether the issues currently experienced on the transport layer are also playing a role here. Content providers and CDNs gave an honest view of their experiences delivering content with mobile network operators. Finally, technical responses to regulation were discussed to help the regulated industries relay the issues of impossible-to-implement or bad-for-privacy technologies back to regulators.
A set of suggested solutions was devised; these will be discussed in various IETF groups moving forward.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on August 23, 2018.
Copyright (c) 2018 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
Mobile networks have a set of requirements and properties which place a large emphasis on sophisticated bandwidth optimization. Encryption is increasing on the Internet, which is positive for consumer and business privacy and security. Many existing mobile bandwidth optimization solutions primarily operate on non-encrypted communications; this can lead to performance issues being amplified on mobile networks. Encryption on networks will continue to increase; with this understanding, the workshop aimed to explore how the issues of bandwidth optimization and performance on radio networks can be solved in this encrypted world.
For the purposes of this workshop, bandwidth optimization encompasses a variety of technical topics related to traffic engineering, prioritisation, optimisation, and efficiency enhancements, as well as user-related topics such as specific subscription or billing models. These can include:
Many of these functions can continue as they’re performed today, even with more encryption. Others use methods which inspect parts of the communication that are now encrypted, and these will have to be done differently in an encrypted Internet.
Finally, while not strictly speaking traffic management, some networks employ policy-based filtering (e.g., requested parental controls) and many networks support some form of legal interception functionality per applicable laws.
The workshop aimed to answer questions including:
The further aim was to gather architectural and engineering guidance on future work in the bandwidth optimisation area based on the discussions around the proposed approaches. The workshop also explored possible areas for standardization, e.g. new protocols that can aid bandwidth optimization whilst ensuring user security, in line with new work in the transport layer.
This workshop report summarizes the contributions to and discussions at the workshop, organized by topic. The workshop began with scene-setting topics which covered the issues around deploying encryption and the increased need for privacy on the Internet, and which established a clear understanding that ciphertext should remain unbroken. Later sessions focused on key solution areas; these included evolution of the transport layer and sending data up or down the path. A session on application layers and CDNs aimed to highlight both issues and solutions experienced on the application layer. The workshop ended with a session dedicated to technical responses to regulation with regards to encryption. The contributing documents were split between identifying the issues experienced with encryption on radio networks and suggesting solutions. Of the solutions suggested, some focused on transport evolution, some on trusted middleboxes, and others on collaborative data exchange. Solutions were discussed within the sessions. All accepted position papers and detailed transcripts of discussion are available at [MARNEW].
The outcomes of the workshop are discussed in Sections 7 and 8, which also describe progress made toward each of the identified work items after the workshop, as of the time of publication of this report.
Report readers should be reminded that this workshop did not aim to discuss regulation or legislation, although policy topics were mentioned in discussions from time to time.
The workshop was conducted under the IETF [NOTE_WELL] with the exception of the “Technical Analysis and Response to Potential Regulatory Reaction” session which was conducted under [CHATHAM_HOUSE_RULE].
The IETF and the GSMA [GSMA] have divergent working practices, standards, and processes. The IETF is an open organisation with community-driven standards whose key aim is functionality and security for the Internet’s users; the GSMA is membership-based and serves the needs of its membership base, most of whom are mobile network operators.
Unlike the IETF, the GSMA makes few standards. Within the telecommunications industry, standards are set in various divergent groups depending on their purpose. Perhaps of most relevance to the bandwidth optimisation topic here is the work of [SDO_3GPP], which works on radio network and core network standards with its members, which include mobile operators and original equipment manufacturers.
One of the [SDO_3GPP] standards relevant to this workshop is PCC-QoS [PCC-QOS]. Traditionally, mobile networks have managed different applications and services based on the resources available and the priorities assigned; for instance, emergency services have the highest priority, data has a lower priority, and voice services are somewhere in between. [SDO_3GPP] defined the PCC-QoS mechanism to support this functionality, and it depends on unencrypted communications [EffectEncrypt].
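To make the prioritisation idea concrete, the sketch below (in Python) shows a simplified strict-priority scheduler driven by a hypothetical mapping from service classes to priority values. The class names and numeric priorities are illustrative assumptions only; they are not the actual PCC-QoS parameters or 3GPP QoS class identifiers.

   import heapq

   # Hypothetical priority values: a lower number is served first.
   # These are illustrative, not the real 3GPP/PCC-QoS identifiers.
   SERVICE_PRIORITY = {
       "emergency": 0,   # highest priority
       "voice":     2,   # somewhere in between
       "data":      5,   # lower priority
   }

   class StrictPriorityScheduler:
       """Toy scheduler that always serves the highest-priority packet."""

       def __init__(self):
           self._queue = []
           self._seq = 0  # tie-breaker keeps FIFO order within a class

       def enqueue(self, service_class, packet):
           prio = SERVICE_PRIORITY.get(service_class, 9)  # unknown -> lowest
           heapq.heappush(self._queue, (prio, self._seq, packet))
           self._seq += 1

       def dequeue(self):
           if not self._queue:
               return None
           _prio, _seq, packet = heapq.heappop(self._queue)
           return packet

   if __name__ == "__main__":
       s = StrictPriorityScheduler()
       s.enqueue("data", "web page fetch")
       s.enqueue("emergency", "112 call setup")
       s.enqueue("voice", "VoLTE frame")
       print(s.dequeue())  # "112 call setup" is served first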
Scene-setting sessions aimed to bring all attendees up to a basic understanding of the problem and the scope of the workshop. There were three scene-setting sessions: Scene Setting (defining scope), Encryption Deployment Considerations, and Trust Models and User Choice (Privacy).
The telecommunications industry and Internet standards community are extremely different in terms of ethos and practices. Both groups drive technical standards in their domain and build technical solutions with some policy-driven use cases. These technologies, use cases and technical implementations are different; moreover, the motivators of the two industries are also diverse.
To ensure all attendees were aligned with contributing to discussions and driving solutions, this “Scene Setting” session worked on generating a clear scope with all attendees involved. In short, it was agreed that ciphertext encrypted by one party and intended to be decrypted by a second party should not be decrypted by a third party in any solution; that the radio access network (RAN) is different and does experience issues with increased encrypted traffic; that we need to understand precisely what those problems are; and that our goal is to improve user experience on the Internet. Proposing new technical solutions based on presumed future regulation was not in scope. The full scope is given below.
The attendees identified and agreed the following scope:
Attendees were shown that encrypted content is reaching around 50% of traffic according to recent statistics [STATE_BROWSER] and [STATE_SERVER]. The IAB is encouraging all IETF groups to consider encryption by default in their new protocol work, and the IETF is also working on encryption at lower layers, for example TCP encryption within the [TCPINC] Working Group. The aims of these items of work are greater security and privacy for users and their data.
Telecommunications networks often contain middleboxes that operators have previously considered to be trusted, but qualifying trust is difficult and trust should not be assumed. Some interesting use cases exist for these middleboxes, such as anti-spam and malware detection, but these need to be balanced against their ability to open up cracks in the network for attacks such as pervasive monitoring.
When operators increase the number of radio access network cells (“Base Stations”), this can improve radio access network quality of service, but it also adds to radio pollution. This is one example of the balancing act required when devising radio access network architecture.
Encryption across the Internet is on the rise. However, some organisations and individuals come across a common set of operational issues when deploying encryption, mainly driven by commercial perspectives. The [UBIQUITOUS] draft explains these network management function impacts, detailing areas around incident monitoring, access control management, and regulation on mobile networks. The data was collected from various Internet players, including system and network administrators across enterprise, governmental organisations, and personal use. The aim of the document is to gain an understanding of what is needed for technical solutions to these issues while maintaining security and privacy for users. Attendees commented that worthwhile additions would be different business environments (e.g. cloud environments) and service chaining. Incident monitoring in particular was noted as a difficult issue to solve, given the use of URLs in today’s incident monitoring middleware.
Some of these impacts on mobile networks can be resolved using different methods, and the [NETWORK_MANAGEMENT] draft details these. The draft focuses heavily on methods to manage network traffic without breaching user privacy and security.
By reviewing encryption deployment issues and the alternative methods of network management, MaRNEW attendees were made aware of the issues which affect radio networks, the deployment issues which are solvable and require no further action, and those which aren't currently solvable and which should be addressed within the workshop.
Some solutions intended to improve delivery of encrypted content could affect some or all of the privacy benefits that encryption provides. Understanding user needs and desires for privacy is therefore important when designing these solutions.
In a recent study [Pew2014], 64% of users said their concerns over privacy have increased, and 67% of mobile Internet users would like to do more to protect their privacy. The W3C and IETF have both responded to user desires for better privacy by recommending encryption for new protocols and web technologies. Within the W3C, new security standards are emerging, and the design principles for HTML hold that users are the stakeholders with the highest priority, followed by implementors and other stakeholders, further reinforcing the “user first” principle. Users also have certain security expectations from particular contexts, and sometimes use new technologies to further protect their privacy even if those technologies weren’t initially developed for that purpose.
Operators may deploy technologies which can impact user privacy without being aware of those privacy implications, or they may incorrectly assume that the benefits users gain from the new technology outweigh the loss of privacy. If these technologies are necessary, they should be opt-in.
Internet stakeholders should understand the priority of other stakeholders. Users should be considered the first priority; other stakeholders include implementors, developers, advertisers, operators, and other ISPs. Some technologies have been abused by these parties, such as cookie use or JavaScript injection. This has caused some developers to encrypt content to circumvent these technologies, which they find intrusive or bad for their users’ privacy.
If users and content providers are to opt in to network management services with negative privacy impacts, they should see clear value from using these services and understand the impacts through clear interfaces. Users should also be able to opt out easily. Some users will always automatically click through consent requests, so any model relying on explicit consent is flawed for these users. Understanding the extent of “auto click through” may help make better decisions for consent requests in the future. One model (Cooperative Traffic Management) works as an agent of the user: by opting in, the user allows metadata to be shared. One issue with this is that trust is only applied at the endpoints.
Network or Transport Solution Sessions aimed to discuss suggested and new solutions for managing encrypted traffic on radio access networks. Most solutions focus on the sharing of metadata, either from the endpoint to the network, from the network to the endpoint, or cooperatively between both. Evolution of the transport layer could be another approach to solving some of the issues radio access networks experience which cause them to require network management middleboxes. By removing problems at the transport layer, reliance on expensive middleboxes could decrease.
Collaboration between network elements and endpoints could bring about better content distribution. A number of suggestions were given; these included:
Some of these suggestions rely on signaling from network elements to endpoints.
Others aim to create “hop-by-hop” solutions, which could be more in line with how congestion is managed today, but with greater privacy implications.
Still others rely on signaling from endpoints to network elements. Some of these rely on implicit signaling, and others on explicit signaling. Some workshop attendees agreed that applications explicitly declaring what quality of service they require was not a good route, given the lack of success with this model in the past.
One of the larger issues in the sharing of data is the matter of competition; network operators are reluctant to relinquish data about their own networks because it can reveal competitive information, and application providers wish to protect their users and reveal as little information as possible to the network. Some people think that if middleboxes were authenticated and invoked explicitly, this would be an improvement over current transparent middleboxes that intercept traffic without endpoint consent. Some workshop attendees suggested any exchange of information should be bidirectional, in an effort to improve cooperation between the elements. A robust incentive framework could provide a solution to these issues, or at least help mitigate them.
The radio access network is complex because it must deal with a number of conflicting demands. Base stations reflect this environment, and information within these base stations can be of value to other entities on the path. Some workshop participants thought solutions for managing congestion on radio networks should involve the base station if possible. For instance, understanding how the Radio Resource Controller and AQM [RFC7567] interact (or don’t interact) could provide valuable information for solving issues. Although many workshop attendees agreed that there is a need to understand the base station, not all agreed that the base station should be part of a future solution.
Some suggested solutions were based on network categorisation and providing this information to the protocols or endpoints. Categorising radio networks could be impossible due to their complexity, but categorising essential network properties could be possible and valuable.
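As a purely illustrative example of what categorising essential network properties might look like, the sketch below (in Python) defines a small set of coarse, hypothetical property classes that a network could expose to endpoints, and a toy endpoint decision that uses them. The field names, categories, and bitrate values are assumptions for illustration and are not drawn from any existing standard.

   from dataclasses import dataclass
   from enum import Enum

   # Hypothetical coarse categories; not taken from any 3GPP or IETF standard.
   class LatencyClass(Enum):
       LOW = "low"
       MEDIUM = "medium"
       HIGH = "high"

   class ThroughputClass(Enum):
       CONSTRAINED = "constrained"
       MODERATE = "moderate"
       AMPLE = "ample"

   @dataclass
   class NetworkProperties:
       """Coarse properties a network might expose instead of raw internals."""
       latency: LatencyClass
       throughput: ThroughputClass
       highly_variable: bool  # radio conditions change quickly

   def choose_initial_video_bitrate(props: NetworkProperties) -> int:
       """Toy endpoint decision: pick an initial video bitrate in kbit/s."""
       if props.throughput is ThroughputClass.AMPLE and not props.highly_variable:
           return 4000
       if props.throughput is ThroughputClass.MODERATE:
           return 1500
       return 500  # constrained or highly variable: start conservatively

   if __name__ == "__main__":
       cell = NetworkProperties(LatencyClass.MEDIUM, ThroughputClass.MODERATE, True)
       print(choose_initial_video_bitrate(cell))  # 1500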
TCP has been the dominant transport protocol since TCP/IP replaced NCP on the ARPANET in January 1983. TCP was originally devised for a network model that did not anticipate the high error rates and highly variable available bandwidth experienced on modern radio access networks. Furthermore, new network elements have been introduced (NATs, and network devices with large buffers creating bufferbloat), and considerable peer-to-peer traffic now competes with traditional client-server traffic. Consequently, the transport layer today has requirements beyond what TCP was designed to meet. TCP has other issues as well: too many services rely on TCP and only TCP, blocking deployment of new transport protocols like SCTP and DCCP. This means that true innovation on the transport layer becomes difficult because deployment issues are more complicated than just building a new protocol.
The IETF is trying to solve these issues through the “Stack Evolution” programme, and the first step in this programme is to collect data. Network and content providers can provide data including: the cost of encryption, the advantages of network management tools, the deployment of protocols, and the effects when network management tools are disabled. Network operators tend not to reveal network information, mostly for competition reasons, and so are unlikely to donate this information freely to the IETF. The GSMA is in a position to try to collect this data and anonymise it before bringing it to the IETF, which should alleviate network operators' worries but still provide the IETF with some usable data.
A considerable amount of work has already been done on TCP, especially innovation in bandwidth management and congestion control; however, congestion is usually inferred from packet loss, and better methods of detecting congestion before loss occurs would be beneficial.
Furthermore, although the deficiencies of TCP are often considered key issues in the evolution of the stack, the main route to resolving these issues may not be a new TCP but an evolved stack. Some workshop participants thought SPUD [SPUD] and ICN [RFC7476] are two suggestions which may help here. QUIC [QUIC] engineers stated that the problems solved by QUIC are general problems rather than TCP issues; this view was not shared by all attendees of the workshop. Moreover, TCP has seen some improvements in the last few years, which may mean the lower network layers should be investigated to see whether improvements can be made there.
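As an illustration of why loss is a coarse congestion signal, the sketch below (in Python) contrasts a loss-based check with a simple delay-based heuristic that notices queuing delay growing before any packet is dropped. The threshold and averaging window are arbitrary assumptions, not values taken from any deployed congestion controller.

   # Toy comparison of a loss-based and a delay-based congestion signal.
   # The constants are illustrative assumptions only.

   BASE_RTT_MS = 50.0          # assumed uncongested round-trip time
   DELAY_THRESHOLD_MS = 25.0   # extra queuing delay treated as congestion

   def loss_based_congested(packets_sent, acks_received):
       """Loss-based view: congestion is noticed only after packets drop."""
       return acks_received < packets_sent

   def delay_based_congested(rtt_samples_ms):
       """Delay-based view: rising queuing delay signals congestion earlier."""
       if not rtt_samples_ms:
           return False
       recent = rtt_samples_ms[-4:]
       smoothed = sum(recent) / len(recent)
       return (smoothed - BASE_RTT_MS) > DELAY_THRESHOLD_MS

   if __name__ == "__main__":
       # A buffer filling up: RTT climbs well before anything is lost.
       rtts = [60.0, 75.0, 90.0, 105.0]
       print(loss_based_congested(packets_sent=100, acks_received=100))  # False
       print(delay_based_congested(rtts))                                # True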
Many discussions on the effects of encrypted traffic on radio access networks happen between implementers and network operators; this session aimed to gather the opinions of content and caching providers, including their experiences running over mobile networks, the experience their users expect, and what they would like to achieve by working with or using the mobile network.
Content providers explained that, even though this workshop cited encrypted data over radio access networks as the main issue, the real issue is network management generally, and all actors (application providers, networks, and devices) need to work together to overcome these general network management issues. Content providers explained that they assume mobile networks are standards compliant. When a network is not standards compliant (e.g. it uses non-standards-compliant intermediaries), content providers can experience real costs as users contact their support centres to report issues which are difficult to test for and fix.
Content providers cited other common issues concerning data traffic over mobile networks. Data caps cause issues for users: users are confused about how data caps work, or are unsure how expensive media is and how much data it consumes. Developers build products on networks that are not indicative of the networks their customers are using, and not every organisation has the finances to build a caching infrastructure.
Strongly related to content providers, content owners consider CDNs to be trusted deliverers of content, and CDNs have shown great success on fixed networks. Now that traffic is moving more to mobile networks, there is a need to place caches at the edge of the mobile network, near the users. Placing caches at the edge of the mobile network is a solution, but it requires standards developed by content providers and mobile network operators. The CDNI Working Group [CDNI] at the IETF aims to allow global CDNs to interoperate with mobile CDNs, but encryption causes huge issues for the caching of data between these CDNs. Some CDNs are experimenting with approaches like “Keyless SSL” [KeylessSSL] to enable safer storage of content without passing private keys to the CDN. Blind Caching [BLIND_CACHING] is another proposal aimed at caching encrypted content closer to the user and managing authentication at the original content provider's servers.
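To illustrate the general key-delegation idea behind approaches such as “Keyless SSL” (a conceptual sketch only, not the actual [KeylessSSL] protocol), the Python example below keeps the private key on a key server operated by the content owner: the edge cache forwards the handshake data that needs signing and receives only a signature back, so the private key never resides at the edge. The class and method names are hypothetical, and the example uses the pyca/cryptography library.

   # Conceptual sketch of key delegation: the edge never holds the key.
   # This is an illustration of the idea only, not the real protocol.
   from cryptography.hazmat.primitives import hashes, serialization
   from cryptography.hazmat.primitives.asymmetric import padding, rsa

   class OriginKeyServer:
       """Run by the content owner; the only place the private key lives."""

       def __init__(self):
           self._key = rsa.generate_private_key(public_exponent=65537,
                                                key_size=2048)

       def public_key_pem(self):
           return self._key.public_key().public_bytes(
               serialization.Encoding.PEM,
               serialization.PublicFormat.SubjectPublicKeyInfo,
           )

       def sign(self, data):
           # Only the data to be signed crosses the network, never the key.
           return self._key.sign(data, padding.PKCS1v15(), hashes.SHA256())

   class EdgeCache:
       """CDN edge node: serves content but cannot sign on its own."""

       def __init__(self, key_server):
           self._key_server = key_server

       def handshake_signature(self, transcript):
           # Delegate the private-key operation to the origin's key server.
           return self._key_server.sign(transcript)

   if __name__ == "__main__":
       origin = OriginKeyServer()
       edge = EdgeCache(origin)
       transcript = b"client_hello || server_hello || key_share"
       print(len(edge.handshake_signature(transcript)))  # 256-byte signature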
At the end of the session the panelists were asked to identify one key collaborative work item each; these were: evolving caching to cache encrypted content, using one bit for a latency/bandwidth trade-off (explained below), better collaboration between the network and application, better metrics to aid bug solving and innovation, and indications from the network to allow the application to adapt.
This session was conducted under the Chatham House Rule. The session aimed to discuss regulatory and political issues, not to debate their worth or necessity, but rather to understand the laws that exist and how technologists can properly respond to them.
Mobile networks are regulated; compliance is mandatory (and non-compliance can result in service license revocation in some nations around the world) and can incur costs for the mobile network operator. Regulation does vary geographically. Some regulations are court orders; others are self-imposed, for example “block lists” of websites such as the Internet Watch Foundation list [IWF]. Operators are not expected to decrypt sites, so identified sites which are encrypted will not be blocked.
Parental-control-type filters also exist on the network and are easily bypassed today, vastly limiting their effectiveness. Better solutions would allow users to easily set these restrictions themselves. Other regulations are also hard to meet (such as those requiring user data patterns), or the required data will become harder to collect (such as in IoT cases). Most attendees agreed that if a government cannot get information it needs and is legally entitled to have from network operators, it will approach content providers. Some governments are aware of the impact of encryption and are working with, or trying to work with, content providers. The IAB has concluded that blocking and filtering can be done at the endpoints of the communication.
Not all of these regulations apply to the Internet, and the Internet community is not always aware of their existence. The Internet community can work with the GSMA and 3GPP and act collectively to alleviate the risks posed by encrypted traffic. Some participants expressed the concern that governments might require operators to provide information that they no longer have the ability to provide because traffic is encrypted, and that this might expose operators to new liability, but no specific examples were given during the workshop. A suggestion from some attendees was that if any new technical solutions are necessary, they should be easy to switch off.
Some mobile network operators are producing transparency reports covering regulations including lawful intercept. Operators who have done this already are encouraging others to do the same.
Based on the talks and discussions throughout the workshop a set of suggested principles and solutions has been collected. This is not an exhaustive list, and no attempt was made to come to consensus during the workshop, so there are likely participants who would not agree with any particular principle listed below.
A collection of solutions suggested by various participants during the workshop is given below. Inclusion in this list does not imply that other workshop participants agreed.
In the workshop, attendees identified other areas where greater understanding could help the standards process. These were identified as:
Throughout the workshop attendees placed emphasis on the need for better collaboration between the IETF and telecommunications bodies and organisations. The workshop was one such way to achieve this, but the good work and relationships built in the workshop should continue so the two groups can work on solutions which are better for both technologies and users.
Since MaRNEW, a number of activities have taken place in various separate working groups or in groups external to the IETF. The ACCORD BoF was held at IETF 95, which brought the workshop discussion to the wider IETF audience by providing an account of the discussions within the workshop and highlighting key areas to progress. Key areas to progress, and an update on their current status, follow:
The most rewarding output of MaRNEW is perhaps the most intangible. MaRNEW gave two rather divergent industry groups the opportunity to connect and discuss common technologies and issues affecting users and operations. Mobile Network providers and key Internet engineers and experts have developed a greater collaborative relationship to aid development of further standards which work across networks in a secure manner.
Stephen Farrell reviewed this report in draft form and provided copious comments and suggestions.
Barry Leiba provided corrections.