Network Working Group                                         S. Farrell
Internet-Draft                                    Trinity College Dublin
Intended status: Informational                            April 23, 2019
Expires: October 25, 2019
We're gonna need a bigger threat model
draft-farrell-etm-00
We argue that an expanded threat model is needed for Internet protocol development as protocol endpoints can no longer be considered to be generally trustworthy for any general definition of "trustworthy."
This draft will be a submission to the DEDR IAB workshop.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on October 25, 2019.
Copyright (c) 2019 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
[[There's a github repo for this -- issues and PRs are welcome there. <https://github.com/sftcd/etm> ]]
[RFC3552], Section 3 defines an "Internet Threat Model" which has been commonly used when developing Internet protocols. That model assumes that "the end-systems engaging in a protocol exchange have not themselves been compromised." RFC 3552 is a formal part of the IETF's process, as it is also BCP 72.
Since RFC 3552 was written, we have seen a greater emphasis on considering privacy and [RFC6973] provides privacy guidance for protocol developers. RFC 6973 is not a formal BCP, but appears to have been useful for protocol developers as it is referenced by 38 later RFCs at the time of writing.
BCP 188 [RFC7258] subsequently recognised pervasive monitoring as a particular kind of attack and has also been relatively widely referenced (39 RFCs at the time of writing). To date, perhaps most documents referencing BCP 188 have considered state-level or in-network adversaries.
In this document, we argue that we need to expand our threat model to acknowledge that today many applications are themselves rightly considered potential adversaries for at least some relevant actors. However, those (good) actors cannot in general refuse to communicate and will, with non-negligible probability, encounter applications that are adversarial.
We also argue that failing to recognise this reality causes Internet protocol designs to sometimes fail to protect the systems and users that depend on those protocols.
Discussion related to expanding our concept of threat model ought not (but perhaps inevitably will) involve discussion of weakening how confidentiality is provided in Internet protocols. Whilst it may superficially seem that encouraging in-network interception could help with detection of adversarial application behaviours, such a position is clearly mistaken once one notes that adding middleboxes that can themselves be adversarial cannot be a solution to the problem of possibly encountering adversarial code on the network. It is also the case that the IETF has rough consensus to provide better, and not weaker, security and privacy, and has maintained that consensus over three decades, despite repeated (and repetitive;-) debates on the topic. That consensus is represented in [RFC2804], BCP 200 [RFC1984] and, more latterly, the above-mentioned BCP 188, as well as in the numerous RFCs referencing those works. The probability that discussion of expanding our threat model leads to a change in that rough consensus seems highly remote.
It is not clear whether the IETF will reach rough consensus on a description of such an expanded threat model, but we further argue that ignoring this aspect of deployed reality may not bode well for Internet protocol development.
Absent such an expanded threat model, we expect to see a growing mismatch between expectations and deployment reality for some Internet protocols.
This internet-draft is a submission to the IAB's DEDR workshop and is not intended to become an RFC.
We are saddened by, and apologise for, the somewhat dystopian impression that this document may impart - hopefully, there's a bit of hope at the end;-)
In this section we describe a few documented examples of deliberate adversarial behaviour by applications that could affect Internet protocol development. The adversarial behaviours described below involve various kinds of attack, ranging from simple fraud, to credential theft, surveillance and contributing to DDoS attacks. This is not intended to be a comprehensive or complete survey, but to motivate us to consider deliberate adversarial behaviour by applications.
Finally, we note that while we have these examples of deliberate adversarial behaviour, there are also many examples of application developers doing their best to protect the security and privacy of their users or customers. That mirrors the situation today, where we need to consider in-network actors as potential adversaries despite the many examples of network operators who do act primarily in the best interests of their users.
Despite the best efforts of curators, so-called App-Stores frequently distribute malware of many kinds, and one recent study [curated] claims that simple obfuscation enables malware to avoid detection by even sophisticated operators. Given the scale of these deployments, even a small percentage of malware-infected applications can affect huge numbers of people.
Virtual private networks (VPNs) are supposed to hide user traffic to various degrees, depending on the particular technology chosen by the VPN provider. However, not all VPNs do what they say; some, for example, misrepresent the countries in which they provide vantage points [vpns].
What we normally might consider network devices, such as home routers, also run applications that can end up being adversarial, for example DNS and DHCP attacks from home routers or other devices in the home. One study [home] reports on a 2011 attack that affected 4.5 million DSL modems in Brazil. The absence of software updates [RFC8240] has been a major cause of such issues, to the point where it seems warranted to consider the lack of updates as intentional behaviour.
Tracking of users in order to support advertising-based business models is ubiquitous on the Internet today. HTTP header fields (such as cookies) are commonly used for such tracking, as are structures within the content of HTTP responses, such as links to 1x1 pixel images and (ab)use of JavaScript APIs offered by browsers [tracking].
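To illustrate how little machinery such tracking needs, the deliberately minimal sketch below (in Python) shows a hypothetical third party at tracker.example serving a 1x1 GIF, setting a long-lived cookie, and thereby linking a user's visits across every site that embeds its pixel. The domain, port, cookie and parameter names are all illustrative, not drawn from any real tracker.

   # Minimal sketch of cookie-plus-pixel tracking (hypothetical
   # tracker.example service; all names are illustrative).
   from http.server import BaseHTTPRequestHandler, HTTPServer
   from http.cookies import SimpleCookie
   from urllib.parse import urlparse, parse_qs
   import uuid

   # Smallest valid single-pixel transparent GIF (43 bytes).
   PIXEL_GIF = (b'GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00'
                b'\xff\xff\xff!\xf9\x04\x01\x00\x00\x00\x00,'
                b'\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;')

   class Tracker(BaseHTTPRequestHandler):
       def do_GET(self):
           cookies = SimpleCookie(self.headers.get('Cookie', ''))
           # Re-use the identifier if the browser sent one, else mint a
           # new one; the "site" query parameter tells the tracker which
           # page embedded the pixel.
           uid = cookies['uid'].value if 'uid' in cookies else uuid.uuid4().hex
           site = parse_qs(urlparse(self.path).query).get('site', ['?'])[0]
           print(f'user {uid} seen on {site}')  # the cross-site profile
           self.send_response(200)
           self.send_header('Content-Type', 'image/gif')
           self.send_header('Set-Cookie', f'uid={uid}; Max-Age=31536000')
           self.end_headers()
           self.wfile.write(PIXEL_GIF)

   # Each embedding page would carry something like:
   #   <img src="http://tracker.example:8080/p.gif?site=news.example">
   if __name__ == '__main__':
       HTTPServer(('', 8080), Tracker).serve_forever()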
While some people may be sanguine about this kind of tracking, others consider this behaviour unwelcome when or if they are informed that it happens [attitude], though the evidence here seems somewhat harder to interpret and many studies (that we have found to date) involve small numbers of users. Historically, browsers have not made this kind of tracking visible and have enabled it by default, though some recent browser versions are starting to enable visibility and blocking of some kinds of tracking. Browsers are also increasingly imposing more stringent requirements on plug-ins for varied security reasons.
Many web sites today provide some form of privacy policy and terms of service that are known to be mostly unread [unread]. This implies that, legal fiction aside, users of those sites have not in reality agreed to the specific terms published, and so are highly exposed to being exploited by web sites; [cambridge] is a recent well-publicised case where a service provider abused the data of 87 million users via a partnership. While many web site operators claim that they care deeply about privacy, it seems prudent to assume that some (or most?) do not in fact care about user privacy, or at least not in ways with which many of their users would agree. And of course, today's web sites are mostly fairly complex web applications, no longer static sets of HTML files, so calling these "web sites" is perhaps a misnomer; considered as web applications, it seems clear that many exist that are adversarial.
Some mail user agents (MUAs) render HTML content by default (with a subset not allowing that to be turned off, perhaps particularly on mobile devices) and thus enable the same kind of adversarial tracking seen on the web. Attempts at such intentional tracking are seen many times per day by email users; in one study [mailbug], the authors estimated that 62% of leakage to third parties was intentional, for example where the leaked data included a hash of the recipient's email address.
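The kind of intentional leak estimated in [mailbug] can be as simple as the sketch below: a remote image URL embedding a hash of the recipient's address, so that whoever serves the image learns exactly who opened the mail. The tracker.example name is hypothetical; the hashing pattern is the point.

   # Minimal sketch of the leak pattern described above; the URL and
   # parameter name are illustrative.
   import hashlib

   def tracking_pixel_url(recipient: str) -> str:
       # A hash of the lowercased address is weak pseudonymisation: it
       # is trivially reversible for anyone holding a list of addresses.
       digest = hashlib.md5(recipient.strip().lower().encode()).hexdigest()
       return f'https://tracker.example/open.gif?r={digest}'

   # e.g. https://tracker.example/open.gif?r=<32 hex chars>
   print(tracking_pixel_url('alice@example.com'))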
There have been examples of so-called "smart" televisions spying on their owners without permission, and one survey of user attitudes [smarttv] found "broad agreement was that it is unacceptable for the data to be repurposed or shared", although the level of user understanding may be questionable. What is clear, though, is that such devices generally have not provided controls that would allow their owners to meaningfully decide whether or not to share such data.
Many so-called Internet of Things (IoT) devices ("so-called" as all devices were already things:-) have been found to be extremely deficient when their security and privacy aspects were analysed, for example children's toys [toys]. While in some cases this may be due to incompetence rather than deliberately adversarial behaviour, the levels of incompetence frequently seen imply that it is valid to consider such cases as not being accidental.
Not all adversarial behaviour by applications is deliberate; some is likely due to various levels of carelessness (some quite understandable, others not) and/or due to erroneous assumptions about the environments in which those applications (now) run. We very briefly list some such cases:
As we believe useful conclusions in this space require community consensus, we won't offer definitive descriptions of an expanded threat model but we will call out some potential directions that could be explored at the DEDR workshop and thereafter, if there is interest in this topic.
It may be time for the IETF to develop a BCP for privacy considerations, possibly starting from [RFC6973].
[I-D.nottingham-for-the-users] argues that, in relevant cases where there are conflicting requirements, the "IETF considers end users as its highest priority concern." Doing so seems consistent with the expanded threat model being argued for here, so may indicate that a BCP in that space could also be useful.
Protocol developers and those implementing and deploying Internet technologies are typically most interested in a few specific use-cases for which they need solutions. Expanding our threat model to include adversarial application behaviours [abusecases] seems likely to call for significant attention to be paid to potential abuses of whatever new or re-purposed technology is being considered.
It could be that this discussion demonstrates that it is timely to reconsider some protocol design "lore" as for example is done in [I-D.iab-protocol-maintenance]. More specifically, protocol extensibility mechanisms may inadvertently create vectors for abuse-cases, given that designers cannot fully analyse their impact at the time a new protocol is defined or standardised. One might conclude that a lack of extensibility could be a virtue for some new protocols, in contrast to earlier assumptions.
Sophisticated users can deal with some adversarial behaviours in applications by using different instances of those applications, for example, differently configured web browsers for use in different contexts. Applications (including web browsers) and operating systems are also building in isolation via use of different processes or sandboxing. Protocol artefacts that relate to uses of such isolation mechanisms might be worth considering. To an extent, the IETF has in practice already recognised some of these issues as being in-scope, e.g. when considering the linkability issues with mechanisms such as TLS session tickets, or QUIC connection identifiers.
Certificate Transparency (CT) [RFC6962] has been an effective countermeasure for X.509 certificate mis-issuance, which used to be a known application-layer misbehaviour in the public web PKI. While the context in which CT operates is very constrained (essentially to the public CAs trusted by web browsers), similar approaches could be useful for other protocols or technologies.
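Part of what makes CT attractive as a model is how simple its monitoring interface is. The sketch below (with a hypothetical log URL) fetches a log's signed tree head via the well-known endpoint defined in [RFC6962], Section 4.3; from successive tree heads a monitor can demand consistency proofs so that the log cannot quietly rewrite history.

   # Minimal sketch of the RFC 6962 monitoring interface; the log URL
   # is hypothetical, but any RFC 6962 log exposes this endpoint.
   import json
   import urllib.request

   LOG = 'https://ct.example.net'  # hypothetical CT log

   def get_sth(log_base_url: str) -> dict:
       # /ct/v1/get-sth returns tree_size, timestamp, sha256_root_hash
       # and tree_head_signature as JSON (RFC 6962, Section 4.3).
       with urllib.request.urlopen(f'{log_base_url}/ct/v1/get-sth') as resp:
           return json.load(resp)

   sth = get_sth(LOG)
   # A monitor records successive STHs and checks consistency proofs
   # (/ct/v1/get-sth-consistency) between them.
   print(sth['tree_size'], sth['sha256_root_hash'])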
In addition, legislative requirements such as those imposed by the GDPR for subject access to data could lead to a desire to handle internal data structures and databases in ways that are reminiscent of CT, though clearly with significant authorisation being required and without the append-only nature of a CT log.
As recommended in [RFC6973], data minimisation and additional encryption are likely to be helpful - if applications never see data, or a cleartext form of data, then they should have a harder time misbehaving. Similarly, not adding new long-term identifiers, and not exposing existing ones, would seem helpful.
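A toy illustration of minimisation, with purely illustrative field names: derive the one attribute a peer actually needs rather than shipping the raw record, so an adversarial application on the receiving side has less to misuse.

   # Minimal sketch of data minimisation; the record and its fields
   # are illustrative.
   from datetime import date

   record = {'name': 'Alice Example',
             'dob': date(1990, 5, 1),
             'email': 'alice@example.com'}

   def minimised_view(rec: dict) -> dict:
       today = date.today()
       age = (today.year - rec['dob'].year
              - ((today.month, today.day) < (rec['dob'].month, rec['dob'].day)))
       # The peer only ever learns "is an adult", never the birth date,
       # name or email address.
       return {'over_18': age >= 18}

   print(minimised_view(record))  # {'over_18': True}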
The Same-Origin Policy (SOP) [RFC6454] perhaps already provides an example of how going beyond the RFC 3552 threat model can be useful. Arguably, the existence of the SOP demonstrates that at least web browsers already consider the 3552 model as being too limited. (Clearly, differentiating between same and not-same origins implicitly assumes that some origins are not as trustworthy as others.)
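The origin comparison at the heart of the SOP is tiny; a sketch of the [RFC6454] scheme/host/port triple follows (the URIs are illustrative).

   # Minimal sketch of the RFC 6454 notion of origin: two URIs share an
   # origin only if scheme, host and port all match.
   from urllib.parse import urlsplit

   DEFAULT_PORTS = {'http': 80, 'https': 443}

   def origin(uri: str) -> tuple:
       parts = urlsplit(uri)
       port = parts.port or DEFAULT_PORTS.get(parts.scheme)
       return (parts.scheme, parts.hostname, port)

   def same_origin(a: str, b: str) -> bool:
       return origin(a) == origin(b)

   print(same_origin('https://example.com/a', 'https://example.com:443/b'))  # True
   print(same_origin('https://example.com/', 'https://sub.example.com/'))    # False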
The TLS protocol [RFC8446] now supports the use of GREASE [I-D.ietf-tls-grease] as a way to mitigate on-path ossification. While this technique is not likely to prevent any deliberate misbehaviours, it may provide a proof-of-concept that network protocol mechanisms can have an impact in this space, if we spend the time to try to analyse the incentives of the various parties.
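The mechanism is simple enough to show. The sketch below generates the sixteen 16-bit code points reserved in [I-D.ietf-tls-grease]; a client mixes one of these into, say, its ClientHello cipher suites, and a correct peer must ignore it, so implementations and middleboxes that choke on unknown values get caught early rather than ossifying the extension space.

   # Minimal sketch of GREASE value generation for TLS cipher suites;
   # the reserved values all have the pattern 0x?A?A.
   import random

   # 0x0A0A, 0x1A1A, ..., 0xFAFA
   GREASE_VALUES = [(b << 8) | b for b in range(0x0A, 0x100, 0x10)]

   def grease_cipher_suite() -> int:
       # A client inserts one of these alongside its real cipher suites;
       # a conforming server simply ignores it.
       return random.choice(GREASE_VALUES)

   print(hex(grease_cipher_suite()))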
At this stage, we don't think it appropriate to claim that any strong conclusion can be reached based on the above. We do, however, claim that this is a topic that could be worth discussing at the DEDR workshop and elsewhere.
This draft is all about security and privacy.
Encryption is one of the most effective tools in countering network-based attackers and will also have a role in protecting against adversarial applications. However, today many existing tools for countering adversarial applications assume they can inspect network traffic to or from potentially adversarial applications. These facts of course cause tensions (e.g. see [RFC8404]). Expanding our threat model could possibly help reduce some of those tensions, if it leads to the development of protocols that make exploitation harder or more transparent for adversarial applications.
There are no IANA considerations.
We'll happily ack anyone who's interested enough to read and comment on this. With no implication that they agree with some or all of the above, thanks to Christian Huitema and Daniel Kahn Gillmor for comments on an earlier version of the text.