HTTP                                                          M. Nottingham
Internet-Draft                                              October 21, 2018
Obsoletes: 3205 (if approved)
Intended status: Best Current Practice
Expires: April 24, 2019
Building Protocols with HTTP
draft-ietf-httpbis-bcp56bis-07
HTTP is often used as a substrate for other application protocols (a.k.a. HTTP-based APIs). This document specifies best practices for such protocols’ use of HTTP when they are defined for diverse implementation and broad deployment (e.g., in standards efforts).
Discussion of this draft takes place on the HTTP working group mailing list (ietf-http-wg@w3.org), which is archived at https://lists.w3.org/Archives/Public/ietf-http-wg/.
Working Group information can be found at http://httpwg.github.io/; source code and issues list for this draft can be found at https://github.com/httpwg/http-extensions/labels/bcp56bis.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on April 24, 2019.
Copyright (c) 2018 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
HTTP [RFC7230] is often used as a substrate for applications other than Web browsing; this is sometimes referred to as creating “HTTP-based APIs”, or just “HTTP APIs”. This is done for a variety of reasons, including:
These protocols are often ad hoc; they are intended for deployment by only one or a few servers, and for consumption by a limited set of clients. Perhaps because of the factors cited above, a body of practices and tools has arisen around defining HTTP-based APIs that favours these conditions.
However, when such an application has multiple, separate implementations of the server component, is deployed on multiple uncoordinated servers, and is consumed by diverse clients – as is often the case for standards efforts to define new HTTP APIs – tools and practices intended for limited deployment can become unsuitable.
For example, because implementations (both client and server) will implement and evolve at different paces, a HTTP-based API might need to more carefully consider how extensibility of the service will be handled, and how different deployment requirements will be accommodated.
More generally, application protocols using HTTP face a number of design decisions, including:
This document contains best current practices regarding the use of HTTP by applications other than Web browsing. Section 2 defines what applications it applies to; Section 3 surveys the properties of HTTP that are important to preserve, and Section 4 conveys best practices for those applications that do use HTTP.
It is written primarily to guide IETF efforts to define application protocols using HTTP for deployment on the Internet, but might be applicable in other situations. Note that the requirements herein do not necessarily apply to the development of generic HTTP extensions.
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.
Different applications have different goals when using HTTP. In this document, we say an application is “using HTTP” when any of the following conditions are true:
When an application is using HTTP, all of the requirements of the HTTP protocol suite are in force (including but not limited to [RFC7230], [RFC7231], [RFC7232], [RFC7233], [RFC7234], [RFC7235] and [RFC7540]).
An application might not be using HTTP according to this definition, but still relying upon the HTTP specifications in some manner. For example, an application might wish to avoid re-specifying parts of the message format, but change others; or, it might want to use a different set of methods.
Such applications are referred to as “protocols based upon HTTP” in this document. These have more freedom to modify protocol operations, but are also likely to lose at least a portion of the benefits outlined above, as most HTTP implementations won’t be easily adaptable to these changes, and as the protocol diverges from HTTP, the benefit of mindshare will be lost.
Protocols that are based upon HTTP MUST NOT reuse HTTP’s URL schemes, transport ports, ALPN protocol IDs or IANA registries; rather, they are encouraged to establish their own.
There are many ways that applications using HTTP are defined and deployed, and sometimes they are brought to the IETF for standardisation. In that process, what might be workable for deployment in a limited fashion isn’t appropriate for standardisation and the corresponding broader deployment.
This section examines the facets of the protocol that are important to preserve in these situations.
When writing an application’s specification, it’s often tempting to specify exactly how HTTP is to be implemented, supported and used.
However, this can easily lead to an unintended profile of HTTP’s behaviour. For example, it’s common to see specifications with language like this:
A `POST` request MUST result in a `201 Created` response.
This forms an expectation in the client that the response will always be 201 Created, when in fact there are a number of reasons why the status code might differ in a real deployment. If the client does not anticipate this, the application’s deployment is brittle.
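For illustration only (the URL, media type and specific status code are hypothetical, not defined by this document), a client written against such a specification might still receive an exchange like the following when an intermediary or overloaded server answers for the origin:

  POST /widgets/ HTTP/1.1
  Host: example.com
  Content-Type: application/example-widget+json

  [request body here]

  HTTP/1.1 503 Service Unavailable
  Retry-After: 120

Both messages are valid HTTP, but a client that expects only 201 (Created) will mishandle the response.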
Much of the value of HTTP is in its generic semantics – that is, the protocol elements defined by HTTP are potentially applicable to every resource, not specific to a particular context. Application-specific semantics are expressed in the payload; mostly, in the body, but also in header fields.
This allows a HTTP message to be examined by generic HTTP software (e.g., HTTP servers, intermediaries, client implementations), and its handling to be correctly determined. It also allows people to leverage their knowledge of HTTP semantics without special-casing them for a particular application.
Therefore, applications that use HTTP MUST NOT re-define, refine or overlay the semantics of defined protocol elements. Instead, they should focus their specifications on protocol elements that are specific to that application; namely their HTTP resources.
See Section 4.2 for details.
Another common practice is assuming that the HTTP server’s name space (or a portion thereof) is exclusively for the use of a single application. This effectively overlays special, application-specific semantics onto that space, and precludes other applications from using it.
As explained in [RFC7320], such “squatting” on a part of the URL space by a standard usurps the server’s authority over its own resources, can cause deployment issues, and is therefore bad practice in standards.
Instead of statically defining URL components like paths, it is RECOMMENDED that applications using HTTP define links in payloads, to allow flexibility in deployment.
Using runtime links in this fashion has a number of other benefits – especially when an application is to have multiple implementations and/or deployments (as is often the case for those that are standardised).
For example, navigating with a link allows a request to be routed to a different server without the overhead of a redirection, thereby supporting deployment across machines well.
It also becomes possible to “mix and match” different applications on the same server, and offers a natural mechanism for extensibility, versioning and capability management, since the document containing the links can also contain information about their targets.
Using links also offers a form of cache invalidation that’s seen on the Web; when a resource’s state changes, the application can change its link to it so that a fresh copy is always fetched.
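For example (the media type, link relation and URLs below are illustrative, not defined by this document), a deployment might advertise the locations of its resources in an entry-point document and/or in Link header fields:

  HTTP/1.1 200 OK
  Content-Type: application/example+json
  Link: </widgets/>; rel="example-widget-collection"

  {
    "widgets": "https://example.com/widgets/",
    "status": "https://status.example.net/"
  }

Because clients discover these URLs at runtime, a deployment can relocate the linked resources (even onto other servers) without any change to clients.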
HTTP offers a number of features to applications, such as:
Applications that use HTTP are encouraged to utilise the various features that the protocol offers, so that their users receive the maximum benefit from it, and to allow it to be deployed in a variety of situations. This document does not require specific features to be used, since the appropriate design tradeoffs are highly specific to a given situation. However, following the practices in Section 4 is a good starting point.
This section contains best practices regarding the use of HTTP by applications, including practices for specific HTTP protocol elements.
When specifying the use of HTTP, an application SHOULD use [RFC7230] as the primary reference; it is not necessary to reference all of the specifications in the HTTP suite unless there are specific reasons to do so (e.g., a particular feature is called out).
Applications using HTTP SHOULD NOT specify a minimum version of HTTP to be used; because it is a hop-by-hop protocol, a HTTP connection can be handled by implementations that are not controlled by the application; for example, proxies, CDNs, firewalls and so on. Requiring a particular version of HTTP makes it difficult to use in these situations, and harms interoperability for little reason (since HTTP’s semantics are stable between protocol versions).
However, if an application’s deployment would benefit from the use of a particular version of HTTP (for example, HTTP/2’s multiplexing), this SHOULD be noted.
Applications using HTTP MUST NOT specify a maximum version, to preserve the protocol’s ability to evolve.
When specifying examples of protocol interactions, applications SHOULD document both the request and response messages, with full headers, preferably in HTTP/1.1 format. For example:
  GET /thing HTTP/1.1
  Host: example.com
  Accept: application/things+json
  User-Agent: Foo/1.0

  HTTP/1.1 200 OK
  Content-Type: application/things+json
  Content-Length: 500
  Server: Bar/2.2

  [payload here]
Applications that use HTTP should focus on defining the following application-specific protocol elements:
By composing these protocol elements, an application can define a set of resources, identified by link relations, that implement specified behaviours, including:
For example, an application might specify:
Resources linked to with the "example-widget" link relation type are Widgets. The state of a Widget can be fetched in the "application/example-widget+json" format, and can be updated by PUT to the same link. Widget resources can be deleted.

The "Example-Count" response header field on Widget representations indicates how many Widgets are held by the sender.

The "application/example-widget+json" format is a JSON [RFC8259] format representing the state of a Widget. It contains links to related information in the link indicated by the Link header field value with the "example-other-info" link relation type.
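A sketch of an exchange that a deployment of this hypothetical specification might produce (the URL and values shown are illustrative):

  GET /widgets/1 HTTP/1.1
  Host: example.com
  Accept: application/example-widget+json

  HTTP/1.1 200 OK
  Content-Type: application/example-widget+json
  Example-Count: 42
  Link: </widgets/1/info>; rel="example-other-info"

  [Widget state here]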
HTTP does not mandate some behaviours that have nevertheless become very common; if these are not explicitly specified by applications using HTTP, there may be confusion and interoperability problems. This section recommends default handling for these mechanisms.
In general, applications using HTTP ought to align their usage as closely as possible with Web browsers, to avoid interoperability issues when they are used. See Section 4.12.
If an application using HTTP has browser compatibility as a goal, client interaction ought to be defined in terms of [FETCH], since that is the abstraction that browsers use for HTTP; it enforces many of these best practices.
Applications using HTTP MUST NOT require HTTP features that are usually negotiated to be supported. For example, requiring that clients support responses with a certain content-encoding ([RFC7231], Section 3.1.2.2) instead of negotiating for it ([RFC7231], Section 5.3.4) means that otherwise conformant clients cannot interoperate with the application. Applications MAY encourage the implementation of such features, though.
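For example, rather than mandating gzip support, an application would let the client advertise it and the server apply it only when offered; a minimal sketch (the resource and media type are hypothetical):

  GET /widgets/1 HTTP/1.1
  Host: example.com
  Accept-Encoding: gzip

  HTTP/1.1 200 OK
  Content-Type: application/example-widget+json
  Content-Encoding: gzip
  Vary: Accept-Encoding

  [compressed payload here]

A client that does not send "Accept-Encoding: gzip" still interoperates; it simply receives the uncompressed representation.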
In HTTP, URLs are opaque identifiers under the control of the server. As outlined in [RFC7320], standards cannot usurp this space, since it might conflict with existing resources, and constrain implementation and deployment.
In other words, applications that use HTTP shouldn’t associate application semantics with specific URL paths on arbitrary servers. Doing so inappropriately conflates the identity of the resource (its URL) with the capabilities that resource supports, bringing about many of the same interoperability problems that [RFC4367] warns of.
For example, specifying that a “GET to the URL /foo retrieves a bar document” is bad practice. Likewise, specifying “The widget API is at the path /bar” violates [RFC7320].
Instead, applications that use HTTP are encouraged to ensure that URLs are discovered at runtime, allowing HTTP-based services to describe their own capabilities. One way to do this is to use typed links [RFC8288] to convey the URIs that are in use, as well as the semantics of the resources that they identify. See Section 4.2 for details.
Generally, a client will begin interacting with a given application server by requesting an initial document that contains information about that particular deployment, potentially including links to other relevant resources.
Applications that use HTTP are encouraged to allow an arbitrary URL to be used as that entry point. For example, rather than specifying “the initial document is at ‘/foo/v1’”, they should allow a deployment to use any URL as the entry point for the application.
In cases where doing so is impractical (e.g., it is not possible to convey a whole URL, but only a hostname) standard applications that use HTTP can request a well-known URL [RFC5785] as an entry point.
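For example (the "example" well-known suffix shown here is hypothetical and would itself need to be registered per [RFC5785]), a client that only knows a hostname could bootstrap by fetching a well-known entry point that links to the deployment’s actual resources:

  GET /.well-known/example HTTP/1.1
  Host: example.com

  HTTP/1.1 200 OK
  Content-Type: application/example+json

  {
    "widgets": "https://example.com/custom-path/widgets/"
  }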
Applications that use HTTP will typically employ the “http” and/or “https” URL schemes. “https” is RECOMMENDED to provide authentication, integrity and confidentiality, as well as mitigate pervasive monitoring attacks [RFC7258].
However, application-specific schemes can be defined as well.
When defining an URL scheme for an application using HTTP, there are a number of tradeoffs and caveats to keep in mind:
See [RFC7595] for more information about minting new URL schemes.
Applications that use HTTP can use the applicable default port (80 for HTTP, 443 for HTTPS), or they can be deployed upon other ports. This decision can be made at deployment time, or might be encouraged by the application’s specification (e.g., by registering a port for that application).
If a non-default port is used, it needs to be reflected in the authority of all URLs for that resource; the only mechanism for changing a default port is changing the scheme (see Section 4.4.2).
Using a port other than the default has privacy implications (i.e., the protocol can now be distinguished from other traffic), as well as operability concerns (as some networks might block or otherwise interfere with it). Privacy implications should be documented in Security Considerations.
See [RFC7605] for further guidance.
Applications that use HTTP MUST confine themselves to using registered HTTP methods such as GET, POST, PUT, DELETE, and PATCH.
New HTTP methods are rare; they are required to be registered in the HTTP Method Registry with IETF Review (see [RFC7231]), and are also required to be generic. That means that they need to be potentially applicable to all resources, not just those of one application.
While historically some applications (e.g., [RFC4791]) have defined non-generic methods, [RFC7231] now forbids this.
When authors believe that a new method is required, they are encouraged to engage with the HTTP community early, and document their proposal as a separate HTTP extension, rather than as part of an application’s specification.
GET is one of the most common and useful HTTP methods; its retrieval semantics allow caching and side-effect-free linking, and form the basis of many of the benefits of using HTTP.
A common use of GET is to perform queries, often using the query component of the URL; this is a familiar pattern from Web browsing, and the results can be cached, improving efficiency of an often expensive process.
In some cases, however, GET might be unwieldy for expressing queries, because of the limited syntax of the URL; in particular, if binary data forms part of the query terms, it needs to be encoded to conform to URL syntax.
While this is not an issue for short queries, it can become one for larger query terms, or ones which need to sustain a high rate of requests. Additionally, some HTTP implementations limit the size of URLs they support – although modern HTTP software has much more generous limits than previously (typically, considerably more than the 8000 octets recommended as a minimum by [RFC7230], Section 3.1.1).
In these cases, an application using HTTP might consider using POST to express queries in the request body; doing so avoids encoding overhead and URL length limits in implementations. However, it should be noted that doing so sacrifices the benefits of GET, such as caching and linking to query results. Therefore, applications using HTTP that need to allow POST queries ought to consider allowing both methods.
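A sketch of the two styles (the paths, query parameter and media types are hypothetical):

  GET /widgets/?colour=blue HTTP/1.1
  Host: example.com
  Accept: application/example-widget+json

  POST /widgets/search HTTP/1.1
  Host: example.com
  Content-Type: application/example-query+json

  {"colour": "blue", "shape": "round"}

The GET form is cacheable and its results can be linked to; the POST form accommodates large or binary query terms.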
Applications that use HTTP SHOULD NOT define GET requests to have side effects, since implementations can and do retry HTTP GET requests that fail.
Finally, note that while HTTP allows GET requests to have a body syntactically, this is done only to allow parsers to be generic; as per [RFC7231], Section 4.3.1, a body on a GET has no meaning, and will be either ignored or rejected by generic HTTP software.
The OPTIONS method was defined for metadata retrieval, and is used both by WebDAV [RFC4918] and CORS [FETCH]. Because HTTP-based APIs often need to retrieve metadata about resources, it is often considered for their use.
However, OPTIONS does have significant limitations:
Instead of OPTIONS, one of these alternative approaches might be more appropriate:
The primary function of a HTTP status code is to convey semantics for the benefit of generic HTTP software, not to convey application-specific semantics.
In particular, status codes are often generated or overwritten by intermediaries, as well as by server and client implementations; for example, when network errors are encountered, when a captive portal is present, when an implementation is overloaded, or when it thinks it is under attack. As a result, the status code that a server-side application generates and the one that the client software receives often differ.
This means that status codes are not a reliable way to carry application-specific signals. Specifying that a particular status code has a specific meaning in the context of an application can have unintended side effects; if that status code is generated by a generic HTTP component, clients can be led to believe that the application is in a state that wasn’t intended.
Instead, applications using HTTP should specify the implications of general classes of responses (e.g., “successful response” for 2xx; “client error” for 4xx and “server error” for 5xx), conveying any application-specific information in the message body and/or HTTP header fields, not the status code. [RFC7807] provides one way for applications using HTTP to do so for error conditions.
There are limited exceptions to this; for example, applications might use 201 (Created) or 404 (Not Found) to convey application semantics that are compatible with the generic HTTP semantics of those status codes. In general, though, applications should resist the temptation to map their semantics into fine-grained status codes.
Because the set of registered HTTP status codes can expand, applications using HTTP should explicitly point out that clients ought to be able to handle all applicable status codes gracefully (i.e., falling back to the generic n00 semantics of a given status code; e.g., 499 can be safely handled as 400 by clients that don’t recognise it). This is preferable to creating a “laundry list” of potential status codes, since such a list is never complete.
Applications using HTTP MUST NOT re-specify the semantics of HTTP status codes, even if it is only by copying their definition. They MUST NOT require specific reason phrases to be used; the reason phrase has no function in HTTP, is not guaranteed to be preserved by implementations, and is not carried at all in the [RFC7540] message format.
Applications that use HTTP MUST only use registered HTTP status codes. As with methods, new HTTP status codes are rare, and required (by [RFC7231]) to be registered with IETF review. Similarly, HTTP status codes are generic; they are required (by [RFC7231]) to be potentially applicable to all resources, not just to those of one application.
When authors believe that a new status code is required, they are encouraged to engage with the HTTP community early, and document their proposal as a separate HTTP extension, rather than as part of an application’s specification.
The 3xx series of status codes specified in [RFC7231], Section 6.4 are used to direct the user agent to another resource to satisfy the request. The most common of these are 301, 302, 307 and 308 ([RFC7538]), all of which use the Location response header field to indicate where the client should send the request to.
There are two ways in which these status codes differ:
This table summarises their relationships:
|                                                      | Permanent | Temporary |
| ---------------------------------------------------- | --------- | --------- |
| Allows changing the request method from POST to GET  | 301       | 302       |
| Does not allow changing the request method            | 308       | 307       |
As noted in [RFC7231], a user agent is allowed to automatically follow a 3xx redirect that has a Location response header field, even if it doesn’t understand the semantics of the specific status code. However, it isn’t required to do so; therefore, if an application using HTTP desires redirects to be automatically followed, it needs to explicitly specify the circumstances when this is required.
Applications using HTTP SHOULD specify that 301 and 302 responses change the subsequent request method from POST (but no other method) to GET, to be compatible with browsers.
Generally, when a redirected request is made, its header fields are copied from the original request’s. However, they can be modified by various mechanisms; e.g., sent Authorization ([RFC7235]) and Cookie ([RFC6265]) headers will change if the origin (and sometimes path) of the request changes. Applications using HTTP SHOULD specify if any request headers need to be modified or removed upon a redirect; however, this behaviour cannot be relied upon, since a generic client (like a browser) will be unaware of such requirements.
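For example, a deployment that has moved a resource permanently and wants the (non-GET) request method preserved might respond as follows (the URLs are illustrative):

  POST /widgets/ HTTP/1.1
  Host: example.com
  Content-Type: application/example-widget+json

  [request body here]

  HTTP/1.1 308 Permanent Redirect
  Location: https://widgets.example.net/widgets/

A client that follows the redirect repeats the POST at the new URL; had the response been 301, a browser would retry it as a GET.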
Applications that use HTTP MAY define new HTTP header fields. Typically, using HTTP header fields is appropriate in a few different situations:
New header fields MUST be registered, as per [RFC7231] and [RFC3864].
See [RFC7231], Section 8.3.1 for guidelines to consider when minting new header fields. [I-D.ietf-httpbis-header-structure] provides a common structure for new header fields, and avoids many issues in their parsing and handling; it is RECOMMENDED that new header fields use it.
It is RECOMMENDED that header field names be short (even when HTTP/2 header compression is in effect, there is an overhead) but appropriately specific. In particular, if a header field is specific to an application, an identifier for that application SHOULD form a prefix to the header field name, separated by a “-“.
For example, if the “example” application needs to create three headers, they might be called “example-foo”, “example-bar” and “example-baz”. Note that the primary motivation here is to avoid consuming more generic header names, not to reserve a portion of the namespace for the application; see [RFC6648] for related considerations.
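For example (these field names and values are purely hypothetical), such fields might use the list and dictionary structures from [I-D.ietf-httpbis-header-structure]:

  Example-Foo: sugar, tea, rum
  Example-Bar: width=200, depth=3
  Example-Baz: "hello world"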
The semantics of existing HTTP header fields MUST NOT be re-defined without updating their registration or defining an extension to them (if allowed). For example, an application using HTTP cannot specify that the Location header has a special meaning in a certain context.
See Section 4.9 for the interaction between headers and HTTP caching; in particular, request headers that are used to “select” a response have impact there, and need to be carefully considered.
See Section 4.10 for considerations regarding header fields that carry application state (e.g., Cookie).
There are many potential formats for payloads; for example, JSON [RFC8259], XML [XML], and CBOR [RFC7049]. Best practices for their use are out of scope for this document.
Applications SHOULD register distinct media types for each format they define; this makes it possible to identify them unambiguously and negotiate for their use. See [RFC6838] for more information.
HTTP caching [RFC7234] is one of the primary benefits of using HTTP for applications; it provides scalability, reduces latency and improves reliability. Furthermore, HTTP caches are readily available in browsers and other clients, networks as forward and reverse proxies, Content Delivery Networks and as part of server software.
Assigning even a short freshness lifetime ([RFC7234], Section 4.2) – e.g., 5 seconds – allows a response to be reused to satisfy multiple clients, and/or a single client making the same request repeatedly. In general, if it is safe to reuse something, consider assigning a freshness lifetime; cache implementations take active measures to remove content intelligently when they are out of space, so “it will fill up the cache” is not a valid concern.
The most common method for specifying freshness is the max-age response directive ([RFC7234], Section 5.2.2.8). The Expires header ([RFC7234], Section 5.3) can also be used, but it is not necessary to specify it; all modern cache implementations support Cache-Control, and specifying freshness as a delta is both more convenient in most cases, and less error-prone.
Understand that stale responses (e.g., one with “Cache-Control: max-age=0”) can be reused when the cache is disconnected from the origin server; this can be useful for handling network issues. See [RFC7234], Section 4.2.4, and also [RFC5861] for additional controls over stale content.
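For example (the values are illustrative), a response might combine a freshness lifetime with the [RFC5861] extension directives to explicitly permit limited reuse of stale content:

  HTTP/1.1 200 OK
  Content-Type: application/example+json
  Cache-Control: max-age=600, stale-while-revalidate=60, stale-if-error=1200

  [content]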
Stale responses can be refreshed by assigning a validator, saving both transfer bandwidth and latency for large responses; see [RFC7232].
If an application defines a request header field that might be used by a server to change the response’s headers or body, authors should point out that this has implications for caching; in general, such resources need to either make their responses uncacheable (e.g., with the “no-store” cache-control directive defined in [RFC7234], Section 5.2.2.3) or consistently send the Vary response header ([RFC7231], Section 7.1.4).
For example, this response:
  HTTP/1.1 200 OK
  Content-Type: application/example+xml
  Cache-Control: max-age=60
  ETag: "sa0f8wf20fs0f"
  Vary: Accept-Encoding

  [content]
can be stored for 60 seconds by both private and shared caches, can be revalidated with If-None-Match, and varies on the Accept-Encoding request header field.
In some situations, responses without explicit cache directives (e.g., Cache-Control or Expires) will be stored and served using a heuristic freshness lifetime; see [RFC7234], Section 4.2.2. As the heuristic is not under control of the application, it is generally preferable to set an explicit freshness lifetime.
If caching of a response is not desired, the appropriate response directive is “Cache-Control: no-store”. This only need be sent in situations where the response might otherwise be cached; see [RFC7234], Section 3. Note that “Cache-Control: no-cache” allows a response to be stored, just not reused by a cache without validation; it does not prevent caching (despite its name).
For example, this response cannot be stored or reused by a cache:
  HTTP/1.1 200 OK
  Content-Type: application/example+xml
  Cache-Control: no-store

  [content]
When an application has a need to express a lifetime that’s separate from the freshness lifetime, this should be expressed separately, either in the response’s body or in a separate header field. When this happens, the relationship between HTTP caching and that lifetime needs to be carefully considered, since the response will be used as long as it is considered fresh.
Like other functions, HTTP caching is generic; it does not have knowledge of the application in use. Therefore, caching extensions need to be backwards-compatible, as per [RFC7234], Section 5.2.3.
Applications that use HTTP MAY use stateful cookies [RFC6265] to identify a client and/or store client-specific data to contextualise requests.
When used, it is important to carefully specify the scoping and use of cookies; if the application exposes sensitive data or capabilities (e.g., by acting as an ambient authority), exploits are possible. Mitigations include using a request-specific token to assure the intent of the client.
Applications MUST NOT make assumptions about the relationship between separate requests on a single transport connection; doing so breaks many of the assumptions of HTTP as a stateless protocol, and will cause problems in interoperability, security, operability and evolution.
Applications that use HTTP MAY use HTTP authentication [RFC7235] to identify clients. The Basic authentication scheme [RFC7617] MUST NOT be used unless the underlying transport is authenticated, integrity-protected and confidential (e.g., as provided by the “https” URL scheme, or another using TLS). The Digest scheme [RFC7616] MUST NOT be used unless the underlying transport is similarly secure, or the chosen hash algorithm is not “MD5”.
With HTTPS, clients might also be authenticated using certificates [RFC5246].
When used, it is important to carefully specify the scoping and use of authentication; if the application exposes sensitive data or capabilities (e.g., by acting as an ambient authority), exploits are possible. Mitigations include using a request-specific token to assure the intent of the client.
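For illustration, a challenge/response exchange using the Basic scheme over an authenticated, confidential transport might look like this (the realm and credentials are taken from the example in [RFC7617]; the resource is hypothetical):

  GET /widgets/1 HTTP/1.1
  Host: example.com

  HTTP/1.1 401 Unauthorized
  WWW-Authenticate: Basic realm="WallyWorld"

  GET /widgets/1 HTTP/1.1
  Host: example.com
  Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==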
Even if there is not an intent for an application that uses HTTP to be used with a Web browser, its resources will remain available to browsers and other HTTP clients.
This means that all such applications need to consider how browsers will interact with them, particularly regarding security.
For example, if an application’s state can be changed using a POST request, a Web browser can easily be coaxed into cross-site request forgery (CSRF) from arbitrary Web sites.
Or, if content returned from the application’s resources is under control of an attacker (for example, part of the request is reflected in the response, or the response contains external information that might be under the control of the attacker), a cross-site scripting (XSS) attack is possible, whereby an attacker can inject code into the browser and access data and capabilities on that origin.
This is only a small sample of the kinds of issues that applications using HTTP must consider. Generally, the best approach is to consider the application as a Web application, and to follow best practices for its secure development.
A complete enumeration of such practices is out of scope for this document, but some considerations include:
Depending on how they are intended to be deployed, specifications for applications using HTTP might require the use of these mechanisms in specific ways, or might merely point them out in Security Considerations.
An example of a HTTP response from an application that does not intend for its content to be treated as active by browsers might look like this:
  HTTP/1.1 200 OK
  Content-Type: application/example+json
  X-Content-Type-Options: nosniff
  Content-Security-Policy: default-src 'none'
  Cache-Control: max-age=3600
  Referrer-Policy: no-referrer

  [content]
If an application using HTTP has browser compatibility as a goal, client interaction ought to be defined in terms of [FETCH], since that is the abstraction that browsers use for HTTP; it enforces many of these best practices.
Because the origin [RFC6454] is how many HTTP capabilities are scoped, applications also need to consider how deployments might interact with other applications (including Web browsing) on the same origin.
For example, if Cookies [RFC6265] are used to carry application state, they will be sent with all requests to the origin by default, unless scoped by path, and the application might receive cookies from other applications on the origin. This can lead to security issues, as well as collision in cookie names.
One solution to these issues is to require a dedicated hostname for the application, so that it has a unique origin. However, it is often desirable to allow multiple applications to be deployed on a single hostname; doing so provides the most deployment flexibility and enables them to be “mixed” together (See [RFC7320] for details). Therefore, applications using HTTP should strive to allow multiple applications on an origin.
To enable this, when specifying the use of Cookies, HTTP authentication realms [RFC7235], or other origin-wide HTTP mechanisms, applications using HTTP SHOULD NOT mandate the use of a particular name, but instead let deployments configure them. Consideration SHOULD be given to scoping them to part of the origin, using their specified mechanisms for doing so.
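For example (the cookie name and path are deployment-specific, not defined by an application), a deployment might scope its session cookie to its own portion of the origin:

  Set-Cookie: example_session=opaque-value; Path=/example-app/; Secure; HttpOnly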
Modern Web browsers constrain the ability of content from one origin to access resources from another, to avoid leaking private information. As a result, applications that wish to expose cross-origin data to browsers will need to implement the CORS protocol; see [FETCH].
HTTP/2 adds the ability for servers to “push” request/response pairs to clients in [RFC7540], Section 8.2. While server push seems like a natural fit for many common application semantics (e.g., “fanout” and publish/subscribe), a few caveats should be noted:
Applications wishing to optimise cases where the client can perform work related to requests before the full response is available (e.g., fetching links for things likely to be contained within) might benefit from using the 103 (Early Hints) status code; see [RFC8297].
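A sketch of such an interaction (the link relation and URL are illustrative): the server sends an informational response carrying hints before the final response is ready.

  GET /widgets/ HTTP/1.1
  Host: example.com

  HTTP/1.1 103 Early Hints
  Link: </widgets/1>; rel="example-widget"

  HTTP/1.1 200 OK
  Content-Type: application/example+json
  Link: </widgets/1>; rel="example-widget"

  [content]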
Applications using server push directly need to enforce the requirements regarding authority in [RFC7540], Section 8.2, to avoid cross-origin push attacks.
It’s often necessary to introduce new features into application protocols, and change existing ones.
In HTTP, backwards-incompatible changes are possible using a number of mechanisms:
This document has no requirements for IANA.
Section 4.10 discusses the impact of using stateful mechanisms in the protocol as ambient authority, and suggests a mitigation.
Section 4.4.2 requires support for ‘https’ URLs, and discourages the use of ‘http’ URLs, to provide authentication, integrity and confidentiality, as well as mitigate pervasive monitoring attacks.
Section 4.12 highlights the implications of Web browsers’ capabilities on applications that use HTTP.
Section 4.13 discusses the issues that arise when applications are deployed on the same origin as Web sites (and other applications).
Section 4.14 highlights risks of using HTTP/2 server push in a manner other than specified.
Applications that use HTTP in a manner that involves modification of implementations – for example, requiring support for a new URL scheme, or a non-standard method – risk having those implementations “fork” from their parent HTTP implementations, with the possible result that they do not benefit from patches and other security improvements incorporated upstream.
HTTP clients can expose a variety of information to servers. Besides information that’s explicitly sent as part of an application’s operation (for example, names and other user-entered data), and “on the wire” (which is one of the reasons https is recommended in Section 4.4.2), other information can be gathered through less obvious means – often by connecting activities of a user over time.
This includes session information, tracking the client through fingerprinting, and mobile code.
Session information includes things like the IP address of the client, TLS session tickets, Cookies, ETags stored in the client’s cache, and other stateful mechanisms. Applications are advised to avoid using such session mechanisms unless they are necessary for operation, in which case these risks need to be documented. When they are used, implementations should be encouraged to allow clearing such state.
Fingerprinting uses unique aspects of a client’s messages and behaviours to connect disparate requests and connections. For example, the User-Agent request header conveys specific information about the implementation; the Accept-Language request header conveys the user’s preferred language. In combination, a number of these markers can be used to uniquely identify a client, impacting its control over its data. As a result, applications are advised to specify that clients should only emit the information they need to function in requests.
Finally, if an application exposes the ability to run mobile code, great care needs to be taken, since any ability to observe its environment can be used as an opportunity to both fingerprint the client and to obtain and manipulate private data (including session information). For example, access to high-resolution timers (even indirectly) can be used to profile the underlying hardware, creating a unique identifier for the system. Applications are advised to avoid allowing the use of mobile code where possible; when it cannot be avoided, the resulting system’s security properties need to be carefully scrutinised.
[CSP] West, M., "Content Security Policy Level 3", World Wide Web Consortium WD WD-CSP3-20160913, September 2016.
[FETCH] WHATWG, "Fetch - Living Standard", n.d.
[HTML5] WHATWG, "HTML - Living Standard", n.d.
[I-D.ietf-httpbis-header-structure] Nottingham, M. and P. Kamp, "Structured Headers for HTTP", Internet-Draft draft-ietf-httpbis-header-structure-07, July 2018.
[REFERRER-POLICY] Eisinger, J. and E. Stark, "Referrer Policy", World Wide Web Consortium CR CR-referrer-policy-20170126, January 2017.
[RFC3205] Moore, K., "On the use of HTTP as a Substrate", BCP 56, RFC 3205, DOI 10.17487/RFC3205, February 2002.
[RFC4367] Rosenberg, J. and IAB, "What's in a Name: False Assumptions about DNS Names", RFC 4367, DOI 10.17487/RFC4367, February 2006.
[RFC4791] Daboo, C., Desruisseaux, B. and L. Dusseault, "Calendaring Extensions to WebDAV (CalDAV)", RFC 4791, DOI 10.17487/RFC4791, March 2007.
[RFC4918] Dusseault, L., "HTTP Extensions for Web Distributed Authoring and Versioning (WebDAV)", RFC 4918, DOI 10.17487/RFC4918, June 2007.
[RFC5246] Dierks, T. and E. Rescorla, "The Transport Layer Security (TLS) Protocol Version 1.2", RFC 5246, DOI 10.17487/RFC5246, August 2008.
[RFC5785] Nottingham, M. and E. Hammer-Lahav, "Defining Well-Known Uniform Resource Identifiers (URIs)", RFC 5785, DOI 10.17487/RFC5785, April 2010.
[RFC5861] Nottingham, M., "HTTP Cache-Control Extensions for Stale Content", RFC 5861, DOI 10.17487/RFC5861, May 2010.
[RFC6265] Barth, A., "HTTP State Management Mechanism", RFC 6265, DOI 10.17487/RFC6265, April 2011.
[RFC6415] Hammer-Lahav, E. and B. Cook, "Web Host Metadata", RFC 6415, DOI 10.17487/RFC6415, October 2011.
[RFC6797] Hodges, J., Jackson, C. and A. Barth, "HTTP Strict Transport Security (HSTS)", RFC 6797, DOI 10.17487/RFC6797, November 2012.
[RFC7049] Bormann, C. and P. Hoffman, "Concise Binary Object Representation (CBOR)", RFC 7049, DOI 10.17487/RFC7049, October 2013.
[RFC7258] Farrell, S. and H. Tschofenig, "Pervasive Monitoring Is an Attack", BCP 188, RFC 7258, DOI 10.17487/RFC7258, May 2014.
[RFC7538] Reschke, J., "The Hypertext Transfer Protocol Status Code 308 (Permanent Redirect)", RFC 7538, DOI 10.17487/RFC7538, April 2015.
[RFC7595] Thaler, D., Hansen, T. and T. Hardie, "Guidelines and Registration Procedures for URI Schemes", BCP 35, RFC 7595, DOI 10.17487/RFC7595, June 2015.
[RFC7605] Touch, J., "Recommendations on Using Assigned Transport Port Numbers", BCP 165, RFC 7605, DOI 10.17487/RFC7605, August 2015.
[RFC7616] Shekh-Yusef, R., Ahrens, D. and S. Bremer, "HTTP Digest Access Authentication", RFC 7616, DOI 10.17487/RFC7616, September 2015.
[RFC7617] Reschke, J., "The 'Basic' HTTP Authentication Scheme", RFC 7617, DOI 10.17487/RFC7617, September 2015.
[RFC7807] Nottingham, M. and E. Wilde, "Problem Details for HTTP APIs", RFC 7807, DOI 10.17487/RFC7807, March 2016.
[RFC8259] Bray, T., "The JavaScript Object Notation (JSON) Data Interchange Format", STD 90, RFC 8259, DOI 10.17487/RFC8259, December 2017.
[RFC8297] Oku, K., "An HTTP Status Code for Indicating Hints", RFC 8297, DOI 10.17487/RFC8297, December 2017.
[SECCTXT] West, M., "Secure Contexts", World Wide Web Consortium CR CR-secure-contexts-20160915, September 2016.
[XML] Bray, T., Paoli, J., Sperberg-McQueen, M., Maler, E. and F. Yergeau, "Extensible Markup Language (XML) 1.0 (Fifth Edition)", World Wide Web Consortium Recommendation REC-xml-20081126, November 2008.
[RFC3205] captured the Best Current Practice of the early 2000s, based on the concerns facing protocol designers at the time. Use of HTTP has changed considerably since then, and as a result this document is substantially different; the changes are too numerous to list individually.