HTTPbis Working Group                                          M. Belshe
Internet-Draft                                                     Twist
Intended status: Standards Track                                 R. Peon
Expires: October 05, 2014                                    Google, Inc
                                                         M. Thomson, Ed.
                                                                 Mozilla
                                                          April 03, 2014
Hypertext Transfer Protocol version 2
draft-ietf-httpbis-http2-11
This specification describes an optimized expression of the syntax of the Hypertext Transfer Protocol (HTTP). HTTP/2 enables a more efficient use of network resources and a reduced perception of latency by introducing header field compression and allowing multiple concurrent messages on the same connection. It also introduces unsolicited push of representations from servers to clients.
This document is an alternative to, but does not obsolete, the HTTP/1.1 message syntax. HTTP's existing semantics remain unchanged.
Discussion of this draft takes place on the HTTPBIS working group mailing list (ietf-http-wg@w3.org), which is archived at http://lists.w3.org/Archives/Public/ietf-http-wg/.
Working Group information can be found at http://tools.ietf.org/wg/httpbis/; materials specific to HTTP/2 are at http://http2.github.io/.
The changes in this draft are summarized in Appendix A.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on October 05, 2014.
Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
The Hypertext Transfer Protocol (HTTP) is a wildly successful protocol. However, the HTTP/1.1 message format ([HTTP-p1], Section 3) was designed for implementation with the tools at hand in the 1990s, not for modern Web application performance. As such, it has several characteristics that have a negative overall effect on application performance today.
In particular, HTTP/1.0 only allows one request to be outstanding at a time on a given connection. HTTP/1.1 pipelining only partially addressed request concurrency and suffers from head-of-line blocking. Therefore, clients that need to make many requests typically use multiple connections to a server in order to reduce latency.
Furthermore, HTTP/1.1 header fields are often repetitive and verbose, which, in addition to generating more or larger network packets, can cause the small initial TCP congestion window to quickly fill. This can result in excessive latency when multiple requests are made on a single new TCP connection.
This document addresses these issues by defining an optimized mapping of HTTP's semantics to an underlying connection. Specifically, it allows interleaving of request and response messages on the same connection and uses an efficient coding for HTTP header fields. It also allows prioritization of requests, letting more important requests complete more quickly, further improving performance.
The resulting protocol is designed to be more friendly to the network, because fewer TCP connections can be used in comparison to HTTP/1.x. This means less competition with other flows, and longer-lived connections, which in turn leads to better utilization of available network capacity.
Finally, this encapsulation also enables more scalable processing of messages through use of binary message framing.
HTTP/2 provides an optimized transport for HTTP semantics. HTTP/2 supports all of the core features of HTTP/1.1, but aims to be more efficient in several ways.
The basic protocol unit in HTTP/2 is a frame [FrameHeader]. Each frame has a different type and purpose. For example, HEADERS [HEADERS] and DATA [DATA] frames form the basis of HTTP requests and responses [HttpSequence]; other frame types like SETTINGS [SETTINGS], WINDOW_UPDATE [WINDOW_UPDATE], and PUSH_PROMISE [PUSH_PROMISE] are used in support of other HTTP/2 features.
Multiplexing of requests is achieved by having each HTTP request-response exchange assigned to a single stream [StreamsLayer]. Streams are largely independent of each other, so a blocked or stalled request does not prevent progress on other requests.
Flow control and prioritization ensure that it is possible to properly use multiplexed streams. Flow control [FlowControl] helps to ensure that only data that can be used by a receiver is transmitted. Prioritization [StreamPriority] ensures that limited resources can be directed to the most important requests first.
HTTP/2 adds a new interaction mode, whereby a server can push responses to a client [PushResources]. Server push allows a server to speculatively send a client data that the server anticipates the client will need, trading off some network usage against a potential latency gain. The server does this by synthesizing a request, which it sends as a PUSH_PROMISE [PUSH_PROMISE] frame. The server is then able to send a response to the synthetic request on a separate stream.
Frames that contain HTTP header fields are compressed [HeaderBlock]. HTTP requests can be highly redundant, so compression can reduce the size of requests and responses significantly.
HTTP/2 also supports HTTP Alternative Services (see [ALT-SVC]) using the ALTSVC frame type [ALTSVC], to allow servers more control over traffic to them.
The HTTP/2 specification is split into four parts:
While some of the frame and stream layer concepts are isolated from HTTP, the intent is not to define a completely generic framing layer. The framing and streams layers are tailored to the needs of the HTTP protocol and server push.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].
All numeric values are in network byte order. Values are unsigned unless otherwise indicated. Literal values are provided in decimal or hexadecimal as appropriate. Hexadecimal literals are prefixed with 0x to distinguish them from decimal literals.
The following terms are used:
An HTTP/2 connection is an application level protocol running on top of a TCP connection ([TCP]). The client is the TCP connection initiator.
HTTP/2 uses the same "http" and "https" URI schemes used by HTTP/1.1. HTTP/2 shares the same default port numbers: 80 for "http" URIs and 443 for "https" URIs. As a result, implementations processing requests for target resource URIs like http://example.org/foo or https://example.com/bar are required to first discover whether the upstream server (the immediate peer to which the client wishes to establish a connection) supports HTTP/2.
The means by which support for HTTP/2 is determined is different for "http" and "https" URIs. Discovery for "http" URIs is described in Section 3.2. Discovery for "https" URIs is described in Section 3.3.
The protocol defined in this document has two identifiers.
Negotiating "h2" or "h2c" implies the use of the transport, security, framing and message semantics described in this document.
Only implementations of the final, published RFC can identify themselves as "h2" or "h2c". Until such an RFC exists, implementations MUST NOT identify themselves using these strings.
Examples and text throughout the rest of this document use "h2" as a matter of editorial convenience only. Implementations of draft versions MUST NOT identify using this string.
Implementations of draft versions of the protocol MUST add the string "-" and the corresponding draft number to the identifier. For example, draft-ietf-httpbis-http2-11 over TLS is identified using the string "h2-11".
Non-compatible experiments that are based on these draft versions MUST append the string "-" and an experiment name to the identifier. For example, an experimental implementation of packet mood-based encoding based on draft-ietf-httpbis-http2-09 might identify itself as "h2-09-emo". Note that any label MUST conform to the "token" syntax defined in Section 3.2.6 of [HTTP-p1]. Experimenters are encouraged to coordinate their experiments on the ietf-http-wg@w3.org mailing list.
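The identifier construction rules above can be sketched as a small helper; the function name is illustrative, not part of the specification:

```python
def draft_identifier(base: str, draft: int, experiment: str = "") -> str:
    # Append "-" plus the zero-padded draft number to the base
    # identifier (e.g. draft 11 over TLS -> "h2-11"); a
    # non-compatible experiment appends a further "-" plus its name.
    ident = f"{base}-{draft:02d}"
    if experiment:
        ident += f"-{experiment}"
    return ident

print(draft_identifier("h2", 11))         # "h2-11"
print(draft_identifier("h2", 9, "emo"))   # "h2-09-emo"
```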
A client that makes a request to an "http" URI without prior knowledge about support for HTTP/2 uses the HTTP Upgrade mechanism (Section 6.7 of [HTTP-p1]). The client makes an HTTP/1.1 request that includes an Upgrade header field identifying HTTP/2 with the "h2c" token. The HTTP/1.1 request MUST include exactly one HTTP2-Settings [Http2SettingsHeader] header field.
For example:
GET /default.htm HTTP/1.1
Host: server.example.com
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c
HTTP2-Settings: <base64url encoding of HTTP/2 SETTINGS payload>
Requests that contain an entity body MUST be sent in their entirety before the client can send HTTP/2 frames. This means that a large request entity can block the use of the connection until it is completely sent.
If concurrency of an initial request with subsequent requests is important, a small request can be used to perform the upgrade to HTTP/2, at the cost of an additional round-trip.
A server that does not support HTTP/2 can respond to the request as though the Upgrade header field were absent:
HTTP/1.1 200 OK
Content-Length: 243
Content-Type: text/html

...
A server that supports HTTP/2 can accept the upgrade with a 101 (Switching Protocols) response. After the empty line that terminates the 101 response, the server can begin sending HTTP/2 frames. These frames MUST include a response to the request that initiated the Upgrade.
HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: h2c

[ HTTP/2 connection ...
The first HTTP/2 frame sent by the server is a SETTINGS [SETTINGS] frame (Section 6.5). Upon receiving the 101 response, the client sends a connection preface [ConnectionHeader], which includes a SETTINGS [SETTINGS] frame.
The HTTP/1.1 request that is sent prior to upgrade is assigned stream identifier 1 and is assigned default priority values [pri-default]. Stream 1 is implicitly half closed from the client toward the server, since the request is completed as an HTTP/1.1 request. After commencing the HTTP/2 connection, stream 1 is used for the response.
A request that upgrades from HTTP/1.1 to HTTP/2 MUST include exactly one HTTP2-Settings header field. The HTTP2-Settings header field is a hop-by-hop header field that includes parameters that govern the HTTP/2 connection, provided in anticipation of the server accepting the request to upgrade. A server MUST reject an attempt to upgrade if this header field is not present.
HTTP2-Settings = token68
The content of the HTTP2-Settings header field is the payload of a SETTINGS [SETTINGS] frame (Section 6.5), encoded as a base64url string (that is, the URL- and filename-safe Base64 encoding described in Section 5 of [RFC4648], with any trailing '=' characters omitted). The ABNF [RFC5234] production for token68 is defined in Section 2.1 of [HTTP-p7].
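The encoding described above can be sketched as follows, treating the SETTINGS frame payload as opaque octets (the function name is illustrative):

```python
import base64

def http2_settings_header(settings_payload: bytes) -> str:
    # base64url-encode the SETTINGS frame payload and strip the
    # trailing '=' padding, per the HTTP2-Settings definition.
    encoded = base64.urlsafe_b64encode(settings_payload)
    return encoded.rstrip(b"=").decode("ascii")
```

A receiver reverses the process by restoring the '=' padding before decoding with a base64url decoder.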
As a hop-by-hop header field, the Connection header field MUST include a value of HTTP2-Settings in addition to Upgrade when upgrading to HTTP/2.
A server decodes and interprets these values as it would any other SETTINGS [SETTINGS] frame. Acknowledgement of the SETTINGS parameters [SettingsSync] is not necessary, since a 101 response serves as implicit acknowledgment. Providing these values in the Upgrade request ensures that the protocol does not require default values for the above SETTINGS parameters, and gives a client an opportunity to provide other parameters prior to receiving any frames from the server.
A client that makes a request to an "https" URI without prior knowledge about support for HTTP/2 uses TLS [TLS12] with the application layer protocol negotiation extension [TLSALPN].
Once TLS negotiation is complete, both the client and the server send a connection preface [ConnectionHeader].
A client can learn that a particular server supports HTTP/2 by other means. For example, [ALT-SVC] describes a mechanism for advertising this capability in an HTTP header field; the ALTSVC frame [ALTSVC] describes a similar mechanism in HTTP/2.
A client MAY immediately send HTTP/2 frames to a server that is known to support HTTP/2, after the connection preface [ConnectionHeader]. A server can identify such a connection by the use of the "PRI" method in the connection preface. This only affects the resolution of "http" URIs; servers supporting HTTP/2 are required to support protocol negotiation in TLS [TLSALPN] for "https" URIs.
Prior support for HTTP/2 is not a strong signal that a given server will support HTTP/2 for future connections. It is possible for server configurations to change, for configurations to differ between instances in a clustered server, or for network conditions to change.
Upon establishment of a TCP connection and determination that HTTP/2 will be used by both peers, each endpoint MUST send a connection preface as a final confirmation and to establish the initial SETTINGS parameters for the HTTP/2 connection.
The client connection preface starts with a sequence of 24 octets, which in hex notation are:
0x505249202a20485454502f322e300d0a0d0a534d0d0a0d0a
(the string PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n). This sequence is followed by a SETTINGS [SETTINGS] frame (Section 6.5). The SETTINGS [SETTINGS] frame MAY be empty. The client sends the client connection preface immediately upon receipt of a 101 Switching Protocols response (indicating a successful upgrade), or as the first application data octets of a TLS connection. If starting an HTTP/2 connection with prior knowledge of server support for the protocol, the client connection preface is sent upon connection establishment.
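The equivalence between the hex sequence and the ASCII string can be checked directly:

```python
# The 24-octet client connection preface, as the hex sequence from
# the text and as the equivalent ASCII string.
PREFACE = bytes.fromhex("505249202a20485454502f322e300d0a0d0a534d0d0a0d0a")

assert PREFACE == b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"
assert len(PREFACE) == 24
```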
The server connection preface consists of a potentially empty SETTINGS [SETTINGS] frame (Section 6.5) that MUST be the first frame the server sends in the HTTP/2 connection.
To avoid unnecessary latency, clients are permitted to send additional frames to the server immediately after sending the client connection preface, without waiting to receive the server connection preface. It is important to note, however, that the server connection preface SETTINGS [SETTINGS] frame might include parameters that necessarily alter how a client is expected to communicate with the server. Upon receiving the SETTINGS [SETTINGS] frame, the client is expected to honor any parameters established.
Clients and servers MUST terminate the TCP connection if either peer does not begin with a valid connection preface. A GOAWAY [GOAWAY] frame (Section 6.8) MAY be omitted if it is clear that the peer is not using HTTP/2.
Once the HTTP/2 connection is established, endpoints can begin exchanging frames.
All frames begin with an 8-octet header followed by a payload of between 0 and 16,383 octets.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| R |         Length (14)       |   Type (8)    |   Flags (8)   |
+-+-+-----------+---------------+-------------------------------+
|R|                 Stream Identifier (31)                      |
+-+-------------------------------------------------------------+
|                   Frame Payload (0...)                      ...
+---------------------------------------------------------------+
Frame Header
The fields of the frame header are defined as:
The structure and content of the frame payload is dependent entirely on the frame type.
The maximum size of a frame payload varies by frame type. The absolute maximum size of a frame payload is 2^14-1 (16,383) octets, meaning that the maximum frame size is 16,391 octets. All implementations SHOULD be capable of receiving and minimally processing frames up to this maximum size.
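The 8-octet header layout can be sketched as a parser; the reserved bits are masked off as the text requires (the function name and return shape are illustrative):

```python
import struct

def parse_frame_header(header: bytes):
    # 8-octet frame header: 2 reserved bits + 14-bit length,
    # 8-bit type, 8-bit flags, then a reserved bit + 31-bit
    # stream identifier, all in network byte order.
    if len(header) != 8:
        raise ValueError("frame header is exactly 8 octets")
    len_field, ftype, flags, stream = struct.unpack("!HBBI", header)
    length = len_field & 0x3FFF         # mask the 2 reserved bits
    stream_id = stream & 0x7FFFFFFF     # mask the reserved bit
    return length, ftype, flags, stream_id
```

Note that the 14-bit length field is what bounds the payload at 2^14-1 (16,383) octets.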
Certain frame types, such as PING [PING] (see Section 6.7), impose additional limits on the amount of payload data allowed. Likewise, additional size limits can be set by specific application uses (see Section 9).
If a frame size exceeds any defined limit, or is too small to contain mandatory frame data, the endpoint MUST send a FRAME_SIZE_ERROR [FRAME_SIZE_ERROR] error. A frame size error in a frame that could alter the state of the entire connection MUST be treated as a connection error [ConnectionErrorHandler]; this includes any frame carrying a header block [HeaderBlock] (that is, HEADERS [HEADERS], PUSH_PROMISE [PUSH_PROMISE], and CONTINUATION [CONTINUATION]), SETTINGS [SETTINGS], and any WINDOW_UPDATE [WINDOW_UPDATE] frame with a stream identifier of 0.
A header field in HTTP/2 is a name with one or more associated values. Header fields are used within HTTP request and response messages as well as server push operations (see Section 8.2).
Header sets are collections of zero or more header fields. When transmitted over a connection, a header set is serialized into a header block using HTTP Header Compression [COMPRESSION]. The serialized header block is then divided into one or more octet sequences, called header block fragments, and transmitted within the payload of HEADERS [HEADERS], PUSH_PROMISE [PUSH_PROMISE] or CONTINUATION [CONTINUATION] frames.
HTTP Header Compression does not preserve the relative ordering of header fields. Header fields with multiple values are encoded into a single header field using a special delimiter; see Section 8.1.3.3.
The Cookie header field [COOKIE] is treated specially by the HTTP mapping; see Section 8.1.3.4.
A receiving endpoint reassembles the header block by concatenating its fragments, then decompresses the block to reconstruct the header set.
A complete header block consists of either:
Header compression is stateful, using a single compression context for the entire connection. Each header block is processed as a discrete unit. Header blocks MUST be transmitted as a contiguous sequence of frames, with no interleaved frames of any other type or from any other stream. The last frame in a sequence of HEADERS [HEADERS] or CONTINUATION [CONTINUATION] frames MUST have the END_HEADERS flag set. The last frame in a sequence of PUSH_PROMISE [PUSH_PROMISE] or CONTINUATION [CONTINUATION] frames MUST have the END_HEADERS flag set.
Header block fragments can only be sent as the payload of HEADERS [HEADERS], PUSH_PROMISE [PUSH_PROMISE] or CONTINUATION [CONTINUATION] frames, because these frames carry data that can modify the compression context maintained by a receiver. An endpoint receiving HEADERS [HEADERS], PUSH_PROMISE [PUSH_PROMISE] or CONTINUATION [CONTINUATION] frames MUST reassemble header blocks and perform decompression even if the frames are to be discarded. A receiver MUST terminate the connection with a connection error [ConnectionErrorHandler] of type COMPRESSION_ERROR [COMPRESSION_ERROR] if it does not decompress a header block.
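The reassembly rule above, concatenating fragments until a frame carries END_HEADERS, can be sketched as follows. The frame representation and the END_HEADERS flag value of 0x4 are assumptions for illustration:

```python
END_HEADERS = 0x4  # assumed flag value for this sketch

def reassemble_header_block(frames):
    # frames: iterable of (frame_type, flags, payload) tuples making
    # up one contiguous header block sequence. Fragments are
    # concatenated; the last frame must carry END_HEADERS.
    block = b""
    for ftype, flags, payload in frames:
        block += payload
        if flags & END_HEADERS:
            return block
    raise ConnectionError("header block not terminated by END_HEADERS")
```

The resulting block is then handed whole to the header decompressor; a decompression failure is a connection error of type COMPRESSION_ERROR.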
A "stream" is an independent, bi-directional sequence of frames exchanged between the client and server within an HTTP/2 connection. Streams have several important characteristics:
The lifecycle of a stream is shown in Figure 1.
                        +--------+
                  PP    |        |    PP
               ,--------|  idle  |--------.
              /         |        |         \
             v          +--------+          v
      +----------+          |           +----------+
      |          |          | H         |          |
  ,---| reserved |          |           | reserved |---.
  |   | (local)  |          v           | (remote) |   |
  |   +----------+      +--------+      +----------+   |
  |      |          ES  |        |  ES          |      |
  |      | H    ,-------|  open  |-------.      | H    |
  |      |     /        |        |        \     |      |
  |      v    v         +--------+         v    v      |
  |   +----------+          |           +----------+   |
  |   |   half   |          |           |   half   |   |
  |   |  closed  |          | R         |  closed  |   |
  |   | (remote) |          |           |  (local) |   |
  |   +----------+          |           +----------+   |
  |        |                v                 |        |
  |        |  ES / R    +--------+   ES / R   |        |
  |        `----------->|        |<-----------'        |
  |  R                  | closed |                  R  |
  `-------------------->|        |<--------------------'
                        +--------+

   H:  HEADERS frame (with implied CONTINUATIONs)
   PP: PUSH_PROMISE frame (with implied CONTINUATIONs)
   ES: END_STREAM flag
   R:  RST_STREAM frame
Figure 1: Stream States
Both endpoints have a subjective view of the state of a stream that could be different when frames are in transit. Endpoints do not coordinate the creation of streams; they are created unilaterally by either endpoint. The negative consequences of a mismatch in states are limited to the "closed" state after sending RST_STREAM [RST_STREAM], where frames might be received for some time after closing.
Streams have the following states:
In the absence of more specific guidance elsewhere in this document, implementations SHOULD treat the receipt of a message that is not expressly permitted in the description of a state as a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
Streams are identified with an unsigned 31-bit integer. Streams initiated by a client MUST use odd-numbered stream identifiers; those initiated by the server MUST use even-numbered stream identifiers. A stream identifier of zero (0x0) is used for connection control messages; the stream identifier zero MUST NOT be used to establish a new stream.
HTTP/1.1 requests that are upgraded to HTTP/2 (see Section 3.2) are responded to with a stream identifier of one (0x1). After the upgrade completes, stream 0x1 is "half closed (local)" to the client. Therefore, stream 0x1 cannot be selected as a new stream identifier by a client that upgrades from HTTP/1.1.
The identifier of a newly established stream MUST be numerically greater than all streams that the initiating endpoint has opened or reserved. This governs streams that are opened using a HEADERS [HEADERS] frame and streams that are reserved using PUSH_PROMISE [PUSH_PROMISE]. An endpoint that receives an unexpected stream identifier MUST respond with a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
The first use of a new stream identifier implicitly closes all streams in the "idle" state that might have been initiated by that peer with a lower-valued stream identifier. For example, if a client sends a HEADERS [HEADERS] frame on stream 7 without ever sending a frame on stream 5, then stream 5 transitions to the "closed" state when the first frame for stream 7 is sent or received.
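The stream-identifier rules in this section can be sketched as a validation helper; the function name and error class are illustrative, not from the specification:

```python
class ProtocolError(Exception):
    """Condition requiring a PROTOCOL_ERROR (illustrative)."""

def check_new_stream_id(stream_id: int, last_id: int,
                        client_initiated: bool) -> None:
    # Stream 0 is reserved for connection control messages.
    if stream_id == 0:
        raise ProtocolError("stream 0 cannot establish a new stream")
    # Clients use odd identifiers, servers even ones.
    if client_initiated and stream_id % 2 == 0:
        raise ProtocolError("client-initiated streams must be odd")
    if not client_initiated and stream_id % 2 == 1:
        raise ProtocolError("server-initiated streams must be even")
    # Identifiers must be numerically greater than all previously
    # opened or reserved streams from the same endpoint.
    if stream_id <= last_id:
        raise ProtocolError("stream identifiers must strictly increase")
```

A real implementation would also close any still-idle lower-numbered streams from the same peer, per the implicit-closing rule above.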
Stream identifiers cannot be reused. Long-lived connections can result in an endpoint exhausting the available range of stream identifiers. A client that is unable to establish a new stream identifier can establish a new connection for new streams.
A peer can limit the number of concurrently active streams using the SETTINGS_MAX_CONCURRENT_STREAMS [SETTINGS_MAX_CONCURRENT_STREAMS] parameter within a SETTINGS [SETTINGS] frame. The maximum concurrent streams setting is specific to each endpoint and applies only to the peer that receives the setting. That is, clients specify the maximum number of concurrent streams the server can initiate, and servers specify the maximum number of concurrent streams the client can initiate. Endpoints MUST NOT exceed the limit set by their peer.
Streams that are in the "open" state, or in either of the "half closed" states, count toward the limit advertised in the SETTINGS_MAX_CONCURRENT_STREAMS [SETTINGS_MAX_CONCURRENT_STREAMS] setting (see Section 6.5.2).
An endpoint that receives a HEADERS [HEADERS] frame that causes its advertised concurrent stream limit to be exceeded MUST treat this as a stream error [StreamErrorHandler].
Streams in either of the "reserved" states do not count as open.
Using streams for multiplexing introduces contention over use of the TCP connection, resulting in blocked streams. A flow control scheme ensures that streams on the same connection do not destructively interfere with each other. Flow control is used for both individual streams and for the connection as a whole.
HTTP/2 provides for flow control through use of the WINDOW_UPDATE [WINDOW_UPDATE] frame type.
HTTP/2 stream flow control aims to allow for future improvements to flow control algorithms without requiring protocol changes. Flow control in HTTP/2 has the following characteristics:
Implementations are also responsible for managing how requests and responses are sent based on priority; choosing how to avoid head of line blocking for requests; and managing the creation of new streams. Algorithm choices for these could interact with any flow control algorithm.
Flow control is defined to protect endpoints that are operating under resource constraints. For example, a proxy needs to share memory between many connections, and also might have a slow upstream connection and a fast downstream one. Flow control addresses cases where the receiver is unable to process data on one stream, yet wants to continue to process other streams in the same connection.
Deployments that do not require this capability can advertise a flow control window of the maximum size, incrementing the available space when new data is received. Sending data is always subject to the flow control window advertised by the receiver.
Deployments with constrained resources (for example, memory) MAY employ flow control to limit the amount of memory a peer can consume. Note, however, that this can lead to suboptimal use of available network resources if flow control is enabled without knowledge of the bandwidth-delay product (see [RFC1323]).
Even with full awareness of the current bandwidth-delay product, implementation of flow control can be difficult. When using flow control, the receiver MUST read from the TCP receive buffer in a timely fashion. Failure to do so could lead to a deadlock when critical frames, such as WINDOW_UPDATE [WINDOW_UPDATE], are not available to HTTP/2. However, flow control can ensure that constrained resources are protected without any reduction in connection utilization.
A client can assign a priority for a new stream by including prioritization information in the HEADERS frame [HEADERS] that opens the stream. For an existing stream, the PRIORITY frame [PRIORITY] can be used to change the priority.
The purpose of prioritization is to allow an endpoint to express how it would prefer its peer allocate resources when managing concurrent streams. Most importantly, priority can be used to select streams for transmitting frames when there is limited capacity for sending.
Each stream is prioritized into a group. Each group is identified using an identifier that is selected by the client. Each group is assigned a relative weight, a number that is used to determine the relative proportion of available resources that are assigned to that group.
Within a priority group, streams can also be marked as being dependent on the completion of other streams.
Explicitly setting the priority for a stream is input to a prioritization process. It does not guarantee any particular processing or transmission order for the stream relative to any other stream. An endpoint cannot force a peer to process concurrent streams in a particular order using priority. Expressing priority is therefore only ever a suggestion.
Prioritization information can be specified explicitly for streams as they are created using the HEADERS [HEADERS] frame, or changed using the PRIORITY [PRIORITY] frame. Providing prioritization information is optional, so default values are used if no explicit indicator is provided (Section 5.3.5).
Explicit prioritization information can be provided for a stream to either allocate the stream to a priority group (Section 5.3.1), or to create a dependency on another stream (Section 5.3.2).
All streams are assigned a priority group. Each priority group is allocated a 31-bit identifier and an integer weight between 1 and 256 (inclusive).
Specifying a priority group and weight for a stream causes the stream to be assigned to the identified priority group and for the weight for the group to be changed to the new value.
Resources are divided proportionally between priority groups based on their weight. For example, a priority group with weight 4 ideally receives one third of the resources allocated to a group with weight 12.
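The proportional division can be sketched as follows (the function name and dictionary shape are illustrative):

```python
def group_shares(weights: dict) -> dict:
    # Divide available resources between priority groups in
    # proportion to their weights.
    total = sum(weights.values())
    return {group: weight / total for group, weight in weights.items()}

shares = group_shares({"g1": 4, "g2": 12})
# g1 receives 4/16 of resources, g2 receives 12/16, so g1 gets
# one third as much as g2.
```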
Each stream can be given an explicit dependency on another stream. Including a dependency expresses a preference to allocate resources to the identified stream rather than to the dependent stream.
A stream that is dependent on another stream becomes part of the priority group of the stream it depends on. It belongs to the same dependency tree as the stream it depends on.
A stream that is assigned directly to a priority group is not dependent on any other stream. It is the root of a dependency tree inside its priority group.
When assigning a dependency on another stream, by default, the stream is added as a new dependency of the stream it depends on. For example, if streams B and C are dependent on stream A, and if stream D is created with a dependency on stream A, this results in a dependency order of A followed by B, C, and D.
    A                 A
   / \      ==>      /|\
  B   C             B D C
Example of Default Dependency Creation
An exclusive flag allows for the insertion of a new level of dependencies. The exclusive flag causes the stream to become the sole dependency of the stream it depends on, causing other dependencies to become dependencies of the stream. In the previous example, if stream D is created with an exclusive dependency on stream A, this results in a dependency order of A followed by D followed by B and C.
                      A
    A                 |
   / \      ==>       D
  B   C              / \
                    B   C
Example of Exclusive Dependency Creation
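Both insertion modes can be sketched over a simple child-list representation of the dependency tree; the data structure is an assumption for illustration, as the specification does not mandate one:

```python
def set_dependency(tree: dict, stream, parent, exclusive: bool = False):
    # tree maps each stream to a list of its dependent streams.
    if exclusive:
        # The new stream becomes the sole dependency of `parent`;
        # parent's existing dependencies move under the new stream.
        tree.setdefault(stream, []).extend(tree.get(parent, []))
        tree[parent] = [stream]
    else:
        # Default: the new stream is added alongside existing
        # dependencies of `parent`.
        tree.setdefault(parent, []).append(stream)
```

With streams B and C dependent on A, a default insertion of D yields children [B, C, D] under A; an exclusive insertion yields A -> D -> [B, C], matching the two figures above.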
Streams are ordered into several dependency trees within their priority group. Each dependency tree within a priority group SHOULD be allocated the same amount of resources.
Inside a dependency tree, a dependent stream SHOULD only be allocated resources if the streams that it depends on are either closed, or it is not possible to make progress on them.
Streams with the same dependencies SHOULD be allocated the same amount of resources. Thus, if streams B and C depend on stream A, and if no progress can be made on A, streams B and C are given an equal share of resources.
A stream MUST NOT depend on itself. An endpoint MAY either treat this as a stream error [StreamErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR], or assign default priority values [pri-default] to the stream.
Stream priorities are changed using the PRIORITY [PRIORITY] frame. Setting a priority group and weight causes a stream to become part of the identified group, and not dependent on any other stream. Setting a dependency causes a stream to become dependent on the identified stream, which can cause the reprioritized stream to move to a new priority group.
All streams that are dependent on a reprioritized stream move with it. Setting a dependency with the exclusive flag for a reprioritized stream moves all the dependencies of the stream it depends on to become dependencies of the reprioritized stream.
When a stream is closed, its dependencies can be moved to become dependent on the stream the closed stream depends on, if any, or to become new dependency tree roots otherwise.
It is possible for a stream to become closed while prioritization information that creates a dependency on that stream is in transit. If a stream identified in a dependency has been closed and any associated priority information destroyed then the dependent stream is instead assigned a default priority. This potentially creates suboptimal prioritization, since the stream can be given an effective priority that is higher than expressed by a peer.
To avoid this problem, endpoints SHOULD maintain prioritization state for closed streams for a period after streams close. This could create a large state burden for an endpoint, so this state MAY be limited. The amount of additional state an endpoint maintains could be dependent on load; under high load, prioritization state can be discarded to limit resource commitments. In extreme cases, an endpoint could even discard prioritization state for active or reserved streams.
An endpoint SHOULD retain stream prioritization state for at least one round trip, though maintaining state over longer periods reduces the chance that default values have to be assigned to streams. An endpoint MAY apply a fixed upper limit on the number of closed streams for which prioritization state is tracked to limit state exposure. If a fixed limit is applied, endpoints SHOULD maintain state for at least as many streams as allowed by their setting for SETTINGS_MAX_CONCURRENT_STREAMS [SETTINGS_MAX_CONCURRENT_STREAMS].
An endpoint receiving a PRIORITY [PRIORITY] frame that changes the priority of a closed stream SHOULD alter the weight of the priority group, or the dependencies of the streams that depend on it, if it has retained enough state to do so.
Priority group information is part of the priority state of a stream. Priority groups that contain only closed streams can be assigned a weight of zero.
The number of priority groups cannot exceed the number of non-closed streams. This includes streams in the "reserved" state. Priority state size for peer-initiated streams is limited by the value of SETTINGS_MAX_CONCURRENT_STREAMS [SETTINGS_MAX_CONCURRENT_STREAMS]. Reserved streams do not count toward the concurrent stream limit of either peer, but only the endpoint that creates the reservation needs to maintain priority information. Thus, the total amount of priority state for non-closed streams can be limited by an endpoint.
Providing priority information is optional. Streams are assigned to a priority group with an identifier equal to the stream identifier and a weight of 16.
Pushed streams [PushResources] initially depend on their associated stream.
HTTP/2 framing permits two classes of error:
A list of error codes is included in Section 7.
A connection error is any error which prevents further processing of the framing layer, or which corrupts any connection state.
An endpoint that encounters a connection error SHOULD first send a GOAWAY [GOAWAY] frame (Section 6.8) with the stream identifier of the last stream that it successfully received from its peer. The GOAWAY [GOAWAY] frame includes an error code that indicates why the connection is terminating. After sending the GOAWAY [GOAWAY] frame, the endpoint MUST close the TCP connection.
It is possible that the GOAWAY [GOAWAY] will not be reliably received by the receiving endpoint. In the event of a connection error, GOAWAY [GOAWAY] only provides a best-effort attempt to communicate with the peer about why the connection is being terminated.
An endpoint can end a connection at any time. In particular, an endpoint MAY choose to treat a stream error as a connection error. Endpoints SHOULD send a GOAWAY [GOAWAY] frame when ending a connection, as long as circumstances permit it.
A stream error is an error related to a specific stream identifier that does not affect processing of other streams.
An endpoint that detects a stream error sends a RST_STREAM [RST_STREAM] frame (Section 6.4) that contains the stream identifier of the stream where the error occurred. The RST_STREAM [RST_STREAM] frame includes an error code that indicates the type of error.
A RST_STREAM [RST_STREAM] is the last frame that an endpoint can send on a stream. The peer that sends the RST_STREAM [RST_STREAM] frame MUST be prepared to receive any frames that were sent or enqueued for sending by the remote peer. These frames can be ignored, except where they modify connection state (such as the state maintained for header compression [HeaderBlock]).
Normally, an endpoint SHOULD NOT send more than one RST_STREAM [RST_STREAM] frame for any stream. However, an endpoint MAY send additional RST_STREAM [RST_STREAM] frames if it receives frames on a closed stream after more than a round-trip time. This behavior is permitted to deal with misbehaving implementations.
An endpoint MUST NOT send a RST_STREAM [RST_STREAM] in response to an RST_STREAM [RST_STREAM] frame, to avoid looping.
If the TCP connection is torn down while streams remain in open or half closed states, then the endpoint MUST assume that those streams were abnormally interrupted and could be incomplete.
This specification defines a number of frame types, each identified by a unique 8-bit type code. Each frame type serves a distinct purpose either in the establishment and management of the connection as a whole, or of individual streams.
The transmission of specific frame types can alter the state of a connection. If endpoints fail to maintain a synchronized view of the connection state, successful communication within the connection will no longer be possible. Therefore, it is important that endpoints have a shared comprehension of how the state is affected by the use of any given frame.
DATA frames (type=0x0) convey arbitrary, variable-length sequences of octets associated with a stream. One or more DATA frames are used, for instance, to carry HTTP request or response payloads.
DATA frames MAY also contain arbitrary padding. Padding can be added to DATA frames to hide the size of messages.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Pad High? (8) | Pad Low? (8)  |
+---------------+---------------+-------------------------------+
|                            Data (*)                         ...
+---------------------------------------------------------------+
|                           Padding (*)                       ...
+---------------------------------------------------------------+
DATA Frame Payload
The DATA frame contains the following fields:
The DATA frame defines the following flags:
DATA frames MUST be associated with a stream. If a DATA frame is received whose stream identifier field is 0x0, the recipient MUST respond with a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
DATA frames are subject to flow control and can only be sent when a stream is in the "open" or "half closed (remote)" states. Padding is included in flow control. If a DATA frame is received whose stream is not in "open" or "half closed (local)" state, the recipient MUST respond with a stream error [StreamErrorHandler] of type STREAM_CLOSED [STREAM_CLOSED].
The total number of padding octets is determined by multiplying the value of the Pad High field by 256 and adding the value of the Pad Low field. Both Pad High and Pad Low fields assume a value of zero if absent. If the length of the padding is greater than the length of the remainder of the frame payload, the recipient MUST treat this as a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
Use of padding is a security feature; as such, its use demands some care (see Section 10.7).
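The padding rule above can be sketched as follows. This is an illustrative helper, not part of the specification; the function name and the use of ValueError to stand in for PROTOCOL_ERROR are assumptions:

```python
def split_data_payload(payload: bytes, pad_high: bool, pad_low: bool):
    """Split a DATA frame payload into (data, padding) octets.

    Total padding = Pad High * 256 + Pad Low; absent fields read as zero.
    Padding longer than the remaining payload is a connection error of
    type PROTOCOL_ERROR (modeled here as ValueError for illustration).
    """
    offset = 0
    high = low = 0
    if pad_high:
        high = payload[offset]
        offset += 1
    if pad_low:
        low = payload[offset]
        offset += 1
    padding = high * 256 + low
    remainder = payload[offset:]
    if padding > len(remainder):
        raise ValueError("PROTOCOL_ERROR: padding exceeds remaining payload")
    data_len = len(remainder) - padding
    return remainder[:data_len], remainder[data_len:]
```

The same splitting applies to the padded HEADERS, PUSH_PROMISE, and CONTINUATION frames, whose padding fields and flags are defined by reference to DATA.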
The HEADERS frame (type=0x1) carries name-value pairs. It is used to open a stream [StreamStates]. HEADERS frames can be sent on a stream in the "open" or "half closed (remote)" states.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Pad High? (8) | Pad Low? (8)  |
+-+-------------+---------------+-------------------------------+
|R|                 Priority Group Identifier? (31)             |
+-+-------------+-----------------------------------------------+
|  Weight? (8)  |
+-+-------------+-----------------------------------------------+
|E|                 Stream Dependency? (31)                     |
+-+-------------------------------------------------------------+
|                   Header Block Fragment (*)                 ...
+---------------------------------------------------------------+
|                           Padding (*)                       ...
+---------------------------------------------------------------+
HEADERS Frame Payload
The HEADERS frame payload has the following fields:
The HEADERS frame defines the following flags:
The payload of a HEADERS frame contains a header block fragment [HeaderBlock]. A header block that does not fit within a HEADERS frame is continued in a CONTINUATION frame [CONTINUATION].
HEADERS frames MUST be associated with a stream. If a HEADERS frame is received whose stream identifier field is 0x0, the recipient MUST respond with a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
A HEADERS frame MUST NOT have both the PRIORITY_GROUP and PRIORITY_DEPENDENCY flags set. Receipt of a HEADERS frame with both these flags set MUST be treated as a stream error [StreamErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
The HEADERS frame changes the connection state as described in Section 4.3.
The HEADERS frame includes optional padding. Padding fields and flags are identical to those defined for DATA frames [DATA].
The PRIORITY frame (type=0x2) specifies the sender-advised priority of a stream [StreamPriority]. It can be sent at any time for an existing stream. This enables reprioritization of existing streams.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|R|                 Priority Group Identifier? (31)             |
+-+-------------+-----------------------------------------------+
|  Weight? (8)  |
+-+-------------+-----------------------------------------------+
|E|                 Stream Dependency? (31)                     |
+-+-------------------------------------------------------------+
PRIORITY Frame Payload
The payload of a PRIORITY frame contains the following fields:
The PRIORITY frame defines the following flags:
A PRIORITY frame MUST have exactly one of the PRIORITY_GROUP and PRIORITY_DEPENDENCY flags set. Receipt of a PRIORITY frame with either none or both these flags set MUST be treated as a stream error [StreamErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
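The exactly-one-flag requirement can be checked mechanically. In this sketch the flag bit values are assumptions for illustration; the authoritative values are given in the flag definitions for this frame:

```python
# Flag bit positions assumed for illustration only; see the PRIORITY
# frame's flag definitions for the authoritative values.
PRIORITY_GROUP = 0x20
PRIORITY_DEPENDENCY = 0x40

def priority_flags_valid(flags: int) -> bool:
    """True iff exactly one of PRIORITY_GROUP and PRIORITY_DEPENDENCY is
    set. A PRIORITY frame with neither or both set is a stream error of
    type PROTOCOL_ERROR."""
    return bool(flags & PRIORITY_GROUP) != bool(flags & PRIORITY_DEPENDENCY)
```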
The PRIORITY frame is associated with an existing stream. If a PRIORITY frame is received with a stream identifier of 0x0, the recipient MUST respond with a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
The PRIORITY frame can be sent on a stream in any of the "reserved (remote)", "open", "half closed (local)", or "half closed (remote)" states, though it cannot be sent between consecutive frames that comprise a single header block [HeaderBlock]. Note that this frame could arrive after processing or frame sending has completed, which would cause it to have no effect. For a stream that is in the "half closed (remote)" state, this frame can only affect processing of the stream and not frame transmission.
The RST_STREAM frame (type=0x3) allows for abnormal termination of a stream. When sent by the initiator of a stream, it indicates that the initiator wishes to cancel the stream or that an error condition has occurred. When sent by the receiver of a stream, it indicates that the receiver is rejecting the stream, requesting that the stream be cancelled, or indicating that an error condition has occurred.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                        Error Code (32)                        |
+---------------------------------------------------------------+
RST_STREAM Frame Payload
The RST_STREAM frame contains a single unsigned, 32-bit integer identifying the error code [ErrorCodes]. The error code indicates why the stream is being terminated.
The RST_STREAM frame does not define any flags.
The RST_STREAM frame fully terminates the referenced stream and causes it to enter the closed state. After receiving a RST_STREAM on a stream, the receiver MUST NOT send additional frames for that stream. However, after sending the RST_STREAM, the sending endpoint MUST be prepared to receive and process additional frames sent on the stream that might have been sent by the peer prior to the arrival of the RST_STREAM.
RST_STREAM frames MUST be associated with a stream. If a RST_STREAM frame is received with a stream identifier of 0x0, the recipient MUST treat this as a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
RST_STREAM frames MUST NOT be sent for a stream in the "idle" state. If a RST_STREAM frame identifying an idle stream is received, the recipient MUST treat this as a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
The SETTINGS frame (type=0x4) conveys configuration parameters (such as preferences and constraints on peer behavior) that affect how endpoints communicate, and is also used to acknowledge the receipt of those parameters. Individually, a SETTINGS parameter can also be referred to as a "setting".
SETTINGS parameters are not negotiated; they describe characteristics of the sending peer, which are used by the receiving peer. Different values for the same parameter can be advertised by each peer. For example, a client might set a high initial flow control window, whereas a server might set a lower value to conserve resources.
A SETTINGS frame MUST be sent by both endpoints at the start of a connection, and MAY be sent at any other time by either endpoint over the lifetime of the connection. Implementations MUST support all of the parameters defined by this specification.
Each parameter in a SETTINGS frame replaces any existing value for that parameter. Parameters are processed in the order in which they appear, and a receiver of a SETTINGS frame does not need to maintain any state other than the current value of its parameters. Therefore, the value of a SETTINGS parameter is the last value that is seen by a receiver.
SETTINGS parameters are acknowledged by the receiving peer. To enable this, the SETTINGS frame defines the following flag:
SETTINGS frames always apply to a connection, never a single stream. The stream identifier for a SETTINGS frame MUST be zero. If an endpoint receives a SETTINGS frame whose stream identifier field is anything other than 0x0, the endpoint MUST respond with a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
The SETTINGS frame affects connection state. A badly formed or incomplete SETTINGS frame MUST be treated as a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
The payload of a SETTINGS frame consists of zero or more parameters, each consisting of an unsigned 8-bit identifier and an unsigned 32-bit value.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Identifier (8)|
+---------------+-----------------------------------------------+
|                          Value (32)                           |
+---------------------------------------------------------------+
Setting Format
The following parameters are defined:
An endpoint that receives a SETTINGS frame with any other identifier MUST treat this as a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
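The wire format and processing rules above (5-octet entries, in-order application, last value wins) can be sketched as a small parser. The function name and the use of ValueError for a malformed payload are illustrative assumptions:

```python
import struct

def parse_settings(payload: bytes) -> dict:
    """Parse a SETTINGS frame payload: zero or more entries, each an
    unsigned 8-bit identifier followed by an unsigned 32-bit value.

    Entries are processed in order, so a later entry for the same
    identifier replaces an earlier one (the last value wins). A payload
    that is not a whole number of 5-octet entries is badly formed, which
    is a connection error of type PROTOCOL_ERROR (ValueError here).
    """
    if len(payload) % 5 != 0:
        raise ValueError("PROTOCOL_ERROR: incomplete SETTINGS payload")
    settings = {}
    for off in range(0, len(payload), 5):
        ident = payload[off]
        (value,) = struct.unpack_from("!I", payload, off + 1)
        settings[ident] = value  # in-order: last value seen is kept
    return settings
```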
Most values in SETTINGS benefit from or require an understanding of when the peer has received and applied the communicated parameter values. In order to provide such synchronization timepoints, the recipient of a SETTINGS frame in which the ACK flag is not set MUST apply the updated parameters as soon as possible upon receipt.
The values in the SETTINGS frame MUST be applied in the order they appear, with no other frame processing between values. Once all values have been applied, the recipient MUST immediately emit a SETTINGS frame with the ACK flag set. Upon receiving a SETTINGS frame with the ACK flag set, the sender of the altered parameters can rely upon their application.
If the sender of a SETTINGS frame does not receive an acknowledgement within a reasonable amount of time, it MAY issue a connection error [ConnectionErrorHandler] of type SETTINGS_TIMEOUT [SETTINGS_TIMEOUT].
The PUSH_PROMISE frame (type=0x5) is used to notify the peer endpoint in advance of streams the sender intends to initiate. The PUSH_PROMISE frame includes the unsigned 31-bit identifier of the stream the endpoint plans to create along with a set of headers that provide additional context for the stream. Section 8.2 contains a thorough description of the use of PUSH_PROMISE frames.
PUSH_PROMISE MUST NOT be sent if the SETTINGS_ENABLE_PUSH [SETTINGS_ENABLE_PUSH] setting of the peer endpoint is set to 0.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Pad High? (8) | Pad Low? (8)  |
+-+-------------+---------------+-------------------------------+
|R|                  Promised Stream ID (31)                    |
+-+-----------------------------+-------------------------------+
|                   Header Block Fragment (*)                 ...
+---------------------------------------------------------------+
|                           Padding (*)                       ...
+---------------------------------------------------------------+
PUSH_PROMISE Payload Format
The PUSH_PROMISE frame payload has the following fields:
The PUSH_PROMISE frame defines the following flags:
PUSH_PROMISE frames MUST be associated with an existing, peer-initiated stream. The stream identifier of a PUSH_PROMISE frame indicates the stream it is associated with. If the stream identifier field specifies the value 0x0, a recipient MUST respond with a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
Promised streams are not required to be used in the order promised. The PUSH_PROMISE only reserves stream identifiers for later use.
Recipients of PUSH_PROMISE frames can choose to reject promised streams by returning a RST_STREAM [RST_STREAM] referencing the promised stream identifier back to the sender of the PUSH_PROMISE.
The PUSH_PROMISE frame modifies the connection state as defined in Section 4.3.
A PUSH_PROMISE frame modifies the connection state in two ways. The inclusion of a header block [HeaderBlock] potentially modifies the state maintained for header compression. PUSH_PROMISE also reserves a stream for later use, causing the promised stream to enter the "reserved" state. A sender MUST NOT send a PUSH_PROMISE on a stream unless that stream is either "open" or "half closed (remote)"; the sender MUST ensure that the promised stream is a valid choice for a new stream identifier [StreamIdentifiers] (that is, the promised stream MUST be in the "idle" state).
Since PUSH_PROMISE reserves a stream, ignoring a PUSH_PROMISE frame causes the stream state to become indeterminate. A receiver MUST treat the receipt of a PUSH_PROMISE on a stream that is neither "open" nor "half-closed (local)" as a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR]. Similarly, a receiver MUST treat the receipt of a PUSH_PROMISE that promises an illegal stream identifier [StreamIdentifiers] (that is, an identifier for a stream that is not currently in the "idle" state) as a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
The PUSH_PROMISE frame includes optional padding. Padding fields and flags are identical to those defined for DATA frames [DATA].
The PING frame (type=0x6) is a mechanism for measuring a minimal round-trip time from the sender, as well as determining whether an idle connection is still functional. PING frames can be sent from any endpoint.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
|                      Opaque Data (64)                         |
|                                                               |
+---------------------------------------------------------------+
PING Payload Format
In addition to the frame header, PING frames MUST contain 8 octets of data in the payload. A sender can include any value it chooses and use those bytes in any fashion.
Receivers of a PING frame that does not include an ACK flag MUST send a PING frame with the ACK flag set in response, with an identical payload. PING responses SHOULD be given higher priority than any other frame.
The PING frame defines the following flags:
PING frames are not associated with any individual stream. If a PING frame is received with a stream identifier field value other than 0x0, the recipient MUST respond with a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
Receipt of a PING frame with a length field value other than 8 MUST be treated as a connection error [ConnectionErrorHandler] of type FRAME_SIZE_ERROR [FRAME_SIZE_ERROR].
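The PING handling rules above can be summarized in a short sketch. Modeling a frame as a (flags, payload) tuple and using ValueError for FRAME_SIZE_ERROR are illustrative assumptions:

```python
ACK = 0x1  # assumed bit value for the PING ACK flag, for illustration

def handle_ping(flags: int, payload: bytes):
    """Return the (flags, payload) of the PING frame to send in response,
    or None if the received frame is itself an acknowledgement.

    A PING payload that is not exactly 8 octets is a connection error of
    type FRAME_SIZE_ERROR (modeled here as ValueError).
    """
    if len(payload) != 8:
        raise ValueError("FRAME_SIZE_ERROR: PING payload must be 8 octets")
    if flags & ACK:
        return None            # response to our own PING; nothing to send
    return (ACK, payload)      # echo the identical payload with ACK set
```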
The GOAWAY frame (type=0x7) informs the remote peer to stop creating streams on this connection. GOAWAY can be sent by either the client or the server. Once sent, the sender will ignore frames sent on new streams for the remainder of the connection. Receivers of a GOAWAY frame MUST NOT open additional streams on the connection, although a new connection can be established for new streams. The purpose of this frame is to allow an endpoint to gracefully stop accepting new streams (perhaps for a reboot or maintenance), while still finishing processing of previously established streams.
There is an inherent race condition between an endpoint starting new streams and the remote peer sending a GOAWAY frame. To deal with this case, the GOAWAY contains the stream identifier of the last stream that was processed on the sending endpoint in this connection. If the receiver of the GOAWAY used streams that are newer than the indicated stream identifier, they were not processed by the sender, and the receiver may treat those streams as though they had never been created at all (hence the receiver may want to re-create them later on a new connection).
Endpoints SHOULD always send a GOAWAY frame before closing a connection so that the remote can know whether a stream has been partially processed or not. For example, if an HTTP client sends a POST at the same time that a server closes a connection, the client cannot know if the server started to process that POST request if the server does not send a GOAWAY frame to indicate where it stopped working. An endpoint might choose to close a connection without sending GOAWAY for misbehaving peers.
After sending a GOAWAY frame, the sender can discard frames for new streams. However, any frames that alter connection state cannot be completely ignored. For instance, HEADERS [HEADERS], PUSH_PROMISE [PUSH_PROMISE] and CONTINUATION [CONTINUATION] frames MUST be minimally processed to ensure the state maintained for header compression is consistent (see Section 4.3); similarly DATA frames MUST be counted toward the connection flow control window.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|R|                  Last-Stream-ID (31)                        |
+-+-------------------------------------------------------------+
|                      Error Code (32)                          |
+---------------------------------------------------------------+
|                  Additional Debug Data (*)                    |
+---------------------------------------------------------------+
GOAWAY Payload Format
The GOAWAY frame does not define any flags.
The GOAWAY frame applies to the connection, not a specific stream. An endpoint MUST treat a GOAWAY [GOAWAY] frame with a stream identifier other than 0x0 as a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
The last stream identifier in the GOAWAY frame contains the highest numbered stream identifier for which the sender of the GOAWAY frame has received frames and might have taken some action on. All streams up to and including the identified stream might have been processed in some way. The last stream identifier is set to 0 if no streams were processed.
If a connection terminates without a GOAWAY frame, this value is effectively the highest stream identifier.
On streams with lower or equal numbered identifiers that were not closed completely prior to the connection being closed, re-attempting requests, transactions, or any protocol activity is not possible (with the exception of idempotent actions like HTTP GET, PUT, or DELETE). Any protocol activity that uses higher numbered streams can be safely retried using a new connection.
Activity on streams numbered lower or equal to the last stream identifier might still complete successfully. The sender of a GOAWAY frame might gracefully shut down a connection by sending a GOAWAY frame, maintaining the connection in an open state until all in-progress streams complete.
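The retry rule implied by the last stream identifier reduces to a single comparison. A minimal sketch, with the function name an assumption:

```python
def safe_to_retry(stream_id: int, last_stream_id: int) -> bool:
    """After a GOAWAY, streams numbered above Last-Stream-ID were never
    processed by the sender of the GOAWAY, so activity on them can be
    safely retried on a new connection. Lower- or equal-numbered streams
    might have been acted on and are not safely retryable unless the
    action is idempotent."""
    return stream_id > last_stream_id
```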
If an endpoint maintains the connection and continues to exchange frames, ignored frames MUST be counted toward flow control limits [FlowControl] or update header compression state [HeaderBlock]. Otherwise, flow control or header compression state can become unsynchronized.
The GOAWAY frame also contains a 32-bit error code [ErrorCodes] that contains the reason for closing the connection.
Endpoints MAY append opaque data to the payload of any GOAWAY frame. Additional debug data is intended for diagnostic purposes only and carries no semantic value. Debug information could contain security- or privacy-sensitive data. Logged or otherwise persistently stored debug data MUST have adequate safeguards to prevent unauthorized access.
The WINDOW_UPDATE frame (type=0x8) is used to implement flow control; see Section 5.2 for an overview.
Flow control operates at two levels: on each individual stream and on the entire connection.
Both types of flow control are hop-by-hop; that is, only between the two endpoints. Intermediaries do not forward WINDOW_UPDATE frames between dependent connections. However, throttling of data transfer by any receiver can indirectly cause the propagation of flow control information toward the original sender.
Flow control only applies to frames that are identified as being subject to flow control. Of the frame types defined in this document, this includes only DATA [DATA] frames. Frames that are exempt from flow control MUST be accepted and processed, unless the receiver is unable to assign resources to handling the frame. A receiver MAY respond with a stream error [StreamErrorHandler] or connection error [ConnectionErrorHandler] of type FLOW_CONTROL_ERROR [FLOW_CONTROL_ERROR] if it is unable to accept a frame.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|R|              Window Size Increment (31)                     |
+-+-------------------------------------------------------------+
WINDOW_UPDATE Payload Format
The payload of a WINDOW_UPDATE frame is one reserved bit, plus an unsigned 31-bit integer indicating the number of bytes that the sender can transmit in addition to the existing flow control window. The legal range for the increment to the flow control window is 1 to 2^31 - 1 (0x7fffffff) bytes.
The WINDOW_UPDATE frame does not define any flags.
The WINDOW_UPDATE frame can be specific to a stream or to the entire connection. In the former case, the frame's stream identifier indicates the affected stream; in the latter, the value "0" indicates that the entire connection is the subject of the frame.
WINDOW_UPDATE can be sent by a peer that has sent a frame bearing the END_STREAM flag. This means that a receiver could receive a WINDOW_UPDATE frame on a "half closed (remote)" or "closed" stream. A receiver MUST NOT treat this as an error; see Section 5.1.
A receiver that receives a flow controlled frame MUST always account for its contribution against the connection flow control window, unless the receiver treats this as a connection error [ConnectionErrorHandler]. This is necessary even if the frame is in error. Since the sender counts the frame toward the flow control window, if the receiver does not, the flow control window at sender and receiver can become different.
Flow control in HTTP/2 is implemented using a window kept by each sender on every stream. The flow control window is a simple integer value that indicates how many bytes of data the sender is permitted to transmit; as such, its size is a measure of the buffering capability of the receiver.
Two flow control windows are applicable: the stream flow control window and the connection flow control window. The sender MUST NOT send a flow controlled frame with a length that exceeds the space available in either of the flow control windows advertised by the receiver. Frames with zero length with the END_STREAM flag set (for example, an empty data frame) MAY be sent if there is no available space in either flow control window.
For flow control calculations, the 8 byte frame header is not counted.
After sending a flow controlled frame, the sender reduces the space available in both windows by the length of the transmitted frame.
The receiver of a frame sends a WINDOW_UPDATE frame as it consumes data and frees up space in flow control windows. Separate WINDOW_UPDATE frames are sent for the stream and connection level flow control windows.
A sender that receives a WINDOW_UPDATE frame updates the corresponding window by the amount specified in the frame.
A sender MUST NOT allow a flow control window to exceed 2^31 - 1 bytes. If a sender receives a WINDOW_UPDATE that causes a flow control window to exceed this maximum it MUST terminate either the stream or the connection, as appropriate. For streams, the sender sends a RST_STREAM [RST_STREAM] with an error code of FLOW_CONTROL_ERROR [FLOW_CONTROL_ERROR]; for the connection, a GOAWAY [GOAWAY] frame with a FLOW_CONTROL_ERROR [FLOW_CONTROL_ERROR] code.
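The sender-side window accounting described above (debit on send, credit on WINDOW_UPDATE, overflow check) can be sketched as follows. The class is illustrative; a real sender maintains a stream window and the connection window together, and ValueError stands in for the stream or connection error:

```python
MAX_WINDOW = 2**31 - 1  # flow control windows must not exceed 2^31 - 1

class FlowWindow:
    """Sender-side view of one flow control window (stream or connection).
    Minimal sketch for illustration; names and structure are assumptions.
    """
    def __init__(self, initial: int = 65535):
        # 65,535 bytes is the default initial window size
        self.available = initial

    def consume(self, length: int) -> None:
        """Debit the window for a flow controlled frame of `length`
        payload octets (padding included; the 8 byte frame header is not
        counted). Sending beyond the window is not permitted."""
        if length > self.available:
            raise ValueError("frame exceeds available flow control window")
        self.available -= length

    def window_update(self, increment: int) -> None:
        """Credit the window for a received WINDOW_UPDATE. An increment
        outside 1..2^31-1, or one that pushes the window past the
        maximum, is a FLOW_CONTROL_ERROR (ValueError here)."""
        if not 1 <= increment <= MAX_WINDOW:
            raise ValueError("illegal window size increment")
        if self.available + increment > MAX_WINDOW:
            raise ValueError("FLOW_CONTROL_ERROR: window exceeds 2^31 - 1")
        self.available += increment
```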
Flow controlled frames from the sender and WINDOW_UPDATE frames from the receiver are completely asynchronous with respect to each other. This property allows a receiver to aggressively update the window size kept by the sender to prevent streams from stalling.
When an HTTP/2 connection is first established, new streams are created with an initial flow control window size of 65,535 bytes. The connection flow control window is 65,535 bytes. Both endpoints can adjust the initial window size for new streams by including a value for SETTINGS_INITIAL_WINDOW_SIZE [SETTINGS_INITIAL_WINDOW_SIZE] in the SETTINGS [SETTINGS] frame that forms part of the connection preface. The connection flow control window initial size cannot be changed.
Prior to receiving a SETTINGS [SETTINGS] frame that sets a value for SETTINGS_INITIAL_WINDOW_SIZE [SETTINGS_INITIAL_WINDOW_SIZE], an endpoint can only use the default initial window size when sending flow controlled frames. Similarly, the connection flow control window is set to the default initial window size until a WINDOW_UPDATE frame is received.
A SETTINGS [SETTINGS] frame can alter the initial flow control window size for all current streams. When the value of SETTINGS_INITIAL_WINDOW_SIZE [SETTINGS_INITIAL_WINDOW_SIZE] changes, a receiver MUST adjust the size of all stream flow control windows that it maintains by the difference between the new value and the old value. A SETTINGS [SETTINGS] frame cannot alter the connection flow control window.
An endpoint MUST treat a change to SETTINGS_INITIAL_WINDOW_SIZE [SETTINGS_INITIAL_WINDOW_SIZE] that causes any flow control window to exceed the maximum size as a connection error [ConnectionErrorHandler] of type FLOW_CONTROL_ERROR [FLOW_CONTROL_ERROR].
A change to SETTINGS_INITIAL_WINDOW_SIZE [SETTINGS_INITIAL_WINDOW_SIZE] can cause the available space in a flow control window to become negative. A sender MUST track the negative flow control window, and MUST NOT send new flow controlled frames until it receives WINDOW_UPDATE frames that cause the flow control window to become positive.
For example, if the client sends 60KB immediately on connection establishment, and the server sets the initial window size to be 16KB, the client will recalculate the available flow control window to be -44KB on receipt of the SETTINGS [SETTINGS] frame. The client retains a negative flow control window until WINDOW_UPDATE frames restore the window to being positive, after which the client can resume sending.
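The arithmetic of that example can be worked through directly. The helper name is an assumption; the adjustment itself is simply the difference between the new and old initial window values:

```python
def adjust_window(available: int, old_initial: int, new_initial: int) -> int:
    """Apply a SETTINGS_INITIAL_WINDOW_SIZE change to a stream's window.
    The window moves by (new - old) and may become negative, in which
    case the sender must not send new flow controlled frames until
    WINDOW_UPDATE frames make it positive again."""
    return available + (new_initial - old_initial)

# The example from the text: 60KB sent against the 65,535-byte default
# window, then the peer lowers the initial window size to 16KB.
remaining = 65535 - 60 * 1024            # window left after sending 60KB
adjusted = adjust_window(remaining, 65535, 16 * 1024)
# adjusted is now -44KB: the client must wait for WINDOW_UPDATE frames
```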
A receiver that wishes to use a smaller flow control window than the current size can send a new SETTINGS [SETTINGS] frame. However, the receiver MUST be prepared to receive data that exceeds this window size, since the sender might send data that exceeds the lower limit prior to processing the SETTINGS [SETTINGS] frame.
After sending a SETTINGS frame that reduces the initial flow control window size, a receiver has two options for handling streams that exceed flow control limits:
The CONTINUATION frame (type=0x9) is used to continue a sequence of header block fragments [HeaderBlock]. Any number of CONTINUATION frames can be sent on an existing stream, as long as the preceding frame is on the same stream and is a HEADERS [HEADERS], PUSH_PROMISE [PUSH_PROMISE] or CONTINUATION frame without the END_HEADERS flag set.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Pad High? (8) | Pad Low? (8)  |
+---------------+---------------+-------------------------------+
|                   Header Block Fragment (*)                 ...
+---------------------------------------------------------------+
|                           Padding (*)                       ...
+---------------------------------------------------------------+
CONTINUATION Frame Payload
The CONTINUATION frame payload has the following fields:
The CONTINUATION frame defines the following flags:
The payload of a CONTINUATION frame contains a header block fragment [HeaderBlock].
The CONTINUATION frame changes the connection state as defined in Section 4.3.
CONTINUATION frames MUST be associated with a stream. If a CONTINUATION frame is received whose stream identifier field is 0x0, the recipient MUST respond with a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR.
A CONTINUATION frame MUST be preceded by a HEADERS [HEADERS], PUSH_PROMISE [PUSH_PROMISE] or CONTINUATION frame without the END_HEADERS flag set. A recipient that observes violation of this rule MUST respond with a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
The CONTINUATION frame includes optional padding. Padding fields and flags are identical to those defined for DATA frames [DATA].
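As a sketch of the padding removal shared with DATA frames, the following assumes this draft's padding flag values (PAD_LOW = 0x8, PAD_HIGH = 0x10); the helper name is illustrative, not from the specification:

```python
# Sketch: stripping Pad High / Pad Low padding from a CONTINUATION
# payload to recover the header block fragment. Flag values are taken
# from the DATA frame definition in this draft; treat as illustrative.

PAD_LOW = 0x08
PAD_HIGH = 0x10


def header_block_fragment(flags: int, payload: bytes) -> bytes:
    pad_len = 0
    offset = 0
    if flags & PAD_HIGH:
        pad_len = payload[0] << 8  # high-order octet of padding length
        offset += 1
    if flags & PAD_LOW:
        pad_len += payload[offset]  # low-order octet of padding length
        offset += 1
    body = payload[offset:]
    if pad_len > len(body):
        raise ValueError("PROTOCOL_ERROR: padding exceeds payload")
    return body[:len(body) - pad_len]


# Two octets of padding, signalled via Pad Low only:
frag = header_block_fragment(PAD_LOW, b"\x02" + b"header" + b"\x00\x00")
assert frag == b"header"
```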
The ALTSVC frame (type=0xA) advertises the availability of an alternative service to the client. It can be sent at any time for an existing client-initiated stream or stream 0, and is intended to allow servers to load balance or otherwise segment traffic; see [ALT-SVC] for details (in particular, Section 2.4, which outlines client handling of alternative services).
An ALTSVC frame on a client-initiated stream indicates that the conveyed alternative service is associated with the origin of that stream.
An ALTSVC frame on stream 0 indicates that the conveyed alternative service is associated with the origin contained in the Origin field of the frame. An association with an origin that the client does not consider authoritative for the current connection MUST be ignored.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                          Max-Age (32)                         |
+-------------------------------+----------------+--------------+
|           Port (16)           |  Reserved (8)  |  PID_LEN (8) |
+-------------------------------+----------------+--------------+
|                        Protocol-ID (*)                        |
+---------------+-----------------------------------------------+
| HOST_LEN (8)  |                  Host (*)                   ...
+---------------+-----------------------------------------------+
|                          Origin? (*)                        ...
+---------------------------------------------------------------+
The ALTSVC frame contains the following fields:
The ALTSVC frame does not define any flags.
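A minimal parser for the payload layout shown above might look like the following; the function name and the shape of its return value are illustrative only:

```python
import struct

# Sketch: parsing an ALTSVC payload (Max-Age, Port, Reserved, PID_LEN,
# Protocol-ID, HOST_LEN, Host, optional Origin). Illustrative helper.


def parse_altsvc(payload: bytes):
    # "!IHBB": 32-bit Max-Age, 16-bit Port, 8-bit Reserved, 8-bit PID_LEN
    max_age, port, _reserved, pid_len = struct.unpack_from("!IHBB", payload, 0)
    offset = 8
    proto = payload[offset:offset + pid_len]
    offset += pid_len
    host_len = payload[offset]
    offset += 1
    host = payload[offset:offset + host_len]
    offset += host_len
    origin = payload[offset:]  # remainder of the frame, possibly empty
    return max_age, port, proto.decode(), host.decode(), origin.decode()


payload = (struct.pack("!IHBB", 86400, 443, 0, 2) + b"h2"
           + bytes([11]) + b"example.org")
assert parse_altsvc(payload) == (86400, 443, "h2", "example.org", "")
```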
The ALTSVC frame is intended for receipt by clients; a server that receives an ALTSVC frame MUST treat it as a connection error of type PROTOCOL_ERROR.
The ALTSVC frame is processed hop-by-hop. An intermediary MUST NOT forward ALTSVC frames, though it can use the information contained in ALTSVC frames in forming new ALTSVC frames to send to its own clients.
Error codes are 32-bit fields that are used in RST_STREAM [RST_STREAM] and GOAWAY [GOAWAY] frames to convey the reasons for the stream or connection error.
Error codes share a common code space. Some error codes only apply to specific conditions and have no defined semantics in certain frame types.
The following error codes are defined:
HTTP/2 is intended to be as compatible as possible with current uses of HTTP. This means that, from the perspective of the server and client applications, the features of the protocol are unchanged. To achieve this, all request and response semantics are preserved, although the syntax of conveying those semantics has changed.
Thus, the specification and requirements of HTTP/1.1 Semantics and Content [HTTP-p2], Conditional Requests [HTTP-p4], Range Requests [HTTP-p5], Caching [HTTP-p6] and Authentication [HTTP-p7] are applicable to HTTP/2. Selected portions of HTTP/1.1 Message Syntax and Routing [HTTP-p1], such as the HTTP and HTTPS URI schemes, are also applicable in HTTP/2, but the expression of those semantics for this protocol are defined in the sections below.
A client sends an HTTP request on a new stream, using a previously unused stream identifier [StreamIdentifiers]. A server sends an HTTP response on the same stream as the request.
An HTTP message (request or response) consists of: one HEADERS [HEADERS] frame (followed by zero or more CONTINUATION [CONTINUATION] frames) containing the message header fields; zero or more DATA [DATA] frames containing the message payload; and, optionally, one HEADERS [HEADERS] frame (followed by zero or more CONTINUATION [CONTINUATION] frames) containing the trailer-part, if present.
The last frame in the sequence bears an END_STREAM flag, though a HEADERS [HEADERS] frame bearing the END_STREAM flag can be followed by CONTINUATION [CONTINUATION] frames that carry any remaining portions of the header block.
Other frames (from any stream) MUST NOT occur between either HEADERS [HEADERS] frame and the following CONTINUATION [CONTINUATION] frames (if present), nor between CONTINUATION [CONTINUATION] frames.
Otherwise, frames MAY be interspersed on the stream between these frames, but those frames do not carry HTTP semantics. In particular, HEADERS [HEADERS] frames (and any CONTINUATION [CONTINUATION] frames that follow) other than the first and optional last frames in this sequence do not carry HTTP semantics.
Trailing header fields are carried in a header block that also terminates the stream. That is, they are sent as a sequence starting with a HEADERS [HEADERS] frame, followed by zero or more CONTINUATION [CONTINUATION] frames, where the HEADERS [HEADERS] frame bears an END_STREAM flag. Header blocks after the first that do not terminate the stream are not part of an HTTP request or response.
An HTTP request/response exchange fully consumes a single stream. A request starts with the HEADERS [HEADERS] frame that puts the stream into an "open" state and ends with a frame bearing END_STREAM, which causes the stream to become "half closed" for the client. A response starts with a HEADERS [HEADERS] frame, optionally followed by CONTINUATION [CONTINUATION] frames, and ends with a frame bearing END_STREAM, which places the stream in the "closed" state.
The 1xx series of HTTP response status codes ([HTTP-p2], Section 6.2) are not supported in HTTP/2.
The most common use case for 1xx is using an Expect header field with a 100-continue token (colloquially, "Expect/continue") to indicate that the client expects a 100 (Continue) non-final response status code, receipt of which indicates that the client should continue sending the request body if it has not already done so.
Typically, Expect/continue is used by clients wishing to avoid sending a large amount of data in a request body, only to have the request rejected by the origin server (thus leaving the connection potentially unusable).
HTTP/2 does not enable the Expect/continue mechanism; if the server sends a final status code to reject the request, it can do so without making the underlying connection unusable.
Note that this means HTTP/2 clients sending requests with bodies may waste at least one round trip of sent data when the request is rejected. This can be mitigated by restricting the amount of data sent for the first round trip by bandwidth-constrained clients, in anticipation of a final status code.
Other defined 1xx status codes are not applicable to HTTP/2. For example, the semantics of 101 (Switching Protocols) aren't suitable to a multiplexed protocol. Likewise, 102 (Processing) is no longer necessary, because HTTP/2 has a separate means of keeping the connection alive.
This difference between protocol versions necessitates special handling by intermediaries that translate between them:
This section shows HTTP/1.1 requests and responses, with illustrations of equivalent HTTP/2 requests and responses.
An HTTP GET request includes request header fields and no body and is therefore transmitted as a single HEADERS [HEADERS] frame, followed by zero or more CONTINUATION [CONTINUATION] frames containing the serialized block of request header fields. The last HEADERS [HEADERS] frame in the sequence has both the END_HEADERS and END_STREAM flags set:
  GET /resource HTTP/1.1        HEADERS
  Host: example.org        ==>    + END_STREAM
  Accept: image/jpeg              + END_HEADERS
                                    :method = GET
                                    :scheme = https
                                    :path = /resource
                                    host = example.org
                                    accept = image/jpeg
Similarly, a response that includes only response header fields is transmitted as a HEADERS [HEADERS] frame (again, followed by zero or more CONTINUATION [CONTINUATION] frames) containing the serialized block of response header fields. The last HEADERS [HEADERS] frame in the sequence has both the END_HEADERS and END_STREAM flag set:
  HTTP/1.1 304 Not Modified     HEADERS
  ETag: "xyzzy"            ==>    + END_STREAM
  Expires: Thu, 23 Jan ...        + END_HEADERS
                                    :status = 304
                                    etag: "xyzzy"
                                    expires: Thu, 23 Jan ...
An HTTP POST request that includes request header fields and payload data is transmitted as one HEADERS [HEADERS] frame, followed by zero or more CONTINUATION [CONTINUATION] frames containing the request header fields, followed by one or more DATA [DATA] frames, with the last CONTINUATION [CONTINUATION] (or HEADERS [HEADERS]) frame having the END_HEADERS flag set and the final DATA [DATA] frame having the END_STREAM flag set:
  POST /resource HTTP/1.1       HEADERS
  Host: example.org        ==>    - END_STREAM
  Content-Type: image/jpeg        + END_HEADERS
  Content-Length: 123               :method = POST
                                    :scheme = https
  {binary data}                     :path = /resource
                                    :authority = example.org
                                    content-type = image/jpeg
                                    content-length = 123

                                DATA
                                  + END_STREAM
                                {binary data}
A response that includes header fields and payload data is transmitted as a HEADERS [HEADERS] frame, followed by zero or more CONTINUATION [CONTINUATION] frames, followed by one or more DATA [DATA] frames, with the last DATA [DATA] frame in the sequence having the END_STREAM flag set:
  HTTP/1.1 200 OK               HEADERS
  Content-Type: image/jpeg ==>    - END_STREAM
  Content-Length: 123             + END_HEADERS
                                    :status = 200
  {binary data}                     content-type = image/jpeg
                                    content-length = 123

                                DATA
                                  + END_STREAM
                                {binary data}
Trailing header fields are sent as a header block after both the request or response header block and all the DATA [DATA] frames have been sent. The sequence of HEADERS [HEADERS]/CONTINUATION [CONTINUATION] frames that bears the trailers includes a terminal frame that has both END_HEADERS and END_STREAM flags set.
  HTTP/1.1 200 OK               HEADERS
  Content-Type: image/jpeg ==>    - END_STREAM
  Transfer-Encoding: chunked      + END_HEADERS
  Trailer: Foo                      :status = 200
                                    content-length = 123
  123                               content-type = image/jpeg
  {binary data}                     trailer = Foo
  0
  Foo: bar                      DATA
                                  - END_STREAM
                                {binary data}

                                HEADERS
                                  + END_STREAM
                                  + END_HEADERS
                                    foo: bar
HTTP header fields carry information as a series of key-value pairs. For a listing of registered HTTP headers, see the Message Header Field Registry maintained at http://www.iana.org/assignments/message-headers.
While HTTP/1.x used the message start-line (see [HTTP-p1], Section 3.1) to convey the target URI and method of the request, and the status code for the response, HTTP/2 uses special pseudo-headers beginning with ":" for these tasks.
Just as in HTTP/1.x, header field names are strings of ASCII characters that are compared in a case-insensitive fashion. However, header field names MUST be converted to lowercase prior to their encoding in HTTP/2. A request or response containing uppercase header field names MUST be treated as malformed [malformed].
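The lowercase rule can be expressed as a trivial check; the helper name below is illustrative, not from the specification:

```python
# Sketch: a message containing an uppercase header field name is
# malformed; names must be lowercased prior to encoding in HTTP/2.


def is_malformed_name(name: str) -> bool:
    return name != name.lower()


assert not is_malformed_name("content-type")
assert is_malformed_name("Content-Type")  # uppercase => malformed
```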
HTTP/2 does not use the Connection header field to indicate "hop-by-hop" header fields; in this protocol, connection-specific metadata is conveyed by other means. As such, an HTTP/2 message containing Connection MUST be treated as malformed [malformed].
This means that an intermediary transforming an HTTP/1.x message to HTTP/2 will need to remove any header fields nominated by the Connection header field, along with the Connection header field itself. Such intermediaries SHOULD also remove other connection-specific header fields, such as Keep-Alive, Proxy-Connection, Transfer-Encoding and Upgrade, even if they are not nominated by Connection.
One exception to this is the TE header field, which MAY be present in an HTTP/2 request, but when it is MUST NOT contain any value other than "trailers".
HTTP/2 defines a number of header fields starting with a colon ':' character that carry information about the request target.
All HTTP/2 requests MUST include exactly one valid value for the :method, :scheme, and :path header fields, unless this is a CONNECT request [CONNECT]. An HTTP request that omits mandatory header fields is malformed [malformed].
Header field names that start with a colon are only valid in the HTTP/2 context. These are not HTTP header fields. Implementations MUST NOT generate header fields that start with a colon, but they MUST ignore any header field that starts with a colon. In particular, header fields with names starting with a colon MUST NOT be exposed as HTTP header fields.
HTTP/2 does not define a way to carry the version identifier that is included in the HTTP/1.1 request line.
A single :status header field is defined that carries the HTTP status code field (see [HTTP-p2], Section 6). This header field MUST be included in all responses, otherwise the response is malformed [malformed].
HTTP/2 does not define a way to carry the version or reason phrase that is included in an HTTP/1.1 status line.
HTTP Header Compression [COMPRESSION] does not preserve the order of header fields, because the relative order of header fields with different names is not important. However, the same header field can be repeated to form a list (see [HTTP-p1], Section 3.2.2), where the relative order of header field values is significant. This repetition can occur either as a single header field with a comma-separated list of values, or as several header fields with a single value, or any combination thereof. Therefore, in the latter case, ordering needs to be preserved before compression takes place.
To preserve the order of multiple occurrences of a header field with the same name, its ordered values are concatenated into a single value using a zero-valued octet (0x0) to delimit them.
After decompression, header fields that have values containing zero octets (0x0) MUST be split into multiple header fields before being processed.
For example, the following HTTP/1.x header block:
  Content-Type: text/html
  Cache-Control: max-age=60, private
  Cache-Control: must-revalidate
contains three Cache-Control directives; two in the first Cache-Control header field, and the last one in the second Cache-Control field. Before compression, they would need to be converted to a form similar to this (with 0x0 represented as "\0"):
  cache-control: max-age=60, private\0must-revalidate
  content-type: text/html
Note here that the ordering between Content-Type and Cache-Control is not preserved, but the relative ordering of the Cache-Control directives -- as well as the fact that the first two were comma-separated, while the last was on a different line -- is.
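The join-then-split rule illustrated above can be sketched as follows; the helper names are illustrative:

```python
# Sketch: ordered values of a repeated header field are joined with a
# zero octet (0x0) before compression and split again after decompression.


def join_values(values):
    return "\0".join(values)


def split_value(value):
    return value.split("\0") if "\0" in value else [value]


joined = join_values(["max-age=60, private", "must-revalidate"])
assert joined == "max-age=60, private\0must-revalidate"
assert split_value(joined) == ["max-age=60, private", "must-revalidate"]
```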
Header fields containing multiple values MUST be concatenated into a single value unless the ordering of that header field is known to be insignificant.
The special case of set-cookie - which does not form a comma-separated list, but can have multiple values - does not depend on ordering. The set-cookie header field MAY be encoded as multiple header field values, or as a single concatenated value.
The Cookie header field [COOKIE] can carry a significant amount of redundant data.
The Cookie header field uses a semi-colon (";") to delimit cookie-pairs (or "crumbs"). This header field doesn't follow the list construction rules in HTTP (see [HTTP-p1], Section 3.2.2), which prevents cookie-pairs from being separated into different name-value pairs. This can significantly reduce compression efficiency as individual cookie-pairs are updated.
To allow for better compression efficiency, the Cookie header field MAY be split into separate header fields, each with one or more cookie-pairs. If there are multiple Cookie header fields after decompression, these MUST be concatenated into a single octet string using the two octet delimiter of 0x3B, 0x20 (the ASCII string "; ").
The Cookie header field MAY be split using a zero octet (0x0), as defined in Section 8.1.3.3. When decoding, zero octets MUST be replaced with the cookie delimiter ("; ").
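The cookie-pair handling described above can be sketched as follows; the helper names are illustrative:

```python
# Sketch: Cookie header fields may be split into individual cookie-pairs
# for compression efficiency, then rejoined after decompression using
# the two-octet delimiter 0x3B 0x20 (the ASCII string "; ").


def split_cookie(value: str):
    return value.split("; ")


def join_cookie(crumbs):
    return "; ".join(crumbs)


crumbs = split_cookie("a=b; c=d; e=f")
assert crumbs == ["a=b", "c=d", "e=f"]
assert join_cookie(crumbs) == "a=b; c=d; e=f"
```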
A malformed request or response is one that uses a valid sequence of HTTP/2 frames, but is otherwise invalid due to the presence of prohibited header fields, the absence of mandatory header fields, or the inclusion of uppercase header field names.
A request or response that includes an entity body can include a content-length header field. A request or response is also malformed if the value of a content-length header field does not equal the sum of the DATA [DATA] frame payload lengths that form the body.
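The content-length check can be sketched as follows; the helper name is illustrative:

```python
# Sketch: a message is malformed if its content-length header field
# value does not equal the sum of the DATA frame payload lengths
# that form the body.


def content_length_matches(content_length: int, data_payloads) -> bool:
    return content_length == sum(len(p) for p in data_payloads)


assert content_length_matches(123, [b"x" * 100, b"y" * 23])
assert not content_length_matches(123, [b"x" * 100])
```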
Intermediaries that process HTTP requests or responses (i.e., all intermediaries other than those acting as tunnels) MUST NOT forward a malformed request or response.
Implementations that detect malformed requests or responses need to ensure that the stream ends. For malformed requests, a server MAY send an HTTP response prior to closing or resetting the stream. Clients MUST NOT accept a malformed response. Note that these requirements are intended to protect against several types of common attacks against HTTP; they are deliberately strict, because being permissive can expose implementations to these vulnerabilities.
In HTTP/1.1, an HTTP client is unable to retry a non-idempotent request when an error occurs, because there is no means to determine the nature of the error. It is possible that some server processing occurred prior to the error, which could result in undesirable effects if the request were reattempted.
HTTP/2 provides two mechanisms for providing a guarantee to a client that a request has not been processed:
Requests that have not been processed have not failed; clients MAY automatically retry them, even those with non-idempotent methods.
A server MUST NOT indicate that a stream has not been processed unless it can guarantee that fact. If frames on a stream are passed to the application layer for that stream, then REFUSED_STREAM [REFUSED_STREAM] MUST NOT be used for that stream, and a GOAWAY [GOAWAY] frame MUST include a stream identifier that is greater than or equal to that stream's identifier.
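A client-side retry decision based on the GOAWAY last stream identifier might be sketched as follows, assuming streams with identifiers above that value were not processed by the server; the helper name is illustrative:

```python
# Sketch: a client deciding whether a request can be retried safely
# after receiving GOAWAY. Streams with identifiers greater than the
# last stream identifier in the GOAWAY frame were not processed, so
# they can be retried automatically, even for non-idempotent methods.


def can_retry(stream_id: int, goaway_last_stream_id: int) -> bool:
    return stream_id > goaway_last_stream_id


assert can_retry(stream_id=9, goaway_last_stream_id=7)
assert not can_retry(stream_id=5, goaway_last_stream_id=7)
```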
In addition to these mechanisms, the PING [PING] frame provides a way for a client to easily test a connection. Connections that remain idle can become broken as some middleboxes (for instance, network address translators, or load balancers) silently discard connection bindings. The PING [PING] frame allows a client to safely test whether a connection is still active without sending a request.
HTTP/2 enables a server to pre-emptively send (or "push") one or more associated responses to a client in response to a single request. This feature becomes particularly helpful when the server knows the client will need to have those responses available in order to fully process the response to the original request.
Pushing additional responses is optional, and is negotiated between individual endpoints. The SETTINGS_ENABLE_PUSH [SETTINGS_ENABLE_PUSH] setting can be set to 0 to indicate that server push is disabled.
Because pushing responses is effectively hop-by-hop, an intermediary could receive pushed responses from the server and choose not to forward those on to the client. In other words, how to make use of the pushed responses is up to that intermediary. Equally, the intermediary might choose to push additional responses to the client, without any action taken by the server.
A client cannot push. Thus, servers MUST treat the receipt of a PUSH_PROMISE [PUSH_PROMISE] frame as a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR]. Clients MUST reject any attempt to change the SETTINGS_ENABLE_PUSH [SETTINGS_ENABLE_PUSH] setting to a value other than "0" by treating the message as a connection error [ConnectionErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
A server can only push responses that are cacheable (see [HTTP-p6], Section 3); promised requests MUST be safe (see [HTTP-p2], Section 4.2.1) and MUST NOT include a request body.
Server push is semantically equivalent to a server responding to a request; however, in this case that request is also sent by the server, as a PUSH_PROMISE [PUSH_PROMISE] frame.
The PUSH_PROMISE [PUSH_PROMISE] frame includes a header block that contains a complete set of request header fields that the server attributes to the request. It is not possible to push a response to a request that includes a request body.
Pushed responses are always associated with an explicit request from the client. The PUSH_PROMISE [PUSH_PROMISE] frames sent by the server are sent on that explicit request's stream. The PUSH_PROMISE [PUSH_PROMISE] frame also includes a promised stream identifier, chosen from the stream identifiers available to the server (see Section 5.1.1).
The header fields in PUSH_PROMISE [PUSH_PROMISE] and any subsequent CONTINUATION [CONTINUATION] frames MUST be a valid and complete set of request header fields [HttpRequest]. The server MUST include a method in the :method header field that is safe and cacheable. If a client receives a PUSH_PROMISE [PUSH_PROMISE] that does not include a complete and valid set of header fields, or the :method header field identifies a method that is not safe, it MUST respond with a stream error [StreamErrorHandler] of type PROTOCOL_ERROR [PROTOCOL_ERROR].
The server SHOULD send PUSH_PROMISE [PUSH_PROMISE] (Section 6.6) frames prior to sending any frames that reference the promised responses. This avoids a race where clients issue requests prior to receiving any PUSH_PROMISE [PUSH_PROMISE] frames.
For example, if the server receives a request for a document containing embedded links to multiple image files, and the server chooses to push those additional images to the client, sending push promises before the DATA [DATA] frames that contain the image links ensures that the client is able to see the promises before discovering embedded links. Similarly, if the server pushes responses referenced by the header block (for instance, in Link header fields), sending the push promises before sending the header block ensures that clients do not request them.
PUSH_PROMISE [PUSH_PROMISE] frames MUST NOT be sent by the client. PUSH_PROMISE [PUSH_PROMISE] frames can be sent by the server on any stream that was opened by the client. They MUST be sent on a stream that is, from the server's perspective, in either the "open" or "half closed (remote)" state. PUSH_PROMISE [PUSH_PROMISE] frames are interspersed with the frames that comprise a response, though they cannot be interspersed with HEADERS [HEADERS] and CONTINUATION [CONTINUATION] frames that comprise a single header block.
After sending the PUSH_PROMISE [PUSH_PROMISE] frame, the server can begin delivering the pushed response as a response [HttpResponse] on a server-initiated stream that uses the promised stream identifier. The server uses this stream to transmit an HTTP response, using the same sequence of frames as defined in Section 8.1. This stream becomes "half closed" to the client [StreamStates] after the initial HEADERS [HEADERS] frame is sent.
Once a client receives a PUSH_PROMISE [PUSH_PROMISE] frame and chooses to accept the pushed response, the client SHOULD NOT issue any requests for the promised response until after the promised stream has closed.
If the client determines, for any reason, that it does not wish to receive the pushed response from the server, or if the server takes too long to begin sending the promised response, the client can send an RST_STREAM [RST_STREAM] frame, using either the CANCEL [CANCEL] or REFUSED_STREAM [REFUSED_STREAM] codes, and referencing the pushed stream's identifier.
A client can use the SETTINGS_MAX_CONCURRENT_STREAMS [SETTINGS_MAX_CONCURRENT_STREAMS] setting to limit the number of responses that can be concurrently pushed by a server. Advertising a SETTINGS_MAX_CONCURRENT_STREAMS [SETTINGS_MAX_CONCURRENT_STREAMS] value of zero disables server push by preventing the server from creating the necessary streams. This does not prohibit a server from sending PUSH_PROMISE frames; clients need to reset any promised streams that are not wanted.
Clients receiving a pushed response MUST validate that the server is authorized to provide the response (see Section 10.1). For example, a server that offers a certificate for only the example.com DNS-ID or Common Name is not permitted to push a response for https://www.example.org/doc.
In HTTP/1.x, the pseudo-method CONNECT ([HTTP-p2], Section 4.3.6) is used to convert an HTTP connection into a tunnel to a remote host. CONNECT is primarily used with HTTP proxies to establish a TLS session with an origin server for the purposes of interacting with https resources.
In HTTP/2, the CONNECT method is used to establish a tunnel over a single HTTP/2 stream to a remote host, for similar purposes. The HTTP header field mapping works mostly as defined in Request Header Fields [HttpRequest], with a few differences. Specifically:
A proxy that supports CONNECT establishes a TCP connection [TCP] to the server identified in the :authority header field. Once this connection is successfully established, the proxy sends a HEADERS [HEADERS] frame containing a 2xx series status code to the client, as defined in [HTTP-p2], Section 4.3.6.
After the initial HEADERS [HEADERS] frame sent by each peer, all subsequent DATA [DATA] frames correspond to data sent on the TCP connection. The payload of any DATA [DATA] frames sent by the client are transmitted by the proxy to the TCP server; data received from the TCP server is assembled into DATA [DATA] frames by the proxy. Frame types other than DATA [DATA] or stream management frames (RST_STREAM [RST_STREAM], WINDOW_UPDATE [WINDOW_UPDATE], and PRIORITY [PRIORITY]) MUST NOT be sent on a connected stream, and MUST be treated as a stream error [StreamErrorHandler] if received.
The TCP connection can be closed by either peer. The END_STREAM flag on a DATA [DATA] frame is treated as being equivalent to the TCP FIN bit. A client is expected to send a DATA [DATA] frame with the END_STREAM flag set after receiving a frame bearing the END_STREAM flag. A proxy that receives a DATA [DATA] frame with the END_STREAM flag set sends the attached data with the FIN bit set on the last TCP segment. A proxy that receives a TCP segment with the FIN bit set sends a DATA [DATA] frame with the END_STREAM flag set. Note that the final TCP segment or DATA [DATA] frame could be empty.
A TCP connection error is signaled with RST_STREAM [RST_STREAM]. A proxy treats any error in the TCP connection, which includes receiving a TCP segment with the RST bit set, as a stream error [StreamErrorHandler] of type CONNECT_ERROR [CONNECT_ERROR]. Correspondingly, a proxy MUST send a TCP segment with the RST bit set if it detects an error with the stream or the HTTP/2 connection.
This section outlines attributes of the HTTP protocol that improve interoperability, reduce exposure to known security vulnerabilities, or reduce the potential for implementation variation.
HTTP/2 connections are persistent. For best performance, it is expected that clients will not close connections until it is determined that no further communication with a server is necessary (for example, when a user navigates away from a particular web page), or until the server closes the connection.
Clients SHOULD NOT open more than one HTTP/2 connection to a given destination, where a destination is the IP address and port that is derived from a URI, a selected alternative service [ALT-SVC], or a configured proxy. A client can create additional connections as replacements, either to replace connections that are near to exhausting the available stream identifier space [StreamIdentifiers], or to replace connections that have encountered errors [ConnectionErrorHandler].
A client MAY open multiple connections to the same IP address and TCP port using different Server Name Indication [TLS-EXT] values or to provide different TLS client certificates, but SHOULD avoid creating multiple connections with the same configuration.
Clients MAY use a single server connection to send requests for URIs with multiple different authority components as long as the server is authoritative [authority].
Servers are encouraged to maintain open connections for as long as possible, but are permitted to terminate idle connections if necessary. When either endpoint chooses to close the transport-level TCP connection, the terminating endpoint SHOULD first send a GOAWAY [GOAWAY] (Section 6.8) frame so that both endpoints can reliably determine whether previously sent frames have been processed and gracefully complete or terminate any necessary remaining tasks.
Implementations of HTTP/2 MUST support TLS 1.2 [TLS12]. The general TLS usage guidance in [TLSBCP] SHOULD be followed, with some additional restrictions that are specific to HTTP/2.
The TLS implementation MUST support the Server Name Indication (SNI) [TLS-EXT] extension to TLS. HTTP/2 clients MUST indicate the target domain name when negotiating TLS.
The TLS implementation MUST disable compression. TLS compression can lead to the exposure of information that would not otherwise be revealed [RFC3749]. Generic compression is unnecessary since HTTP/2 provides compression features that are more aware of context and therefore likely to be more appropriate for use for performance, security or other reasons.
Implementations MUST negotiate - and therefore use - ephemeral cipher suites, such as ephemeral Diffie-Hellman (DHE) or the elliptic curve variant (ECDHE) with a minimum size of 2048 bits (DHE) or security level of 128 bits (ECDHE). Clients MUST accept DHE sizes of up to 4096 bits.
Implementations are encouraged not to negotiate TLS cipher suites with known vulnerabilities, such as [RC4].
An implementation that negotiates a TLS connection that does not meet the requirements in this section, or any policy-based constraints, SHOULD NOT negotiate HTTP/2. Removing HTTP/2 protocols from consideration could result in the removal of all protocols from the set of protocols offered by the client. This causes protocol negotiation failure, as described in Section 3.2 of [TLSALPN].
Due to implementation limitations, it might not be possible to fail TLS negotiation based on all of these requirements. An endpoint MUST terminate an HTTP/2 connection that is opened on a TLS session that does not meet these minimum requirements with a connection error [ConnectionErrorHandler] of type INADEQUATE_SECURITY [INADEQUATE_SECURITY].
Clients MUST support gzip compression for HTTP response bodies. Regardless of the value of the accept-encoding header field, a server MAY send responses with gzip encoding. A compressed response MUST still bear an appropriate content-encoding header field.
This effectively changes the implicit value of the Accept-Encoding header field ([HTTP-p2], Section 5.3.4) from "identity" to "identity, gzip"; however, gzip encoding cannot be suppressed by including ";q=0". Intermediaries that perform translation from HTTP/2 to HTTP/1.1 MUST decompress payloads unless the request includes an Accept-Encoding value that includes "gzip".
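The intermediary requirement in the last sentence can be sketched as follows, using Python's gzip module; the helper name is illustrative:

```python
import gzip

# Sketch: an HTTP/2-to-HTTP/1.1 intermediary must decompress a gzip
# payload before forwarding it to a hop whose Accept-Encoding does
# not include "gzip".


def forward_payload(payload: bytes, content_encoding: str,
                    accept_encoding: str) -> bytes:
    if content_encoding == "gzip" and "gzip" not in accept_encoding:
        return gzip.decompress(payload)
    return payload


body = b"hello"
compressed = gzip.compress(body)
assert forward_payload(compressed, "gzip", "identity") == body
assert forward_payload(compressed, "gzip", "identity, gzip") == compressed
```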
A client is only able to accept HTTP/2 responses from servers that are authoritative for those resources. This is particularly important for server push [PushResources], where the client validates the PUSH_PROMISE [PUSH_PROMISE] before accepting the response.
HTTP/2 relies on the HTTP/1.1 definition of authority for determining whether a server is authoritative in providing a given response (see [HTTP-p1], Section 9.1). This relies on local name resolution for the "http" URI scheme, and the offered server identity for the "https" scheme (see [RFC2818], Section 3).
A client MUST NOT use, in any way, resources provided by a server that is not authoritative for those resources.
In a cross-protocol attack, an attacker causes a client to initiate a transaction in one protocol toward a server that understands a different protocol. An attacker might be able to cause the transaction to appear as a valid transaction in the second protocol. In combination with the capabilities of the web context, this can be used to interact with poorly protected servers in private networks.
Completing a TLS handshake with an ALPN identifier for HTTP/2 can be considered sufficient protection against cross-protocol attacks. ALPN provides a positive indication that a server is willing to proceed with HTTP/2, which prevents attacks on other TLS-based protocols.
The encryption in TLS makes it difficult for attackers to control the data which could be used in a cross-protocol attack on a cleartext protocol.
The cleartext version of HTTP/2 has minimal protection against cross-protocol attacks. The connection preface [ConnectionHeader] contains a string that is designed to confuse HTTP/1.1 servers, but no special protection is offered for other protocols. A server that is willing to ignore parts of an HTTP/1.1 request containing an Upgrade header field could be exposed to a cross-protocol attack.
HTTP/2 header field names and values are encoded as sequences of octets with a length prefix. This enables HTTP/2 to carry any string of octets as the name or value of a header field. An intermediary that translates HTTP/2 requests or responses into HTTP/1.1 directly could permit the creation of corrupted HTTP/1.1 messages. An attacker might exploit this behavior to cause the intermediary to create HTTP/1.1 messages with illegal header fields, extra header fields, or even new messages that are entirely falsified.
Header field names or values that contain characters not permitted by HTTP/1.1, including carriage return (U+000D) or line feed (U+000A), MUST NOT be translated verbatim by an intermediary, as stipulated in [HTTP-p1], Section 3.2.4.
Translation from HTTP/1.x to HTTP/2 does not present the same opportunity to an attacker. Intermediaries that perform translation to HTTP/2 MUST remove any instances of the obs-fold production from header field values.
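Both translation directions can be sketched in a few lines. The function names are hypothetical; the checks implement the two normative rules above (reject CR/LF in fields headed for HTTP/1.1, strip obs-fold when heading toward HTTP/2). The NUL check is an additional conservative screen, since NUL is likewise not permitted in HTTP/1.1 field values.

```python
import re

def check_http1_translatable(name: bytes, value: bytes) -> None:
    # Reject octets that HTTP/1.1 cannot carry; forwarding them verbatim
    # would let an attacker inject extra header fields or whole messages.
    for banned in (b"\r", b"\n", b"\x00"):
        if banned in name or banned in value:
            raise ValueError("header field not translatable to HTTP/1.1")

def to_http2_value(value: bytes) -> bytes:
    # Translation toward HTTP/2 must remove obs-fold (CRLF followed by
    # whitespace, the HTTP/1.x line-folding form), replacing it with a
    # single space.
    return re.sub(rb"\r\n[ \t]+", b" ", value)
```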
Pushed responses do not have an explicit request from the client; the request is provided by the server in the PUSH_PROMISE [PUSH_PROMISE] frame.
Caching responses that are pushed is possible based on the guidance provided by the origin server in the Cache-Control header field. However, this can cause issues if a single server hosts more than one tenant. For example, a server might offer multiple users each a small portion of its URI space.
Where multiple tenants share space on the same server, that server MUST ensure that tenants are not able to push representations of resources that they do not have authority over. Failure to enforce this would allow a tenant to provide a representation that would be served out of cache, overriding the actual representation that the authoritative tenant provides.
Pushed responses for which an origin server is not authoritative (see Section 10.1) are never cached or used.
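The two rules above, one enforced by the client's cache and one by a multi-tenant server, can be sketched as simple predicates. The function names, the set-of-origins model, and the path-prefix model of tenant URI space are assumptions for illustration only.

```python
def push_usable(promised_origin: str, authoritative_origins: set) -> bool:
    # A pushed response is never cached or used unless the server is
    # authoritative for the promised request's origin (Section 10.1).
    return promised_origin in authoritative_origins

def tenant_may_push(tenant_prefix: str, promised_path: str) -> bool:
    # On a multi-tenant server, a tenant may only push representations
    # of resources within its own portion of the URI space; otherwise a
    # tenant could poison the cache for another tenant's resources.
    return promised_path.startswith(tenant_prefix)
```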
An HTTP/2 connection can demand a greater commitment of resources to operate than an HTTP/1.1 connection. The use of header compression and flow control depends on a commitment of resources for storing a greater amount of state. Settings for these features ensure that memory commitments for them are strictly bounded. Processing capacity cannot be guarded in the same fashion.
The SETTINGS [SETTINGS] frame can be abused to cause a peer to expend additional processing time. This might be done by pointlessly changing SETTINGS parameters, setting multiple undefined parameters, or changing the same setting multiple times in the same frame. WINDOW_UPDATE [WINDOW_UPDATE] or PRIORITY [PRIORITY] frames can be abused to cause an unnecessary waste of resources. A server might erroneously issue ALTSVC [ALTSVC] frames for origins on which it cannot be authoritative to generate excess work for clients.
Large numbers of small or empty frames can be abused to cause a peer to expend time processing frame headers. Note however that some uses are entirely legitimate, such as the sending of an empty DATA [DATA] frame to end a stream.
Header compression also offers some opportunities to waste processing resources; see [COMPRESSION] for more details on potential abuses.
Limits in SETTINGS [SETTINGS] parameters cannot be reduced instantaneously, which leaves an endpoint exposed to behavior from a peer that could exceed the new limits. In particular, immediately after establishing a connection, limits set by a server are not known to clients and could be exceeded without being an obvious protocol violation.
All these features (SETTINGS [SETTINGS] changes, small frames, header compression) have legitimate uses. These features become a burden only when they are used unnecessarily or to excess.
An endpoint that does not monitor this behavior exposes itself to a risk of denial-of-service attack. Implementations SHOULD track the use of these features and set limits on their use. An endpoint MAY treat activity that is suspicious as a connection error [ConnectionErrorHandler] of type ENHANCE_YOUR_CALM [ENHANCE_YOUR_CALM].
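Tracking and limiting such activity might look like the sketch below. The frame categories and threshold values are purely illustrative assumptions; this document requires only that limits exist, not what they are.

```python
from collections import Counter

class AbuseMonitor:
    """Track potentially wasteful frames from a peer.

    Categories and limits here are illustrative, not normative.
    """
    def __init__(self, limits=None):
        self.counts = Counter()
        self.limits = limits or {
            "SETTINGS": 50,         # pointless SETTINGS churn
            "PRIORITY": 1000,       # excessive reprioritization
            "WINDOW_UPDATE": 5000,  # tiny, needless window updates
            "EMPTY_FRAME": 100,     # empty frames that don't end a stream
        }

    def observe(self, category: str) -> bool:
        """Record one frame; return False once the peer exceeds its limit.

        On False, the endpoint may treat the activity as a connection
        error of type ENHANCE_YOUR_CALM.
        """
        self.counts[category] += 1
        return self.counts[category] <= self.limits.get(category, float("inf"))
```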
HTTP/2 enables greater use of compression for both header fields (Section 4.3) and response bodies (Section 9.3). Compression can allow an attacker to recover secret data when it is compressed in the same context as data under attacker control.
There are demonstrable attacks on compression that exploit the characteristics of the web (e.g., [BREACH]). The attacker induces multiple requests containing varying plaintext, observing the length of the resulting ciphertext in each, which reveals a shorter length when a guess about the secret is correct.
Implementations communicating on a secure channel MUST NOT compress content that includes both confidential and attacker-controlled data unless separate compression dictionaries are used for each source of data. Compression MUST NOT be used if the source of data cannot be reliably determined.
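The length side channel behind such attacks can be demonstrated with a toy experiment using DEFLATE directly (real attacks such as [BREACH] observe TLS record sizes instead; the secret and guesses below are invented for illustration):

```python
import zlib

SECRET = b"token=8f14e45fceea167a"  # secret data in the compression context

def observed_length(attacker_controlled: bytes) -> int:
    # Attacker-controlled plaintext compressed in the same context as the
    # secret, e.g. a reflected request parameter sharing a compressed
    # response body with a session token.
    return len(zlib.compress(SECRET + attacker_controlled))

# A guess that matches the secret compresses better: DEFLATE replaces the
# repeated substring with a short back-reference, so the output is shorter
# than for a non-matching guess of the same length.
matching = observed_length(b"token=8f14e45f")
non_matching = observed_length(b"token=qwertyui")
```

Iterating this comparison one character at a time lets an attacker recover the secret, which is why compressing confidential and attacker-controlled data in one context is prohibited.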
Further considerations regarding the compression of header fields are described in [COMPRESSION].
Padding within HTTP/2 is not intended as a replacement for general purpose padding, such as might be provided by TLS [TLS12]. Redundant padding could even be counterproductive. Correct application of padding can depend on having specific knowledge of the data that is being padded.
To mitigate attacks that rely on compression, disabling compression might be preferable to padding as a countermeasure.
Padding can be used to obscure the exact size of frame content and is provided to mitigate specific attacks within HTTP, for example, attacks where compressed content includes both attacker-controlled plaintext and secret data (see [BREACH]).
Use of padding can result in less protection than might seem immediately obvious. At best, padding only makes it more difficult for an attacker to infer length information by increasing the number of frames an attacker has to observe. Incorrectly implemented padding schemes can be easily defeated. In particular, randomized padding with a predictable distribution provides very little protection; similarly, padding payloads to a fixed size exposes information as payload sizes cross the fixed-size boundary, which could be possible if an attacker can control plaintext.
Intermediaries SHOULD NOT remove padding, though an intermediary MAY remove padding and add differing amounts if the intent is to improve the protections padding affords.
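Why randomized padding with a predictable distribution provides so little protection can be shown with a toy simulation: averaging many observed sizes recovers the true length, because the mean of the padding distribution is known. The uniform 0-255 octet distribution below is an assumption chosen for the example.

```python
import random

def padded_size(true_length: int, rng: random.Random) -> int:
    # Uniform random padding of 0..255 octets: a predictable distribution.
    return true_length + rng.randrange(256)

def estimate_length(true_length: int, samples: int = 10000) -> float:
    # An attacker who observes many padded frames for the same payload
    # simply averages the sizes and subtracts the known padding mean.
    rng = random.Random(2014)
    total = sum(padded_size(true_length, rng) for _ in range(samples))
    return total / samples - 127.5  # mean of uniform 0..255
```

With enough observations the estimate converges on the true length despite the per-frame randomness, which is the sense in which such padding only increases the number of frames an attacker has to observe.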
Several characteristics of HTTP/2 provide an observer an opportunity to correlate actions of a single client or server over time. This includes the value of settings, the manner in which flow control windows are managed, the way priorities are allocated to streams, timing of reactions to stimulus, and handling of any optional features.
To the extent that this creates observable differences in behavior, it could be used as a basis for fingerprinting a specific client, as defined in http://www.w3.org/TR/html5/introduction.html#fingerprint.
A string for identifying HTTP/2 is entered into the "Application Layer Protocol Negotiation (ALPN) Protocol IDs" registry established in [TLSALPN].
This document establishes a registry for error codes. This new registry is entered into a new "Hypertext Transfer Protocol (HTTP) 2 Parameters" section.
This document registers the HTTP2-Settings header field for use in HTTP.
This document registers the PRI method for use in HTTP, to avoid collisions with the connection preface [ConnectionHeader].
This document creates two registrations for the identification of HTTP/2 in the "Application Layer Protocol Negotiation (ALPN) Protocol IDs" registry established in [TLSALPN].
The "h2" string identifies HTTP/2 when used over TLS:
The "h2c" string identifies HTTP/2 when used over cleartext TCP:
This document establishes a registry for HTTP/2 error codes. The "HTTP/2 Error Code" registry manages a 32-bit space and operates under the "Expert Review" policy [RFC5226].
Registrations for error codes are required to include a description of the error code. An expert reviewer is advised to examine new registrations for possible duplication with existing error codes. Use of existing registrations is to be encouraged, but not mandated.
New registrations are advised to provide the following information:
An initial set of error code registrations can be found in Section 7.
This section registers the HTTP2-Settings header field in the Permanent Message Header Field Registry [BCP90].
This section registers the PRI method in the HTTP Method Registry [HTTP-p2].
This document includes substantial input from the following individuals:
[RFC1323] Jacobson, V., Braden, B., and D. Borman, "TCP Extensions for High Performance", RFC 1323, May 1992.
[RFC3749] Hollenbeck, S., "Transport Layer Security Protocol Compression Methods", RFC 3749, May 2004.
[TALKING] Huang, L-S., Chen, E., Barth, A., Rescorla, E., and C. Jackson, "Talking to Yourself for Fun and Profit", 2011.
[BREACH] Gluck, Y., Harris, N., and A. Prado, "BREACH: Reviving the CRIME Attack", July 2013.
[RC4] Rivest, R., "The RC4 encryption algorithm", RSA Data Security, Inc., March 1992.
[BCP90] Klyne, G., Nottingham, M., and J. Mogul, "Registration Procedures for Message Header Fields", BCP 90, RFC 3864, September 2004.
[TLSBCP] Sheffer, Y., Holz, R., and P. Saint-Andre, "Recommendations for Secure Use of TLS and DTLS", Internet-Draft draft-sheffer-tls-bcp-02, February 2014.
[IDNA] Klensin, J., "Internationalized Domain Names for Applications (IDNA): Definitions and Document Framework", RFC 5890, August 2010.
Changed "connection header" to "connection preface" to avoid confusion.
Added dependency-based stream prioritization.
Added "h2c" identifier to distinguish between cleartext and secured HTTP/2.
Adding missing padding to PUSH_PROMISE [PUSH_PROMISE].
Integrate ALTSVC frame and supporting text.
Dropping requirement on "deflate" Content-Encoding.
Improving security considerations around use of compression.
Adding padding for data frames.
Renumbering frame types, error codes, and settings.
Adding INADEQUATE_SECURITY error code.
Updating TLS usage requirements to 1.2; forbidding TLS compression.
Removing extensibility for frames and settings.
Changing setting identifier size.
Removing the ability to disable flow control.
Changing the protocol identification token to "h2".
Changing the use of :authority to make it optional and to allow userinfo in non-HTTP cases.
Allowing split on 0x0 for Cookie.
Reserved PRI method in HTTP/1.1 to avoid possible future collisions.
Added cookie crumbling for more efficient header compression.
Added header field ordering with the value-concatenation mechanism.
Marked draft for implementation.
Adding definition for CONNECT method.
Constraining the use of push to safe, cacheable methods with no request body.
Changing from :host to :authority to remove any potential confusion.
Adding setting for header compression table size.
Adding settings acknowledgement.
Removing unnecessary and potentially problematic flags from CONTINUATION.
Added denial of service considerations.
Marking the draft ready for implementation.
Renumbering END_PUSH_PROMISE flag.
Editorial clarifications and changes.
Added CONTINUATION frame for HEADERS and PUSH_PROMISE.
PUSH_PROMISE is no longer implicitly prohibited if SETTINGS_MAX_CONCURRENT_STREAMS is zero.
Push expanded to allow all safe methods without a request body.
Clarified the use of HTTP header fields in requests and responses. Prohibited HTTP/1.1 hop-by-hop header fields.
Requiring that intermediaries not forward requests with missing or illegal routing :-headers.
Clarified requirements around handling different frames after stream close, stream reset and GOAWAY [GOAWAY].
Added more specific prohibitions for sending of different frame types in various stream states.
Making the last received setting value the effective value.
Clarified requirements on TLS version, extension and ciphers.
Committed major restructuring atrocities.
Added reference to first header compression draft.
Added more formal description of frame lifecycle.
Moved END_STREAM (renamed from FINAL) back to HEADERS [HEADERS]/DATA [DATA].
Removed HEADERS+PRIORITY, added optional priority to HEADERS [HEADERS] frame.
Added PRIORITY [PRIORITY] frame.
Added continuations to frames carrying header blocks.
Replaced use of "session" with "connection" to avoid confusion with other HTTP stateful concepts, like cookies.
Removed "message".
Switched to TLS ALPN from NPN.
Editorial changes.
Added IANA considerations section for frame types, error codes and settings.
Removed data frame compression.
Added PUSH_PROMISE [PUSH_PROMISE].
Added globally applicable flags to framing.
Removed zlib-based header compression mechanism.
Updated references.
Clarified stream identifier reuse.
Removed CREDENTIALS frame and associated mechanisms.
Added advice against naive implementation of flow control.
Added session header section.
Restructured frame header. Removed distinction between data and control frames.
Altered flow control properties to include session-level limits.
Added note on cacheability of pushed resources and multiple tenant servers.
Changed protocol label form based on discussions.
Changed title throughout.
Removed section on Incompatibilities with SPDY draft#2.
Changed INTERNAL_ERROR [INTERNAL_ERROR] on GOAWAY [GOAWAY] to have a value of 2 (see https://groups.google.com/forum/?fromgroups#!topic/spdy-dev/cfUef2gL3iU).
Replaced abstract and introduction.
Added section on starting HTTP/2.0, including upgrade mechanism.
Removed unused references.
Added flow control principles [fc-principles] based on http://tools.ietf.org/html/draft-montenegro-httpbis-http2-fc-principles-01.
Adopted as base for draft-ietf-httpbis-http2.
Updated authors/editors list.
Added status note.