HTTP - HTTP/2 and HTTP/3 Features

HTTP/2 was released in 2015 as a major upgrade to HTTP/1.1, aimed at reducing page load times and latency and improving the efficiency of resource utilization. It introduced new features such as multiplexing, header compression using HPACK, server push, and priority scheduling.

HTTP/3, released in 2022, replaces TCP with the QUIC protocol as its transport layer. It introduces features such as priority scheduling, multiplexing without head-of-line blocking, and improved security. Let us go through the key features of HTTP/2 and HTTP/3 mentioned below.

HTTP/2 Features

The key features of HTTP/2 are mentioned below:

Multiplexing

Multiplexing in HTTP/2 refers to sending multiple requests and responses simultaneously over a single connection. It overcomes the HTTP/1.1 limitation of needing multiple connections to fetch resources in parallel.

Working of Multiplexing in HTTP/2

  • A single TCP connection is established between the client and server.
  • The client sends multiple requests over the same connection. Each request is assigned a unique Stream ID.
  • The server then breaks down the responses into smaller frames. Each frame is tagged with the corresponding Stream ID.
  • Then these frames are intermixed and sent over a single connection that allows parallel transmission of multiple streams.
  • The client reassembles the frames based on their Stream IDs to reconstruct each resource independently.
  • Flow control mechanisms prevent any one stream from dominating the connection.
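The steps above can be sketched as a small simulation (the frame size, payloads, and function names here are illustrative; real HTTP/2 DATA frames carry additional flags and length fields):

```python
from collections import defaultdict

def split_into_frames(stream_id, payload, frame_size=4):
    """Break a response into frames tagged with the stream ID."""
    return [(stream_id, payload[i:i + frame_size])
            for i in range(0, len(payload), frame_size)]

def interleave(*frame_lists):
    """Round-robin interleave frames from several streams onto one connection."""
    wire = []
    queues = [list(frames) for frames in frame_lists]
    while any(queues):
        for queue in queues:
            if queue:
                wire.append(queue.pop(0))
    return wire

def reassemble(wire):
    """Client side: group frames back into complete responses by stream ID."""
    streams = defaultdict(bytes)
    for stream_id, chunk in wire:
        streams[stream_id] += chunk
    return dict(streams)

wire = interleave(split_into_frames(1, b"<html>...</html>"),
                  split_into_frames(3, b"body{color:red}"))
responses = reassemble(wire)
print(responses[1])  # b'<html>...</html>'
print(responses[3])  # b'body{color:red}'
```

Because every frame carries its Stream ID, the receiver never confuses the interleaved chunks, which is what lets both transfers share one connection.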

Header Compression

HTTP/2 compresses headers with the HPACK algorithm, which reduces their size and improves performance.

Working of Header Compression in HTTP/2

  • The client and server both maintain a static table of common header fields and a dynamic table for session-specific headers.
  • When sending headers, instead of transmitting the full header, the sender references the header field's index in the static or dynamic table.
  • If the header is not present in either table, it is added to the dynamic table for future use.
  • To reduce redundancy, repeated headers are transmitted efficiently by referencing their index in the dynamic table.
  • HPACK uses Huffman encoding to further compress the headers which minimizes the size of the transmitted data.
  • Both the client and server synchronize their dynamic tables for accurate header compression and decompression during the session.
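A toy version of the indexing scheme can be sketched as follows (heavily simplified: real HPACK, defined in RFC 7541, uses a 61-entry static table, table-size eviction rules, and Huffman coding, all omitted here):

```python
# Simplified sketch of HPACK-style indexed headers; the static table
# below is a stand-in for the real 61-entry table from RFC 7541.
STATIC_TABLE = [(":method", "GET"), (":path", "/"), ("accept-encoding", "gzip")]

class HeaderTable:
    def __init__(self):
        self.dynamic = []  # newest entries first, as in HPACK

    def encode(self, name, value):
        """Emit an index if the header is known, else the literal + add it."""
        table = STATIC_TABLE + self.dynamic
        if (name, value) in table:
            return ("indexed", table.index((name, value)) + 1)
        self.dynamic.insert(0, (name, value))
        return ("literal", name, value)

    def decode(self, encoded):
        """Mirror the encoder so both dynamic tables stay in sync."""
        if encoded[0] == "indexed":
            table = STATIC_TABLE + self.dynamic
            return table[encoded[1] - 1]
        _, name, value = encoded
        self.dynamic.insert(0, (name, value))
        return (name, value)

enc, dec = HeaderTable(), HeaderTable()
first = enc.encode("cookie", "session=abc")   # literal: not in any table yet
repeat = enc.encode("cookie", "session=abc")  # indexed: found in dynamic table
print(first)   # ('literal', 'cookie', 'session=abc')
print(repeat)  # ('indexed', 4)
```

The second transmission of the same header shrinks to a single index, which is why repeated headers across requests cost almost nothing in HTTP/2.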

Server Push

Server Push is a feature introduced in HTTP/2 that allows the server to proactively send resources to a client when it predicts the client will need them. It reduces latency by sending those resources in advance, before the client requests them.

Working of Server Push in HTTP/2

  • The client sends a request to the server for any resource such as for an HTML document.
  • The server analyzes the requested resource and determines which additional resources the client will likely need, such as CSS, JavaScript, or images.
  • The server sends a PUSH_PROMISE frame to notify the client about the resources it might send.
  • The client checks whether the resource already exists in its cache. If so, it rejects the push; otherwise, it accepts the resource.
  • The server proactively sends the additional resources, with the response for the requested resource, over the same connection.
  • The client receives and processes the pushed resources without having to request them explicitly, reducing latency.
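The push flow can be sketched as a plain-Python simulation (the PUSH_MAP relations and function names are hypothetical; a real server emits PUSH_PROMISE frames on reserved stream IDs rather than passing Python objects):

```python
# Hypothetical mapping of a page to the resources the server predicts
# the client will need next.
PUSH_MAP = {"/index.html": ["/style.css", "/app.js"]}

def server_respond(path):
    """Return the main response plus push promises for predicted resources."""
    promises = PUSH_MAP.get(path, [])
    return {"response": path, "push_promises": promises}

def client_fetch(path, cache):
    """Accept pushed resources unless they are already cached."""
    msg = server_respond(path)
    accepted = []
    for pushed in msg["push_promises"]:
        if pushed in cache:          # already cached: reject the push
            continue
        cache.add(pushed)            # accept and store the pushed resource
        accepted.append(pushed)
    cache.add(msg["response"])
    return accepted

cache = {"/style.css"}               # stylesheet cached from an earlier visit
accepted = client_fetch("/index.html", cache)
print(accepted)  # ['/app.js']
```

Only the uncached script is pushed: the client's cache check prevents the server from wasting bandwidth on resources the client already holds.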

Priority Scheduling

Priority Scheduling is a mechanism in HTTP/2 that allows clients to specify the priority of different streams, enabling important resources to be delivered first.

Working of Priority Scheduling in HTTP/2

  • The client assigns priorities to each stream by sending a PRIORITY frame along with the request.
  • Each stream is assigned a weight between 1 and 256; the higher the weight, the greater the stream's importance.
  • The server then uses the priority information for allocating bandwidth and processing power, delivering higher priority resources first.
  • The client can adjust priorities dynamically by sending updated PRIORITY frames.
  • The server changes how it delivers resources based on the updated priorities, ensuring important resources are sent first.
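A minimal sketch of weight-based scheduling, assuming the server simply drains streams in descending weight order (real HTTP/2 also supports stream dependencies, which are omitted here; the stream names and weights are illustrative):

```python
import heapq

def schedule(streams):
    """Yield stream IDs in priority order, highest weight first."""
    heap = [(-weight, stream_id) for stream_id, weight in streams.items()]
    heapq.heapify(heap)
    while heap:
        _, stream_id = heapq.heappop(heap)
        yield stream_id

streams = {"style.css": 200, "hero.jpg": 32, "analytics.js": 1}
order = list(schedule(streams))
print(order)  # ['style.css', 'hero.jpg', 'analytics.js']

# A later PRIORITY frame can reprioritize a stream mid-flight:
streams["hero.jpg"] = 256
reordered = list(schedule(streams))
print(reordered[0])  # 'hero.jpg'
```

Re-running the scheduler after the weight update models the dynamic reprioritization described above: the image now jumps ahead of the stylesheet.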

HTTP/3 Features

The key features of HTTP/3 are mentioned below, with a detailed explanation of how each works.

QUIC Protocol

QUIC (Quick UDP Internet Connections) is a transport protocol built on UDP that improves connection reliability and performance through multiplexing, stream independence, and built-in encryption. QUIC allows multiplexing without head-of-line blocking, and it reduces latency by combining the encryption handshake and transport setup into a single step. Some key features of QUIC are mentioned below:

  • Multiplexing without Head-of-Line Blocking: QUIC handles streams independently, so one delayed or lost stream does not block the others.
  • Improved Security: QUIC encrypts all data by default, providing privacy and security from the start of the connection, unlike traditional HTTP over TCP, which relies on a separate TLS layer for encryption.
  • Connection Migration: QUIC supports connection migration, i.e., a connection continues even if the client's IP address changes, e.g., when switching between Wi-Fi and mobile data.
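The connection-migration idea can be sketched as follows: the server looks sessions up by Connection ID rather than by the client's address, so a packet arriving from a new IP still reaches the same session (class and field names here are illustrative; real QUIC also performs path validation before trusting a new path):

```python
import secrets

class QuicServerSketch:
    """Toy model: sessions are keyed by Connection ID, not by IP/port."""

    def __init__(self):
        self.sessions = {}  # connection_id -> session state

    def handshake(self, client_addr):
        cid = secrets.token_hex(8)  # server-chosen Connection ID
        self.sessions[cid] = {"addr": client_addr, "requests": []}
        return cid

    def receive(self, cid, client_addr, data):
        session = self.sessions[cid]   # found by CID, not by address
        session["addr"] = client_addr  # migration: just record the new path
        session["requests"].append(data)
        return len(session["requests"])

server = QuicServerSketch()
cid = server.handshake(("203.0.113.5", 443))          # client on Wi-Fi
server.receive(cid, ("203.0.113.5", 443), "GET /")
# Client hops to mobile data; same Connection ID, so the session survives:
count = server.receive(cid, ("198.51.100.9", 443), "GET /img")
print(count)  # 2
```

A TCP connection, by contrast, is identified by the IP/port 4-tuple, so the same address change would kill it and force a fresh handshake.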

Priority Scheduling

Priority Scheduling in HTTP/3 is a mechanism that allows clients to specify the priority of different streams, so that important resources are delivered first.

Working of Priority Scheduling in HTTP/3

  • A single QUIC connection is established between the client and server to handle multiple independent streams.
  • Each stream is assigned a unique ID so that data packets can be tracked and managed separately.
  • The client then sends requests for different resources over these independent streams on the same connection.
  • If a packet is lost, only the stream it belongs to is affected, while other streams continue without interruption.
  • QUIC then retransmits only the lost packets for the affected stream, which avoids delays for unrelated streams.
  • The server sends responses over the streams in parallel, ensuring resources are delivered efficiently and without blocking.
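A small simulation of the per-stream loss recovery described above (the packet tuples and the deliver helper are illustrative, not QUIC's actual wire format):

```python
def deliver(packets, lost):
    """Simulate delivery; packets in `lost` are dropped on the first send,
    then retransmitted individually on a second pass."""
    received, retransmit = [], []
    for pkt in packets:
        if pkt in lost:
            retransmit.append(pkt)   # queue only the lost packet
        else:
            received.append(pkt)
    received.extend(retransmit)      # second pass: retransmissions arrive
    return received, retransmit

# Packets tagged (stream_id, sequence, data) for two interleaved streams.
packets = [(1, 0, "html-a"), (3, 0, "css-a"), (1, 1, "html-b"), (3, 1, "css-b")]
received, retransmit = deliver(packets, lost={(3, 0, "css-a")})

# Stream 1 completed without waiting on stream 3's loss:
stream1 = [data for sid, _, data in received if sid == 1]
print(stream1)      # ['html-a', 'html-b']
print(retransmit)   # [(3, 0, 'css-a')]
```

Under TCP, the single byte stream would have stalled at the lost segment and held back stream 1's data too; here only stream 3 waits for its retransmission.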