HTTP/2 is a relatively new protocol, having been ratified as a standard in 2015. Of course, it has its roots in Google’s SPDY protocol, but that was mainly experimental and not widely used for production websites.
Despite its relative youth, HTTP/2 is now supported by nearly every modern web browser, so it’s time we took a look at the potential performance improvements of enabling it for your website.
Plain text versus binary
The most fundamental difference between HTTP/1.1 and HTTP/2 is that the latter is a binary protocol, while the former is plain text. Although this means you can’t read it as easily without the right tools, it also lends itself to better compression of headers. Headers are the metadata sent with every HTTP request that detail things like the type of content being sent. The more compressed these headers are, the less time it takes to send them between the browser and the server. This is the first way in which HTTP/2 reduces the overhead of each request.
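As a rough illustration of why compressing these repetitive headers matters, the sketch below measures a typical plain-text request header block before and after compression. The header values are made up, and zlib is used purely as a stand-in: HTTP/2 actually uses a purpose-built codec called HPACK, which additionally de-duplicates headers repeated across requests via a shared table.

```python
import zlib

# A typical request header block, re-sent on every HTTP/1.1 request.
# The values here are invented for illustration.
headers = (
    "GET /css/style.css HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox\r\n"
    "Accept: text/css,*/*;q=0.1\r\n"
    "Accept-Language: en-GB,en;q=0.5\r\n"
    "Accept-Encoding: gzip, deflate, br\r\n"
    "Cookie: session=abc123; theme=dark\r\n"
    "\r\n"
).encode()

# zlib stands in for HPACK here; the point is simply that verbose,
# repetitive text headers shrink considerably when compressed.
compressed = zlib.compress(headers)
print(f"plain: {len(headers)} bytes, compressed: {len(compressed)} bytes")
```

Across a whole page load of dozens of requests, each repeating near-identical headers, those savings add up quickly.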
Secondly, HTTP/2 solves an issue with HTTP/1.1 known as head-of-line blocking. In simple terms, this means that responses must arrive in the order they were requested, so many items can be left waiting behind a single slow one. In HTTP/2, multiplexing means that this problem no longer exists at the protocol level, and items can arrive in any order.
HTTP/1.1 supported a mechanism known as pipelining, where a single connection could be used to request more than one item at a time. However, because responses still had to be returned in the order they were requested, no major browser ever enabled it by default.
In HTTP/2, a new mechanism known as multiplexing allows for a similar end result but without the ordering limitation. This also means that the classic maximum of six connections per subdomain, the de facto standard implemented by nearly all browsers, is no longer required. Browsers can now happily make as many requests as required, and the multiplexing mechanism allows the server to send back as much data as it can over a single connection.
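The difference is easy to picture with a toy simulation. The sketch below involves no real HTTP at all: it uses plain asyncio with made-up download times to contrast in-order delivery, where one large item holds up everything behind it, with interleaved delivery, where small items complete as soon as they are ready.

```python
import asyncio

# Hypothetical items and their transfer times in seconds; the large
# item is requested first, as often happens with a big script bundle.
ITEMS = {"big.js": 0.3, "style.css": 0.1, "logo.png": 0.1}

async def fetch(name, duration):
    await asyncio.sleep(duration)  # stand-in for network transfer time
    return name

async def in_order():
    # HTTP/1.1 pipelining: responses come back in request order, so
    # everything queues behind big.js (head-of-line blocking).
    completed = []
    for name, duration in ITEMS.items():
        completed.append(await fetch(name, duration))
    return completed

async def multiplexed():
    # HTTP/2 multiplexing: streams are interleaved on one connection,
    # so fast items finish first regardless of request order.
    tasks = [fetch(name, duration) for name, duration in ITEMS.items()]
    completed = []
    for finished in asyncio.as_completed(tasks):
        completed.append(await finished)
    return completed

print(asyncio.run(in_order()))     # big.js arrives first, blocking the rest
print(asyncio.run(multiplexed()))  # the small items no longer wait
```

In the multiplexed case the stylesheet and image are available while the large script is still transferring, which is exactly the property that lets a browser start rendering sooner.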
HTTP/2 also introduces server push, which allows the server to send items (such as stylesheets and scripts) to the browser before they have been requested. It is a feature to be used sparingly, only for the most commonly needed items, to avoid wasting bandwidth and flooding the user’s browser with useless or little-used files. But when used correctly, it can dramatically decrease the amount of time it takes to render the final page.
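As an example of how push is configured in practice, here is a hypothetical nginx fragment (the paths and hostname are placeholders). nginx gained the `http2_push` directive in version 1.13.9; note that it removed server push again in 1.25.1 after real-world gains proved limited, so check your own server’s documentation before relying on it:

```nginx
server {
    listen 443 ssl http2;
    server_name www.example.com;  # example hostname

    location = /index.html {
        # Proactively push the assets this page will request anyway.
        http2_push /css/style.css;
        http2_push /js/app.js;
    }
}
```

The key discipline is pushing only assets the page is certain to need; anything else is bandwidth spent on items the browser may already have cached.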
The HTTPS overhead
While the HTTP/2 specification itself does not mandate encryption, all major browsers have limited their HTTP/2 support to TLS only. This means that in order to use the features of HTTP/2, you must serve your website over HTTPS with a valid certificate.
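In practice, a client discovers HTTP/2 support during the TLS handshake itself, via the ALPN extension. The sketch below uses Python’s standard `ssl` module to ask a server which protocol it is willing to speak (the hostname is a placeholder); the call returns `"h2"` when the server offers HTTP/2, or `"http/1.1"` otherwise.

```python
import socket
import ssl

def negotiated_protocol(host, port=443):
    """Connect over TLS and return the ALPN protocol the server chose."""
    ctx = ssl.create_default_context()
    # Offer HTTP/2 first, falling back to HTTP/1.1.
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()

# Example (requires network access; hostname is a placeholder):
# print(negotiated_protocol("www.example.com"))
```

Because the negotiation happens inside the handshake, no extra round trip is needed to discover HTTP/2 support, which is one reason browsers tied it to TLS in the first place.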
Claims have long been made about the overhead of serving websites over HTTPS, due to the extra time it takes to negotiate the secure connection and set up encryption. For this reason, historically only websites or parts of websites that dealt with sensitive data (such as login and payment pages) used HTTPS.
However, in the last few years, after many well-advertised instances of data being stolen over plain-text HTTP connections, many websites started transitioning fully onto HTTPS.
With HTTP/1.1, the main reason for the overhead was the multiple connections that had to be opened to download items in parallel. With HTTP/2, multiplexing means that in practice only a single connection needs to be opened, and multiple items can be downloaded over it. A single connection means only a single TLS negotiation, which considerably reduces that overhead. While this may not make HTTP/2 over HTTPS faster than unencrypted HTTP/1.1 in every case, it does allow security to be the default without a performance penalty.
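A back-of-envelope calculation shows the scale of the saving. Every number below is an illustrative assumption, not a measurement: a 50 ms round trip, and a full TLS 1.2 handshake costing two round trips per new connection.

```python
# Assumed figures for illustration only.
RTT_MS = 50          # network round-trip time
HANDSHAKE_RTTS = 2   # full TLS 1.2 handshake round trips per connection
MAX_CONNECTIONS = 6  # classic per-host connection limit in HTTP/1.1 browsers

# HTTP/1.1 over TLS: each of the six parallel connections pays
# its own handshake before any content can flow over it.
h1_handshake_ms = MAX_CONNECTIONS * HANDSHAKE_RTTS * RTT_MS

# HTTP/2 over TLS: one multiplexed connection, one handshake.
h2_handshake_ms = 1 * HANDSHAKE_RTTS * RTT_MS

print(h1_handshake_ms, h2_handshake_ms)  # 600 100
```

Under these assumptions, HTTP/2 spends 100 ms on TLS setup where HTTP/1.1 spends 600 ms across its six connections, before a single byte of page content has moved.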
A note of caution about measuring performance impacts
With each performance-improving change that you make, it is good practice to run “before” and “after” tests to be able to accurately quantify the difference you’ve made. These tests are normally run using services such as WebPageTest which download a number of pages and present metrics and screenshots. Using these services to quantify the difference made by enabling HTTP/2, you may notice that time-to-render and other similar metrics actually increase.
While HTTP/2 undoubtedly has performance-improving features built in, their actual impact cannot be seen by requesting single pages with an empty cache. This is because the up-front cost of features like server push only pays off over a typical journey spanning multiple page requests. Therefore, in order to get a good idea of what improvement you’ve made, consider running more comprehensive tests where a typical user’s journey through your site is replicated with a warm cache. This will inevitably show that the slight performance decrease of the initial request is more than made up for by the increased speed of subsequent requests, and the overall journey is faster.