On 12th June, more than 20 people interested in #WebPerformance met up to take part in the CGN Webperf Meetup. Our Sevenval office, right by Cologne Cathedral, served as the location for the 23rd meetup.
Two talks were scheduled – the first by Doug Sillars on optimising images and videos on websites. Doug is known as a developer evangelist for mobile and web performance and, among other things, wrote the book “High Performance Android Apps”.
The second talk was held by Felix Hassert on the occasion of the third birthday of HTTP/2 in production. As director of software development and application at Sevenval & wao.io, he works with HTTP/2 (h2) on a daily basis.
The meetup participants arrived one after the other, and the buffet with cold drinks and snacks was opened right away.
Besides familiar faces, this time there were also many new and curious frontenders and backenders at the meetup. Even one of Thomas Bachem’s students at the Code University Berlin had found his way to the meetup.
Talk I – Doug Sillars: Image and Video optimization for websites
The talks began on time at 7.30 pm with Doug, who won the audience over with an analogy between the fear of crossing rickety bridges over rocky chasms and the stress users endure whilst waiting for websites to load (the stress of waiting is greater!).
Here, we can learn from the Americans: Doug comes from California and knows how to captivate his audience.
You can find his full slides and sources here: https://www.slideshare.net/dougsillars/cologne-webperf
And the stress shows up in numbers, such as a 20% increase in the bounce rate for every additional second a website needs to load.
4% of mobile phone users go as far as throwing their mobile phone across the room when pages take too long to load!
The transition to the main problem – images and videos on websites – was seamless. But how can this mass of data (images and videos now make up 75% of mobile web traffic) be delivered to users as quickly and in as high a quality as possible?
This is where Doug recommends the cycle of measuring, analysing and optimising. Tools such as WebPageTest or the Analyzer by wao.io are suitable for this.
The analysis of the results demands special frontend expertise. Doug presented four specific problems and possible solutions by way of example:
1. Data volumes of graphic images and their mobile scalability
In general, the greatest improvements can be achieved with the SVG format (Scalable Vector Graphics). Note, however, that only “simple” graphics are suitable for conversion to SVG, and after exporting from Adobe Illustrator, for example, the surplus markup has to be removed from the file manually – otherwise the size advantage over image formats such as JPG and WebP is quickly lost again.
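What a hand-cleaned SVG can look like – a minimal sketch (the circle graphic is a made-up example; Illustrator exports typically add an XML prolog, `<metadata>` and editor-specific attributes on top of this):

```html
<!-- A "simple" graphic as cleaned-up SVG – a few hundred bytes -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100" width="100" height="100">
  <circle cx="50" cy="50" r="40" fill="#0074d9"/>
</svg>
```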
2. Data volumes of images that cannot be compressed into vector graphics – Lossy compression
Lossy compression of images means that details the human eye can barely perceive are no longer stored exactly but only approximately. For images, this type of compression can be controlled via the quality parameter (q).
What we want are images that still look good and, at the same time, load fast. There are various algorithms that compress images. Doug presented the tool Cloudinary, with which he carried out some compression tests to demonstrate the differences in size and quality of an image. The Structural Similarity Index (SSIM) is used to measure the remaining image quality.
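In Cloudinary, the quality parameter is part of the delivery URL. A rough sketch using Cloudinary’s public demo asset (the cloud name demo and the image sample.jpg both come from their documentation; the quality values are just examples):

```html
<!-- Moderate compression via the quality parameter -->
<img src="https://res.cloudinary.com/demo/image/upload/q_80/sample.jpg" alt="Sample at q=80">
<!-- Aggressive compression: much smaller file, visibly softer image -->
<img src="https://res.cloudinary.com/demo/image/upload/q_30/sample.jpg" alt="Sample at q=30">
```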
Personal recommendation from the author: as this involves a lot of finesse and manual effort, wao.io offers an image compression algorithm that automatically calculates the best ratio of image quality to size.
According to Doug, besides compression, the image format also matters – using WebP in Chrome, for example, leads to a smaller data volume in 95% of cases, with no loss of quality.
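In markup, this is typically done with a picture element, so that browsers without WebP support fall back to JPG (the file names are placeholders):

```html
<picture>
  <!-- Served to browsers that understand WebP (e.g. Chrome) -->
  <source srcset="hero.webp" type="image/webp">
  <!-- Fallback for all other browsers -->
  <img src="hero.jpg" alt="Hero image">
</picture>
```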
Mobile users surf on a wide variety of devices. From an iPhone 4 to the Galaxy 7, and all devices in between, screen sizes and resolutions differ in most cases.
Although images can be scaled automatically, the browser – and thus the user – still downloads a huge amount of superfluous data.
The best-practice solution here is the use of “responsive breakpoints”: for each display size, an appropriately sized image is loaded. The data saved not only makes images load faster but also lets the user breathe a sigh of relief.
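Responsive breakpoints map naturally onto the srcset and sizes attributes; the browser then picks the smallest sufficient file (file names and widths below are illustrative):

```html
<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w,
             photo-800.jpg 800w,
             photo-1200.jpg 1200w"
     sizes="(max-width: 600px) 100vw, 50vw"
     alt="Responsive photo">
```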
At the end of the talk, the question arose as to why the breakpoints use a 25% margin – from a 25% difference, the next image size is jumped to. From today’s point of view, this can no longer be answered definitively, but it has become the accepted standard.
57% of mobile Internet users do not use the scroll function. This sounds bad for all one-page websites, but can be taken advantage of in optimising web performance.
Lazy loading means that, in a first step, only the content the user actually sees is loaded. This leads to an average loading-time improvement of 3.5 seconds on a 3G connection!
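At the time of the talk, lazy loading was mostly implemented with JavaScript libraries; browsers have since added a native attribute for it. A minimal sketch (the image name is a placeholder):

```html
<!-- The browser defers fetching this image until it is near the viewport -->
<img src="below-the-fold.jpg" loading="lazy" width="800" height="600" alt="Gallery image">
```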
Since images usually account for the largest share of the transmitted data even when they are well compressed, preview images were developed as an additional solution.
The advantage of preview images is their extremely small data volume (e.g. an image that normally weighs 50 kilobytes can be replaced by a preview of only 979 bytes until the original has fully loaded). This gives the user the reassuring perception that an image is loading, and the layout does not change during the loading process.
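One common implementation keeps the tiny preview in src and the full image in a data attribute that a small script swaps in later; the fixed width and height keep the layout stable (all names here are placeholders):

```html
<!-- Tiny blurred preview (< 1 KB) is shown immediately;
     a lazy-loading script later replaces src with data-src -->
<img src="photo-preview.jpg" data-src="photo-full.jpg"
     width="800" height="600" alt="Landscape">
```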
3. Data volumes of GIFs and how they can be compressed
In the last part of his talk, Doug dealt with moving images, i.e. GIFs and other video formats. Back in 1987(!), GIF (Graphics Interchange Format) became popular due to its very efficient compression of individual images (https://de.wikipedia.org/wiki/Graphics_Interchange_Format). Today, however, converting MP4 videos – in particular those recorded on a mobile device – to GIF is outdated: a 1.4 MB MP4 video would be inflated to 3.8 MB as a GIF.
Yet many website operators and users do not want to go without the visually appealing loops that GIFs are often used for. For this, Doug presented a solution using the video tag:
<video autoplay loop muted playsinline src="video.mp4"></video>
Thanks to “autoplay”, the video starts automatically; “loop” makes it repeat; “muted” plays it without sound; and “playsinline” prevents it from automatically switching to fullscreen on mobile devices. The player controls are hidden simply by omitting the controls attribute – controls is a boolean attribute, so even controls="false" would switch them on.
4. Delivering videos and what has to be considered with performance
About 13.88% of all videos on the web are aborted whilst being loaded. That results in approximately 800,000,000 hours of playback time per quarter that are not watched. That is a significant amount for publishers or websites that earn money from video content.
A reason for this could be geographical restrictions such as the daily news not being accessible in Kenya.
The case that probably occurs more frequently is that the delay before the video starts playing is too long (similar to high bounce rates when websites load slowly): after two seconds of initial waiting time, the bounce rate for videos increases by 5.8% per additional second. Doug’s recommendations:
- Do not load 3rd-party/tracking scripts first
- Buffer the video during ads
- Provide the video in several resolutions/data rates (start off quickly with low resolution and – if the bandwidth allows for it – improve the quality via MPEG-DASH or HLS)
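The last point can be sketched in markup: Safari plays HLS playlists natively, while other browsers need a player library such as hls.js for adaptive streaming (the file names are placeholders):

```html
<video controls preload="metadata">
  <!-- HLS playlist: adaptive bitrate, natively supported by Safari -->
  <source src="video.m3u8" type="application/vnd.apple.mpegurl">
  <!-- Plain MP4 fallback for browsers without native HLS -->
  <source src="video.mp4" type="video/mp4">
</video>
```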
<video preload="metadata" src="<URL>">
This way, part of the video is preloaded, yet not all of it. It is to be noted that every browser interprets this attribute differently.
By the way: many of the web performance tips presented here have been automated in wao.io. If you would like to try it out for yourself, head over to wao.io.
After a Q&A session and a short break, the CGN Webperf Meetup proceeded with the second talk by Felix Hassert.
Talk II – The anniversary of HTTP/2
Titled “Happy Birthday HTTP/2”, Felix took a look at the development of HTTP/2 (the second version of the Hypertext Transfer Protocol, also called h2) and its impact on developers, servers and users.
There is no reason not to use HTTP/2 (-:
Why was HTTP/2 introduced?
In the 1990s, websites mainly comprised HTML code. Today, the ratio has changed to 3% HTML and 97% images as well as other elements.
HTTP/1 was not designed for this. The main problem lies in its inefficient use of the Transmission Control Protocol (TCP).
Problem 1: Connection handling – TCP slow start
Due to TCP slow start, a connection has to be kept alive and used for a long time before it reaches a high transfer rate.
Problem 2: Unidirectional messaging – head of line blocking
With HTTP/1, only one request per connection can be answered at a time; all other requests have to wait. This leads to slow loading times for complex websites.
Solutions with HTTP/1
Since these problems did not arise overnight but grew with time and the increasing complexity of websites, workarounds were continuously developed.
However, these workarounds are not sustainable – which is why HTTP/2 was developed.
- Short-lived connections suffering from TCP slow start
- HTTP head-of-line blocking
- Requests are expensive
- 6 connections per host / domain sharding (= even more connections)
- Concatenation & sprites
Generally speaking: It has become very complicated for developers (not servers) to deliver a website.
HTTP/2 was developed to move the complexity developers were facing back to the software.
Technical solutions were thus found for the known problems:
- Bidirectional messaging & multiplexing
- Bonus: server push
- No head-of-line blocking
- One connection per host
- Mitigates TCP slow start
- Reduces cost per request
- Utilizes initial server think time
How multiplexing and HPACK (header compression) work, and what this means for website loading times, is explained in detail in the presentation slides.
Prior fixes such as several parallel connections, domain sharding, inlining or sprites thus became superfluous.
Problems of the solutions
1. Transport Layer Security (TLS) – more commonly known under the name of its predecessor, Secure Sockets Layer (SSL) – is obligatory in practice: browsers only implement HTTP/2 over encrypted connections.
This results in follow-up requirements: there must be a valid certificate for every domain, redirects must be set up properly, and 3rd-party tools must be integrated securely (keyword: mixed content).
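Mixed content means an https page referencing assets over plain http; modern browsers block such requests. A small sketch (the URLs are made up):

```html
<!-- On an https:// page, this script is blocked as mixed content -->
<script src="http://cdn.example.com/lib.js"></script>

<!-- Loading the same asset over https works -->
<script src="https://cdn.example.com/lib.js"></script>
```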
2. Request bursts
Even more requests emerge without inlining, sprites and bundles – which all arrive at the same time. Reverse proxies such as Varnish are of help in this case.
3. Connection problems
Connections need a long lifetime, lost packets have a great impact on the speed of h2, and h2 itself only covers the “last mile”, i.e. the stretch to the client.
4. TCP head-of-line blocking
In this respect, I also recommend the slides (pages 30 to 43) by Felix Hassert, as the topic was examined there in detail. Interesting insights can be found, such as the fact that on h2 an h1-typical picture can still hide in a WebPageTest waterfall diagram, since most resources do not become useful until they have been fully loaded.
To what extent is it worthwhile to employ h2 from the viewpoint of the end user?
97% of all desktop and mobile users in Germany surf with an HTTP/2-compatible device. The speed advantages on network connections of 3G or better are obvious, which is why taking the step towards HTTP/2 is recommended.
38% of all website operators make use of the advantages of HTTP/2; with customers of wao.io this percentage is 82%.
Speed up and secure your site with HTTP/2 – test for free 🔒🚀