Compressing Images Without Compromise: Achieving the Best Quality with Clever Algorithms

Reading Time: 4 minutes
Header image: a smartphone screen in photo mode, with sea and mountains at dawn in the background – an example motif for image compression.
Users want to see great high-resolution images. Yet if a website takes too long to load, they leave. Read on to learn how to compress images with modern algorithms and little effort.

COMPRESSING IMAGES FOR MORE PAGE SPEED

Websites are increasing in size, mainly because of images: more than half of the average download volume is accounted for by image files, most of them JPEGs.

There’s not an online shop around that can manage without high-resolution product images that can be zoomed into. There’s no website that can survive without crystal-clear photos of people and landscapes that arouse feelings and encourage us to purchase.

Yet there is another factor that influences us even more: beyond an ideal loading time of about two seconds, it becomes increasingly probable that users will leave the website. Long loading times shorten the time we stay and, in the end, hurt the conversion of users into customers.

Reducing image sizes is therefore imperative for website optimization. Yet this confronts website operators with several hurdles, and they end up looking for a compromise between compression, quality and the effort invested.

Finding the right balance between quality, compression and effort is the central challenge of image optimization.

QUALITY BEFORE COMPRESSION: A BAD COMPROMISE

Many tools can compress JPEG files. A brief search on the Internet turns up websites and software that compress single images or whole batches at once.

With these tools, one usually sets a quality level at which the images are to be optimized. On a scale that peaks at 100, this level is the decisive factor in how far the file size can be reduced.
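To make this concrete, here is a minimal Python sketch of what such tools essentially do: re-encode an image at a chosen quality level, in this case with the Pillow library. The file names are placeholders for this example.

```python
from PIL import Image

# Re-encode a JPEG at a fixed quality level (scale 0-100).
# "photo.jpg" and "photo_q85.jpg" are placeholder file names.
image = Image.open("photo.jpg")
image.save("photo_q85.jpg", format="JPEG", quality=85, optimize=True)
```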

Compression, however, can quickly lead to unattractive results, especially with JPEG images: artefacts and banding often appear after even small reductions in quality. Users are sensitive to this. An over-compressed image does not entice us to purchase; it makes the advertised product seem cheap and unattractive.

In the interest of user-friendliness, quality levels are therefore often set very conservatively. Google and image experts recommend a level between 80 and 85 as the lower bound below which images and photos should not be compressed.

This across-the-board setting preserves image quality with little effort, but it comes at the cost of compression and thus of loading times. Apart from that, some images look unattractive to the human eye even at a quality level of 85. It is therefore a poor compromise.

OPTIMIZING IMAGES INDIVIDUALLY

Let us compare the compression of the following JPEG photos. Although they are encoded at the same quality levels, viewers clearly perceive them differently. The pelican photo still offers a lot of savings potential at a level of 92: down to a level of 69 – far below the Google recommendation – only a few changes are noticeable. At 29 KB, the file has shrunk to a third of its initial size.

The photo of the salt desert, in contrast, still looks fine at a quality level of 92. At a value of 69, however, strong banding appears in the gradient of the sky, so this setting is not an option. In this case, the file size could only be reduced by about 20%.

Every image thus places different demands on its compression. With an across-the-board quality level of 85, considerable savings potential would have been wasted on the pelican, while the salt desert would have come across as unattractive to the viewer.

The consequence: an optimum of compression and quality can only be achieved by assessing every single image on its own. Yet how is an online shop with several thousand product images and regular updates supposed to afford such an effort?

SSIM REPLACES THE HUMAN

Nowadays, the individual assessment of quality levels can be left to the computer. With the SSIM (Structural Similarity) algorithm, the structural similarity or deviation between a compressed image and its original can be measured.

SSIM was developed by modelling human visual perception and thus stands in for the human eye on the computer. The algorithm has been extended continuously in recent years: with MS-SSIM (multi-scale SSIM) and DSSIM (structural dissimilarity), refinements are now available that deliver reliable results.

The further development MS-SSIM, for example, is well suited to verifying the results of compression. With DSSIM, in contrast, calculating the difference to the original is simpler and more meaningful.
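As an illustration – a sketch, not the wao.io pipeline – the SSIM index between an original and a compressed candidate can be computed in Python with scikit-image. The file names and the DSSIM definition (1 − SSIM) / 2 are assumptions made for this example.

```python
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

# Load the original and a compressed candidate as grayscale arrays.
# File names are placeholders for this example.
original = np.asarray(Image.open("photo.jpg").convert("L"))
candidate = np.asarray(Image.open("photo_q69.jpg").convert("L"))

# SSIM: 1.0 means structurally identical, lower means more deviation.
ssim = structural_similarity(original, candidate)

# One common DSSIM definition; 0 means no perceptible difference.
dssim = (1.0 - ssim) / 2.0
print(f"SSIM: {ssim:.4f}  DSSIM: {dssim:.5f}")
```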

The example graph plots the image difference (DSSIM) against the file size. By fixing a threshold value for DSSIM, the ideal quality level of an image can be found – and with it the ideal savings from compression. The quality levels of the photos above were determined in this way.

Unfortunately, this method is computationally very intensive: every image has to be compressed at many quality levels, and for each intermediate result an SSIM index – and from it the DSSIM value – has to be calculated.
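A naive version of this search could look like the following sketch. The threshold value, step size and function name are illustrative assumptions, not the settings wao.io actually uses.

```python
import io

import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

DSSIM_THRESHOLD = 0.001  # assumed threshold; tune for your own use case


def find_quality(path: str) -> int:
    """Return the lowest JPEG quality whose DSSIM stays under the threshold."""
    original = Image.open(path).convert("RGB")
    reference = np.asarray(original.convert("L"))

    best = 100
    # Brute force: re-encode at many quality levels, from high to low.
    for quality in range(95, 30, -5):
        buffer = io.BytesIO()
        original.save(buffer, format="JPEG", quality=quality)
        buffer.seek(0)
        candidate = np.asarray(Image.open(buffer).convert("L"))

        ssim = structural_similarity(reference, candidate)
        dssim = (1.0 - ssim) / 2.0
        if dssim > DSSIM_THRESHOLD:
            break  # visible deviation reached; keep the previous level
        best = quality
    return best
```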
Machine learning can be applied to streamline this process. At wao.io, we currently use a “1-step Newton model”, but other models such as “Random Forest” and a flat decision tree, which are available in the scikit-learn library, can be used for this purpose as well. The required computing capacity is thereby reduced.
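To sketch the idea (this is not the model running at wao.io), a regression model from scikit-learn can learn to predict a good starting quality level from simple image features, so that only a single verification step remains instead of a full search. The features and training values below are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder training data: simple per-image features (pixel count,
# mean gradient magnitude, colorfulness) and the optimal quality level
# previously found by the exhaustive DSSIM search.
features = np.array([
    [1920 * 1080, 12.3, 0.41],
    [1200 * 800, 4.7, 0.18],
    [3000 * 2000, 25.1, 0.62],
])
optimal_quality = np.array([69, 92, 74])

# A random forest is one of the scikit-learn models mentioned above.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(features, optimal_quality)

# Predict a starting quality for a new image; one DSSIM check can then
# confirm or slightly adjust it instead of running the full search.
new_image = np.array([[1600 * 900, 8.9, 0.35]])
print(int(round(model.predict(new_image)[0])))
```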

CONCLUSION: NO COMPROMISES WHEN OPTIMIZING IMAGES

Compressing images is essential on the modern Web. As the discussion above shows, however, website operators often settle for a compromise at the expense of compression, quality or effort.

To minimize the effort of compression, most operators rely on fixed quality levels. This achieves an acceptable degree of quality, but individual savings potential is left unused. If, in contrast, images are assessed and compressed individually, an optimum of quality and compression is achieved – yet the time and effort required are enormous.

An intelligent compression process with quality assessment by the computer provides relief. This can be achieved with a DSSIM geared to the image content, combined with machine learning. At wao.io, we offer this kind of image optimization – automated and without development effort.

For website and online shop operators who do not wish to accept any compromises, this is exactly the solution to look for.