
How to Optimize For Largest Contentful Paint (LCP)

LCP (Largest Contentful Paint) is one of the most important Core Web Vitals. It is the only Core Web Vital that relates purely to loading speed, which also makes it the most difficult one to diagnose and optimize for. LCP can be measured using Google PageSpeed Insights, which is powered by Lighthouse.

LCP should occur within 2.5 seconds. If 75% of page loads (regardless of which pages) on a website achieve that number, LCP is marked as passing the assessment. Only elements within the user's viewport (above the fold) are used to calculate LCP.

Google has provided a great tutorial on how to optimize for LCP. In this post I will provide a checklist with actionable directions that you can implement yourself or hand to the web developer working on improving your Core Web Vitals (mainly LCP).

14 improvements you can make to improve LCP:

  1. Adequate resources on the server
  2. Enable browser caching
  3. Enable GZIP (text compression)
  4. Enable server-side caching such as OPcache or a reverse proxy
  5. Keep your software up-to-date (CMS, plugins, operating system like Ubuntu, control panel like cPanel, PHP, MySQL and Apache)
  6. Install server-side HTML caching for your CMS
  7. Set up browser-side caching using a service worker
  8. Minify JavaScript and CSS
  9. Compress images and use the right format for them
  10. Resize images to fit the required dimensions in the style sheet
  11. Remove or defer files and code that block the critical rendering path for above-the-fold content
  12. Inline critical CSS and JavaScript where possible
  13. Lazy load images below the fold
  14. Use a CDN for resources like images and JavaScript files, or for the whole website

I will provide more details on how to improve each of the items above. Most of my examples work best for WordPress hosted on Linux with Apache as the web server.

Adequate resources on the server:

You do not need to go cheap on web hosting; having more resources than your website needs is better than lacking them. Start with a VPS, ideally with 2 CPU cores, 4 GB of RAM and SSD storage, and keep an eye on your resource usage: the average CPU load should stay at or below one, and average RAM usage should stay below 50%. Add more resources if you cannot reach those numbers.

Enable browser caching:

In most cases this can be done through the server configuration. Apache has the mod_expires module, which lets you do this by adding the code below to your .htaccess file:

<FilesMatch "\.(webm|ogg|mp4|ico|pdf|flv|jpg|jpeg|png|gif|webp|js|css|swf|xml|woff|woff2|otf|ttf|svg|eot)(\.gz)?$">
<IfModule mod_expires.c>
# Make sure font files are served with the expected MIME types
AddType application/font-woff2 .woff2
AddType application/x-font-opentype .otf
ExpiresActive On
# Default: no caching unless a type is listed below
ExpiresDefault A0
# Cache static assets for 120 days (10368000 seconds) after access
ExpiresByType video/webm A10368000
ExpiresByType video/ogg A10368000
ExpiresByType video/mp4 A10368000
ExpiresByType image/webp A10368000
ExpiresByType image/gif A10368000
ExpiresByType image/png A10368000
ExpiresByType image/jpg A10368000
ExpiresByType image/jpeg A10368000
ExpiresByType image/x-icon A10368000
ExpiresByType image/svg+xml A10368000
ExpiresByType text/css A10368000
ExpiresByType text/javascript A10368000
ExpiresByType application/javascript A10368000
ExpiresByType application/x-javascript A10368000
ExpiresByType application/font-woff2 A10368000
ExpiresByType application/x-font-opentype A10368000
ExpiresByType application/x-font-truetype A10368000
</IfModule>
<IfModule mod_headers.c>
# max-age belongs in Cache-Control (in seconds), not in the Expires header
Header set Cache-Control "max-age=10368000, public"
# Drop ETags so caching relies on the expiry headers above
Header unset ETag
Header set Connection keep-alive
FileETag None
</IfModule>
</FilesMatch>

Enable GZIP (text compression):

In most cases this can be done through the server configuration. Apache has the mod_deflate module, which lets you do this by adding the code below to your .htaccess file:

<IfModule mod_deflate.c>
# Map font extensions so the compression rules below can match them
AddType x-font/woff .woff
AddType x-font/ttf .ttf
# Compress text-based responses (HTML, CSS, JS, XML, SVG and fonts)
AddOutputFilterByType DEFLATE image/svg+xml
AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/xml
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE text/javascript
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/x-javascript
AddOutputFilterByType DEFLATE application/x-font-ttf
AddOutputFilterByType DEFLATE x-font/ttf
AddOutputFilterByType DEFLATE application/vnd.ms-fontobject
AddOutputFilterByType DEFLATE font/opentype font/ttf font/otf
</IfModule>

Enable server-side caching such as OPcache or a reverse proxy:

Server-side caching keeps frequently requested content in the server's memory so it can be served to users quickly. PHP's OPcache stores compiled PHP scripts in memory so they do not have to be recompiled on every request, and a reverse proxy (for example Varnish or Nginx) can cache whole pages in front of the web server.
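As a rough sketch, OPcache is usually enabled through php.ini; the values below are illustrative starting points, not tuned recommendations for every server:

; Enable OPcache and give it memory to hold compiled scripts
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
; Re-check file timestamps periodically so updated code is picked up
opcache.validate_timestamps=1
opcache.revalidate_freq=60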

Keep your software up-to-date:

There is a lot of technology involved in running a website:

  • The server and its software (operating system like Linux, web server like Apache, programming language like PHP, database engine like MySQL, control panel like cPanel and more)
  • CMS, like WordPress.
  • CMS add-ons like Plugins.

Keeping all this software up-to-date is vital for both performance and security.

Install server-side HTML caching for your CMS:

For a CMS like WordPress to generate a page, PHP makes several calls to the database, then the web server (Apache) passes an HTML version of the page to the browser to parse and render. That is a long journey with many variables. Caching a copy of every page of the website as static HTML saves the server a lot of work and lets it hand the browser a ready-made HTML file for rendering, which is a big time saver. A plugin like WP Fastest Cache can take care of that for WordPress.

Set up browser-side caching using a service worker:

The Service Worker API comes with a Cache interface that lets you create stores of responses keyed by request. A service worker can even enable the website to work completely offline after the first load.
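As a minimal sketch, assuming a worker file named sw.js and placeholder asset names, a service worker can pre-cache key files on install and answer requests from the cache first:

// sw.js - minimal cache-first service worker (illustrative only)
const CACHE_NAME = 'site-cache-v1';
const ASSETS = ['/', '/critical.css', '/critical.js'];

self.addEventListener('install', (event) => {
  // Pre-cache the assets listed above while the worker installs
  event.waitUntil(caches.open(CACHE_NAME).then((cache) => cache.addAll(ASSETS)));
});

self.addEventListener('fetch', (event) => {
  // Serve cached responses when available, otherwise go to the network
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});

The worker is registered from the page with navigator.serviceWorker.register('/sw.js').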

Minify JavaScript and CSS:

Spaces and comments can inflate CSS and JavaScript files significantly; minifying those files, manually or with a plugin if you are on WordPress, will improve performance. For WordPress, a plugin like WP Fastest Cache can also handle minification.

Compress images and use the right format for them:

Images come in different formats for different purposes (JPEG, PNG, GIF, SVG), and each of them also has variants like JPEG 2000; choosing the right format and resolution can be a huge size saver. For WordPress, the WP Super Cache plugin has an image compression module that makes image compression an easy process.

Consider using WebP; it can reduce your image size by around 30% compared to JPEG or PNG.
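One common way to serve WebP while keeping a fallback for older browsers (the file names below are placeholders) is the picture element:

<picture>
<!-- Browsers that support WebP download the smaller file -->
<source srcset="hero.webp" type="image/webp">
<!-- Older browsers fall back to the JPEG version -->
<img src="hero.jpg" alt="…" width="800" height="400">
</picture>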

Resize images to fit the required dimensions in the style sheet:

The image dimensions should always match the space required on the page; putting a 1000x1000 pixel image in a 100x100 pixel space adds a lot of unnecessary bytes to the total page size.
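If the same image is shown at different sizes on different screens, responsive image attributes let the browser download an appropriately sized file instead of the largest one (the file names and widths below are placeholders):

<img src="photo-400.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w"
     sizes="(max-width: 600px) 400px, 800px"
     alt="…" width="400" height="300">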

Remove or defer files and code that block the critical rendering path for above-the-fold content:

Files that block the critical rendering path can be identified using the Coverage tab in Chrome Developer Tools:

[Image: Coverage tab in Chrome Developer Tools]

Files that are not used for critical rendering must be deferred. If you have multiple JavaScript files that are only partially used for the critical rendering path, you can combine them into two files: one containing all the code required for the critical rendering path (critical.js/critical.css) and another containing the code that is not required (non-critical.js/non-critical.css); the second file must be deferred.

Consider also preloading files that are required for the critical rendering path:

<head>
<link rel="preload" as="script" href="critical.js">
<link rel="preload" href="critical.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="critical.css" as="style">
<link rel="preload" href="bg-image-narrow.png" as="image" media="(max-width: 600px)">
</head>

You can load JavaScript and CSS files without blocking rendering using the code below (the async or defer attribute for scripts, and the preload trick for stylesheets):

<head>
<script src="demo_async.js" async></script>
<link rel="preload" href="styles.css" as="style" onload="this.onload=null;this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="styles.css"></noscript>
</head>

You can also consider inlining the CSS and JS code required for the critical rendering path (read more below).

Inline critical CSS and JavaScript where possible:

The browser blocks page rendering until render-blocking resources such as CSS and JS files are loaded. Keeping code that is required for the critical rendering path in external files adds an extra step to the rendering process: extra requests to the server. Inlining the code required for the critical rendering path and putting the rest in deferred files will improve website speed.
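A simplified sketch of that pattern, assuming a placeholder non-critical.css file, looks like this: the critical rules live inline in the head, and the remaining stylesheet loads without blocking rendering:

<head>
<style>
/* Critical above-the-fold rules inlined, no extra request needed */
header { height: 80px; background: #fff; }
.hero { min-height: 60vh; }
</style>
<!-- The rest of the CSS loads without blocking rendering -->
<link rel="preload" href="non-critical.css" as="style" onload="this.onload=null;this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="non-critical.css"></noscript>
</head>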

Lazy load images below the fold:

Images below the fold contribute to page load time but are not seen by users during the initial page load; lazy loading those images reduces load time without negatively affecting user experience.

WordPress 5.4 is expected to support lazy loading natively; for other CMSs you can use a lazy load library, or you can use the browser's native lazy loading attribute:

<img src="image.png" loading="lazy" alt="…" width="200" height="200">

Use a CDN for resources like images and JavaScript files, or for the whole website:

A CDN has a lot of features that can help with speed:

  • Serves files to users from data centres close to their physical location, which improves website speed
  • Reduces server load, as most files are served from the CDN provider's servers (normally powerful servers with load balancing)
  • Provides a firewall with DDoS protection
  • Caches content as static HTML (server-side caching)

For WordPress, the Jetpack plugin can serve images and JS files from a CDN; you can also use a provider like Cloudflare, which offers full website caching.

Be aware that optimizing for speed is not a one-and-done process; there is always something to improve. Keep monitoring your Core Web Vitals using Google Search Console and make sure your website maintains a good score.


Lab Data vs Field Data vs Origin Summary (Core Web Vitals)

PageSpeed Insights, along with Google Search Console, are the best tools for assessing a website's performance against Core Web Vitals (LCP largest contentful paint, FID first input delay and CLS cumulative layout shift). However, the terminology used in those tools can sometimes be confusing.

One of the most popular questions when it comes to Core Web Vitals assessment is: what is the difference between lab data, field data and origin summary? In this post I will try to answer that question.


Lab data:

Things you need to know about lab data:

  • Lab data is powered by Lighthouse, which simulates a mobile device on a throttled (slower) connection.
  • Lighthouse in many cases uses a slower CPU than the average CPU available to users.
  • Lighthouse in most cases tests from a single location (USA).
  • Lab data is generated only for the URL tested in Google PageSpeed Insights.
  • Lab data is live data: it reflects the speed at the time the test was run.

When you run Lighthouse in Google Chrome Developer Tools, at the end of the report you can see the settings that were used in that run:

[Image: Lighthouse report settings]

Looking at the settings above, we can easily see that Lighthouse uses a slower internet connection and slows the CPU down by 5 times. The reason Lighthouse does this is to make the test representative of all online users, who use different types of computers and connect to the internet at different speeds.

Field data:

Field data is generated by real, everyday Google Chrome users, with:

  • Different computers/phones (different resources like CPU, RAM and GPU).
  • Different internet speeds.
  • Different locations.

Things you need to know about field data:

  • Field data is shown only for the URL that is tested in PageSpeed Insights.
  • Field data is based on the Chrome User Experience Report (CrUX), so it is not live data; it is the aggregated user data collected over the last 28 days for the tested URL.
  • Google uses the 75th percentile value of all page views to the tested page to produce the score: if at least 75 percent of page views to the tested page meet the "good" threshold, the page is classified as having "good" performance for that metric.

So if most of your users come from a population with high internet speeds and powerful devices, it is normal to see field data that is better than lab data.

On the other hand, if your server is overloaded at the time you run the test, it is normal to see lab data recording higher (worse) numbers than field data.

Origin summary:

This is very similar to field data, but it represents the average performance of all pages on your website/domain.

Google uses the 75th percentile value of all page views to that site/domain to produce the score: if at least 75 percent of page views to the tested site meet the "good" threshold, the site is classified as having "good" performance for that metric.


Which data set should you care about most?

Google will use the field data (CrUX) and origin summary data to judge your pages/website, and this is also the data that is available in Google Search Console.

All you need to monitor in the PageSpeed Insights report is the origin summary data, which is the data that will be used by Google algorithm updates in the future. It is fine to use lab data while working on speed improvements, as you need instant feedback.