
How To Get Your Website Ready For Page Experience Core Web Vitals Update

Over the last few years, Google has been tweaking its algorithm toward user experience elements such as:

  • Website speed
  • Mobile friendliness
  • Secure connection (HTTPS)

In early 2020 the Google Chrome team announced Core Web Vitals. Google then announced that Core Web Vitals will be added as ranking factors to its algorithm, alongside the existing user experience elements, in what SEOs call the page experience update.

Core Web Vitals measure real-world dimensions of web usability, such as load time, interactivity, and the stability of content as it loads. The three Core Web Vitals announced by Google for 2020 are:

  1. Loading: Largest Contentful Paint (LCP)
  2. Interactivity: First Input Delay (FID)
  3. Visual Stability: Cumulative Layout Shift (CLS)

Due to COVID-19, Google has delayed launching the new algorithm that includes Core Web Vitals (the page experience update), and will provide at least six months' notice before it rolls out.

How to evaluate Core Web Vitals for a website:

There are many tools that can help with that, such as Google PageSpeed Insights, Lighthouse, Chrome Developer Tools, and the Core Web Vitals report in Google Search Console.

It is very important to understand the difference between lab data and field data before starting to optimize for Core Web Vitals.

From Google: "The field data is a historical report about how a particular URL has performed, and represents anonymized performance data from users in the real-world on a variety of devices and network conditions. The lab data is based on a simulated load of a page on a single device and fixed set of network conditions. As a result, the values may differ."

| | Field Data | Origin Summary | Lab Data |
|---|---|---|---|
| Internet connection speed | Different connection speeds for real-world users | Different connection speeds for real-world users | Latency: 150 ms; throughput: 1.6 Mbps down / 750 Kbps up |
| Machine hardware | Different CPU, RAM, and GPU, depending on real-world users | Different CPU, RAM, and GPU, depending on real-world users | Mid-tier mobile (4x slowdown of a high-tier desktop) |
| Results time | Average of the last 28 days | Average of the last 28 days | Current |
| Location | Real-world users' locations | Real-world users' locations | California |
| Result for | Tested URL | Whole domain | Tested URL |

1- Loading: Largest Contentful Paint (LCP)

"Largest Contentful Paint (LCP): measures loading performance. To provide a good user experience, LCP should occur within 2.5 seconds of when the page first starts loading."

The Largest Contentful Paint (LCP) metric reports the render time of the largest image or text block visible within the viewport (the hero element). Website speed used to be monitored using metrics like Speed Index or First Contentful Paint (FCP); those metrics are still useful for monitoring website speed, but the new focus should be on improving LCP.

The best tool to measure and optimize for LCP is Chrome Developer Tools: the timestamp number represents LCP, and the highlighted area marks the hero element whose finished render is what LCP measures.

It is very important to run the LCP test using different internet speed and CPU throttling modes, to better understand the LCP value for different users based on their internet connection and machine power. The same applies to the other Core Web Vitals and performance metrics.
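If you want the same number programmatically, Chrome exposes LCP through the PerformanceObserver API; the sketch below logs every LCP candidate as the page loads (the last entry reported before user input is the final LCP element).

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // startTime is the render time of the current LCP candidate
    console.log('LCP candidate:', Math.round(entry.startTime), 'ms', entry.element);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });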

How to optimize for LCP?

Optimizing for LCP is optimizing for speed; check my how to optimize for LCP post. A good start is checking the recommendations provided by Lighthouse.

2- First Input Delay (FID)

"FID measures the time from when a user first interacts with a page (i.e. when they click a link, tap on a button, or use a custom, JavaScript-powered control) to the time when the browser is actually able to begin processing event handlers in response to that interaction."

Not every user interacts with your site on every visit, the timing of interactions varies, and not all interactions are relevant to FID. This means some users will have no FID value, some will have low FID values, and some will probably have high FID values.

FID is a metric that can only be measured in the field, as it requires a real user to interact with your page. You can measure FID only with the Google PageSpeed Insights tool, which uses the Chrome User Experience Report; read here to better understand the difference between lab data and field data. The good news is that there is a lab metric, Total Blocking Time, that can help diagnose and improve FID.
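Because FID requires a real interaction, it is collected from real users with a small script. A minimal sketch using the PerformanceObserver API (first-input entries are supported in Chromium-based browsers):

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // the delay between the user's first interaction and the moment
    // the browser could start running the event handler
    const fid = entry.processingStart - entry.startTime;
    console.log('FID:', Math.round(fid), 'ms');
  }
}).observe({ type: 'first-input', buffered: true });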

The Total Blocking Time (TBT) metric measures the total amount of time between First Contentful Paint (FCP) and Time to Interactive (TTI) during which the main thread was blocked long enough to prevent input responsiveness. No task should exceed 50 ms; Chrome Developer Tools reports total blocking time along with the tasks that exceed the 50 ms mark.
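To find the individual tasks behind a high TBT from the page itself, you can also log long tasks (anything over 50 ms) with the Long Tasks API; a minimal sketch:

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // each entry is a main-thread task that ran longer than 50 ms
    console.log('Long task:', Math.round(entry.duration), 'ms', entry);
  }
}).observe({ type: 'longtask', buffered: true });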

How to optimize for FID:

Optimizing for LCP covers many elements that will also improve FID. The most common causes of high FID that need to be worked on are:

  • Unnecessary JavaScript loading, parsing, or execution. While analyzing your code in the Performance panel, you might discover that the main thread is doing work that isn't really necessary to load the page. Reducing JavaScript payloads with code splitting (see the sketch after this list), removing unused code, or efficiently loading third-party JavaScript should improve your TBT score.
  • Inefficient JavaScript statements. For example, after analyzing your code in the Performance panel, suppose you see a call to document.querySelectorAll('a') that returns 2000 nodes. Refactoring your code to use a more specific selector that only returns 10 nodes should improve your TBT score.
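As an illustration of code splitting, the sketch below loads a hypothetical chat-widget.js module only when the user asks for it, keeping it off the critical path (the file name and element ID are placeholders):

// nothing from chat-widget.js is fetched, parsed, or executed until the click
document.querySelector('#chat-button').addEventListener('click', async () => {
  const { openChat } = await import('./chat-widget.js');
  openChat();
});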

3- Cumulative Layout Shift (CLS):

"CLS measures visual stability. Have you ever been reading an article online when something suddenly changes on the page? Without warning, the text moves, and you've lost your place. Or even worse: you're about to tap a link or a button, but in the instant before your finger lands—BOOM—the link moves, and you end up clicking something else!"

This could be the easiest Core Web Vital to diagnose: by looking at the thumbnails tracker in Chrome Developer Tools and enabling Layout Shift Regions, it should be easy to find what is causing a high CLS.

How to optimize for CLS?

CLS diagnosis and optimization could be the easiest among all Core Web Vitals; in most cases CLS issues are visible to the human eye, and with some CSS and JS work, creating stability while the page loads should be achievable. Check the Experience section in Chrome Developer Tools, find which elements are moving, and see if you can:

  • Make them stable on the page
  • Reserve a placeholder for them from the beginning while waiting for them to load (see the sketch below).
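A minimal sketch of reserving space (file names and sizes are illustrative):

<!-- explicit width/height let the browser reserve the image's space before it loads -->
<img src="hero.jpg" width="1200" height="600" alt="…">

<!-- a fixed-height placeholder for a late-loading element such as an ad slot -->
<div class="ad-slot" style="min-height: 250px;"></div>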

CLS is measured for the entire life cycle of the page (even if a user keeps it open for days); for that reason, the actual user experience value could differ from Lighthouse or Chrome Developer Tools, which measure only one trace.
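You can also accumulate CLS over the page's lifetime in the browser with the Layout Instability API; a minimal sketch:

let cumulativeScore = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // shifts that happen right after user input do not count toward CLS
    if (!entry.hadRecentInput) {
      cumulativeScore += entry.value;
      console.log('CLS so far:', cumulativeScore.toFixed(3));
    }
  }
}).observe({ type: 'layout-shift', buffered: true });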

The Marathon of a Good Core Web Vitals Score:

Keeping a good Core Web Vitals score is a marathon that requires constant website improvements; many variables can turn a score that used to pass into a failing one:

  • Website technology changes (hosting and CMS) and content changes.
  • User technology changes (the types of machines and internet connections they use)
  • Google changing their Core Web Vitals criteria and scoring

Monitoring the Core Web Vitals section in Google Search Console will be key to finding any Core Web Vitals issues and starting to work on them as soon as they are detected.

Core Web Vitals improvement is not only a search engine best practice; it is also a user experience element that can lead to a better conversion rate.


How to Optimize For Largest Contentful Paint (LCP)

LCP (Largest Contentful Paint) is one of the most important Core Web Vitals; it is the only one that relates purely to speed, and hence it is the most difficult one to diagnose and optimize for. LCP can be measured using Google PageSpeed Insights, which is powered by Lighthouse.

LCP should occur within 2.5 seconds; if 75% of the loads (regardless of which pages) on a website achieve that number, LCP is marked as passing the assessment. Only elements within the user's viewport (above the fold) are used to calculate LCP.

Google has provided a great tutorial on how to optimize for LCP; in this post I will try to provide a checklist and actionable directions that you can implement yourself or take to the web developer who is working on improving Core Web Vitals (mainly LCP).

15 improvements that you can make to improve LCP:

  1. Adequate resources on the server
  2. Enable browser caching
  3. Enable GZIP (text compression)
  4. Enable server caching like OPcache or reverse proxy
  5. Keep your software up-to-date (CMS, plugins, operating system like Ubuntu, control panel like cPanel, PHP, MySQL, and Apache with the HTTP/2 module)
  6. Install a server side HTML caching for your CMS
  7. Setup browser side caching using service worker
  8. Minify JavaScript and CSS
  9. Compress images and use the right format for them
  10. Resize images to fit the required dimensions in the style sheet
  11. Remove or defer files and codes that block critical rendering path for above the fold content
  12. Inline critical CSS and critical JavaScript files where possible
  13. Lazy load images below the fold
  14. Use CDN for resources like images and JavaScript files or for the whole website
  15. Use a fast DNS server

I will provide more details on how to improve each of the items above; most of my examples will work best for WordPress hosted on Linux with Apache as the web server.

Adequate resources on the server:

You do not need to be cheap when it comes to web hosting; having more resources than your website needs is better than lacking resources. Start with a VPS, ideally 2 cores, 4 GB of RAM, and an SSD, and keep an eye on your resource usage: your average CPU load should stay at one or below, and your RAM usage should average less than 50%. Add more resources if you do not achieve those numbers.
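A quick way to check those numbers on a Linux VPS is from the shell; these are standard commands, and the thresholds in the comments are the ones suggested above:

uptime    # load averages; they should stay at or below your core count
free -m   # memory usage in MB; "used" should average under 50% of total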

Enable browser caching:

This setup can be done in most cases through the server configuration; Apache has the mod_expires module, which lets you do that by adding the code below to your .htaccess file:

<FilesMatch "\.(webm|ogg|mp4|ico|pdf|flv|jpg|jpeg|png|gif|webp|js|css|swf|x-html|xml|woff|woff2|otf|ttf|svg|eot)(\.gz)?$">
<IfModule mod_expires.c>
# Register font MIME types so the rules below can match them
AddType application/font-woff2 .woff2
AddType application/x-font-opentype .otf
ExpiresActive On
ExpiresDefault A0
# Cache static assets for 120 days (10368000 seconds) after first access
ExpiresByType video/webm A10368000
ExpiresByType video/ogg A10368000
ExpiresByType video/mp4 A10368000
ExpiresByType image/webp A10368000
ExpiresByType image/gif A10368000
ExpiresByType image/png A10368000
ExpiresByType image/jpg A10368000
ExpiresByType image/jpeg A10368000
ExpiresByType image/ico A10368000
ExpiresByType image/svg+xml A10368000
ExpiresByType text/css A10368000
ExpiresByType text/javascript A10368000
ExpiresByType application/javascript A10368000
ExpiresByType application/x-javascript A10368000
ExpiresByType application/font-woff2 A10368000
ExpiresByType application/x-font-opentype A10368000
ExpiresByType application/x-font-truetype A10368000
</IfModule>
<IfModule mod_headers.c>
# Use Cache-Control here: mod_expires syntax like A10368000 is not valid inside a header value
Header set Cache-Control "max-age=10368000, public"
# Drop ETags so caching relies on the expiry time alone
Header unset ETag
Header set Connection keep-alive
FileETag None
</IfModule>
</FilesMatch>

Enable GZIP (text compression):

This setup can be done in most cases through the server configuration; Apache has the mod_deflate module, which lets you do that by adding the code below to your .htaccess file:

<IfModule mod_deflate.c>
# Register font MIME types so the filters below can match them
AddType x-font/woff .woff
AddType x-font/ttf .ttf
# Compress text-based responses before they are sent to the browser
AddOutputFilterByType DEFLATE image/svg+xml
AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/xml
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE text/javascript
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/x-javascript
AddOutputFilterByType DEFLATE application/x-font-ttf
AddOutputFilterByType DEFLATE x-font/ttf
AddOutputFilterByType DEFLATE application/vnd.ms-fontobject
AddOutputFilterByType DEFLATE font/opentype font/ttf font/eot font/otf
</IfModule>

Enable server caching like OPcache or reverse proxy:

Server caching keeps frequently used work in the server's memory so pages can be served quickly to users; PHP has OPcache, which caches precompiled script bytecode in memory so scripts do not have to be compiled on every request.
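As a sketch, OPcache is usually enabled in php.ini; the directive names below are real OPcache settings, but the values are common starting points rather than tuned recommendations:

; cache compiled PHP bytecode in shared memory
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
; recheck changed files at most once per minute
opcache.validate_timestamps=1
opcache.revalidate_freq=60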

Keep your software up-to-date:

There is a lot of technology involved in running a website:

  • The server and its software (operating system like Linux, web server like Apache with the HTTP/2 module, programming language like PHP, database engine like MySQL, control panel like cPanel, and more)
  • CMS, like WordPress.
  • CMS add-ons like Plugins.

Keeping all this software up-to-date is vital for both performance and security.

Install a server side HTML caching for your CMS:

For a CMS like WordPress to generate a page, several PHP calls hit the database before an HTML version of the page is passed by the web server (Apache) to the browser to parse and render; this is a long journey with many variables. Caching a copy of every page of the website as static HTML saves the server a lot of work and lets it hand a static HTML file straight to the browser for rendering, which is a big time saver. A plugin like WP Fastest Cache or WP Rocket can take care of that for WordPress.

Setup browser side caching using service worker:

The Service Worker API comes with a Cache interface that lets you create stores of responses keyed by request. A service worker can even enable the website to work completely offline after the first load.
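A minimal service worker sketch using the Cache interface (the cache name and pre-cached paths are illustrative):

// sw.js — pre-cache a few critical assets, then serve cache-first
const CACHE = 'site-cache-v1';

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(['/', '/styles.css', '/critical.js']))
  );
});

self.addEventListener('fetch', (event) => {
  // answer from the cache when possible, fall back to the network
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});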

Minify JavaScript and CSS:

Spaces and comments can inflate CSS and JavaScript files significantly; minifying those files, manually or with a plugin if you use WordPress, will improve performance. For WordPress, a plugin like WP Fastest Cache can handle the minification.
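A before/after illustration of what minification does to a small CSS rule:

/* before: readable source */
.button {
  color: #ffffff;            /* comments and whitespace ship to users */
  background-color: #0055aa;
}

/* after: the same rule minified */
.button{color:#fff;background-color:#05a}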

Compress images and use the right format for them:

Images have different formats and purposes (JPEG, PNG, GIF, SVG), and each of them also has multiple variants, like JPEG 2000; choosing the right format and resolution can be a huge size saver. For WordPress, a plugin like ShortPixel makes image compression an easy process.

Consider using WebP; it can reduce your image size by about 30% compared to JPEG or PNG.
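One safe way to adopt WebP is the picture element, which falls back to JPEG in browsers without WebP support (file names are illustrative):

<picture>
  <source srcset="hero.webp" type="image/webp">
  <img src="hero.jpg" alt="…" width="1200" height="600">
</picture>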

Resize images to fit the required dimensions in the style sheet:

The image dimensions should always match the required space on the page; putting a 1000x1000-pixel image in a 100x100-pixel space adds a lot of unnecessary bytes to the total page size.
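Responsive images handle this declaratively; in the sketch below (file names are hypothetical), the browser downloads only the smallest file that fits the rendered size:

<img src="photo-400.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w"
     sizes="(max-width: 600px) 100vw, 400px"
     alt="…" width="400" height="400">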

Remove or defer files and codes that block critical rendering path for above the fold content:

Files that are blocking the critical rendering path could be checked using the coverage tab in Chrome Developer Tools:

(Screenshot: the Coverage tab in Chrome Developer Tools.)

Files that are not used for critical rendering must be deferred. If you have multiple JavaScript files that are partially used for the critical rendering path, you can combine them into two files: one with all the code required for the critical rendering path (critical.js/critical.css) and another with the code that is not required for it (non-critical.js/non-critical.css); the second file must be deferred.

Consider also preloading files that are required for critical rendering path:

<head>
<link rel="preload" as="script" href="critical.js">

<link rel="preload" href="critical.woff2" as="font" type="font/woff2" crossorigin>

<link rel="preload" href="critical.css" as="style">

<link rel="preload" href="bg-image-narrow.png" as="image" media="(max-width: 600px)">

</head>

You can defer JavaScript and CSS files using the code below:

<head>
<!-- async scripts download in parallel and run without blocking parsing; use defer to preserve execution order -->
<script src="demo_async.js" async></script>

<!-- load the stylesheet without blocking rendering, then apply it once loaded -->
<link rel="preload" href="styles.css" as="style" onload="this.onload=null;this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="styles.css"></noscript>
</head>

You can also consider inlining CSS and JS codes that are required for the critical rendering path (read more below).

Inline critical CSS and critical JavaScript files where possible:

Browsers block page rendering until render-blocking resources like CSS and synchronous JavaScript are fully loaded, so keeping code required for the critical rendering path in external files adds an extra step to the rendering process (extra requests to the server). Inlining the code required for the critical rendering path and putting the rest in deferred files will improve website speed.
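A minimal sketch of the pattern, with illustrative rules: the CSS needed for above-the-fold content is inlined, and the full stylesheet loads without blocking rendering:

<head>
  <style>
    /* critical rules only: enough to render the header and hero area */
    header { height: 64px; background: #fff; }
    .hero { min-height: 480px; }
  </style>
  <link rel="preload" href="styles.css" as="style" onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="styles.css"></noscript>
</head>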

Lazy load images below the fold:

Images below the fold contribute to page load time, but they are not seen by users during the initial page load; lazy loading those images reduces load time without negatively affecting user experience.

WordPress 5.4 is expected to natively support lazy loading; for other CMSs you can use a lazy-load library, or you can use the browser's native lazy-load attribute:

<img src="image.png" loading="lazy" alt="…" width="200" height="200">

Use CDN for resources like images and JavaScript files or for the whole website:

CDNs have a lot of features that can help with speed:

  • Serving files to users from data centres close to their physical location, which improves website speed
  • Reducing server load, as most files will be served from the CDN provider's servers (normally powerful servers with load balancing)
  • Firewall with DDoS protection
  • Caching content as static HTML (server-side caching)

For WordPress, the Jetpack plugin can provide image and JS file caching; you can also use a provider like Cloudflare, which offers full website caching.

Use a fast DNS server:

DNS is the system that connects a readable domain name that humans can read and remember (website.com) to an IP (Internet Protocol) address that a machine can understand and handle. This process takes less than 300 ms in most cases, but the difference between a slow DNS and a fast DNS is worth it, considering how cheap good DNS providers are.
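You can measure your own DNS lookup time from the shell; both commands below are standard tools:

dig example.com                  # the "Query time" line shows the DNS lookup time
curl -o /dev/null -s -w 'dns: %{time_namelookup}s\n' https://example.com/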

Summary:

Be aware that optimizing for speed is not a one-and-done process; there is always something to improve. Keep monitoring your Core Web Vitals using Google Search Console and make sure your website keeps a good score.


Lab Data vs Field Data vs Origin Summary Core Web Vitals

PageSpeed Insights, along with Google Search Console, are the best tools when it comes to assessing a website's performance against Core Web Vitals (LCP, Largest Contentful Paint; FID, First Input Delay; and CLS, Cumulative Layout Shift). However, the terminology used in those tools can sometimes be confusing to users.

One of the most popular questions when it comes to Core Web Vitals assessment is: what is the difference between lab data, field data, and origin summary? In this post I will try to answer this question.


Lab data:

Things you need to know about lab data:

  • Lab data is powered by Lighthouse, which simulates page loads over a throttled, slower mobile connection. Read more here.
  • Lighthouse in most cases uses a slower CPU than the average CPU available to users: "Lighthouse applies CPU throttling to emulate a mid-tier mobile device even when run on far more powerful desktop hardware".
  • Lighthouse in most cases tests from one location (USA).
  • Lab data is generated only for the URL tested in the Google PageSpeed Insights tool.
  • Lab data is live data; it reflects the speed at the time the test was run.
  • Some user-interaction metrics, like FID, will not be visible in the lab data.

When you run Lighthouse in Chrome Developer Tools, at the end of the report you will be able to see the settings used in that run:

(Screenshot: Lighthouse test settings.)

Looking at the settings above, we can easily see how Lighthouse uses a slower internet connection and throttles the CPU (a 4x slowdown of high-tier desktop hardware). Lighthouse does this so the test covers online users who use different types of computers and connect to the internet at different speeds.

Field data:

Field data is generated by actual, everyday Google Chrome users, with:

  • Different computers/phones (different resources like CPU, RAM and GPU).
  • Different internet speed.
  • Different locations.
  • Field data is generated only for the URL tested in PageSpeed Insights.
  • Field data is based on the Chrome User Experience Report (CrUX), so it is not live data; it is the average of user data collected over the last 28 days for the tested URL.
  • Google uses the 75th percentile value of all page views to the tested page to produce the score: if at least 75 percent of page views to the tested page meet the "good" threshold, the page is classified as having "good" performance for that metric (see the sketch after this list).
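As a back-of-the-envelope sketch of that 75th-percentile rule (nearest-rank method; the numbers are invented):

// with these four observed LCP values, the 75th percentile is 2400 ms — a "good" LCP
function percentile75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.ceil(sorted.length * 0.75) - 1];
}
console.log(percentile75([1800, 2100, 2400, 4000]) <= 2500); // true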

So if most of your users come from a population with high internet speeds and powerful devices, it is normal to see field data that is better than lab data.

On the other hand, if your server is overloaded at the time you run the test, it is normal to see lab data recording higher numbers than field data.

Origin summary:

This is very similar to field data, but it represents the average performance of all pages on your website/domain.

Google uses the 75th percentile value of all page views to the site/domain to produce the score: if at least 75 percent of page views to the tested site meet the "good" threshold, the site is classified as having "good" performance for that metric.

The default Core Web Vitals charts you see in Google Search Console represent the history of the origin summary.

Which data set to care mostly about?

Google will use the field data (CrUX) and origin summary data to judge your pages/website; this is also the data that is available in Google Search Console.

What I recommend monitoring in the PageSpeed Insights report is the origin summary data, which is most likely the data that will be used by Google algorithm updates in the future. It is fine to use lab data while working on speed improvements, as you need instant feedback.

| | Field Data | Origin Summary | Lab Data |
|---|---|---|---|
| Internet connection speed | Different connection speeds for real-world users | Different connection speeds for real-world users | Latency: 150 ms; throughput: 1.6 Mbps down / 750 Kbps up |
| Machine hardware | Different CPU, RAM, and GPU, depending on real-world users | Different CPU, RAM, and GPU, depending on real-world users | Mid-tier mobile (4x slowdown of a high-tier desktop) |
| Results time | Average of the last 28 days | Average of the last 28 days | Current |
| Location | Real-world users' locations | Real-world users' locations | California |
| Result for | Tested URL | Whole domain | Tested URL |

Why do I not see field data or origin summary data?

As those two data sets are actual user data coming from the Chrome User Experience Report (CrUX), it is normal for low-traffic websites not to see data in one or both of them.

In a situation like that, it is recommended to adjust Chrome Developer Tools to match the most popular connection speed used by your users and run a lab test using those settings.