I’m working on optimizing a React web application, and I’m seeing some conflicting results between Core Web Vitals and Lighthouse:
- My website passes Core Web Vitals based on real-world data (LCP, FID, and CLS) and seems to perform well according to field reports.
- However, when I run a Google Lighthouse performance audit, my Lighthouse performance score is much lower, especially when it comes to:
  - Main-thread work (~9 seconds)
  - JavaScript execution time (~6 seconds)
  - Render-blocking resources
What I’ve done so far:
- Implemented code-splitting and lazy loading for non-essential components (simplified sketch after this list).
- Preloaded critical resources like the Largest Contentful Paint (LCP) image.
- Minified JavaScript and CSS using Webpack and Terser.
- Moved heavy computations to Web Workers.
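For context, the code-splitting looks roughly like this (a simplified sketch; `HeavyChart` and `Dashboard` stand in for the real components):

```tsx
// Simplified sketch of the code-splitting/lazy loading described above.
// HeavyChart stands in for a real non-essential component.
import { lazy, Suspense } from 'react';

// The dynamic import() creates a separate Webpack chunk that is only
// fetched when the component actually renders.
const HeavyChart = lazy(() => import('./HeavyChart'));

export default function Dashboard() {
  return (
    <Suspense fallback={<p>Loading chart…</p>}>
      <HeavyChart />
    </Suspense>
  );
}
```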
My Questions:
- Why is there such a large discrepancy between Core Web Vitals (field data) and Lighthouse (lab data)? Is this normal, and how should I interpret the results?
- Should I focus on improving Lighthouse scores or stick with Core Web Vitals as the primary benchmark for performance, given that Core Web Vitals reflects real user experiences?
- What further steps can I take to reduce main-thread work time and improve JavaScript execution to boost my Lighthouse scores?
Thanks in advance for any guidance on how to align these performance measurements and what to prioritize when optimizing!
> My website passes Core Web Vitals based on real-world data (LCP, FID, and CLS) and seems to perform well according to field reports.
I presume you mean INP rather than FID? INP replaced FID as a Core Web Vital in March 2024.
> However, when I run a Google Lighthouse performance audit, my Lighthouse performance score is much lower, especially when it comes to:
>
> - Main-thread work (~9 seconds)
> - JavaScript execution time (~6 seconds)
> - Render-blocking resources
The Lighthouse Performance score is not based directly on those audits, but on five metrics (weights as of Lighthouse 10):

- First Contentful Paint (10%)
- Speed Index (10%)
- Largest Contentful Paint (25%)
- Total Blocking Time (30%)
- Cumulative Layout Shift (25%)

The audits are potential areas to investigate to improve those metrics.
> Why is there such a large discrepancy between Core Web Vitals (field data) and Lighthouse (lab data)? Is this normal, and how should I interpret the results?
It’s not unusual. Lighthouse performs a single synthetic page load under fixed conditions. It may simply be that your users are on faster networks and better devices than the ones Lighthouse simulates.

In particular, it seems your website is JavaScript-heavy, which is not unusual for React apps, so Lighthouse is pointing out that the browser has to do a lot of work to run it. This likely means your website will struggle on the low-end Android devices that Lighthouse emulates (its default mobile run applies a 4x CPU slowdown).
You can run Lighthouse with different settings, but some Lighthouse tooling (PageSpeed Insights, Lighthouse in DevTools) does not expose those options, so you would need to run it from the command line or the Node API.
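For example, something along these lines with the Node API (a minimal sketch, assuming lighthouse and chrome-launcher are installed; the URL and throttling values are placeholders to tune toward your field conditions):

```ts
// Minimal sketch: run a performance-only Lighthouse audit with custom
// throttling, closer to the conditions your field data suggests.
// Assumes: npm i lighthouse chrome-launcher (values below are illustrative).
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
const result = await lighthouse('https://example.com', {
  port: chrome.port,
  onlyCategories: ['performance'],
  throttlingMethod: 'simulate',
  throttling: {
    rttMs: 40,                 // faster network than the mobile default (150ms)
    throughputKbps: 10 * 1024, // ~10 Mbps down
    cpuSlowdownMultiplier: 1,  // no CPU slowdown (mobile default is 4x)
  },
});
// The category score is reported on a 0–1 scale.
console.log('Performance score:', result?.lhr.categories.performance.score);
await chrome.kill();
```

The same knobs are available on the CLI via `--throttling-method` and the `--throttling.*` flags.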
> Should I focus on improving Lighthouse scores or stick with Core Web Vitals as the primary benchmark for performance, given that Core Web Vitals reflects real user experiences?
The real user experience is what your real users are actually getting, so that should always be your primary benchmark.

The Lighthouse audits are intended to point out potential issues, and it deliberately uses a low-end device to do so. Such devices are far more common than developers tend to assume, though not in all regions or for all businesses.

However, do be aware that your website is potentially quite slow for those users, and they may be exactly the people who give up on your site because of it and therefore never show up in your field data.

If you have access to a low-end Android device, test your website on it; you may well find the Lighthouse results are accurate.
> What further steps can I take to reduce main-thread work time and improve JavaScript execution to boost my Lighthouse scores?
Frameworks are increasingly working on improving responsiveness. React 18 shipped several improvements, along with APIs such as startTransition and useDeferredValue that let developers split up work and reduce the types of issues Lighthouse highlights here. If you are not on the latest version, that is worth investigating.
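For instance, a minimal sketch of startTransition, using a hypothetical expensive list filter (`SearchList` and the filter logic are illustrative); the idea is to let React interrupt the non-urgent render so no single main-thread task runs too long:

```tsx
// Minimal sketch: mark an expensive state update as non-urgent so React
// can interrupt its render and keep the input responsive.
import { ChangeEvent, useState, useTransition } from 'react';

function SearchList({ items }: { items: string[] }) {
  const [query, setQuery] = useState('');
  const [results, setResults] = useState(items);
  const [isPending, startTransition] = useTransition();

  function handleChange(e: ChangeEvent<HTMLInputElement>) {
    setQuery(e.target.value); // urgent: keeps the input responsive
    startTransition(() => {
      // non-urgent: React may interrupt this render for urgent updates
      setResults(items.filter((item) => item.includes(e.target.value)));
    });
  }

  return (
    <>
      <input value={query} onChange={handleChange} />
      {isPending ? (
        <p>Updating…</p>
      ) : (
        <ul>
          {results.map((r) => (
            <li key={r}>{r}</li>
          ))}
        </ul>
      )}
    </>
  );
}
```

useDeferredValue is a similar option when you only control the component consuming the slow value rather than the code triggering the update.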