Web Development · 10 min read · 14 September 2025

Website Performance Engineering: A Practical Guide

Core Web Vitals are not a box-ticking exercise. Done properly, performance engineering translates directly into higher conversion rates and better search rankings.

Website performance is not a nice-to-have. It's a commercial decision. The data connecting page speed to conversion rate is consistent across industries: faster pages convert better, rank higher in search, and cost less to serve at scale.

Core Web Vitals — Google's framework for measuring user experience — have been a ranking factor since 2021. But they're worth caring about independent of SEO. They're a proxy for whether your site feels fast and responsive to real users.

What Core Web Vitals actually measure

Largest Contentful Paint (LCP) measures how long it takes for the main content of a page to become visible. Poor LCP is usually caused by slow server response times, large unoptimised images, or render-blocking CSS and JavaScript.

Interaction to Next Paint (INP) measures how responsive a page is to user interactions. High INP is almost always caused by JavaScript executing on the main thread and blocking the browser from responding to clicks and taps.

Cumulative Layout Shift (CLS) measures how much the page shifts visually while loading. The classic cause is images without explicit dimensions or late-loading elements that push content down the page.
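The layout-shift cause is the easiest of the three to fix at the markup level. A minimal sketch (the filename, alt text, and dimensions are illustrative): giving the image explicit width and height attributes lets the browser reserve the correct space before the file arrives.

```html
<!-- Without width/height the browser reserves no space,
     and the content below jumps when the image loads. -->
<img src="/images/team-photo.jpg"
     alt="The team at the 2025 offsite"
     width="1200" height="800">
```

Modern browsers derive the aspect ratio from those attributes, so the reserved box stays correct even when CSS scales the image responsively.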

The common mistakes

The most prevalent performance issues in production websites follow predictable patterns.

Images served without optimisation. A hero image is often shipped at 4MB when, at the correct dimensions and in a modern format, it could be 150KB. This is the single most common cause of poor LCP. Every image should be served in WebP or AVIF format, at an appropriate size for the display dimensions, and lazy-loaded unless it's above the fold.
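As a sketch of that image guidance (filenames and sizes are illustrative), a hero image can be served in modern formats with a fallback, sized to the display, and flagged as high priority because it is the likely LCP element:

```html
<picture>
  <!-- The browser picks the first format it supports. -->
  <source srcset="/hero-1200.avif" type="image/avif">
  <source srcset="/hero-1200.webp" type="image/webp">
  <!-- JPEG fallback; fetchpriority="high" because this is above the fold. -->
  <img src="/hero-1200.jpg" alt="Product dashboard"
       width="1200" height="600" fetchpriority="high">
</picture>

<!-- Below-the-fold images, by contrast, can be lazy-loaded: -->
<img src="/feature.webp" alt="Feature screenshot"
     width="600" height="400" loading="lazy">
```

Note that the hero image is deliberately not lazy-loaded: deferring the LCP element makes LCP worse, not better.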

JavaScript loaded eagerly that isn't needed immediately. Third-party scripts — analytics, chat widgets, A/B testing tools — are frequently loaded synchronously, blocking page render while they load. Most of these can be loaded asynchronously or deferred without any impact on functionality.
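The difference can be sketched in three script tags (the URLs are placeholders): defer downloads in parallel and runs after the document is parsed, while async runs as soon as the download finishes, which suits scripts with no dependency on page content.

```html
<!-- Blocks HTML parsing while it downloads and executes: avoid. -->
<script src="https://example.com/analytics.js"></script>

<!-- Downloads in parallel, executes after the document is parsed. -->
<script defer src="https://example.com/analytics.js"></script>

<!-- Downloads in parallel, executes whenever it's ready;
     fine for scripts that don't care about DOM or execution order. -->
<script async src="https://example.com/chat-widget.js"></script>
```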

Fonts causing layout shift. Custom fonts loaded externally introduce layout shift when they swap in after the page has already rendered with a fallback font. Self-hosting fonts and using font-display: swap combined with size-adjust on the fallback eliminates this.
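A sketch of that font setup (the font name, file path, and size-adjust value are illustrative and need tuning per typeface): self-host the font, show the fallback immediately with font-display: swap, and metric-adjust the fallback so the swap doesn't move the layout.

```html
<style>
  /* Self-hosted custom font; fallback text renders immediately. */
  @font-face {
    font-family: "BrandSans";
    src: url("/fonts/brand-sans.woff2") format("woff2");
    font-display: swap;
  }
  /* Metric-adjusted fallback: size-adjust scales Arial so its
     glyphs occupy roughly the same width as BrandSans. */
  @font-face {
    font-family: "BrandSans-fallback";
    src: local("Arial");
    size-adjust: 105%; /* illustrative; measure against the real font */
  }
  body {
    font-family: "BrandSans", "BrandSans-fallback", sans-serif;
  }
</style>
```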

No server-side caching. Pages that are rebuilt on every request when they could be served from cache add unnecessary latency. Static site generation or incremental static regeneration for pages that don't require real-time data is almost always the right choice.
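At the HTTP level this comes down to cache headers. A hedged sketch (the values are illustrative): let a CDN or shared cache hold the page for an hour, and serve a stale copy while revalidating in the background rather than making a user wait for a rebuild.

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: public, s-maxage=3600, stale-while-revalidate=86400
```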

Building performance into the architecture

Performance is much easier to maintain when it's built into the development process rather than retrofitted after launch.

Performance budgets — limits on JavaScript bundle size, image sizes, and Lighthouse scores — set expectations before work begins. Lighthouse CI integrated into a deployment pipeline catches regressions before they reach production. Regular audits on real devices on real networks surface issues that only appear outside the optimised development environment.
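As an illustrative sketch of such a budget, Lighthouse CI reads its assertions from a lighthouserc.json file; the URL and thresholds below are assumptions to adapt, not recommendations.

```json
{
  "ci": {
    "collect": {
      "url": ["https://staging.example.com/"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "resource-summary:script:size": ["error", { "maxNumericValue": 200000 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }]
      }
    },
    "upload": { "target": "temporary-public-storage" }
  }
}
```

Wired into the deployment pipeline, a failed assertion blocks the merge, which is what turns a budget from a wish into a constraint.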

The goal is not a perfect Lighthouse score on desktop in Chrome. It's a fast experience for your actual users, on their actual devices, on their actual connections. Those two things are related but not identical, which is why real user monitoring (RUM) data is ultimately more valuable than synthetic lab scores.
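One common way to collect that RUM data is Google's web-vitals library. A minimal sketch, assuming a /rum collection endpoint and a CDN import (both are placeholders for your own setup):

```html
<script type="module">
  // Assumes the web-vitals library (v4) loaded from a CDN.
  import { onLCP, onINP, onCLS } from "https://unpkg.com/web-vitals@4?module";

  // Send each metric to a hypothetical /rum endpoint;
  // sendBeacon keeps working even as the page unloads.
  function report(metric) {
    const body = JSON.stringify({ name: metric.name, value: metric.value });
    navigator.sendBeacon("/rum", body);
  }

  onLCP(report);
  onINP(report);
  onCLS(report);
</script>
```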

A site that scores 92 on Lighthouse and loads in 1.2 seconds for the median user on a median connection is doing its job. Getting from 92 to 97 matters much less than getting the median load time from 3.8 seconds to 1.2 seconds. Keep the commercial objective in front of you.

Found this useful?

We help companies put these frameworks into practice. Let's talk about your situation.