The need for fast and responsive applications has never been greater because of the ongoing shift from desktop to mobile. At the same time, web applications keep growing in complexity and size, and load times keep rising with them. It is therefore clear why webpage performance is a more popular topic today than ever before.
This article aims to give a practical introduction to the whys and hows of web performance, without getting lost in the depth or breadth of this massive topic.
# Why performance matters
The time it takes for a service to become usable, as well as its general responsiveness, weigh heavily on the user's perception of that service. Helpful features, great design and other prominent characteristics all become irrelevant when an online service is so slow that users navigate away.
You can build the best web application in the world, but be mindful that each user will have a specific amount of time they are willing to invest in your service to solve their problems. Exceed that amount, and you risk losing them to a different, more performant solution. This is even truer for new users, who haven't yet been given proof of the quality of your service, and are essentially investing their time up-front, hoping for a return.
# A competitive differentiator
There is a brighter side to this topic: if low performance can sink an online platform, high performance can very well help it rise to the top. Speed and responsiveness can be a differentiating characteristic for a service, prompting users to choose it over the competition. An investment in this area will therefore almost always pay off. Some notable real-world examples from well-known businesses include:
- Pinterest decreasing wait time for their users, increasing both traffic and conversions.
- Zalando applying small improvements in load time and finding a direct correlation with increased revenue per session.
- The BBC discovering that every extra second that a page took to load led to 10% of users leaving the page.
# Measuring performance
Given the importance of page performance, it is no coincidence that browsers expose a wealth of performance metrics. Being aware of how your application scores against these over time provides the feedback you need to keep it performant for your users. Several approaches can be combined to achieve the best results:
- Real user monitoring to understand what performance actual end-users of your service are experiencing.
- Synthetic monitoring to proactively gather intel on service performance, as well as to find issues before users stumble into them.
- Performance testing to avoid releasing performance regressions to production in the first place.
- Regular audits to get an overview of your page's performance and suggestions on how to improve it, e.g. with tools such as Google Lighthouse.
# Performance with headless tools
As much as we should strive to build performant applications, we should also commit to monitoring and testing performance, enabling continuous feedback and rapid intervention in case of degradation. Puppeteer and Playwright give us a great toolkit to power both synthetic monitoring and performance testing, including:
- Access to the Web Performance APIs, especially PerformanceNavigationTiming and PerformanceResourceTiming.
- When testing against Chromium, access to the Chrome DevTools Protocol for traffic inspection, network emulation and more.
- Easy interoperability with performance libraries from the Node.js ecosystem.
# Web Performance APIs
The Navigation Timing and the Resource Timing performance APIs are W3C specifications. The MDN docs define the scope of both very clearly:
> Navigation timings are metrics measuring a browser's document navigation events. Resource timings are detailed network timing measurements regarding the loading of an application's resources. Both provide the same read-only properties, but navigation timing measures the main document's timings whereas the resource timing provides the times for all the assets or resources called in by that main document and the resources' requested resources.
We can use the Navigation Timing API to retrieve timestamps of key events in the page load timeline.
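A minimal sketch of how this might look with Playwright (the target URL is a placeholder; the same `page.evaluate` call works in Puppeteer). Performance entries do not serialize directly across the browser boundary, so we round-trip them through JSON:

```javascript
// Picks a few key milestones out of a PerformanceNavigationTiming entry.
// All values are in milliseconds, relative to the start of navigation.
function summarizeNavigation(entry) {
  return {
    ttfb: entry.responseStart, // time to first byte
    domContentLoaded: entry.domContentLoadedEventEnd,
    load: entry.loadEventEnd,
  };
}

async function main() {
  const { chromium } = require('playwright');
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com'); // placeholder URL

  // Serialize the navigation entry inside the page, parse it back in Node.js.
  const [entry] = JSON.parse(
    await page.evaluate(() =>
      JSON.stringify(performance.getEntriesByType('navigation'))
    )
  );
  console.log(summarizeNavigation(entry));

  await browser.close();
}

main().catch(console.error);
```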
The Resource Timing API allows us to zoom in on single resources and get accurate information about how quickly they are being loaded. For example, we could specifically look at our website's logo.
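A sketch of that lookup, again assuming Playwright; the URL and the `logo.svg` asset name are hypothetical:

```javascript
// Finds the first resource timing entry whose URL contains the given fragment.
function findResource(entries, nameFragment) {
  return entries.find((entry) => entry.name.includes(nameFragment));
}

async function main() {
  const { chromium } = require('playwright');
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com'); // placeholder URL

  // Round-trip the resource entries through JSON to serialize them.
  const entries = JSON.parse(
    await page.evaluate(() =>
      JSON.stringify(performance.getEntriesByType('resource'))
    )
  );

  const logo = findResource(entries, 'logo.svg'); // hypothetical asset name
  if (logo) {
    console.log(`Logo loaded in ${logo.duration}ms`);
  }

  await browser.close();
}

main().catch(console.error);
```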
# Chrome DevTools for performance
The Chrome DevTools Protocol offers many great performance tools for us to leverage together with Puppeteer and Playwright.
One important example is network throttling, through which we can simulate the experience of users accessing our page with different network conditions.
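A sketch of network throttling through a raw CDP session in Playwright (Chromium only; the URL and throughput numbers are illustrative, loosely modeled on a slow connection):

```javascript
// Converts megabits per second to the bytes-per-second unit the CDP expects.
function mbpsToBytesPerSecond(mbps) {
  return (mbps * 1_000_000) / 8;
}

async function main() {
  const { chromium } = require('playwright');
  const browser = await chromium.launch();
  const page = await browser.newPage();

  // Open a CDP session and apply emulated network conditions.
  const client = await page.context().newCDPSession(page);
  await client.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: 150, // additional round-trip time in ms
    downloadThroughput: mbpsToBytesPerSecond(1.6),
    uploadThroughput: mbpsToBytesPerSecond(0.75),
  });

  const start = Date.now();
  await page.goto('https://example.com'); // placeholder URL
  console.log(`Load took ${Date.now() - start}ms on a throttled connection`);

  await browser.close();
}

main().catch(console.error);
```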
The DevTools Protocol is quite extensive. We recommend exploring its documentation to get a comprehensive overview of its capabilities.
# Additional performance libraries
Lighthouse can easily be used programmatically with Playwright and Puppeteer to gather values and scores for different metrics, like Time To Interactive (TTI).
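A sketch of this with Puppeteer: we start Chromium with a remote debugging port and point Lighthouse at it. The URL and port number are placeholders, and recent Lighthouse versions are ESM-only, hence the dynamic import:

```javascript
// Pulls the numeric value of a single audit out of a Lighthouse result.
function extractMetric(lhr, auditId) {
  return lhr.audits[auditId].numericValue;
}

async function main() {
  const puppeteer = require('puppeteer');
  const { default: lighthouse } = await import('lighthouse');

  // Expose a debugging port so Lighthouse can connect to the browser.
  const browser = await puppeteer.launch({
    args: ['--remote-debugging-port=9222'],
  });

  const { lhr } = await lighthouse('https://example.com', {
    port: 9222,
    output: 'json',
  });

  // 'interactive' is the audit id behind Time To Interactive.
  console.log(`Time To Interactive: ${extractMetric(lhr, 'interactive')}ms`);

  await browser.close();
}

main().catch(console.error);
```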
All of the above examples can be run as standalone Node.js scripts.
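For instance, assuming a snippet is saved as `navigation-timing.js` (the file name is hypothetical), it can be executed with Node.js after installing the dependencies:

```shell
# Install the libraries used in the examples (versions not pinned).
npm install playwright puppeteer lighthouse

# Run any of the example scripts directly.
node navigation-timing.js
```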
# Further reading
- The comprehensive MDN Web Performance documentation
- web.dev's performance section
- Web Performance Recipes With Puppeteer by Addy Osmani
- Getting started with Chrome DevTools Protocol by Andrey Lushnikov
- Get Started with Google Lighthouse