Nobody likes waiting for software. Snappy, responsive interfaces make us happy, and research shows there’s a relationship between responsiveness and attention1. But maintaining fast-feeling websites often requires tradeoffs. This might mean diverting resources from the development of new features, paying off technical debt, or other engineering work. The key to justifying such diversions is connecting the dots between performance and business outcomes—something we can do through measurement.
Over the last year, we’ve been rethinking the way we track page load performance on the web at Dropbox. After identifying a few gaps in our existing metrics, we decided we needed a more objective, user-focused way to define page load performance so that we could more reliably and meaningfully compare experiences across products. We thought a relatively new page load metric called Time To Visually Complete (TTVC) could work well.
There was just one problem: Browsers don’t yet report the moment a page becomes visually complete. If we wanted to adopt TTVC as our new primary performance metric, we would have to fill that gap. So we built a small library to allow us to track TTVC as our users experience it in the real world. That library is @dropbox/ttvc—and we’re excited to be open-sourcing this work!
Advantages of TTVC
When monitoring a page load, there are several useful milestones (Google’s Web Vitals project has recently popularized a few). Each of these metrics is measured from the moment the browser first issues a new request; a sketch of how the browser-reported milestones can be read follows the list:
- Time to First Byte (TTFB): The time it takes for the web server to deliver the first byte to the browser
- First Contentful Paint (FCP): The timestamp of the first render frame with visible content
- Largest Contentful Paint (LCP): The timestamp of the render frame which introduced the largest visible block-level element
- Time to Visually Complete (TTVC): The time of the last visible paint event; after this point, nothing on the user’s screen should change without user input
- Time to Interactive (TTI): The time at which the page becomes consistently responsive to user input. This is a less well-defined milestone, but is sometimes calculated as the point when the CPU and network both become idle
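For context, the first three of these milestones can already be read from standard browser APIs. Here is a minimal sketch (not part of @dropbox/ttvc) using Navigation Timing and PerformanceObserver:
// A minimal sketch of reading the browser-reported milestones with
// standard APIs (not part of @dropbox/ttvc).

// Time to First Byte comes from the navigation timing entry.
const [navigation] = performance.getEntriesByType('navigation');
console.log('TTFB:', navigation.responseStart);

// First Contentful Paint is reported as a 'paint' entry.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name === 'first-contentful-paint') {
      console.log('FCP:', entry.startTime);
    }
  }
}).observe({type: 'paint', buffered: true});

// Largest Contentful Paint may report several candidates; the last one
// observed before user input is the final value.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  console.log('LCP candidate:', entries[entries.length - 1].startTime);
}).observe({type: 'largest-contentful-paint', buffered: true});
There is no equivalent browser-reported entry for TTVC, which is the gap our library fills.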
A few years ago, Dropbox made a big investment in aligning on and optimizing our web product for Largest Contentful Paint (LCP). This was successful: by isolating and prioritizing our core UI elements, we were able to present usable interfaces much more quickly.
However, focusing narrowly on LCP sometimes came at the cost of page stability and secondary content and features. Prioritizing the largest element on your page means de-prioritizing secondary content. This is often why users experience the dreaded content jump.
When re-assessing the situation last year, we decided that a good way out was to identify a more objective, user-focused metric to align on: Time to Visually Complete (TTVC). With this metric, a page is considered visually complete at the moment that the pixels within the viewport finish rendering (i.e. stop changing). This is easy to develop an intuition for, it’s a meaningful milestone for the user, and it encourages layout stability.
Once we knew TTVC was a good fit for our requirements, we had to figure out how to measure it.
Measuring TTVC in the field
The most straightforward way to capture any performance metric is to run a lab test. This means setting up a device—preferably one where CPU, memory, and network resources are consistent between tests—loading a web page, recording the load, and identifying the timestamp for each milestone.
Of course, a lab environment is never going to accurately represent the broad range of devices, usage patterns, and network conditions your users will face in the wild. To capture that information, you really want to be field testing—sometimes referred to as Real User Monitoring (RUM).
There are a variety of tools available today that help you capture, collect, and monitor page load metrics as your customers really experience them. While several existing tools offer automated lab testing of TTVC, we could not find any that supported field measurement.
Fortunately, our team found that the addition of two recent browser APIs—MutationObserver and IntersectionObserver—give us a way to approximate this pretty well, and without much overhead! This gave us the confidence to try building a new measurement library for field testing of TTVC.
TTVC can only be captured retroactively. This means we need to know when the page is done loading. Only then can we look backward, identify the time of the last visible update, and finally, report it. We consider the page done when network and main thread activity have been simultaneously idle for at least two seconds.
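As a rough illustration, the “done” check might look something like the sketch below. This is a simplification, not the library’s actual implementation; pendingRequests is a hypothetical counter tracking in-flight network requests, and the real logic (described next) is more careful.
// A simplified sketch of the idle heuristic, not the library's actual
// implementation. `pendingRequests` is a hypothetical counter that is
// incremented and decremented around every in-flight network request.
let pendingRequests = 0;

function whenIdle(callback) {
  requestIdleCallback(() => {
    if (pendingRequests > 0) {
      // The network is still busy; check again shortly.
      setTimeout(() => whenIdle(callback), 200);
      return;
    }
    // CPU and network both look quiet; require that to last 2 seconds.
    setTimeout(() => {
      if (pendingRequests === 0) {
        callback();
      } else {
        whenIdle(callback);
      }
    }, 2000);
  });
}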
There are actually quite a few things a webpage can do that might modify the pixels in your viewport. It might load stylesheets, fonts, or images, or it could perform DOM mutations or canvas paint events (among other things). In the interest of minimizing overhead, we only monitor two types of updates: DOM mutations and image loading2. In practice, we have found this to capture the vast majority of use cases.
With this in mind, the @dropbox/ttvc library implements the following three components (a simplified sketch of the second follows the list):
- requestAllIdleCallback: To detect that the browser is idle, we implemented a new function, requestAllIdleCallback. This wraps the browser API requestIdleCallback and combines it with some clever load event instrumentation to identify periods of network and CPU inactivity
- InViewportMutationObserver: By combining MutationObserver and IntersectionObserver, we can construct an InViewportMutationObserver. Using a MutationObserver instance, we first detect and enqueue mutation events for processing by IntersectionObserver. The IntersectionObserver instance can report whether the node(s) associated with each mutation intersect with the viewport. Finally, we surface the timestamp associated with each mutation, and keep only the most recent value
- InViewportImageObserver: To track loading images, we implement a similar structure, called InViewportImageObserver. This observer uses a single, capture-phase load event listener on the document as the initial source of events, which we then feed to IntersectionObserver
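To make the MutationObserver + IntersectionObserver combination more concrete, here is a stripped-down sketch of the idea. It only handles added nodes and is not the library’s actual InViewportMutationObserver:
// Record the timestamp of the most recent mutation whose target is
// visible in the viewport. (A stripped-down sketch of the idea, not
// the library's actual InViewportMutationObserver.)
let lastVisibleChange = 0;
const mutationTimestamps = new Map();

const intersectionObserver = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    const timestamp = mutationTimestamps.get(entry.target);
    if (entry.isIntersecting && timestamp) {
      lastVisibleChange = Math.max(lastVisibleChange, timestamp);
    }
    intersectionObserver.unobserve(entry.target);
  }
});

const mutationObserver = new MutationObserver((mutations) => {
  for (const mutation of mutations) {
    for (const node of mutation.addedNodes) {
      if (node instanceof Element) {
        // Remember when this node changed, then ask IntersectionObserver
        // whether it actually intersects the viewport.
        mutationTimestamps.set(node, performance.now());
        intersectionObserver.observe(node);
      }
    }
  }
});

mutationObserver.observe(document.documentElement, {
  childList: true,
  subtree: true,
});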
Once we have visibility into DOM mutations, image loading, and browser activity, assembling the three pieces is straightforward. First we subscribe to loading images and mutations within the viewport. Next, we keep track of the most recent timestamp observed. And finally, we wait until the browser is idle, and report the most recent timestamp recorded.
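In terms of the sketches above, the assembly boils down to something like this (again, a simplification of what the library actually does):
// Continuing the sketches above: the in-viewport observers keep
// `lastVisibleChange` up to date, and `whenIdle` fires once the page
// has settled. TTVC is simply the latest visible change we recorded.
function measureTTVC(report) {
  whenIdle(() => {
    report(lastVisibleChange);
  });
}

measureTTVC((ttvc) => console.log('approximate TTVC:', ttvc));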
However, there are still some edge cases we need to account for (a sketch of how the first two might be handled follows this list):
- User interaction. Once a user interacts with the page, we can no longer safely assume that visual changes are not the result of that interaction. In these cases, we simply abort the measurement and do not report TTVC. (We consider user interaction to consist of click/touch and keydown events. We do not consider scroll events to be user interaction, since it is fairly common practice to drive scroll positions programmatically on page load)
- Background tabs. Similar to user interaction, if a loading page is backgrounded, we have no guarantees that the browser will continue to execute code or fetch resources for that page. Rather than report very long load times that have no relation to the implementation of the page, we throw out the measurement
- Viewport sizes. With IntersectionObserver, we can instrument exactly what a user sees. This means that the content which is visible may change from device to device. Additionally, scrolling will impact which parts of a page are considered visible
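As a rough sketch of how the first two cases might be handled (the library does this internally; cancelMeasurement here is a hypothetical helper):
// A rough sketch of cancelling a measurement on interaction or when the
// tab is hidden. `cancelMeasurement` is a hypothetical helper; the
// library handles these cases internally.
let cancelled = false;
const cancelMeasurement = () => {
  cancelled = true;
};

// Click/touch and keydown count as interaction; scroll does not.
window.addEventListener('click', cancelMeasurement, {once: true});
window.addEventListener('keydown', cancelMeasurement, {once: true});

// If the tab is backgrounded mid-load, throw out the measurement.
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    cancelMeasurement();
  }
});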
Measuring TTVC in your own projects
$ npm install @dropbox/ttvc
The API is composed of two primary methods. Call init() as early in page load as possible to set up instrumentation. Then, call getTTVC() to subscribe to TTVC metric events.
Basic usage
import {init, getTTVC} from '@dropbox/ttvc';
// Call this as early in page load as possible to set up instrumentation.
init();
// Reports the last visible change for each navigation that
// occurs during the life of this document.
const unsubscribe = getTTVC((measurement) => {
console.log('TTVC:', measurement.duration);
});
Instrumenting AJAX requests
We monitor CPU activity and asset loading automatically. But to help avoid prematurely concluding that the page is done, we also export two helper functions that allow you to instrument AJAX requests in your application. Here’s a quick example illustrating how to use them to instrument the native fetch API.
import {incrementAjaxCount, decrementAjaxCount} from '@dropbox/ttvc';
// patch window.fetch
const nativeFetch = window.fetch;
window.fetch = (...args) => {
incrementAjaxCount();
return nativeFetch(...args).finally(decrementAjaxCount);
};
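If parts of your application still issue requests with XMLHttpRequest, a similar patch works there too. Here’s a sketch using the same helpers:
import {incrementAjaxCount, decrementAjaxCount} from '@dropbox/ttvc';

// patch XMLHttpRequest.prototype.send
const nativeSend = XMLHttpRequest.prototype.send;
XMLHttpRequest.prototype.send = function (...args) {
  incrementAjaxCount();
  // 'loadend' fires whether the request succeeds, fails, or is aborted.
  this.addEventListener('loadend', decrementAjaxCount, {once: true});
  return nativeSend.apply(this, args);
};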
For a complete walkthrough of the API and common usage patterns, check out the official documentation on NPM.
Single-page applications
One additional bonus of adopting TTVC: it turns out to be equally well-defined for traditional page loads and single-page app navigations! The only addition we needed to make to our library to support this was to allow applications to trigger a new measurement when starting a new single-page app navigation.
// app.js
import {start} from '@dropbox/ttvc';
import React, {useEffect} from 'react';
import ReactDOM from 'react-dom';
import {BrowserRouter, Route, Routes, useLocation} from 'react-router-dom';

// Placeholder route component for this example.
const Home = () => <p>Home</p>;

const App = () => {
  const location = useLocation();

  useEffect(() => {
    // In practice, pick one of the two options below.

    // Option 1: If you have access to the ttvc library, import it and
    // call start().
    start();

    // Option 2: Dispatch a custom 'locationchange' event. @dropbox/ttvc
    // subscribes to this and will call start() for you.
    // window.dispatchEvent(new Event('locationchange'));
  }, [location]);

  return (
    <div className="App">
      <h1>Welcome to React Router!</h1>
      <Routes>
        <Route path="/" element={<Home />} />
        {/* ... more routes */}
      </Routes>
    </div>
  );
};

ReactDOM.render(
  <BrowserRouter>
    <App />
  </BrowserRouter>,
  document.getElementById('root')
);
How you can get involved
In the future, we hope that browsers consider reporting TTVC to us directly. That will always be more performant and more accurate than anything we can do with JavaScript. But until then, @dropbox/ttvc provides a mechanism for computing the TTVC metric in real time, allowing us to incorporate this objective, user-focused milestone into our performance monitoring.
We are excited for this chance to share our work with the open-source community. If you’d like to measure TTVC in your own projects, you can find @dropbox/ttvc on npm and GitHub.
While this should still be considered beta software, we are confident enough in our work to have begun setting TTVC goals in our quarterly planning processes. If you’d like to get involved, we would be very happy to see bug reports or contributions that help us improve the accuracy and performance of the library.
Does building innovative products, experiences, and infrastructure excite you? Come build the future with us! Visit dropbox.com/jobs to see our open roles, and follow @LifeInsideDropbox on Instagram and Facebook to see what it's like to create a more enlightened way of working.
1 https://www.nngroup.com/articles/response-times-3-important-limits/
2 If we only tracked DOM mutations, we might report TTVC prematurely on a page with a lot of image content (imagine a photo gallery, or the Netflix homepage).