Directly use Lighthouse in your Cypress test suites.
Lighthouse is a tool meant to run against a production bundle to compute performance and best-practices metrics, while Cypress is widely recommended for running tests against a development environment. While this combination seems a bit counter-intuitive, we can rely on the Cypress projects feature to run dedicated test suites against production bundles and get quick feedback on (or prevent regressions in) these metrics.
If you don't provide any argument to the cy.lighthouse command, the test will fail if at least one of your metrics is under 100.
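The "no arguments" behavior can be pictured as a simple threshold check. This is only an illustrative sketch of those semantics, not the plugin's actual implementation; the `scores` object is hypothetical and stands in for the results Lighthouse would return:

```js
// Illustration only: with no thresholds given, every metric
// is expected to reach 100. The `scores` object is made up.
const scores = { performance: 98, accessibility: 100, seo: 100 };

const failures = Object.entries(scores)
  .filter(([, score]) => score < 100)
  .map(([metric, score]) => `${metric}: ${score} (expected 100)`);

console.log(failures.length === 0 ? "pass" : `fail -> ${failures.join(", ")}`);
// fail -> performance: 98 (expected 100)
```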
You can set your own thresholds for the different metrics by passing an object as an argument to the cy.lighthouse command:
```js
it("should verify the lighthouse scores with thresholds", function () {
  cy.lighthouse({
    performance: 85,
    accessibility: 100,
    "best-practices": 85,
    seo: 85,
    pwa: 100,
  });
});
```

If the Lighthouse analysis returns scores below the ones set as arguments, the test will fail.
You can also assert on only a subset of the metrics. For example, the following test only verifies the performance score and the first contentful paint:
```js
it("should verify the lighthouse scores ONLY for performance and first contentful paint", function () {
  cy.lighthouse({
    performance: 85,
    "first-contentful-paint": 2000,
  });
});
```

This test will fail only if the performance score reported by Lighthouse is under 85 or the first contentful paint takes longer than 2000 milliseconds.
While I would recommend making per-test assertions, it's possible to define global thresholds inside the cypress.json file as follows:
```json
{
  "lighthouse": {
    "performance": 85,
    "accessibility": 50,
    "best-practices": 85,
    "seo": 85,
    "pwa": 50
  }
}
```

Note: These values are overridden by the per-test ones.
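One way to picture that override, assuming per-key precedence (the plugin's exact merge behavior isn't shown here, and the values are made up for illustration):

```js
// Global thresholds, as they would sit under the "lighthouse" key in cypress.json.
const globalThresholds = { performance: 85, accessibility: 50, seo: 85 };

// Thresholds passed directly to cy.lighthouse() in one test.
const perTestThresholds = { accessibility: 100 };

// Assumption: per-test values win key by key over the global configuration.
const effective = { ...globalThresholds, ...perTestThresholds };

console.log(effective); // { performance: 85, accessibility: 100, seo: 85 }
```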
You can also pass arguments directly to the Lighthouse module using the second and third arguments of the command:
```js
const thresholds = {
  /* ... */
};

const lighthouseOptions = {
  /* ... your lighthouse options */
};

const lighthouseConfig = {
  /* ... your lighthouse configs */
};

cy.lighthouse(thresholds, lighthouseOptions, lighthouseConfig);
```

With Lighthouse 6, we're now able to assert on both categories and audits.
The categories are the ones we're used to with Lighthouse; each is given a score between 0 and 100:
- performance
- accessibility
- best-practices
- seo
- pwa
The audits are finer-grained measurements, like the first meaningful paint, and most of their thresholds are expressed in milliseconds:
- first-contentful-paint
- largest-contentful-paint
- first-meaningful-paint
- load-fast-enough-for-pwa
- speed-index
- estimated-input-latency
- max-potential-fid
- server-response-time
- first-cpu-idle
- interactive
- mainthread-work-breakdown
- bootup-time
- network-rtt
- network-server-latency
- metrics
- uses-long-cache-ttl
- total-byte-weight
- dom-size

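Because category scores go from 0 to 100 (higher is better) while millisecond-based audits are better when lower, a threshold check has to compare the two kinds in opposite directions. Here is a small sketch of that distinction — the `MS_AUDITS` set and the `meetsThreshold` helper are hypothetical, not part of the plugin's API:

```js
// Hypothetical helper illustrating the two comparison directions.
// Categories (0-100): fail when the score is BELOW the threshold.
// Millisecond audits: fail when the measured value is ABOVE the threshold.
const MS_AUDITS = new Set([
  "first-contentful-paint",
  "largest-contentful-paint",
  "speed-index",
]);

function meetsThreshold(metric, actual, threshold) {
  return MS_AUDITS.has(metric)
    ? actual <= threshold // milliseconds: lower is better
    : actual >= threshold; // category score: higher is better
}

console.log(meetsThreshold("performance", 90, 85)); // true
console.log(meetsThreshold("first-contentful-paint", 2500, 2000)); // false
```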