WPTprint is a web platform designed to run the WPT test suite against CSS Print tools.
It can:
- launch the test suite and compare the results to the references;
- display the tests and references side by side, as generated by one CSS Print tool and the current browser;
- let users manually check the results of the tests;
- generate a standard JSON file containing the results;
- automatically generate a new JSON file when a new version of the test suite and/or the tool is available.
We generate the same JSON files as the WPT platform; we should definitely keep this feature for interoperability.
The overall tool interface is pretty solid, with CLI commands for rendering updates (that are generally launched asynchronously) and a web interface for visual checks.
The logic for status updates is technically a bit complex but easy to use in practice. The first time we render a test, we compare it to the reference when one exists, and let users confirm the result visually. When we update WPT or a tool, we compare to the previous rendering: if the renderings are the same, we keep all the previous information; if they changed, we update the status and remove the "manually set" tag.
The web interface is easy to understand. Installing tools with one click feels a bit like magic. Having keyboard shortcuts to click on these pass/fail buttons is a must-have.
Having tool-specific code in very short files in the "tools" folder is an achievement. Python is great for that: a simple list of constants and functions yields code that is easy to read and to maintain. Let’s keep this simple!
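As a sketch of what such a short tool file could look like (every name below is illustrative, not the actual WPTprint API), a module may be little more than a few constants plus a render function:

```python
# Hypothetical sketch of a file in the "tools" folder.
# Names, tags and version numbers here are illustrative only.
import subprocess
from pathlib import Path

NAME = "WeasyPrint"
# Tags whose tests would be marked "n/a" for this tool (assumed field).
IGNORED_TAGS = {"scripting", "interactive"}


def versions() -> list[str]:
    """Return installable versions, e.g. fetched from PyPI or Git tags."""
    return ["62.3", "63.1"]  # placeholder list


def render(test_path: Path, pdf_path: Path) -> None:
    """Render one test to a PDF with the installed tool."""
    subprocess.run(
        ["weasyprint", str(test_path), str(pdf_path)],
        check=True, timeout=10,
    )
```

The point is that adding a new tool only means writing one such small, flat module.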
The technical architecture feels wrong. Python was easy to try things quickly and the whole project was a pretext to learn htmx (which is really cool by the way). I wouldn’t do it that way now: we should have an API to JSON files, with a nice web app in JS and a CLI in… whatever.
The web interface is limited and could be much more beautiful and usable.
The dirty client-server system awfully coded for WeasyPrint should be shared by all tools, in a real separate file: it avoids relaunching the tool for each test.
Client-side test filtering. On names, on status, on tags, on everything. And on differences with other tools, and on other versions of the tool.
Nice graphs and tables comparing different versions of the same tool, to see all the new features we implemented and find regressions.
Comparing more than 2 tools side by side. And a cool image diff tool. And a cool before/after slider on images to compare renderings.
A nice HTML source with highlighting. And links to the related specifications.
A web interface to launch the CLI features.
All the tools easily downloadable:
- WeasyPrint,
- PagedJS,
- Vivliostyle,
- Prince (evaluation version),
- PDFreactor (evaluation version).
Requirements:
- Python 3.9+,
- Git,
- Linux (should work on other OSes with minor tweaks),
- Poppler’s pdftoppm.
Requirements for tools:
- WeasyPrint requires Pango and its dependencies,
- PagedJS and Vivliostyle require NodeJS and Chromium dependencies,
- PDFreactor requires Java.
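The pdftoppm requirement hints at how renderings get compared: each generated PDF is rasterized to PNG pages before any pixel comparison. A minimal sketch of such a step (the flag choices and the 96 dpi value are assumptions, not WPTprint's actual settings):

```python
# Sketch: rasterize a PDF with Poppler's pdftoppm before comparison.
import subprocess
from pathlib import Path


def pdftoppm_command(pdf: Path, out_prefix: Path, dpi: int = 96) -> list[str]:
    """Build the pdftoppm invocation writing one PNG per page."""
    return ["pdftoppm", "-png", "-r", str(dpi), str(pdf), str(out_prefix)]


def rasterize(pdf: Path, out_prefix: Path) -> list[Path]:
    """Rasterize a PDF and return the generated PNG pages, sorted."""
    subprocess.run(pdftoppm_command(pdf, out_prefix), check=True)
    # pdftoppm names pages <prefix>-1.png, <prefix>-2.png, ...
    return sorted(out_prefix.parent.glob(out_prefix.name + "-*.png"))
```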
Install steps:
# Clone this repository
git clone https://github.com/CourtBouillon/wptprint.git
# Enter the WPTprint folder
cd wptprint
# Clone the test suite
git clone https://github.com/web-platform-tests/wpt.git
# Create test manifest
./wpt/wpt manifest
# Create a virtual environment
python3 -m venv venv
# Install dependencies
venv/bin/pip install .
# Launch WPTprint
venv/bin/flask --app wptprint run
# Enjoy!
open http://localhost:5000/

The first thing you’ll probably want to do is install a tool. The home page lists the different tools and lets you install different versions of each one. Select a version, click the "Install" button, wait a bit, and you’re done! Click the version number to open the test suite for this tool.
The test suite page lets you select a test and view its rendering. By default, the rendering of the current tool is on the left and the reference rendered by the same tool on the right. You can choose to render the test or the reference with another tool, or with your browser, using the select box above the renderings.
You can save the result of the test (pass, fail, n/a or error) by clicking on the corresponding button at the bottom. These buttons can be triggered with the browser’s generic keyboard shortcuts for access keys (Alt + Shift + P with Firefox or Chrome on Linux).
The status of each test is displayed next to the test name:
- ✔️ pass;
- ❌ fail;
- 🟡 n/a;
- ⚠️ error;
- ❔ unknown.
An eye 👁️ means that the result has been manually set.
It is possible to generate automatic test results, based on the test reference. You can render all the tests of the test suite, compare them to the references, and store the result of this automatic test.
venv/bin/flask --app wptprint generate <tool> <version>
If the test contains one of the tags marked as ignored for a tool, or if it’s unavailable in the test suite (meaning there’s a bug in the suite), it’s marked as n/a.
If the test crashes or takes longer than the timeout (10 seconds), it’s marked as error.
If the test and the reference renderings are exactly the same, the test is marked as pass; otherwise it’s marked as fail.
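The three rules above can be sketched as one pure function (the names and the exact comparison are illustrative; WPTprint compares rasterized pages, here reduced to byte strings):

```python
# Sketch of the automatic status rules; all names are hypothetical.
def classify(test_pages, ref_pages, *, ignored=False,
             crashed=False, timed_out=False):
    """Return the automatic status for one test.

    test_pages / ref_pages: rasterized pages as byte strings,
    or None when the test is unavailable in the suite.
    """
    if ignored or test_pages is None:
        return "n/a"
    if crashed or timed_out:
        return "error"
    # Exact match with the reference means the test passes.
    return "pass" if test_pages == ref_pages else "fail"
```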
If you update a tool and want to keep the results from the previous version, you can first generate the results for the new version, and then update the results.
venv/bin/flask --app wptprint generate <tool> <new-version>
venv/bin/flask --app wptprint update <tool> <old-version> <new-version>
If the results are the same between the two versions, the status is kept, including the auto/manual flag. If they are different, the result of the generation is kept with the manual flag unset.
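The update rule above can be sketched as a small merge function (the dict fields are hypothetical, not the actual stored JSON schema):

```python
# Sketch of merging results between two tool versions.
def merge_result(old, new, renderings_equal):
    """Merge old- and new-version results per the update rule.

    old / new: dicts like {"status": "pass", "manual": True}.
    renderings_equal: True when both versions produced identical pages.
    """
    if renderings_equal:
        # Same rendering: keep the previous status and its manual flag.
        return dict(old)
    # Different rendering: take the freshly generated status
    # and drop the "manually set" flag.
    return {"status": new["status"], "manual": False}
```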
You can update the test suite and automatically update the results of all the tools.
venv/bin/flask --app wptprint update-wpt