Heya,
I'd like to outline a way to test visualizations.
After some googling (1), (2), I've decided that a good solution would be to use a py.test plugin, nbval, to validate Jupyter notebooks.
The plugin adds functionality to py.test to recognise and collect Jupyter notebooks. The intended purpose of the tests is to determine whether executing the stored inputs of an .ipynb file reproduces the stored outputs, while also ensuring that the notebooks run without errors.
The tests are designed to ensure that Jupyter notebooks (especially those used for reference and documentation) execute consistently.
Comparing each cell's output with the one stored in the notebook would cause all cells to fail, because the stored output contains the memory address of the figure object (which is unique to each run, so it can't be compared).
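That said, nbval can sanitize outputs with regex replacements before comparing them, via a config file passed with `--sanitize-with`. A sketch, assuming we strip hex memory addresses (the file name `doc_sanitize.cfg` and the exact regex are my own choices, not tested against our notebooks):

```ini
[regex1]
regex: 0x[0-9a-fA-F]{7,}
replace: MEMORY-ADDRESS
```

It would then be used as `py.test --nbval visualisations_tutorial.ipynb --sanitize-with doc_sanitize.cfg`, so runs would only differ where something other than the address changed.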
But we can at least make sure that the notebooks (with visualisations) run without errors.
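A minimal sketch of how this would be invoked (the notebook name is the one from the alternative test below; whether we run full output comparison or errors-only depends on the point above):

```shell
pip install nbval

# Compare each cell's output against the one stored in the notebook:
pytest --nbval visualisations_tutorial.ipynb

# Or only check that every cell executes without errors, ignoring
# output differences (our case, because of the memory addresses):
pytest --nbval-lax visualisations_tutorial.ipynb
```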
The only drawback I see is adding a new package requirement, nbval.
An alternative solution to consider is mentioned here. The idea is to add the following test:
Alternative test
```python
import subprocess
import tempfile


def _exec_notebook(path):
    # Execute the notebook with nbconvert, writing the result to a
    # throwaway temporary file; check_call raises CalledProcessError
    # if any cell fails.
    with tempfile.NamedTemporaryFile(suffix=".ipynb") as fout:
        args = ["jupyter", "nbconvert", "--to", "notebook", "--execute",
                "--ExecutePreprocessor.timeout=1000",
                "--output", fout.name, path]
        subprocess.check_call(args)


def test():
    _exec_notebook('visualisations_tutorial.ipynb')
```
I prefer the first solution, using the py.test plugin.
By the way, do we care about how long Travis needs to run the tests? In both cases I will create a Jupyter notebook with visualisations, but I can't figure out how "long" this notebook should be. Should I include every possible way of calling each visualisation function, or would a few different calls (around 5-7) of every function be sufficient?
Sources: