Description
I love the idea of simply using a process in a Docker container as a kernel.
I see one issue, though:
When you run pip within this kernel, you install packages inside the Docker container, right?
Some pip packages ship with Jupyter extensions.
If you install them in the virtual environment inside the container, the Jupyter instance running "outside" has no access to them.
If I install something via ! pip in JupyterLab and then run docker diff on the container running my kernel, I see that files were installed to e.g.
/usr/local/lib/python3.7/site-packages/
/usr/local/etc/jupyter/nbconfig/notebook.d/widgetsnbextension.json
/usr/local/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/schemas/@jupyter-widgets/jupyterlab-manager/plugin.json
(all inside the container).
As far as I understand, I can supply additional directories where JupyterLab searches for extensions.
The documentation mentions the JUPYTER_CONFIG_PATH and JUPYTER_PATH environment variables.
The proposal would be:
Mount a host directory as a volume in the container running the kernel, and then modify the environment variables (or adjust the search paths via some other configuration setting), so that the Jupyter instance on the host gains access to the extensions installed this way.
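A minimal sketch of what I have in mind, assuming a hypothetical image name (my-kernel-image) and an arbitrary host directory; the docker run command is only echoed here as a dry run, so remove the echo to actually start the container:

```shell
#!/bin/sh
# Host directory that will back the container's Jupyter data directory.
# The exact location is an assumption; any writable host path would work.
HOST_EXT_DIR="$HOME/.docker-kernel/jupyter"
mkdir -p "$HOST_EXT_DIR"

# Mount the host directory over the path where pip inside the container
# places Jupyter data files (as observed via `docker diff` above).
# `my-kernel-image` is a hypothetical image name; echoed as a dry run.
echo docker run -d \
  -v "$HOST_EXT_DIR:/usr/local/share/jupyter" \
  my-kernel-image

# On the host, prepend the shared directory to Jupyter's search path so
# extensions installed inside the container become discoverable outside.
export JUPYTER_PATH="$HOST_EXT_DIR${JUPYTER_PATH:+:$JUPYTER_PATH}"
echo "JUPYTER_PATH=$JUPYTER_PATH"
```

The config side (e.g. /usr/local/etc/jupyter, where widgetsnbextension.json ended up) could presumably be handled the same way with a second volume and JUPYTER_CONFIG_PATH.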