Remote Rendering with PBRT v4
These notes explain how to configure your computer and Matlab environment to render using a GPU on a remote machine. We assume that you have access to a machine with a GPU and a Docker image that will use it to render with PBRT.
- At present, we use the syncIntoDocker branch of iset3d-v4. This will move into the main branch later.
- You need key-validated ssh access to the remote machine (see below for how to set this up).
- You need to create a Docker context on your machine that points at your ssh endpoint. This is done by running
docker context create --docker host=ssh://<username>@<hostname> remote-render
A quick way to verify the context is shown just after this list.
- You need to know which Docker image is appropriate on the rendering server; that gets plugged into the render settings. For example, Vistalab users on the machine in our distinguished colleague's room use digitalprodev/pbrt-v4-gpu-ampere-mux, as shown in the example below.
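To confirm that the new context exists and can reach the remote daemon, here is a minimal check from Matlab (assuming the context name remote-render from the command above):
% Sanity checks for the Docker context; the context name matches the example above
[status, result] = system('docker context ls');                 % the new context should appear in the list
disp(result);
[status, result] = system('docker --context remote-render ps'); % contacts the remote daemon over ssh
disp(result);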
"Help dockerWrapper" is one good place for making sure you have the right parameters. We might shift to keeping the master in piRender. The examples here are current as of December 17 2021.
If you get the parameters right, and the server is up and listening, your renders should run there.
s_demoChess is a sample script that creates many images from the ChessSet scene and generates an illustrative video.
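For instance, once the render preference described next is set, a minimal remote render looks something like this (a sketch using standard ISET3d calls; the scene name matches s_demoChess):
thisR = piRecipeDefault('scene name', 'ChessSet'); % load the ChessSet recipe
piWrite(thisR);                                    % write the PBRT scene files locally
scene = piRender(thisR);                           % render on the remote GPU via the Docker context
sceneWindow(scene);                                % view the result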
To run on a particular machine using the Docker GPU version, you set up a Matlab 'pref'. The basic form is this:
renderString = {'gpuRendering', true, ...
    'remoteMachine', <machine name>, ...    % DNS name of the rendering server
    'renderContext', <docker context>, ...  % the context created above, e.g. 'remote-render'
    'remoteImage', 'digitalprodev/pbrt-v4-gpu-ampere-mux', ...
    'remoteRoot', <homedir>, ...            % your home directory on the server
    'remoteUser', uName, ...                % your user name on the server
    'localRoot', <local root>, ...          % should only be needed for Windows/WSL
    'whichGPU', <#>};                       % omit or set to -1 for any, otherwise from 0 up
setpref('docker', 'renderString', renderString);
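As a concrete illustration, with hypothetical machine and user names filled in (none of these values refer to a real server):
renderString = {'gpuRendering', true, ...
    'remoteMachine', 'render01.example.edu', ...  % hypothetical server name
    'renderContext', 'remote-render', ...         % the context created earlier
    'remoteImage', 'digitalprodev/pbrt-v4-gpu-ampere-mux', ...
    'remoteRoot', '/home/wandell', ...            % hypothetical home directory on the server
    'remoteUser', 'wandell', ...
    'whichGPU', 0};
setpref('docker', 'renderString', renderString);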
You need to be able to issue the command
ssh <username>@<machinename>
and not get asked for a password. The ssh connection is used both to reach the Docker daemon on the server and for the rsync of data files.
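A quick way to test this from Matlab is to run ssh in batch mode, which fails rather than prompting for a password (user and machine names are placeholders):
% BatchMode=yes makes ssh fail instead of asking for a password
[status, ~] = system('ssh -o BatchMode=yes <username>@<machinename> true');
if status == 0
    disp('Key-based ssh is working');
else
    disp('ssh still wants a password; follow the key setup below');
end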
Many people who connect using ssh already have the first element of this, a key, so you may not need this step. In this example we ran ssh-keygen anyway and told it not to overwrite the existing key.
wandell % ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/wandell/.ssh/id_rsa):
/Users/wandell/.ssh/id_rsa already exists.
Overwrite (y/n)? n
Then we need to copy the key to the machine we use.
wandell % ssh-copy-id wandell@<machine-name-here>
You will be asked for a password for that machine. Ask Brian/Zheng/Dave for the machines we use.
This is a single docker command. The renderContext should be the same one that you set in setpref.
docker context create --docker host=ssh://username@machineaddress renderContext
To get a list of running containers:
docker [--context <context-name>] ps
To see what's up on your Nvidia GPUs:
nvidia-smi
To keep track of things on a remote server:
ssh <remote-user>@<remote-server>
nvidia-smi -l
Once you've created a dockerWrapper object, you can use its gpuStatus() method to check the status of the remote GPU:
% For example:
renderPrefs = getpref('docker','renderString', {'gpuRendering', false});
ourDocker = dockerWrapper(renderPrefs{:});
[status, result] = ourDocker.gpuStatus();
There is also a class-level command to clean up the current remote docker context:
dockerWrapper.reset();
You may need to copy scene components, such as material and texture files, into the scene's directory. Otherwise ISET3d may simply put full local pathnames for them into the relevant .pbrt file, and pbrt running on the remote machine won't have them. Everything in the scene's directory is automatically copied to the rendering server by the remote rendering code.
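For example, a texture stored elsewhere on your machine can be copied into the scene folder before rendering (a sketch; the paths are hypothetical, and piRootPath is the standard ISET3d root function):
% Copy an external texture into the scene directory so the automatic
% sync carries it to the rendering server (hypothetical paths)
sceneDir = fullfile(piRootPath, 'local', 'ChessSet');
copyfile('/path/to/myTexture.png', sceneDir);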
Basically, there are three filesystems involved in remote rendering via Docker: the local iset3d-v4 filesystem, the remote machine's Linux filesystem, and the filesystem inside the Docker container. The first two are what they are, and our code needs to allow for that. The pbrt Docker images have traditionally been constructed with a /pbrt directory that holds the pbrt code and related utilities, and an /iset directory that holds the files needed for rendering.
In the “traditional” local case, only the specific scene to be rendered was mounted into /iset/iset3d-v4/local/.
But for performance, we added the ability to leave an image's container running and use it for multiple renders. That meant we needed to mount the entire /local directory and then populate specific scenes when they are required. So we figure out the local and remote scene paths and use rsync to move over any files that aren't already there. We also delete the /renderings folder on the server to get rid of any cruft from previous runs.
After the render, we pull back only the /renderings directory, to save time. The unwritten rule is that all outputs need to be in the renderings sub-folder. Typically this is a .exr file, although there could be others.
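Conceptually, the sync steps look like the sketch below; the actual dockerWrapper code differs in detail, and all names and paths here are placeholders:
% Push scene files that are not already on the server (placeholder paths)
localScene = fullfile(piRootPath, 'local', 'ChessSet');
system(['rsync -avz ' localScene ' <username>@<machinename>:iset3d-v4/local/']);
% ... the remote render runs here ...
% Pull back only the renderings folder
system(['rsync -avz <username>@<machinename>:iset3d-v4/local/ChessSet/renderings/ ' ...
    fullfile(localScene, 'renderings')]);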
In another bit of sleight of hand, we use a local CPU version of pbrt for further processing (typically imgtool --exr2bin or similar), since that saves round-tripping to the server and those conversions only need the CPU anyway.