Camera lightfield
- tls_cameraLightField.mlx - Illustrates how to build a light field camera with microlenses. -- Under Development for v4.
- Lightfield Tutorial Video - A video walkthrough of the Lightfield Tutorial.
We can simulate light field cameras based on a microlens array with multiple pixels behind each microlens. This relies on the omni camera model and special tools for adding a microlens array to the lens model.
We convert a normal lens file into a microlens file using the lenstool program inside the Docker container.
To see the options, run
docker run vistalab/pbrt-v3-spectral lenstool
The options are printed to your screen as
usage: lenstool <command> [options] <filenames...>
commands: convert insertmicrolens
convert options:
--inputscale <n> Input units per mm (which are used in the output). Default: 1.0
--implicitdefaults Omit fields from the json file if they match the defaults.
insertmicrolens options:
--xdim <n> How many microlenses span the X direction. Default: 16
--ydim <n> How many microlenses span the Y direction. Default: 16
--filmwidth <n> Width of target film in mm. Default: 20.0
--filmheight <n> Height of target film in mm. Default: 20.0
--filmtolens <n> Distance from film to back of main lens system (in mm). Default: 50.0
--filmtomicrolens <n> Distance from film to back of microlens. Default: 0.0
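As a quick sanity check on the defaults listed above, the implied microlens pitch is simply the film size divided by the number of microlenses. Here is a minimal sketch; the numbers mirror the documented defaults, and the calculation is illustrative, not part of lenstool:

```python
# Microlens pitch implied by the insertmicrolens defaults above.
film_width_mm, film_height_mm = 20.0, 20.0   # --filmwidth, --filmheight
xdim, ydim = 16, 16                          # --xdim, --ydim

pitch_x = film_width_mm / xdim    # mm per microlens in X
pitch_y = film_height_mm / ydim   # mm per microlens in Y
print(pitch_x, pitch_y)           # 1.25 1.25
```

With the 256 x 256 array used later in this page, the same film would give a much finer pitch (about 0.078 mm).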
The 'convert' option converts old 'dat' format lens files to the newer 'JSON' format.
The insertmicrolens command adds an array of microlenses to an existing lens description. It takes a main lens in JSON format and a microlens definition, which is usually quite simple (say, a lens with one spherical and one flat surface), and inserts copies of the microlens into the imaging lens file at positions defined by the parameters:
docker run vistalab/pbrt-v3-spectral lenstool insertmicrolens mainLens.JSON microLens.JSON outputOfCombined.JSON <parameters>
For example, to combine the double Gauss lens with an array of 256 x 256 microlenses over the sensor surface, we would use the command
docker run vistalab/pbrt-v3-spectral lenstool insertmicrolens dgauss.22deg.100.0mm.json 2elLens.json outputOfCombined.JSON <parameters>
On the ISET3d side, setting these additional recipe parameters implicitly creates a light field camera.
thisR.set('n microlens',[128 128]);
thisR.set('n subpixels',[7, 7]);
The 'n microlens' parameter specifies the number of rows and columns of microlenses (or pinholes) in the microlens array. The 'n subpixels' parameter specifies the number of pixels illuminated by each microlens (or pinhole). Together, these parameters set the spatial sampling of the spectral irradiance.
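The film resolution needed to hold this light field is the element-wise product of the two parameters. A small sketch of that arithmetic (illustrative only; the variable names are ours, not ISET3d's):

```python
# Spatial sampling implied by the recipe parameters above.
n_microlens = (128, 128)   # microlens rows, cols ('n microlens')
n_subpixels = (7, 7)       # pixels behind each microlens ('n subpixels')

film_resolution = (n_microlens[0] * n_subpixels[0],
                   n_microlens[1] * n_subpixels[1])
print(film_resolution)     # (896, 896)
```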
The 'microlens' parameter determines whether each element of the array is simulated as a true microlens or as a pinhole.
thisR.set('microlens',1); % Choose between simulating a microlens or a pinhole
The microlens f-number is set to bring the image into focus on the sensor/film plane, which (we believe) is always XX mm behind the microlens array.
Here we initialize ISET, Docker, and load the teapot-area data into a recipe:
%% Set up ISET and Docker
ieInit;
if ~piDockerExists, piDockerConfig; end
% Read the scene pbrt file
% We organize the pbrt scene file and its includes (textures, brdfs, spds, geometry)
% in a single directory.
fname = fullfile(piRootPath,'data','V3','teapot','teapot-area-light.pbrt');
if ~exist(fname,'file'), error('File not found'); end
% Read the main scene pbrt file. Return it as a recipe
thisR = piRead(fname);
Next we specify how many microlenses to use and the number of sub-pixels behind each microlens. (Ask OVT about this issue.)
%% Modify the recipe for a light field camera
%% Needs to be updated for v3!
thisR.set('camera','light field');
thisR.set('n microlens',[128 128]);
thisR.set('n subpixels',[7, 7]);
thisR.set('microlens',1); % Not sure about on or off
thisR.set('aperture',50);
thisR.set('rays per pixel',128);
thisR.set('light field film resolution',true);
% Move the camera far enough away for a focus.
thisR.set('object distance',35); % The scene is tiny, so this distance is a guess
thisR.set('autofocus',true);
Rendering at this resolution can take 10 minutes or so on an older Mac.
[p,n,e] = fileparts(fname);
thisR.set('outputFile',fullfile(piRootPath,'local','teapot',[n,e]));
piWrite(thisR);
% Render with the Docker container
[oi,results] = piRender(thisR,'mean illuminance',10);
vcAddObject(oi); oiWindow; oiSet(oi,'gamma',0.5);
The render returns an optical image of the spectral irradiance at the sensor. The image here was simulated with the double Gauss imaging lens, 128 x 128 microlenses, and a 7 x 7 pixel array behind each microlens.
The ISET sensor is set to have a pixel size that matches the spatial resolution of the light field irradiance. This produces an image where we can see the shading of the pixels behind each of the microlenses.
sensor = sensorCreate('light field',oi);
sensor = sensorSet(sensor,'exp time',0.01); % 10 ms.
sensor = sensorCompute(sensor,oi);
The example script performs some simple light field image processing by converting the sensor data to the light field structure used in Donald Dansereau's Light Field Toolbox.
ip = ipCreate;
ip = ipCompute(ip,sensor);
% Pack the rgb image into Dansereau's lightfield structure and make a video
nPinholes = thisR.get('n microlens');
lightfield = ip2lightfield(ip,'pinholes',nPinholes,'colorspace','srgb');
LFDispVidCirc(lightfield.^(1/2.2));
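The packing step can be sketched in NumPy. This is an illustrative reshape under our own indexing assumptions, not the exact layout that ip2lightfield or Dansereau's toolbox uses:

```python
import numpy as np

# Repack a sensor-plane RGB image into a 5-D light field array.
n_pinholes = (128, 128)   # microlens (pinhole) rows, cols
n_sub = (7, 7)            # pixels behind each microlens

rgb = np.random.rand(n_pinholes[0] * n_sub[0],
                     n_pinholes[1] * n_sub[1], 3)

# Split each superpixel out, then move the sub-pixel (view) indices first.
lf = rgb.reshape(n_pinholes[0], n_sub[0], n_pinholes[1], n_sub[1], 3)
lf = lf.transpose(1, 3, 0, 2, 4)   # (t, s, v, u, color)

# lf[i, j] is the image seen through sub-pixel (i, j) of every microlens.
print(lf.shape)   # (7, 7, 128, 128, 3)
```

Stepping through the first two indices of such an array is essentially what LFDispVidCirc animates: a circular tour of the sub-aperture views.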
ISET3d development is led by Brian Wandell's Vistalab group at Stanford University and supported by contributors from other research institutions and industry.