For more information, see the blog post.
Code for NeRF2D, a 2D analogue of NeRF designed to facilitate experimentation with Neural Radiance Fields and novel view synthesis algorithms.
NeRF2D is trained with 1D views of a 2D scene and learns to reconstruct a 2D radiance field. This is conceptually the same as 3D novel view synthesis, but it requires much less compute and is easier to understand and visualize:
We show that NeRF can be reformulated in 2D by reconstructing a 2D shape from 1D views of it. Fitting a 2D NeRF is very fast, and we propose this as a viable toy dataset for quick experimentation on NeRF.
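To make the analogy concrete, here is a rough sketch (with assumed names, not the repository's actual API) of how a single 1D pixel would be rendered in 2D: march a ray through the scene, query a density/color field at each sample, and alpha-composite the results, exactly as 3D NeRF does but with one dimension removed. `origin` and `direction` are arrays of shape (2,).

```python
import numpy as np

def render_ray_2d(field, origin, direction, near=0.0, far=4.0, n_samples=64):
    """Render one 1D pixel by integrating a 2D radiance field along a ray.

    `field(points)` is assumed to return (density, color) for points of shape (N, 2);
    in the actual code this role is played by the NeRF MLP.
    """
    t = np.linspace(near, far, n_samples)                          # sample depths along the ray
    points = origin[None, :] + t[:, None] * direction[None, :]     # (N, 2) sample locations
    sigma, color = field(points)                                   # density (N,), color (N, C)

    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))             # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)                           # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # accumulated transmittance
    weights = alpha * trans                                        # compositing weights
    return (weights[:, None] * color).sum(axis=0)                  # final pixel color
```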
To train a 2D NeRF, we need a multi-view dataset of 1D images. Since these are not readily available, we include a Blender script for rendering 1D images of an object:
With the addon, we can generate training, validation, and test datasets for any object, with different distributions of camera poses.
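As an illustration of what such a pose distribution might look like, the snippet below (a standalone sketch, not the addon's code) samples 2D cameras on an arc around the origin, all pointing inwards; narrowing `angle_range` gives a more restricted distribution of poses.

```python
import numpy as np

def sample_circle_poses(n_views, radius=3.0, angle_range=(0.0, 2.0 * np.pi)):
    """Sample 2D camera positions on an arc around the origin, all looking inwards.

    Returns (positions, view_dirs), each of shape (n_views, 2).
    """
    start, stop = angle_range
    angles = np.linspace(start, stop, n_views, endpoint=False)
    positions = radius * np.stack([np.cos(angles), np.sin(angles)], axis=-1)
    view_dirs = -positions / np.linalg.norm(positions, axis=-1, keepdims=True)
    return positions, view_dirs
```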
Since each training view is just a 1D line of pixels, we can visualize the whole dataset by concatenating the views horizontally and plotting them as a single image. For the example scene above, we get the following:
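Assuming the views are available as a NumPy array of shape (n_views, resolution, 3) with values in [0, 1] (the file name below is hypothetical), the combined plot can be produced in a few lines:

```python
import numpy as np
import matplotlib.pyplot as plt

views = np.load("views.npy")           # hypothetical file holding (n_views, resolution, 3)
plt.imshow(views.transpose(1, 0, 2))   # one 1D view per column
plt.xlabel("view index")
plt.ylabel("pixel position")
plt.show()
```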
We perform experiments on four test scenes:
We fit a 2D NeRF using 50 views at a resolution of 100 pixels in under a minute. Below we show the reconstructed test views after training on each scene:
Additionally, since we are working in 2D, we can visualize the learned density field directly by uniformly sampling it on a grid covering the scene:
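In code, this amounts to evaluating the density network on a regular grid and showing the result as an image. A minimal sketch, assuming the trained model exposes a `density_fn(points)` callable (the name is illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_density(density_fn, extent=2.0, resolution=256):
    """Evaluate a 2D density field on a uniform grid and display it."""
    xs = np.linspace(-extent, extent, resolution)
    xx, yy = np.meshgrid(xs, xs)
    points = np.stack([xx.ravel(), yy.ravel()], axis=-1)        # (resolution**2, 2)
    sigma = density_fn(points).reshape(resolution, resolution)  # query the density network
    plt.imshow(sigma, extent=[-extent, extent, -extent, extent], origin="lower")
    plt.colorbar(label="density")
    plt.show()
```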
In NeRF, a critical component of its success was the use of positional encoding. The spectral bias of neural networks makes it difficult for them to express high-frequency spatial functions; a simple solution, as the NeRF authors found, is to pass the input coordinates through a positional encoding

$$\gamma(p) = \left(\sin(2^{0}\pi p), \cos(2^{0}\pi p), \ldots, \sin(2^{L-1}\pi p), \cos(2^{L-1}\pi p)\right)$$

where $L$ is a hyperparameter controlling the number of frequency bands and $\gamma$ is applied to each coordinate separately.
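For reference, a minimal NumPy version of this encoding (the function and argument names are ours, not necessarily the repository's):

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    """Map coordinates of shape (N, D) to features of shape (N, 2 * num_freqs * D)."""
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi                          # 2^k * pi for k = 0 .. L-1
    scaled = x[..., None] * freqs                                          # (N, D, num_freqs)
    features = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)   # (N, D, 2 * num_freqs)
    return features.reshape(x.shape[0], -1)
```

With `num_freqs = 10` and 2D inputs, each point is mapped to a 40-dimensional feature vector.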
We validated this in NeRF2D on the "Bunny" scene and, unsurprisingly, found that without positional encoding the reconstruction loses most of its high-frequency detail:










