Abstract

Computer-generated images are most easily generated as pinhole images, whereas images obtained with optical lenses exhibit a Depth-of-Field (DOF) effect. This is due to the fact that optical lenses gather light across finite apertures, whereas the simulation of a pinhole lens means that the light is gathered through an infinitesimally small aperture, thus producing sharp images at any depth. Simulating the physical process of gathering light across a finite aperture can be done, for example, with distributed ray tracing, but it is computationally much more expensive than the simulation through an infinitesimal aperture. The usual way of simulating lens effects is therefore to produce a pinhole image and then post-process the image to approximate the DOF. Post-processing algorithms are fast but suffer from incorrect visibilities. In this paper, we propose a novel algorithm that tackles the visibility issue with a sparse set of views rendered through the optical center of the lens and several peripheral viewpoints distributed on the lens. All peripheral images are warped towards the central view to create a Layered-Depth-Image (LDI), so that all observed 3D points located on the same central view-ray are stacked on the same pixel of the LDI. Then, each pixel in the LDI is conceptually scattered into a Point-Spread-Function (PSF) and blended in depth order. While the scatter method is very inefficient on a GPU, we propose a selective gather method for DOF synthesis, which scans the neighborhood of a pixel and blends the colors from the PSFs covering the pixel. Experiments show that the proposed algorithm can synthesize high-quality DOF effects close to the results of distributed ray tracing but at a much higher speed.
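The gather idea described above can be illustrated with a minimal sketch: each pixel's circle-of-confusion radius follows from the standard thin-lens formula, and the output color at a pixel is gathered by scanning its neighborhood and blending the colors of neighbors whose PSF disc covers that pixel. This is an assumption-laden simplification of the paper's method: it operates on a single color/depth image rather than an LDI, omits depth-ordered blending, and all function names and parameters (`coc_radius`, `gather_dof`, `aperture`, `focal_len`, `max_r`) are hypothetical, not taken from the paper.

```python
import numpy as np

def coc_radius(depth, focal_depth, aperture, focal_len):
    # Thin-lens circle-of-confusion radius (in pixels); the standard
    # formula, not a formula quoted from the paper.
    return np.abs(aperture * focal_len * (depth - focal_depth)
                  / (depth * (focal_depth - focal_len)))

def gather_dof(color, depth, focal_depth, aperture=8.0, focal_len=0.05, max_r=4):
    # Simplified gather-style DOF: for each output pixel, scan a
    # (2*max_r+1)^2 neighborhood and blend neighbors whose PSF disc
    # covers the pixel, weighted by the inverse of the PSF area.
    # NOTE: depth-ordered blending from the paper is omitted here.
    h, w, _ = color.shape
    r = coc_radius(depth, focal_depth, aperture, focal_len)
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            acc = np.zeros(3)
            wsum = 0.0
            for dy in range(-max_r, max_r + 1):
                for dx in range(-max_r, max_r + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # The neighbor contributes if its PSF disc
                        # (clamped to max_r) reaches the pixel (x, y).
                        rad = min(r[ny, nx], max_r)
                        if dx * dx + dy * dy <= rad * rad + 1e-6:
                            wgt = 1.0 / (np.pi * max(rad, 0.5) ** 2)
                            acc += wgt * color[ny, nx]
                            wsum += wgt
            out[y, x] = acc / wsum if wsum > 0.0 else color[y, x]
    return out
```

A pixel exactly at the focal depth has a near-zero PSF radius, so only its own color contributes and it stays sharp, matching the behavior the abstract attributes to in-focus regions.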

  • Publication date: 2016-4