Abstract

For range sensing using depth-from-defocus methods, the distance D of a point object from the lens can be evaluated with the concise depth formula D = P/(Q - d_b), where P and Q are constants for a given camera setting and d_b is the diameter of the blur circle for the point object on the image detector plane. The amount of defocus d_b is traditionally estimated from the spatial parameter of a Gaussian point spread function using a complex iterative solution. In this paper, we use a straightforward and computationally fast method to estimate the amount of defocus from a single camera. The observed gray-level image is first converted into a gradient image using the Sobel edge operator. For the edge point of interest, the proportion of the blurred edge region p_e in a small neighborhood window is then calculated using the moment-preserving technique. The value of p_e increases as the amount of defocus increases and is therefore used as a description of the degradation of the point-spread function. In addition to the geometric depth formula for depth estimation, artificial neural networks are also proposed in this study to compensate for the estimation errors of the depth formula. Experiments have shown promising results: the RMS depth errors are within 5% for the depth formula and within 2% for the neural networks.
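The pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the camera constants P and Q, the synthetic step-edge image, and the fixed gradient threshold (used here in place of the moment-preserving split) are all assumptions made for the example.

```python
import numpy as np

def depth_from_blur(d_b, P, Q):
    # Concise depth formula from the paper: D = P / (Q - d_b),
    # with P, Q constant for a given camera setting.
    return P / (Q - d_b)

def sobel_gradient(img):
    # Sobel edge operator: gradient magnitude of a 2-D gray image
    # (valid region only, no border padding).
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def edge_region_proportion(grad_window, thresh):
    # Simplified stand-in for the moment-preserving split:
    # fraction of window pixels classified as blurred-edge pixels.
    return float(np.mean(grad_window > thresh))

def blurred_step(width, blur):
    # Synthetic vertical step edge, blurred into a linear ramp of
    # half-width `blur` (in normalized image coordinates).
    x = np.linspace(-1.0, 1.0, width)
    profile = np.clip((x / blur + 1.0) / 2.0, 0.0, 1.0)
    return np.tile(profile, (width, 1))

# A more defocused edge yields a larger edge-region proportion p_e.
p_sharp = edge_region_proportion(sobel_gradient(blurred_step(32, 0.1)), 0.1)
p_blurred = edge_region_proportion(sobel_gradient(blurred_step(32, 0.6)), 0.1)
print(p_sharp, p_blurred)

# With illustrative constants P = 1000, Q = 12 and blur diameter d_b = 2:
print(depth_from_blur(2.0, 1000.0, 12.0))
```

The monotone relation shown here (p_e grows with defocus) is what lets p_e serve as the blur descriptor that is then mapped to depth, either through the formula or through a trained neural network.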

  • Publication date: 1998-05