Airborne Vision-Based Collision-Detection System

Authors: Lai John*; Mejias Luis; Ford Jason J
Source: Journal of Field Robotics, 2011, 28(2): 137-157.
DOI:10.1002/rob.20359

Abstract

Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets due to the relatively low cost, size, weight, and power requirements of vision sensors (as opposed to radar and the Traffic Alert and Collision Avoidance System, TCAS). This paper describes the development and evaluation of a real-time, vision-based collision-detection system suitable for fixed-wing aerial robotics. Using two fixed-wing unmanned aerial vehicles (UAVs) to recreate various collision-course scenarios, we were able to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. This type of image data is extremely scarce and was invaluable in evaluating the detection performance of two candidate target detection approaches. Based on the collected data, our detection approaches were able to detect targets at distances ranging from 400 to about 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning of between 8 and 10 s ahead of impact, which approaches the 12.5-s response time recommended for human pilots. We overcame the challenge of achieving real-time computational speeds by exploiting the parallel processing architectures of graphics processing units (GPUs) found on commercial-off-the-shelf graphics devices. Our chosen GPU device suitable for integration onto UAV platforms can be expected to handle real-time processing of 1,024 × 768 pixel image frames at a rate of approximately 30 Hz. Flight trials using manned Cessna aircraft in which all processing is performed onboard will be conducted in the near future, followed by further experiments with fully autonomous UAV platforms.
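
The quoted warning times follow directly from detection range divided by closing speed. The sketch below illustrates that arithmetic; the specific closing speeds (50 and 90 m/s) are assumptions chosen so the numbers line up with the 8-10 s figures in the abstract, not values reported by the paper.

```python
# Rough sketch of the range-to-warning-time arithmetic implied by the abstract.
# Closing speeds here are illustrative assumptions, not figures from the paper.

def warning_time_s(detection_range_m: float, closing_speed_mps: float) -> float:
    """Time to impact if the target keeps closing at a constant speed."""
    return detection_range_m / closing_speed_mps

# Example encounters (assumed closing speeds):
for range_m, closing_mps in [(400.0, 50.0), (900.0, 90.0)]:
    print(f"{range_m:.0f} m detected at {closing_mps:.0f} m/s closing -> "
          f"{warning_time_s(range_m, closing_mps):.1f} s warning "
          f"(recommended pilot response time: 12.5 s)")
```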

  • Publication date: 2011-04