Abstract

This paper presents a visual-inertial odometry framework that tightly fuses inertial measurements with visual data from one or more cameras, by means of an iterated extended Kalman filter. By employing image patches as landmark descriptors, a photometric error is derived, which is directly integrated as an innovation term in the filter update step. Consequently, the data association is an inherent part of the estimation process and no additional feature extraction or matching processes are required. Furthermore, it enables the tracking of non-corner-shaped features, such as lines, and thereby increases the set of possible landmarks. The filter state is formulated in a fully robocentric fashion, which reduces errors related to nonlinearities. This also includes partitioning of a landmark's location estimate into a bearing vector and a distance, which allows an undelayed initialization of landmarks. Overall, this results in a compact approach, which exhibits a high level of robustness with respect to low scene texture and motion blur. Furthermore, there is no time-consuming initialization procedure, and pose estimates are available starting at the second image frame. We test the filter on different real datasets and compare it with other state-of-the-art visual-inertial frameworks. Experimental results show that robust localization with high accuracy can be achieved with this filter-based framework.
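As a minimal sketch of the photometric innovation described above (the symbols $\boldsymbol{\mu}_j$, $d_j$, $\pi(\cdot)$, $P_j$, and the patch offsets $\boldsymbol{\delta}_i$ are notational assumptions, not taken from the paper, and affine warping and multi-level patches are omitted): a landmark parametrized by a bearing vector $\boldsymbol{\mu}_j$ and a distance $d_j$ projects to the pixel $\hat{\boldsymbol{p}}_j = \pi(d_j\,\boldsymbol{\mu}_j)$, and the intensity residuals between the stored patch $P_j$ and the current image $I$ are stacked and used directly as the innovation in the filter update:

\[
  \boldsymbol{y}_j \;=\;
  \begin{bmatrix}
    P_j(\boldsymbol{\delta}_1) - I\!\left(\hat{\boldsymbol{p}}_j + \boldsymbol{\delta}_1\right) \\
    \vdots \\
    P_j(\boldsymbol{\delta}_N) - I\!\left(\hat{\boldsymbol{p}}_j + \boldsymbol{\delta}_N\right)
  \end{bmatrix},
  \qquad
  \hat{\boldsymbol{p}}_j = \pi\!\left(d_j\,\boldsymbol{\mu}_j\right).
\]

In this sketch the photometric residual replaces a conventional reprojection error in the (iterated) EKF update, so no separate feature matching step is needed; keeping the distance $d_j$ as its own state, separate from the immediately observable bearing, is what permits the undelayed landmark initialization mentioned in the abstract.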

  • Publication date: 2017-09