Adaptive-Frame-Rate Monocular Vision and IMU Fusion for Robust Indoor Positioning

Authors: Tian Ya*; Zhang Jie; Tan Jindong
Source: IEEE International Conference on Robotics and Automation (ICRA), Germany, May 6-10, 2013.

Abstract

Robust navigation for mobile robots requires an accurate method for tracking the robot's position in the environment. This paper presents a simple and novel visual-inertial integration system suitable for unstructured and unprepared indoor environments, using MARG (Magnetic, Angular Rate, and Gravity) sensors and a monocular camera. The orientation pre-estimated from the MARG sensors is used to estimate translation from the visual and inertial data. This significantly improves the performance of the fusion strategy and simplifies the fusion procedure, because the gravitational acceleration can be correctly removed from the accelerometer measurements before fusion, for which a linear Kalman filter is selected as the estimator. The pre-estimated orientation also helps eliminate erroneous point matches based on the properties of pure camera translation, so the computational requirements are significantly reduced compared with the RANSAC (RANdom SAmple Consensus) algorithm. In addition, an adaptive-frame-rate monocular camera is used not only to avoid motion blur, based on the compensated angular velocity and acceleration, but also to provide a visual zero-velocity update during static motion. This recovers a more accurate baseline while further reducing the computational load. In particular, the absolute scale factor, which is usually lost in monocular camera tracking, can be obtained by introducing it into the estimator. Simulation and experimental results are presented for different environments with different types of movement, and results from a Pioneer robot demonstrate the accuracy of the proposed method.
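
The gravity-removal step described in the abstract can be illustrated with a short sketch. The function and variable names below are illustrative assumptions, not taken from the paper: a body-frame accelerometer sample is rotated into the navigation frame using the MARG-derived orientation, and the gravity vector is then subtracted so that only motion-induced acceleration is passed on to the linear Kalman filter.

```python
import numpy as np

# Assumed z-up navigation frame, in which a stationary accelerometer
# reads approximately +9.81 m/s^2 along z.
GRAVITY = np.array([0.0, 0.0, 9.81])

def quat_to_rotation_matrix(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

def remove_gravity(accel_body, q_body_to_nav):
    """Rotate a body-frame accelerometer sample into the navigation frame
    with the pre-estimated orientation, then subtract gravity to obtain
    the motion-induced acceleration used by the translation estimator."""
    R = quat_to_rotation_matrix(q_body_to_nav)
    accel_nav = R @ accel_body
    return accel_nav - GRAVITY

# Example: a stationary sensor should yield (near-)zero linear acceleration.
q_identity = np.array([1.0, 0.0, 0.0, 0.0])
print(remove_gravity(np.array([0.0, 0.0, 9.81]), q_identity))  # ~[0, 0, 0]
```

Under this convention, any error in the pre-estimated orientation leaks a fraction of gravity into the compensated acceleration, which is why the abstract stresses that orientation is estimated first, before the visual-inertial translation fusion.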