A vision-centered multi-sensor fusing approach to self-localization and obstacle perception for robotic cars

Authors: Xue, Jian-ru*; Wang, Di; Du, Shao-yi; Cui, Di-xiao; Huang, Yong; Zheng, Nan-ning
Source: Frontiers of Information Technology & Electronic Engineering, 2017, 18(1): 122-138.
DOI: 10.1631/FITEE.1601873

Abstract

The perception systems of most state-of-the-art robotic cars differ considerably from the way a human driver understands traffic environments. First, humans assimilate information from the traffic scene mainly through visual perception, while machine perception of traffic environments must fuse information from several different kinds of sensors to meet safety-critical requirements. Second, a robotic car requires nearly 100% correct perception results for autonomous driving, whereas an experienced human driver copes well with dynamic traffic environments in which machine perception can easily produce noisy results. In this paper, we propose a vision-centered multi-sensor fusion framework for traffic environment perception in autonomous driving, which fuses camera, LIDAR, and GIS information consistently, via both geometrical and semantic constraints, for efficient self-localization and obstacle perception. We also discuss robust machine vision algorithms that have been successfully integrated into the framework, covering multiple levels of machine vision techniques: collecting training data, efficiently processing sensor data, extracting low-level features, and building higher-level object and environment maps. The proposed framework has been tested extensively on our self-developed robotic cars in real urban scenes over eight years, and the empirical results validate its robustness and efficiency.
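To illustrate the geometric side of such camera-LIDAR fusion, the sketch below projects 3-D LIDAR points into the camera image plane using extrinsic and intrinsic calibration. This is a minimal, generic example, not the paper's actual pipeline: the calibration matrices K, R, t and the pinhole camera model are assumptions for illustration, and the paper's semantic constraints (and GIS priors) are not reproduced here.

```python
import numpy as np

def project_lidar_to_image(points_lidar, K, R, t, image_size):
    """Project 3-D LIDAR points into a camera image via extrinsic (R, t)
    and intrinsic (K) calibration -- the geometric constraint that lets
    pixels be associated with range measurements. All values hypothetical."""
    # Transform points from the LIDAR frame into the camera frame.
    points_cam = (R @ points_lidar.T + t.reshape(3, 1)).T  # shape (N, 3)

    # Keep only points in front of the camera (positive depth).
    points_cam = points_cam[points_cam[:, 2] > 0]

    # Pinhole projection onto the image plane.
    pixels_h = (K @ points_cam.T).T              # homogeneous pixel coords
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]  # divide by depth

    # Discard projections that fall outside the image bounds.
    w, h = image_size
    valid = ((pixels[:, 0] >= 0) & (pixels[:, 0] < w) &
             (pixels[:, 1] >= 0) & (pixels[:, 1] < h))
    return pixels[valid], points_cam[valid, 2]   # pixel coords and depths

# Example with hypothetical calibration values.
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])            # camera intrinsics
R = np.eye(3)                                    # LIDAR-to-camera rotation
t = np.array([0.0, -0.1, 0.2])                   # LIDAR-to-camera translation (m)
points = np.random.uniform(-20, 20, (1000, 3)) + np.array([0, 0, 25])
pixels, depths = project_lidar_to_image(points, K, R, t, (1280, 720))
```

In a vision-centered framework of the kind the abstract describes, this kind of geometric registration is the starting point: once range measurements are aligned with the image, semantic cues extracted by vision algorithms (and map priors from GIS) can then be used to constrain self-localization and obstacle perception.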