Abstract

Many classical visual odometry and simultaneous localization and mapping (SLAM) methods achieve excellent performance but are largely restricted to static scenes and degrade when many dynamic objects are present. In this paper, an efficient coarse-to-fine algorithm is proposed for moving object detection in dynamic scenes for autonomous driving. The task is modeled as a motion-based conditional random field. Specifically, for the initial dynamic-static segmentation, a superpixel-based binary segmentation is performed; for refinement, pixel-level object segmentation is then carried out in local regions. Additionally, to reduce the projection noise caused by disparity estimation, an approximate Mahalanobis normalization is introduced. Finally, to evaluate the proposed method, two related methods are compared as baselines on the public KITTI data set, for visual odometry and moving object detection respectively. The experiments demonstrate the effectiveness of the approach: odometry accuracy improves when dynamic regions are removed, and moving object detection performance also improves.