Abstract

Video-based traffic sign detection, tracking, and recognition is an important component of intelligent transport systems. Extensive research has shown that various state-of-the-art approaches, especially deep learning methods, achieve strong performance on public data sets. However, deep learning methods require extensive computing resources. In addition, these approaches mostly concentrate on the single-image detection and recognition task, which limits their applicability in real-world settings. Different from previous research, we introduce a unified incremental computational framework for traffic sign detection, tracking, and recognition using a mono-camera mounted on a moving vehicle under nonstationary environments. The main contributions of this paper are threefold: 1) to enhance detection performance by exploiting contextual information, this paper utilizes the spatial distribution prior of traffic signs; 2) to improve tracking performance and localization accuracy under nonstationary environments, a new efficient incremental framework combining an off-line detector, an online detector, and a motion model predictor is designed for simultaneous traffic sign detection and tracking; and 3) to obtain a more stable classification output, a scale-based intra-frame fusion method is proposed. We evaluate our method on two public data sets, and the results show that the proposed system obtains accuracy comparable to deep learning methods while using fewer computing resources and running in near real time.
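The abstract does not specify the exact form of the scale-based intra-frame fusion, so the following is only a minimal sketch of the general idea, assuming that per-detection class probabilities are weighted by the apparent scale of the sign so that larger, better-resolved detections contribute more to the fused label. The function name and the linear scale weighting are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def fuse_intra_frame(class_probs, scales):
    """Hypothetical scale-based fusion of per-detection class probabilities.

    class_probs: (N, C) array of softmax outputs for N detections of the same
                 physical sign observed at different scales within one frame.
    scales:      (N,) array of apparent sizes (e.g., bounding-box height in px).

    Assumption: larger detections are closer and better resolved, so they
    receive proportionally larger weights; this is an illustrative choice,
    not necessarily the weighting used in the paper.
    """
    class_probs = np.asarray(class_probs, dtype=float)
    scales = np.asarray(scales, dtype=float)
    weights = scales / scales.sum()      # normalize scale weights
    fused = weights @ class_probs        # weighted average over detections
    return fused / fused.sum()           # renormalize to a probability vector

# Example: two detections of the same sign at different scales.
probs = [[0.6, 0.3, 0.1],   # small, blurrier detection
         [0.2, 0.7, 0.1]]   # larger, sharper detection
print(fuse_intra_frame(probs, scales=[24, 64]).round(3))
```

Under this weighting, the larger detection dominates the fused output, which is consistent with the stated goal of producing a more stable classification result per frame.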