Sparse representation combined with context information for visual tracking

Authors: Feng, Ping; Xu, Chunyan; Zhao, Zhiqiang; Liu, Fang; Yuan, Caihong; Wang, Tianjiang; Duan, Kui*
Source: Neurocomputing, 2017, 225: 92-102.
DOI:10.1016/j.neucom.2016.11.009

Abstract

In visual tracking, the main problem is to find the candidate that is most likely to be the target in each successive frame, so it is important to design a proper mechanism for evaluating candidates. In this paper, we propose a novel sparse representation based visual tracking algorithm, which integrates the temporal and spatial context information of the tracked object into a unified framework. Specifically, we compute the similarity between the target and its candidates by fusing three aspects of the target's appearance variation with different weights. For the first part, we apply a patch-based sparse representation to measure the similarities between the target in the first frame and the candidates in the current frame. Since the tracking result in the last frame provides the latest variation information of the target, we employ an image quality assessment method to obtain the similarity scores in the second part, where the spatial context information is also exploited. As the target appearance may undergo radical changes over the video sequence, tracking that uses only the two parts above suffers from serious drifting and easily produces incorrect results. To ease this problem, we exploit the temporal context information by adaptively generating a group of history target templates from previous tracking results, computing the similarity between each candidate and these templates, and using the maximum as the third part. Finally, we combine these parts to calculate the overall similarity scores and take the candidate with the highest score as the new target in the current frame. Extensive experiments on twelve challenging video sequences show that our algorithm achieves performance competitive with state-of-the-art trackers.
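The weighted fusion of the three cues described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weight values, array shapes, and the function name `fuse_similarity` are assumptions, and the three per-candidate similarity vectors stand in for the sparse-representation, image-quality-assessment, and history-template scores respectively.

```python
import numpy as np

def fuse_similarity(s_first, s_last, history_sims, w=(0.4, 0.3, 0.3)):
    """Fuse three similarity cues into one score per candidate.

    s_first      : (N,) similarity of each candidate to the first-frame target
                   (patch-based sparse representation in the paper).
    s_last       : (N,) similarity of each candidate to the last tracking result
                   (image-quality-assessment score in the paper).
    history_sims : (N, K) similarities of each candidate to K adaptively kept
                   history templates; only the maximum over K is used.
    w            : fusion weights (hypothetical values; the paper sets its own).
    """
    s_hist = np.max(history_sims, axis=1)  # best match among history templates
    return w[0] * s_first + w[1] * s_last + w[2] * s_hist

# Toy example with N = 3 candidates and K = 2 history templates.
s_first = np.array([0.8, 0.5, 0.6])
s_last = np.array([0.7, 0.9, 0.4])
history_sims = np.array([[0.2, 0.9], [0.3, 0.4], [0.5, 0.6]])

scores = fuse_similarity(s_first, s_last, history_sims)
best = int(np.argmax(scores))  # the candidate with the highest fused score
                               # becomes the new target in the current frame
```

Here candidate 0 wins (0.4·0.8 + 0.3·0.7 + 0.3·0.9 = 0.80), showing how a strong history-template match can offset a weaker match to the most recent frame.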