Attention-driven action retrieval with DTW-based 3D descriptor matching

Authors: Ji Rongrong*; Sun Xiaoshuai; Yao Hongxun; Xu Pengfei; Liu Tianqiang
Source: 16th ACM International Conference on Multimedia (MM '08), October 26-31, 2008.
DOI: 10.1145/1459359.1459443

Abstract

From a visual perception viewpoint, actions in videos capture high-level semantics for video content understanding and retrieval. However, action-level video retrieval faces great challenges due to interference from global motions and concurrent actions, and the difficulty of robust action description and matching. This paper presents a content-based action retrieval framework that enables effective search for near-duplicate actions in a large-scale video database. First, we present an attention shift model to distill and partition human-attended salient actions from global motions and concurrent actions. Second, to characterize each salient action, we extract a 3D-SIFT descriptor within its spatio-temporal region, which is robust to rotation, scale, and viewpoint variations. Finally, action similarity is measured with the Dynamic Time Warping (DTW) distance, which tolerates variation in action duration and partially missing motion. Search efficiency over a large-scale dataset is achieved by hierarchical descriptor indexing and approximate nearest-neighbor search. For validation, we present a prototype system, VILAR, which supports action search within the "Friends" soap opera with excellent accuracy, efficiency, and ability to reveal human perception.
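The DTW matching step described above can be sketched generically as follows. This is a minimal, standard DTW formulation for illustration only, not the authors' implementation; the descriptor sequences are placeholders for per-frame feature vectors such as the paper's 3D-SIFT descriptors.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic Time Warping distance between two descriptor sequences.

    seq_a, seq_b: arrays of shape (n, d) and (m, d), one d-dimensional
    descriptor per time step (e.g. a 3D-SIFT vector per frame).
    """
    n, m = len(seq_a), len(seq_b)
    # cost[i, j] = minimal cumulative cost of aligning seq_a[:i] with seq_b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # local distance
            # A step may advance either sequence or both, so two actions
            # performed at different speeds can still align with low cost.
            cost[i, j] = d + min(cost[i - 1, j],      # skip a step in seq_a
                                 cost[i, j - 1],      # skip a step in seq_b
                                 cost[i - 1, j - 1])  # match both steps
    return cost[n, m]

# A slowed-down copy of an action (repeated frames) still matches closely:
a = np.array([[0.0], [1.0], [2.0]])
b = np.array([[0.0], [0.0], [1.0], [2.0], [2.0]])
print(dtw_distance(a, b))  # → 0.0
```

Because the warping path may repeat or skip time steps, DTW absorbs the duration variance and partial motion loss that a rigid frame-by-frame comparison would penalize.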

  • Publication date: 2008
