Abstract

With the explosive growth of mobile phones and other camera-equipped devices, ever more video data is captured and stored, creating an urgent need for fast browsing and understanding of video content. Automatic video summarization is an effective technique for tackling this problem: it extracts a succinct summary to represent the original long video. The task involves two subproblems: video segmentation and summary generation. Most previous work focuses only on the second, segmenting videos with a simple strategy such as boundary detection. This approach yields suboptimal results because it not only lacks a learning mechanism in the segmentation stage but also separates the whole task into two independent stages. In this paper, we propose a novel structure-transfer-driven temporal subspace clustering segmentation (STSC) method for video summarization. We first learn structural information from source videos and then transfer it to target videos. Using the Determinantal Point Process (DPP) algorithm, we select an informative subset of shots to create the final video summary. Experimental results on the SumMe and TVSum datasets demonstrate the effectiveness of the proposed method against state-of-the-art methods.
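The abstract does not specify how the DPP selection is implemented; a common choice is a greedy MAP heuristic that repeatedly adds the shot giving the largest log-determinant gain of the kernel submatrix, which favors subsets that are both high-quality and mutually diverse. The sketch below illustrates this generic heuristic on a hypothetical shot-feature kernel (`greedy_dpp_select` and the feature matrix are illustrative names, not part of the paper's method):

```python
import numpy as np

def greedy_dpp_select(L, k):
    """Greedy MAP heuristic for a DPP: pick k items that approximately
    maximize det(L[S, S]) for a PSD similarity kernel L."""
    n = L.shape[0]
    selected, remaining = [], list(range(n))
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in remaining:
            idx = selected + [i]
            # log-det of the candidate submatrix; -inf if not positive definite
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            gain = logdet if sign > 0 else -np.inf
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical shot descriptors: rows 0 and 1 are near-duplicate shots.
feats = np.array([[1.0, 0.0],
                  [1.0, 0.01],
                  [0.0, 1.0],
                  [0.9, 0.1],
                  [0.1, 0.95]])
# Linear kernel plus a small ridge for numerical stability.
L = feats @ feats.T + 1e-6 * np.eye(len(feats))
summary = greedy_dpp_select(L, k=2)
```

Because the determinant penalizes similar rows, the two near-duplicate shots are never selected together; this diversity pressure is what makes DPPs attractive for summary generation.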