Abstract

A brief preview of a news video can be generated by semantically aligning the sentences of the anchor's report with the visual shots. Because accurately detecting objects in a visual shot is difficult, and a textual term generally corresponds to several synonyms, aligning an anchor sentence with a video shot remains challenging. In this study, the temporal relation among the frames in a visual shot is characterized by a visual language model, and this language-model-based temporal relation is then applied to sentence-based alignment. The bag-of-words representations of the main objects in the key frames of a visual shot are first mapped to visual patterns trained on a news video database. The textual terms in the report sentence are then mapped to textual concepts obtained from the HowNet knowledge base. Finally, unsupervised alignment between the textual concepts and the visual patterns is performed using IBM Model 1. In the evaluation, the visual pattern language model yields an alignment score of 0.77, exceeding the 0.66 achieved by the DTW method. Across different news categories, visual pattern discovery and textual concept discovery improve the alignment performance in most categories.
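Since the abstract names only IBM Model 1 for the unsupervised alignment step, the following is a minimal sketch of how such a model can be estimated with expectation-maximization over concept-pattern pairs. The corpus entries and the concept and pattern labels are hypothetical placeholders, not data or identifiers from the paper.

```python
# Sketch: IBM Model 1 EM for aligning textual concepts (source side)
# with visual-pattern labels (target side). All names are illustrative.
from collections import defaultdict

# Each "sentence pair" couples the textual concepts of a report
# sentence with the visual-pattern labels of its candidate shot.
corpus = [
    (["flood", "river"], ["water_scene", "crowd"]),
    (["election", "crowd"], ["crowd", "podium"]),
    (["flood", "rescue"], ["water_scene", "helicopter"]),
]

concepts = {c for cs, _ in corpus for c in cs}
patterns = {p for _, ps in corpus for p in ps}

# Uniform initialization of the translation table t(pattern | concept).
t = {(p, c): 1.0 / len(patterns) for p in patterns for c in concepts}

for _ in range(20):  # EM iterations
    count = defaultdict(float)   # expected co-occurrence counts c(p, c)
    total = defaultdict(float)   # expected marginal counts c(c)
    for cs, ps in corpus:
        for p in ps:
            # E-step: distribute each pattern's probability mass
            # over the concepts that could have generated it.
            norm = sum(t[(p, c)] for c in cs)
            for c in cs:
                frac = t[(p, c)] / norm
                count[(p, c)] += frac
                total[c] += frac
    # M-step: re-estimate t(p | c) from the expected counts.
    for (p, c), v in count.items():
        t[(p, c)] = v / total[c]

# Viterbi-style alignment: link each pattern to its most likely concept.
for cs, ps in corpus:
    links = [(p, max(cs, key=lambda c: t[(p, c)])) for p in ps]
    print(links)
```

After a few iterations, recurring co-occurrences (e.g. "flood" with "water_scene" in this toy data) dominate the translation table, so the Viterbi step links each visual pattern to its most probable textual concept without any labeled alignments.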

  • Publication date: 2011-4