Abstract

We address the problem of video contrast enhancement. Existing techniques either ignore temporal information or exploit it incorrectly, producing inconsistencies that manifest as undesirable flash and flickering artifacts. Our method analyzes a video stream and clusters frames that are similar to each other; it requires no knowledge of the entire video sequence and operates online with a fixed delay. A sliding-window mechanism detects shot boundaries "on the fly," and a graph-based technique called "modularity" clusters video frames automatically, without a priori information about the clusters. For every cluster, we extract key frames using eigen analysis, estimate enhancement parameters for the key frame only, and then apply these parameters to all frames in the cluster, which makes our method robust. We evaluate the clustering method on video sequences from the TRECVid 2001 dataset and compare it with existing methods. We demonstrate a reduction of flash artifacts in the enhanced videos and a statistically significant improvement in perceived video quality, validated through experiments with human observers. We also show that our clustering process can be applied to perform robust video segmentation.
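To make the pipeline concrete, the sketch below illustrates the two core steps described above: modularity-based clustering of frames within the current window, and key-frame selection via eigen analysis of the intra-cluster similarity matrix. It is a minimal illustration, not the paper's implementation: the histogram-intersection feature, the threshold `tau`, and the use of NetworkX's greedy modularity routine are all assumptions made for the example.

```python
# Sketch only: assumes normalized grayscale-histogram features per frame and
# an edge threshold `tau`; the paper's exact features and graph construction
# may differ.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities


def frame_similarity(h1, h2):
    """Histogram intersection between two normalized frame histograms."""
    return np.minimum(h1, h2).sum()


def cluster_frames(histograms, tau=0.8):
    """Build a similarity graph over the frames in the current window and
    partition it with modularity-based community detection."""
    n = len(histograms)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            s = frame_similarity(histograms[i], histograms[j])
            if s >= tau:  # connect sufficiently similar frames
                g.add_edge(i, j, weight=s)
    return [sorted(c) for c in greedy_modularity_communities(g, weight="weight")]


def key_frame(cluster, histograms):
    """Select the key frame via eigen analysis: the frame with the largest
    component of the dominant eigenvector of the intra-cluster similarity
    matrix, i.e. the most 'central' frame of the cluster."""
    sim = np.array([[frame_similarity(histograms[i], histograms[j])
                     for j in cluster] for i in cluster])
    vals, vecs = np.linalg.eigh(sim)           # sim is symmetric
    dominant = np.abs(vecs[:, np.argmax(vals)])
    return cluster[int(np.argmax(dominant))]
```

In this sketch, enhancement parameters would then be estimated once for each `key_frame` and reused for every frame in its cluster, which is what yields the temporal consistency the abstract claims.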

  • Publication date: 2012-09