Abstract

Data redundancy caused by correlation has motivated the application of collaborative multimedia in-network processing for data filtering and compression in wireless multimedia sensor networks (WMSNs). This paper proposes an information-theoretic image compression framework whose objective is to maximize the overall compression of the visual information gathered in a WMSN. The novelty of this framework lies in its independence from specific image types and coding algorithms, thereby providing a generic mechanism for image compression under different coding solutions. The proposed framework consists of two components. First, an entropy-based divergence measure (EDM) scheme is proposed to predict the compression efficiency of performing joint coding on the images collected by spatially correlated cameras. The EDM takes only camera settings as inputs, without requiring statistics of real images. Utilizing the predicted results from the EDM, a distributed multi-cluster coding protocol (DMCP) is then proposed to construct a compression-oriented coding hierarchy. The DMCP aims to partition the entire network into a set of coding clusters such that the global coding gain is maximized. Moreover, in order to enhance decoding reliability at the data sink, the DMCP also guarantees that each sensor camera is covered by at least two different coding clusters. Experiments based on the H.264 standard show that the proposed EDM can effectively predict the joint coding efficiency of multiple sources. Further simulations demonstrate that the proposed compression framework can reduce the total coding rate by 10%-23% compared with the individual coding scheme, in which each camera sensor compresses its own image independently.
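The abstract describes the two components only at a high level. The sketch below is a minimal, illustrative rendering of the overall idea, not the paper's method: it assumes hypothetical camera parameters (2-D position, viewing direction, angular field of view), replaces the entropy-based divergence with a crude angular-overlap proxy, and stands in for the DMCP with a greedy pairing that tries to cover each camera by at least two clusters. All names, thresholds, and formulas here are assumptions for illustration.

```python
import math

# Hypothetical camera description; the fields are illustrative, not from the paper.
class Camera:
    def __init__(self, cam_id, x, y, direction, fov):
        self.cam_id = cam_id          # identifier
        self.x, self.y = x, y          # 2-D position
        self.direction = direction     # viewing direction (radians)
        self.fov = fov                 # angular field of view (radians)

def angular_overlap(a, b):
    """Fraction of camera a's field of view that also falls inside camera b's.
    A crude geometric proxy; the paper's entropy-based measure is not reproduced."""
    diff = abs((a.direction - b.direction + math.pi) % (2 * math.pi) - math.pi)
    half = (a.fov + b.fov) / 2.0
    if diff >= half:
        return 0.0
    return min(1.0, (half - diff) / a.fov)

def divergence(a, b):
    """Illustrative divergence: low when two views overlap strongly
    (joint coding is expected to pay off), high when they are disjoint."""
    return 1.0 - angular_overlap(a, b)

def greedy_clusters(cameras, threshold=0.5, min_cover=2):
    """Greedy stand-in for a DMCP-style partition: each camera joins clusters
    with its most similar neighbours until it is covered by at least
    `min_cover` clusters, approximating the double-coverage requirement."""
    clusters = []
    cover = {c.cam_id: 0 for c in cameras}
    for cam in cameras:
        # neighbours ordered by increasing divergence from this camera
        neighbours = sorted((c for c in cameras if c is not cam),
                            key=lambda c: divergence(cam, c))
        for nb in neighbours:
            if cover[cam.cam_id] >= min_cover:
                break
            if divergence(cam, nb) <= threshold:
                clusters.append([cam.cam_id, nb.cam_id])
                cover[cam.cam_id] += 1
                cover[nb.cam_id] += 1
        if cover[cam.cam_id] == 0:
            # isolated camera: fall back to individual coding
            clusters.append([cam.cam_id])
            cover[cam.cam_id] = min_cover
    return clusters

if __name__ == "__main__":
    cams = [Camera(i, x=float(i), y=0.0, direction=0.1 * i, fov=math.radians(60))
            for i in range(4)]
    print(greedy_clusters(cams))
```

In this toy setting, strongly overlapping views yield low divergence and are grouped for joint coding, while isolated cameras fall back to individual coding; the actual EDM and DMCP in the paper operate on entropy estimates and a distributed protocol rather than this centralized greedy pass.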

  • Publication date: 2011-4