Abstract

Information fusion is an essential part of distributed wireless sensor networks as well as of perceptual user interfaces. Irrelevant and redundant data severely degrade the performance of the information fusion process. In this paper, a method based on multivariate mutual information is presented to validate the acceptability of data from two sources (visual and auditory). The audiovisual information is fused to observe the ventriloquism effect, which validates the algorithm. Unlike preceding algorithms, this framework does not require any preprocessing such as automatic face recognition. Moreover, no statistical modeling, feature extraction, or learning algorithms are required to extract the maximum-information regions. Results for various cases, covering a single speaker as well as a group of speakers, are also presented.

  • Publication date: 2016-10

Full Text