Abstract

One of the key features of the human auditory system is its nearly constant omnidirectional sensitivity, e.g., the system reacts to alerting signals coming from directions away from the line of focused visual attention. Auditory signal processing already starts outside the head: the external sound field has to couple into the ear canals. The relative positions of the two ear canals and the sound source lead to a coupling that is strongly frequency-dependent. In this context, not only the two pinnae but also the whole head play an important functional role, which is best described as a spatial filtering process. This linear filtering is usually quantified in terms of so-called head-related transfer functions (HRTFs), which can also be interpreted as the directivity characteristics of the two ears. In this paper, the HRTF constitutes the cornerstone of a sound localization algorithm that uses Bayesian information fusion to increase the localization resolution in a 3-D reverberant environment. The localization performance is demonstrated through simulation and is further tested in a household environment. Compared with existing techniques, the method localizes 3-D sound sources with higher accuracy under high-reverberation conditions. The simplicity of the presented algorithm allows a cost-effective real-time implementation for robotic platforms.
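The abstract does not spell out the fusion step, but the general idea of Bayesian information fusion over candidate source directions can be sketched as follows. This is a hypothetical illustration, not the paper's exact algorithm: each frame's likelihood is assumed to come from comparing measured binaural cues against the HRTF-predicted cues for every candidate direction, here reduced to a scalar cue-mismatch with a Gaussian likelihood.

```python
import math

def bayes_fuse(prior, cue_errors_per_frame, sigma=1.0):
    """Fuse successive observations over a discrete grid of candidate directions.

    prior: list of prior probabilities, one per candidate direction.
    cue_errors_per_frame: for each audio frame, a list of cue mismatches
        (observed cue minus HRTF-predicted cue), one per candidate direction.
        These mismatches are a stand-in for whatever binaural features the
        actual algorithm compares against the HRTF model.
    sigma: assumed standard deviation of the cue-mismatch noise.
    Returns the normalized posterior over directions.
    """
    post = list(prior)
    for errors in cue_errors_per_frame:
        # Multiply in a Gaussian likelihood for each candidate direction.
        for i, e in enumerate(errors):
            post[i] *= math.exp(-0.5 * (e / sigma) ** 2)
        # Renormalize after each frame to keep a proper distribution.
        total = sum(post)
        post = [p / total for p in post]
    return post
```

Fusing several frames this way sharpens the posterior around the direction whose HRTF-predicted cues consistently match the observations, which is how repeated noisy measurements can yield a resolution finer than any single frame provides.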

  • Publication date: 2014-9