Abstract

Audiovisual perception of conflicting stimuli displays a large level of intersubject variability, generally larger than that observed for purely auditory or visual data. However, it is not clear whether this actually reflects differences in integration per se or is merely the consequence of slight differences in unisensory perception. It is argued that the debate has been blurred by methodological problems in the analysis of experimental data, particularly when using the fuzzy-logical model of perception (FLMP) [Massaro, D. W. (1987). Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry (Lawrence Erlbaum Associates, London)], shown to display overfitting abilities with McGurk stimuli [Schwartz, J. L. (2006). J. Acoust. Soc. Am. 120, 1795-1798]. A large corpus of McGurk data is reanalyzed using a methodology based on (1) comparison of the FLMP with a variant, the weighted FLMP (WFLMP), in which the auditory and visual inputs receive subject-dependent weights in the fusion process; (2) use of a Bayesian model selection criterion instead of a root-mean-square error fit in model assessment; and (3) systematic exploration of the number of useful parameters in the models under comparison, discarding parameters with little explanatory power. It is shown that the WFLMP performs significantly better than the FLMP, suggesting that audiovisual fusion is indeed subject-dependent, some subjects being more "auditory" and others more "visual." Intersubject variability has important consequences for the theoretical understanding of the fusion process and for the rehabilitation of hearing-impaired people.
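For readers unfamiliar with the models being compared, the minimal sketch below gives one plausible formalization of FLMP-style fusion, a subject-weighted variant, and a Bayesian model selection score. The function names (flmp, wflmp, bic), the exponential parameterization of the subject-dependent weight lam, and all numerical values are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def flmp(a, v):
    """FLMP fusion: multiply the auditory supports a[r] and visual
    supports v[r] for each response r, then renormalize."""
    s = np.asarray(a, dtype=float) * np.asarray(v, dtype=float)
    return s / s.sum()

def wflmp(a, v, lam):
    """WFLMP sketch: a subject-dependent weight lam in [0, 1]
    (lam -> 1: purely "auditory" subject; lam -> 0: purely
    "visual" subject) exponentiates the two inputs before fusion."""
    s = np.asarray(a, dtype=float) ** lam * np.asarray(v, dtype=float) ** (1.0 - lam)
    return s / s.sum()

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion, one standard Bayesian model
    selection criterion: lower values favor the model, and the
    n_params * log(n_obs) term penalizes extra parameters."""
    return -2.0 * log_likelihood + n_params * np.log(n_obs)

# Illustrative supports for the responses (/ba/, /da/) on a McGurk-type
# trial; the numbers are made up, not taken from the reanalyzed corpus.
a = [0.9, 0.1]  # auditory evidence
v = [0.2, 0.8]  # visual evidence

print(flmp(a, v))        # one fixed fusion rule for every subject
print(wflmp(a, v, 0.8))  # a more "auditory" subject
print(wflmp(a, v, 0.2))  # a more "visual" subject
```

Under this parameterization, lam = 0.5 weights both modalities equally, whereas the original FLMP corresponds to fixing the exponents of both inputs to 1 for every subject; the penalty term in the BIC is what keeps the extra per-subject weight from winning by overfitting alone.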

  • Publication date: 2010-03