A Graphical Model for Online Auditory Scene Modulation Using EEG Evidence for Attention

Authors: Marzieh Haghighi*; Mohammad Moghadamfalahi; Murat Akcakaya; Barbara G. Shinn-Cunningham; Deniz Erdogmus
Source: IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2017, 25(11): 1970-1977.
DOI:10.1109/TNSRE.2017.2712419

Abstract

Recent findings indicate that brain interfaces have the potential to enable attention-guided auditory scene analysis and manipulation in applications such as hearing aids and augmented/virtual environments. Specifically, non-invasively acquired electroencephalography (EEG) signals have been demonstrated to carry some evidence regarding which of multiple synchronous speech waveforms a subject attends to. In this paper, we demonstrate that: 1) data- and model-driven cross-correlation features yield competitive binary auditory attention classification results with at most 20 s of EEG from 16 channels, or even from a single well-positioned channel; 2) a model calibrated using equal-energy speech waveforms competing for attention could perform well at estimating attention in closed-loop unbalanced-energy situations, where the speech amplitudes are modulated by the estimated attention posterior probability distribution; 3) such a model would perform even better if corrected (linearly, in this instance) for the dependence of the EEG evidence on the speech weights in the mixture; and 4) calibrating a model on population EEG could yield acceptable performance for new individuals/users; EEG-based auditory attention classifiers may therefore generalize across individuals, reducing or eliminating calibration time and effort.
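
The enumerated findings rest on two operations that can be illustrated compactly: extracting cross-correlation features between the EEG and each candidate speech envelope (finding 1), and modulating the mixture by the estimated attention posterior (finding 2). The Python sketch below is a minimal illustration under stated assumptions, not the paper's calibrated graphical model: the sampling rates, the Hilbert-envelope extraction, the lag range, the softmax posterior, and the gain floor are all choices of this sketch.

```python
import numpy as np
from scipy.signal import hilbert, correlate

# Assumed sampling rates for this sketch; the paper's exact
# preprocessing (filtering, downsampling) is not reproduced here.
FS_AUDIO = 8000
FS_EEG = 250

def speech_envelope(speech: np.ndarray) -> np.ndarray:
    """Broadband amplitude envelope via the Hilbert transform,
    crudely decimated to the EEG rate (a real pipeline would
    low-pass filter before decimating)."""
    env = np.abs(hilbert(speech))
    return env[:: FS_AUDIO // FS_EEG]

def xcorr_features(eeg: np.ndarray, env: np.ndarray, max_lag: int) -> np.ndarray:
    """Normalized cross-correlation between one EEG channel and a
    speech envelope at lags 0..max_lag samples (EEG lagging audio)."""
    eeg = (eeg - eeg.mean()) / eeg.std()
    env = (env - env.mean()) / env.std()
    full = correlate(eeg, env, mode="full") / len(eeg)
    zero = len(env) - 1  # index of the zero-lag term
    return full[zero : zero + max_lag + 1]

def attention_posterior(eeg: np.ndarray, env_a: np.ndarray,
                        env_b: np.ndarray, max_lag: int = 64) -> float:
    """P(attending to stream A), obtained here by a softmax over peak
    cross-correlation magnitudes; a heuristic stand-in for the paper's
    calibrated classifier."""
    s_a = np.abs(xcorr_features(eeg, env_a, max_lag)).max()
    s_b = np.abs(xcorr_features(eeg, env_b, max_lag)).max()
    return float(np.exp(s_a) / (np.exp(s_a) + np.exp(s_b)))

def modulate_mixture(speech_a, speech_b, p_a, floor=0.2):
    """Closed-loop scene modulation: scale each stream by its
    attention posterior, with a floor gain so the unattended stream
    stays audible (the floor value is an assumption of this sketch)."""
    w_a = floor + (1.0 - floor) * p_a
    w_b = floor + (1.0 - floor) * (1.0 - p_a)
    return w_a * speech_a + w_b * speech_b
```

Modulating the mixture this way is what creates the unbalanced-energy condition of finding 2: once gains deviate from equal energy, the EEG evidence distribution shifts with the speech weights, which motivates the linear correction of finding 3.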