Duration-Controlled LSTM for Polyphonic Sound Event Detection

Authors: Tomoki Hayashi*; Shinji Watanabe; Tomoki Toda; Takaaki Hori; Jonathan Le Roux; Kazuya Takeda
Source: IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2017, 25(11): 2059-2070.
DOI:10.1109/TASLP.2017.2740002

Abstract

This paper presents a new hybrid approach called duration-controlled long short-term memory (LSTM) for polyphonic sound event detection (SED). It builds upon a state-of-the-art SED method that performs frame-by-frame detection using a bidirectional LSTM recurrent neural network (BLSTM), and incorporates a duration-controlled modeling technique based on a hidden semi-Markov model (HSMM). The proposed approach makes it possible to model the duration of each sound event precisely and to perform sequence-by-sequence detection without having to resort to thresholding, as in conventional frame-by-frame methods. Furthermore, to effectively reduce sound event insertion errors, which often occur under noisy conditions, we also introduce a binary-mask-based postprocessing that relies on a sound activity detection (SAD) network to identify segments with any sound event activity, an approach inspired by the well-known benefits of voice activity detection in speech recognition systems. We conduct an experiment using the DCASE2016 task 2 dataset to compare our proposed method with typical conventional methods, such as nonnegative matrix factorization and standard BLSTM. Our proposed method outperforms the conventional methods both in an event-based evaluation, achieving a 75.3% F1 score and a 44.2% error rate, and in a segment-based evaluation, achieving an 81.1% F1 score and a 32.9% error rate, outperforming the best results reported in the DCASE2016 task 2 Challenge.
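The abstract names two mechanisms: duration-controlled decoding of BLSTM frame posteriors with an explicit (HSMM-style) duration model, and a binary-mask postprocess driven by a SAD network. The sketch below is a minimal single-class illustration of these two ideas, not the paper's actual hybrid model (which formulates a joint BLSTM-HSMM and handles polyphony across multiple event classes); all function names, array shapes, and the duration-prior interface are assumptions for illustration.

```python
import numpy as np

def duration_controlled_decode(logp_on, logp_off, dur_logp):
    """Semi-Markov (HSMM-style) Viterbi for ONE event class: segments the
    frame sequence into alternating inactive/active runs, scoring each
    active run with an explicit duration prior instead of thresholding
    per-frame posteriors.
    logp_on / logp_off: per-frame log-likelihoods of activity/inactivity
    (e.g. log of BLSTM posteriors); dur_logp[d]: log-prior of an active
    segment lasting d frames (index 0 unused)."""
    T, D = len(logp_on), len(dur_logp) - 1
    cum_on = np.concatenate([[0.0], np.cumsum(logp_on)])   # prefix sums
    best_off = np.full(T + 1, -np.inf)  # best score, frame t-1 inactive
    best_on = np.full(T + 1, -np.inf)   # best score, active run ends at t
    best_off[0] = 0.0
    back_on = np.zeros(T + 1, dtype=int)    # duration of that active run
    from_on = np.zeros(T + 1, dtype=bool)   # did best_off[t] follow a run?
    for t in range(1, T + 1):
        # inactive frame t-1, preceded by either state
        from_on[t] = best_on[t - 1] > best_off[t - 1]
        best_off[t] = max(best_off[t - 1], best_on[t - 1]) + logp_off[t - 1]
        # active segment of duration d ending exactly at frame boundary t
        for d in range(1, min(D, t) + 1):
            s = best_off[t - d] + (cum_on[t] - cum_on[t - d]) + dur_logp[d]
            if s > best_on[t]:
                best_on[t], back_on[t] = s, d
    # backtrack into a list of (onset_frame, offset_frame) events
    events, t, on = [], T, best_on[T] > best_off[T]
    while t > 0:
        if on:
            d = back_on[t]
            events.append((t - d, t))
            t, on = t - d, False
        else:
            t, on = t - 1, from_on[t]
    return events[::-1]

def apply_sad_mask(event_probs, sad_probs, thr=0.5):
    """Binary-mask postprocessing: zero out per-class event activations
    (T, C) in frames a SAD network (T,) marks as silent, which suppresses
    insertion errors in noisy segments."""
    return event_probs * (sad_probs > thr)[:, None]
```

In this sketch, `logp_on`/`logp_off` would come from per-class BLSTM posteriors (e.g. `np.log(p + eps)` and `np.log(1 - p + eps)`), and the duration prior `dur_logp` could be estimated from training annotations; the key point, as in the paper, is that event boundaries fall out of the segment-level decode rather than from a per-frame threshold.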

  • Publication date: 2017-11