Abstract

In acoustic event detection, the amount of training data for some acoustic event classes is often small and imbalanced. To address this, this paper proposes generating virtual training data per category using auxiliary classifier generative adversarial networks (AC-GANs). Soft labels of acoustic events are first calculated to represent the acoustic event localization information: the closer the current frame is to the middle of the manually labeled acoustic event, the higher its soft label, so the soft labels are positively correlated with the acoustic event localization. Then, the acoustic event class and the quantized soft labels are used as input conditions to the AC-GAN to generate an arbitrary number of training samples. Experimental results on TUT Sound Event 2016 (home environment) and TUT Sound Event 2017 (street environment) demonstrate the improved performance of the proposed technique compared to existing acoustic event detection systems.
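The abstract describes frame-level soft labels that peak at the middle of a labeled event and are then quantized before being fed, together with the event class, to the AC-GAN as conditions. A minimal sketch of that labeling step is shown below; the triangular profile, the frame indexing, and the number of quantization levels (`num_levels`) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_labels(onset_frame, offset_frame, num_frames, num_levels=4):
    """Hypothetical sketch of frame-level soft labels for one labeled event.

    Frames closer to the event's temporal midpoint receive higher values
    (a triangular profile is assumed here), and the values are quantized
    into `num_levels` discrete bins so they can serve, together with the
    event class, as a condition for the AC-GAN.
    """
    labels = np.zeros(num_frames, dtype=float)
    mid = 0.5 * (onset_frame + offset_frame)
    half = max(0.5 * (offset_frame - onset_frame), 1e-6)
    for t in range(onset_frame, offset_frame + 1):
        # 1.0 at the event midpoint, decaying linearly to 0.0 at the edges
        labels[t] = 1.0 - abs(t - mid) / half
    quantized = np.floor(labels * (num_levels - 1) + 0.5).astype(int)
    return labels, quantized

# Example: a 100-frame clip with one labeled event spanning frames 20..60
soft, quantized = soft_labels(onset_frame=20, offset_frame=60, num_frames=100)
```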

  • Publication date: 2019-6