Abstract

The automatic control of emotional expression in music is a challenge that is far from being solved. This paper describes research conducted with the aim of developing a system with such capabilities. The system works with standard MIDI files and operates in two stages: the first offline, the second online. In the first stage, MIDI files are partitioned into segments with uniform emotional content. These segments undergo feature extraction, are classified according to emotional values of valence and arousal, and are stored in a music base. In the second stage, segments are selected and transformed according to the desired emotion and then arranged into song-like structures.
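As a rough illustration of the online stage, the sketch below assumes segments are stored with valence-arousal labels and picks the one closest to a requested emotion in the valence-arousal plane; all names and values are hypothetical and not taken from the paper.

```python
# Minimal sketch (hypothetical names): a classified segment carries
# valence/arousal labels, and the online stage selects the segment whose
# emotional coordinates are nearest to the requested target emotion.
from dataclasses import dataclass
import math

@dataclass
class Segment:
    midi_path: str   # MIDI segment produced by the offline stage
    valence: float   # classified valence, e.g. in [-1, 1]
    arousal: float   # classified arousal, e.g. in [-1, 1]

def select_segment(music_base, target_valence, target_arousal):
    """Return the stored segment closest to the desired emotion
    (Euclidean distance in the valence-arousal plane)."""
    return min(
        music_base,
        key=lambda s: math.hypot(s.valence - target_valence,
                                 s.arousal - target_arousal),
    )

# Example: request a positive, low-arousal segment.
base = [Segment("seg_01.mid", 0.8, 0.7), Segment("seg_02.mid", 0.6, -0.4)]
chosen = select_segment(base, target_valence=0.7, target_arousal=-0.5)
```

In the actual system the chosen segment would additionally be transformed toward the target emotion before being arranged into a song-like structure.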
The system uses a knowledge base grounded in empirical results from Music Psychology, refined with data obtained from questionnaires; we also plan to use data obtained with other methods of emotion recognition in the near future. For the experimental setup, we prepared web-based questionnaires with musical segments of different emotional content. Each subject listened to each segment and rated it with values for valence and arousal. The modularity, adaptability, and flexibility of our system's architecture make it applicable in contexts such as video games, theater, film, and healthcare.
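To make the questionnaire-based refinement concrete, the following sketch (with made-up identifiers and ratings) averages per-subject valence and arousal responses into a single label per segment; the paper does not specify how ratings are aggregated, so this is only an illustrative assumption.

```python
# Hedged sketch (hypothetical data layout): average the per-subject
# valence/arousal ratings collected by the web questionnaire to obtain
# one emotional label per segment for the knowledge base.
from collections import defaultdict
from statistics import mean

# (segment_id, valence, arousal) tuples, one per subject response
responses = [
    ("seg_01", 0.7, 0.6),
    ("seg_01", 0.9, 0.5),
    ("seg_02", -0.3, 0.2),
]

by_segment = defaultdict(list)
for seg_id, valence, arousal in responses:
    by_segment[seg_id].append((valence, arousal))

labels = {
    seg_id: (mean(v for v, _ in vals), mean(a for _, a in vals))
    for seg_id, vals in by_segment.items()
}
# labels["seg_01"] -> (0.8, 0.55): mean valence and arousal for that segment
```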

  • Publication date: 2010-12