Abstract

In college classrooms, large quantities of digital-media data capturing students' affective behaviors are recorded by cameras on a daily basis. To provide a benchmark for affect recognition on such large data collections, in this paper we propose the first large-scale spontaneous and multimodal student affect database. All videos in our database were selected from these daily recordings. The recruited subjects extracted single-person image sequences of their own affective behaviors and then annotated the affects according to standard rules set beforehand. In total, we collected 2117 image sequences covering 11 types of student affective behaviors in a variety of classes. The Beijing Normal University Large-scale Spontaneous Visual Expression Database version 2.0 (BNU-LSVED2.0) extends our previous BNU-LSVED1.0 and has several new characteristics. The nonverbal behaviors and emotions in the new version are more spontaneous, since all image sequences come from videos recorded in actual classes rather than from behaviors elicited by induction videos. Moreover, the database includes a greater variety of affective behaviors from which students' learning status during class can be inferred, including facial expressions, eye movements, head postures, body movements, and gestures. In addition, instead of providing only categorical emotion labels, the new version also provides affective behavior labels and multi-dimensional Pleasure-Arousal-Dominance (PAD) labels for the image sequences. Both the detailed subjective descriptions and the statistical analyses of the self-annotation results demonstrate the reliability and effectiveness of the multi-dimensional labels in the database.