Abstract

Given finite attentional resources, the extent to which the emotional aspects of stimuli are processed automatically remains controversial. The present study examined the time course of automatic processing of facial expressions by assessing the P1, N170, and late positive potential (LPP) components of event-related potentials (ERPs) in a modified rapid serial visual presentation (RSVP) paradigm. Observers were required to identify a specified house image and to detect whether a face image appeared at the end of each picture series. There was no significant main effect of emotion type on P1 amplitudes, whereas happy and fearful expressions elicited larger N170 amplitudes than neutral expressions. LPP amplitudes differed significantly across expression types (fearful > happy > neutral). These results indicate that, in an implicit emotional task, there was no processing priority for threatening faces, but expressive faces were discriminated from neutral faces at approximately 250 ms post-stimulus. Moreover, the three types of expressions were discriminated from one another at later stages of processing. The encoding of facial emotional information can thus proceed with a relatively high degree of automaticity even when attentional resources are largely allocated to superficial analysis.

  • Publication date: 2017-3-28
  • Affiliations: Chongqing University; Chongqing University of Arts and Sciences