Abstract

Judgments of upright faces tend to be more rapid than judgments of inverted faces. This is consistent with encoding at different rates via discrepant mechanisms, or via a common mechanism that is more sensitive to upright input. However, to the best of our knowledge no previous study of facial coding speed has tried to equate sensitivity across the characteristics under investigation (e.g. emotional expression, facial gender, or facial orientation). Consequently, we cannot tell whether different decision speeds result from mechanisms that accrue information at different rates, or because facial images can differ in the amount of information they make available. To address this, we examined temporal integration times, the times across which information is accrued toward a perceptual decision, for facial gender and emotional expression. We first identified image pairs that could be differentiated on 80% of trials with protracted presentations (1 s). We then presented these images at a range of brief durations to determine how rapidly performance plateaued, which is indicative of integration time. For upright faces, gender judgments were associated with a protracted integration time relative to expression judgments. This difference was eliminated by inversion, with both gender and expression judgments associated with a common, rapid integration time. Overall, our data suggest that upright facial gender and expression are encoded via distinct processes and that inversion does not just result in impaired sensitivity. Rather, inversion caused gender judgments, which had been associated with a protracted integration, to become associated with a more rapid process.

  • Publication date: 2011