Abstract

We investigated the neural representation of observed actions in the human parietal and premotor cortex, which together comprise the action observation network, or mirror neuron system, for action recognition. Participants observed object-directed hand actions in which the action and other stimulus properties were independently manipulated: action (grasp or touch), object (cup or bottle), perspective (1st or 3rd person), hand (right or left), and image size (large or small). We then used multi-voxel pattern analysis to determine whether each feature could be correctly decoded from regional activity patterns. The early visual area showed significant above-chance classification accuracy, particularly high for perspective, hand, and size, consistent with the pixel-wise dissimilarity of the stimuli. In contrast, the highest decoding accuracy for action was observed in the anterior intraparietal sulcus (aIPS) and the ventral premotor cortex (PMv). Moreover, the action decoder generalized correctly to images with high pixel-wise dissimilarity in the parietal and premotor regions, but not in the visual area. Our study indicates that the parietal and premotor regions encode observed actions independently of retinal variation, which may subserve our capacity for invariant recognition of others' actions.
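The decoding and cross-generalization analyses described above follow the general logic of multi-voxel pattern analysis: train a classifier on voxel activity patterns from one set of trials and test it on held-out trials, or on trials from a visually dissimilar stimulus subset. The sketch below illustrates that logic with simulated data; the data dimensions, the linear SVM, the leave-one-run-out cross-validation scheme, and the subset split are assumptions for illustration, not details taken from the paper.

```python
# Minimal MVPA sketch with hypothetical data; not the authors' actual pipeline.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

rng = np.random.default_rng(0)
n_runs, trials_per_run, n_voxels = 8, 20, 200  # hypothetical dimensions

# Simulated voxel patterns for one region (e.g., aIPS) and binary action labels
X = rng.standard_normal((n_runs * trials_per_run, n_voxels))
y = rng.integers(0, 2, size=n_runs * trials_per_run)   # 0 = grasp, 1 = touch
runs = np.repeat(np.arange(n_runs), trials_per_run)    # run labels for CV

# Within-set decoding: leave-one-run-out cross-validated classification accuracy
clf = LinearSVC(C=1.0, max_iter=10000)
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")

# Cross-decoding (generalization) test: train on one stimulus subset
# (e.g., 1st-person images) and test on the other (e.g., 3rd-person images).
subset = rng.integers(0, 2, size=len(y)).astype(bool)  # hypothetical split
clf.fit(X[subset], y[subset])
print(f"cross-decoding accuracy: {clf.score(X[~subset], y[~subset]):.2f}")
```

On real data, above-chance within-set accuracy indicates that the region carries information about the decoded feature, while above-chance cross-decoding indicates that this information generalizes across the manipulated visual dimension.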

  • Publication date: 2011-5-15