Abstract

The present study examined the extent to which speech and manual gestures spontaneously entrain in a non-communicative task. Participants repeatedly uttered nonsense /CV/ syllables while continuously moving the right index finger in flexion/extension. No instructions to coordinate the two were given. We manipulated the type of syllable uttered (/ba/ vs. /sa/) and vocalization (phonated vs. silent speech). Based on principles of coordination dynamics, stronger entrainment between the finger oscillations and the jaw motion was predicted (1) for /ba/, due to the expected larger amplitude of jaw motion, and (2) in phonated speech, due to the auditory feedback. Fifteen out of twenty participants showed simple ratios of speech to finger cycles (1:1, 1:2, or 2:1). Contrary to our predictions, speech-gesture entrainment was stronger when vocalizing /sa/ than /ba/, and was also more widely distributed around an in-phase mode. Furthermore, the results revealed spatial anchoring and increased temporal variability in jaw motion when producing /sa/. We suggest that this indicates greater control of the speech articulators for /sa/, making the speech performance more receptive to environmental forces and resulting in the greater entrainment to gesture oscillations observed. The speech-gesture coordination was maintained in silent speech, suggesting a somatosensory basis for their endogenous coupling.

  • Publication date: 2015-8