Abstract

Social robots are becoming companions in everyday life. To be well accepted by humans, they should efficiently understand the meanings of their partners' motions and body language and respond accordingly. Learning concepts by imitation gives them this ability in a user-friendly way. This paper presents a fast and robust model for incremental learning of concepts by imitation (ILoCI). In ILoCI, observed multimodal spatiotemporal demonstrations are incrementally abstracted and generalized based on their perceptual and functional similarities during imitation. Perceptually similar demonstrations are abstracted by a dynamic model of the mirror neuron system. The functional similarities of demonstrations are learned through a limited number of interactions with the teacher. Incremental relearning of the acquired concepts together through memory rehearsal enables the learner to gradually extract and exploit the common structural relations among demonstrations, which expedites the learning process, especially in the initial stages. The performance of ILoCI is assessed on a standard benchmark dataset and in a human-robot interaction task in which a humanoid robot learns to abstract the teacher's hand motions during imitation. Its performance is also evaluated on occluded observations, which are likely in real environments. The results show the efficiency of ILoCI in concept acquisition, recognition, prediction, and generation, as well as its robustness to occlusions and to high variability in observations.

  • Publication date: 2017-2