Abstract

Robotic manipulation systems that operate in unstructured environments must be responsive to feedback from sensors that are disparate in both location and modality. This paper describes a distributed framework for assimilating the disparate feedback provided by force and vision sensors, including active vision sensors, in robotic manipulation systems. The main components of the expectation-based framework are object schemas and port-based agents. Object schemas represent the manipulation task internally in terms of geometric models with attached sensor mappings. Object schemas are dynamically updated by sensor feedback and thus provide the ability to perform three-dimensional spatial reasoning during task execution. Because object schemas possess knowledge of sensor mappings, they can both select appropriate sensors and guide active sensors based on task characteristics. Port-based agents execute the reference inputs provided by object schemas and are defined in terms of encapsulated control strategies. Experimental results demonstrate the capabilities of the framework in two ways: the performance of manipulation tasks with active camera-lens systems, and the assimilation of force and vision sensory feedback.

  • Publication date: 1999-09
