Abstract

Kinect-like depth sensors are widely used in rehabilitation systems. However, a single depth sensor is unreliable because of limb occlusion, data loss, and measurement error. This paper addresses these problems with two Kinect sensors and a data fusion algorithm. First, the two Kinect sensors capture the motion of the healthy-side arm of a hemiplegic patient. Second, the data streams are time-aligned, transformed into a common coordinate frame with the Bursa transformation, and fused with an extended set-membership filter (ESMF). The fused motion data are then mirrored across the body's middle plane, a step termed "mirror motion". Finally, the mirrored motion data drive a wearable robotic arm that moves the patient's paralyzed-side arm, allowing the patient to interactively and actively complete a variety of recovery exercises prompted by 3D animated computer games. The effectiveness of the proposed approach is validated by experiments comparing the Kinect sensors against a VICON system and by experiments on a 7-DOF manipulator, which confirm that two Kinect sensors effectively overcome the limitations of a single sensor.

Full text