Abstract

Many studies on eye tracking have been conducted across diverse research areas. Nevertheless, eye tracking continues to be limited by low accuracy and severe jitter caused by pupil tremors. Furthermore, because almost all selection interactions, such as click events, rely on dwell-time or eye-blinking methods, eye tracking suffers from both time consumption and involuntary blinking. In this paper, we therefore propose a multi-modal interaction method that combines eye tracking with hand gesture recognition using a commercial hand-gesture controller. The method performs global, intuitive navigation with eye tracking and local, detailed navigation with the hand-gesture controller, and it supports intuitive hand gestures for mouse-button clicking. Experimental results indicate that the targeting time for small points is significantly reduced with the proposed method. In particular, the proposed method is advantageous in large-display, high-spatial-resolution environments. The proposed clicking interaction and modality-switching concept also showed a high recognition rate and a positive training effect, respectively.
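The coarse/fine division of labor described above can be sketched in code. The snippet below is a minimal illustrative model, not the authors' implementation: it assumes gaze supplies a coarse cursor anchor, hand displacement beyond a threshold switches to fine local control with a small gain, and a pinch gesture replaces dwell-time clicking. All class names, gains, and thresholds are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class CursorState:
    x: float
    y: float
    clicked: bool


class MultiModalPointer:
    """Hypothetical sketch of gaze (coarse) + hand (fine) cursor fusion."""

    def __init__(self, fine_gain: float = 0.25, switch_threshold: float = 5.0):
        # fine_gain scales hand motion so small hand moves yield precise control
        self.fine_gain = fine_gain
        # hand displacement magnitude above which we switch to fine (hand) mode
        self.switch_threshold = switch_threshold
        # last gaze fixation, used as the coarse anchor for fine refinement
        self.anchor = (0.0, 0.0)

    def update(self, gaze_xy, hand_dxdy, pinch: bool) -> CursorState:
        dx, dy = hand_dxdy
        if (dx * dx + dy * dy) ** 0.5 < self.switch_threshold:
            # coarse mode: cursor follows the gaze point directly
            self.anchor = gaze_xy
            x, y = gaze_xy
        else:
            # fine mode: cursor = gaze anchor + scaled hand offset
            x = self.anchor[0] + self.fine_gain * dx
            y = self.anchor[1] + self.fine_gain * dy
        # pinch gesture triggers the click, avoiding dwell time and blinking
        return CursorState(x, y, clicked=pinch)
```

For example, after a fixation anchors the cursor at (800, 450), a 40-unit hand move to the right nudges the cursor only 10 units, which matches the abstract's goal of fast targeting of small points on large, high-resolution displays.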

  • Publication date: 2017-9