Abstract

Hand gestures offer an alternative to common human-computer interfaces (e.g., keyboard, mouse, gamepad), providing a more intuitive way to navigate menus and multimedia applications. One of the most difficult issues in designing a hand gesture recognition system is introducing new detectable gestures at low cost, a property known as gesture scalability. Introducing a new gesture usually requires a recording session involving real subjects. This paper presents a training framework for hand posture detection systems based on a learning scheme fed with synthetically generated range images. Different configurations of a 3D hand model yield sets of synthetic subjects, which have shown good performance in separating the gestures of several state-of-the-art dictionaries. The proposed approach allows new dictionaries to be learned without recording real subjects, so it is fully scalable in terms of gestures. The accuracy rates obtained for the evaluated dictionaries are comparable to, and in some cases better than, those reported for training schemes based on real subjects.
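The core idea of the abstract, sampling different configurations of a hand model and rendering each one as a range (depth) image to build a synthetic training set, can be illustrated with a toy sketch. The paper's actual 3D model and renderer are not described here, so everything below (the stick-finger "hand", the depth values, the angle ranges) is a hypothetical stand-in, not the authors' implementation:

```python
import numpy as np

def render_depth(joint_angles, size=64):
    """Render a toy range image of a stick 'hand': each finger is a line of
    pixels whose direction is set by one joint angle (a crude stand-in for
    posing a full 3D hand model and rendering its depth map)."""
    img = np.full((size, size), np.inf)        # background = infinite depth
    palm = np.array([size // 2, size - 8])     # palm base near the bottom centre
    for ang in joint_angles:                   # one angle per finger
        direction = np.array([np.sin(ang), -np.cos(ang)])
        for t in range(size // 2):             # march outward along the finger
            x, y = (palm + t * direction).astype(int)
            if 0 <= x < size and 0 <= y < size:
                img[y, x] = min(img[y, x], 50.0 + 0.1 * t)  # depth grows along finger
    img[np.isinf(img)] = 0.0                   # flatten the background to depth 0
    return img

def synthetic_subjects(n_subjects, n_fingers=5, seed=0):
    """Sample random hand-model configurations (joint angles) and render one
    synthetic range image per configuration, yielding a training set with no
    recording session of real subjects."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(-0.6, 0.6, size=(n_subjects, n_fingers))
    return np.stack([render_depth(a) for a in angles])

dataset = synthetic_subjects(4)
print(dataset.shape)  # (4, 64, 64): four synthetic subjects, one depth image each
```

Adding a new gesture to the dictionary then amounts to defining a new range of joint angles and re-running the generator, which is what makes the scheme scalable in terms of gestures.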

  • Publication date: July 2014