Abstract

Vision-based action recognition has been widely used in human-machine interfaces. However, recognizing human actions across different viewpoints remains a challenging problem. To address this issue, a novel multi-view space hidden Markov models (HMMs) algorithm for view-invariant action recognition is proposed. First, a view-insensitive feature representation, combining the bag-of-words of interest points with the amplitude histogram of optical flow, is used to describe human action sequences. The combined features not only overcome the difficulty of associating the traditional bag-of-words of interest points with HMMs, but also greatly reduce the redundancy in the video. Second, the view space is partitioned into multiple sub-view spaces according to the camera rotation viewpoint, and human action models are trained with HMMs in each sub-view space. During recognition, the probabilities of the test sequence (i.e., observation sequence) under the given multi-view space HMMs are computed, and the similarity between each sub-view space and the viewpoint of the test sequence is analyzed. Finally, the action with an unknown viewpoint is recognized via probability-weighted combination. Experimental results on the multi-view action dataset IXMAS demonstrate that the proposed approach is efficient and effective for view-invariant action recognition.
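The abstract only outlines the probability-weighted combination step; the following is a minimal sketch of that idea, not the authors' implementation. It assumes HMM objects exposing an hmmlearn-style `score(obs)` method returning log-likelihood, and the names `recognize_action`, `models`, and the softmax-based viewpoint weighting are illustrative assumptions.

```python
import numpy as np

def recognize_action(obs_seq, models, view_weights=None):
    """Probability-weighted combination over multi-view-space HMMs (sketch).

    obs_seq      : T x D feature matrix for the test sequence.
    models       : dict mapping action label -> list of per-sub-view-space HMMs,
                   each with a score(obs) method returning log P(obs | model)
                   (hmmlearn-style; an assumption, not the paper's API).
    view_weights : optional per-sub-view weights; if None, they are derived from
                   the relative likelihoods as a proxy for how similar each
                   sub-view space is to the unknown test viewpoint.
    """
    actions = list(models.keys())
    n_views = len(next(iter(models.values())))

    # Log-likelihood of the observation sequence under every (action, sub-view) HMM.
    loglik = np.array([[models[a][v].score(obs_seq) for v in range(n_views)]
                       for a in actions])              # shape: (n_actions, n_views)

    if view_weights is None:
        # Viewpoint similarity estimate: softmax over the best per-view log-likelihood.
        best_per_view = loglik.max(axis=0)
        w = np.exp(best_per_view - best_per_view.max())
        view_weights = w / w.sum()

    # Weighted combination of per-view likelihoods for each action class.
    combined = (np.exp(loglik - loglik.max()) * view_weights).sum(axis=1)
    return actions[int(np.argmax(combined))]
```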