Attention-Based 3D-CNNs for Large-Vocabulary Sign Language Recognition

Authors: Huang, Jie; Zhou, Wengang*; Li, Houqiang*; Li, Weiping
Source: IEEE Transactions on Circuits and Systems for Video Technology, 2019, 29(9): 2822-2832.
DOI: 10.1109/TCSVT.2018.2870740

Abstract

Sign language recognition (SLR) is an important and challenging research topic in the multimedia field. Conventional SLR techniques rely on hand-crafted features and have achieved only limited success. In this paper, we present attention-based 3D convolutional neural networks (3D-CNNs) for SLR. The framework has two advantages: the 3D-CNNs learn spatio-temporal features from raw video without prior knowledge, and the attention mechanism helps to select the significant clues. When training the 3D-CNN to capture spatio-temporal features, spatial attention is incorporated into the network to focus on the areas of interest. After feature extraction, temporal attention is utilized to select the significant motions for classification. The proposed method is evaluated on two large-scale sign language data sets. The first, which we collected ourselves, is a Chinese sign language data set consisting of 500 categories. The other is the ChaLearn14 benchmark. The experimental results demonstrate the effectiveness of our approach compared with state-of-the-art algorithms.
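To make the temporal-attention idea concrete, the following is a minimal NumPy sketch (not the authors' implementation): each video clip is assumed to yield a spatio-temporal feature vector from the 3D-CNN (here stand-in random vectors), a hypothetical learned scoring vector assigns each clip a relevance score, and a softmax over the scores gives attention weights used to pool the clip features into one video-level feature for classification.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical setup: T clips per video, each with a D-dim feature.
# In the paper these features would come from the 3D-CNN; here they
# are random placeholders for illustration only.
rng = np.random.default_rng(0)
T, D = 8, 16
clip_features = rng.standard_normal((T, D))   # (T, D)

# Temporal attention: score each clip with a (hypothetical) learned
# vector w, normalize the scores with softmax, then form a weighted
# sum of the clip features so salient motions dominate.
w = rng.standard_normal(D)                    # assumed learned parameters
scores = clip_features @ w                    # (T,) one score per clip
alpha = softmax(scores)                       # (T,) weights summing to 1
video_feature = alpha @ clip_features         # (D,) attention-pooled feature

print(alpha.sum(), video_feature.shape)
```

The pooled `video_feature` would then feed a classifier over the sign categories; the spatial-attention branch described in the abstract operates analogously, but over spatial locations inside each clip.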