Abstract

This study extends state-of-the-art deep learning convolutional neural networks (CNNs) to the classification of echocardiography video images, aiming to assist clinicians in the diagnosis of heart diseases. Specifically, the network architecture embraces hand-crafted features within a data-driven learning framework, incorporating both the spatial and the temporal information sustained by the video images of the moving heart and giving rise to two strands of two-dimensional CNNs. In particular, the acceleration measurement along the time direction at each point is calculated using a dense optical flow technique to represent temporal motion information. Subsequently, the two networks are fused via linear combination of the vectors of class scores obtained from each network. As a result, this architecture achieves the best classification results for eight viewpoint categories of echo videos, with an accuracy of 92.1%, whereas 89.5% is achieved using the single spatial CNN alone. When only the three primary locations are considered, an accuracy of 98% is attained. In addition, comparisons with a number of well-known hand-engineered approaches are performed, including 2D KAZE, 2D KAZE with Optical Flow, 3D KAZE, Optical Flow, 2D SIFT and 3D SIFT, which deliver accuracy rates of 89.4%, 84.3%, 87.9%, 79.4%, 83.8% and 73.8%, respectively.
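The sketch below is a minimal illustration, not the authors' exact pipeline, of the two ideas summarised above: approximating per-pixel acceleration as the temporal difference of consecutive dense optical flow fields (Farneback flow is assumed here as a stand-in for the paper's dense flow method), and fusing the two networks' class-score vectors linearly before taking the predicted viewpoint. The function names `acceleration_maps`, `fuse_scores` and the equal fusion weight `w=0.5` are illustrative assumptions.

```python
# Illustrative sketch only; the dense-flow variant and fusion weights used in
# the paper may differ from the assumptions made here.
import cv2
import numpy as np


def acceleration_maps(frames):
    """Approximate per-pixel acceleration from a sequence of grayscale frames.

    Flow between consecutive frames gives velocity; the difference of
    successive flow fields approximates acceleration along time.
    """
    flows = [
        cv2.calcOpticalFlowFarneback(frames[i], frames[i + 1], None,
                                     0.5, 3, 15, 3, 5, 1.2, 0)
        for i in range(len(frames) - 1)
    ]
    return [flows[i + 1] - flows[i] for i in range(len(flows) - 1)]


def fuse_scores(spatial_scores, temporal_scores, w=0.5):
    """Linear (late) fusion of the spatial and temporal class-score vectors."""
    fused = w * np.asarray(spatial_scores) + (1.0 - w) * np.asarray(temporal_scores)
    return int(np.argmax(fused))  # index of the predicted viewpoint class
```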