Abstract

Image registration is one of the core technologies in medical image processing and analysis. According to the nature of the transformation, registration methods fall into two main classes: rigid and non-rigid. In this paper, a novel learning-based dissimilarity metric and its approximation are proposed and applied to the rigid and non-rigid registration of medical images, respectively. The metric uses the Bhattacharyya distance to measure the dissimilarity of test image pairs against the expected intensity distributions learnt from registered training image pairs. To assess the robustness and accuracy of the proposed metric in multi-modal rigid registration, 1800 randomized CT-T1 registrations were performed and evaluated through the Retrospective Image Registration Evaluation (RIRE) project. For non-rigid registration, the approximation of the proposed metric is combined with a Markov Random Field (MRF) modeled non-rigid registration approach. Non-rigid registration experiments on 2D synthetic images with artificial deformation fields and on 3D images from the Simulated Brain Database were performed and evaluated. The results of both the rigid and non-rigid experiments demonstrate that the proposed metric outperforms the state-of-the-art approach (Mutual Information) and a closely related approach (Kullback-Leibler distance).
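The core idea named above can be illustrated with a minimal sketch: measure the Bhattacharyya distance between an expected joint intensity distribution (learnt from registered training pairs) and the distribution observed under a candidate alignment. This is not the authors' implementation; the histogram binning, intensity range, and the distance form D = -ln Σᵢ √(pᵢ qᵢ) are assumptions chosen for illustration.

```python
# Illustrative sketch (assumptions, not the paper's code): Bhattacharyya
# distance between two joint intensity histograms, used as a dissimilarity
# measure of how far an observed alignment deviates from a learnt model.
import numpy as np

def joint_histogram(img_a, img_b, bins=32):
    """Normalized joint intensity histogram of two spatially aligned images."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                bins=bins, range=[[0, 256], [0, 256]])
    return hist / hist.sum()

def bhattacharyya_distance(p, q, eps=1e-12):
    """D_B(p, q) = -ln sum_i sqrt(p_i * q_i); approaches 0 when p == q."""
    bc = np.sum(np.sqrt(p * q))  # Bhattacharyya coefficient in [0, 1]
    return -np.log(bc + eps)

# Toy data standing in for a learnt model and two candidate alignments.
rng = np.random.default_rng(0)
train = rng.integers(0, 256, (64, 64))
expected = joint_histogram(train, train)               # learnt distribution
observed_good = joint_histogram(train, train)          # well-aligned pair
observed_bad = joint_histogram(train, rng.permuted(train, axis=0))
```

Under this sketch, a well-aligned pair yields a distance near zero against the learnt model, while a misaligned pair yields a larger distance, which is the behavior a registration optimizer would exploit.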