Abstract

Since the quality of the fused outcome is determined by the amount of information captured from the source images, a multi-modal medical image fusion method is developed in the shift-invariant shearlet transform (SIST) domain. The two-state Hidden Markov Tree (HMT) model is extended to the SIST domain to describe the cross-scale and inter-subband dependencies of the SIST coefficients. Based on this model, we explain why the conventional Average-Maximum fusion scheme is not the best rule for medical image fusion, and a new scheme is therefore developed in which the probability density function and the standard deviation of the SIST coefficients are employed to calculate the fused coefficients. Finally, the fused image is obtained by directly applying the inverse SIST. By integrating the SIST and the HMT model, more spatial feature information of the singularities and more functional information can be preserved and transferred into the fused result. Visual and statistical analyses demonstrate that the fusion quality is significantly improved over that of five typical methods in terms of entropy, mutual information, edge information, standard deviation, peak signal-to-noise ratio, and structural similarity. In addition, color distortion is suppressed to a great extent, providing better visual perception.
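To make the coefficient-level fusion idea concrete, the following is a minimal, hypothetical Python sketch of fusing one high-frequency subband from two sources by combining a local standard deviation (activity) measure with per-coefficient probabilities, such as those an HMT model might provide. The function name, window size, and the simple product-and-select rule are illustrative assumptions; the exact rule based on the probability density function and standard deviation is defined in the body of the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def fuse_subband(coeff_a, coeff_b, prob_a, prob_b, window=3):
    """Illustrative per-coefficient fusion of two subbands (not the paper's exact rule).

    coeff_a, coeff_b : high-frequency subband coefficients of the two source images
    prob_a, prob_b   : hypothetical per-coefficient probabilities, e.g. an HMT
                       "large-state" posterior for each coefficient
    """
    def local_std(x):
        # Local standard deviation in a window, used as an activity measure.
        mean = uniform_filter(x, size=window)
        mean_sq = uniform_filter(x * x, size=window)
        return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

    # Combine activity with the model probability for each source.
    act_a = local_std(coeff_a) * prob_a
    act_b = local_std(coeff_b) * prob_b

    # Select, per coefficient, the source with the larger combined measure.
    return np.where(act_a >= act_b, coeff_a, coeff_b)
```

The same selection would be applied to every directional subband at every scale, after which the fused image is recovered with the inverse transform.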