Abstract

Automatic segmentation of the prostate in computed tomography (CT) images plays an important role in medical image analysis and image-guided radiation therapy. It remains a challenging problem mainly due to three issues: 1) the image contrast between the prostate and its surrounding tissues is low in prostate CT images, and no obvious boundaries can be observed; 2) unpredictable prostate motion causes large variations in the position of the prostate across treatment images scanned on different treatment days; and 3) the uncertain presence of bowel gas in treatment images significantly changes the image appearance, even for images of the same patient. To address these issues, in this paper we propose a feature-based learning framework for accurate prostate localization in CT images. The main contributions of the proposed method are as follows. 1) Anatomical features are extracted from the input images and used as signatures for each voxel; a feature selection process identifies the most robust and informative features for localizing the prostate. 2) Regions with salient features that are irrelevant to prostate localization, such as regions filled with bowel gas, are automatically filtered out by the proposed method. 3) An online update mechanism adaptively combines population information and patient-specific information to localize the prostate. The proposed method is evaluated on a CT prostate dataset of 24 patients, where each patient has more than 10 longitudinal images scanned at different treatment times. It is also compared with several state-of-the-art prostate localization algorithms for CT images, and the experimental results demonstrate that the proposed method achieves the highest localization accuracy among all methods under comparison.
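
The abstract describes a voxel-wise, feature-based learning pipeline with feature selection and an online update that mixes population and patient-specific information. The snippet below is only a minimal conceptual sketch of that kind of pipeline, under assumptions not stated in the abstract: simple local intensity statistics stand in for the anatomical features, scikit-learn's SelectKBest stands in for the feature selection step, a random forest stands in for the learned voxel classifier, and the "online update" is simulated by refitting on population data augmented with patient-specific samples. It is not the authors' implementation.

```python
# Conceptual sketch (not the authors' method): voxel-wise prostate localization
# as binary classification with feature selection and a naive online update.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

def patch_features(volume, coords, radius=2):
    """Local intensity statistics as stand-in anatomical features per voxel."""
    feats = []
    for z, y, x in coords:
        patch = volume[z - radius:z + radius + 1,
                       y - radius:y + radius + 1,
                       x - radius:x + radius + 1]
        feats.append([patch.mean(), patch.std(), patch.min(),
                      patch.max(), np.median(patch)])
    return np.asarray(feats)

# Synthetic "planning" volume with a bright spherical target as a toy prostate.
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 1.0, size=(40, 40, 40))
zz, yy, xx = np.mgrid[:40, :40, :40]
mask = (zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 < 8 ** 2
vol[mask] += 2.0

# Sample voxels away from the border so every patch stays inside the volume.
coords = np.argwhere(np.ones_like(vol, dtype=bool))
coords = coords[(coords.min(axis=1) >= 2) & (coords.max(axis=1) < 38)]
coords = coords[rng.choice(len(coords), size=4000, replace=False)]
X = patch_features(vol, coords)
y = mask[tuple(coords.T)].astype(int)

# Feature selection keeps the most informative voxel signatures,
# then a population-level classifier is trained on them.
selector = SelectKBest(f_classif, k=3).fit(X, y)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(selector.transform(X), y)

# Naive "online update": augment the population samples with newly labeled
# patient-specific samples (simulated here by reusing a subset) and refit.
X_new, y_new = X[:500], y[:500]
clf.fit(selector.transform(np.vstack([X, X_new])), np.hstack([y, y_new]))
print("training accuracy:", clf.score(selector.transform(X), y))
```

In practice, the classifier's voxel-wise probability map would be thresholded or fed to a shape model to produce the final prostate localization; the refit-on-augmented-data update above is only a simple placeholder for the adaptive population/patient-specific combination described in the abstract.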

  • Publication date: 2012-8