Abstract

Purpose: Due to the low contrast, blurry boundaries, and abundant shadows in breast ultrasound (BUS) images, automatic tumor segmentation remains a challenging task. Deep learning offers a solution to this problem, since it can effectively extract representative features of lesions and background in BUS images.

Methods: A novel automatic tumor segmentation method is proposed that combines a dilated fully convolutional network (DFCN) with a phase-based active contour (PBAC) model. The DFCN is an improved fully convolutional neural network with dilated convolutions in its deeper layers, fewer parameters, and batch normalization; its large receptive field helps separate tumors from the background. Because the predictions made by the DFCN are relatively rough, owing to blurry boundaries and variations in tumor size, the PBAC model, which combines region-based and phase-based energy functions, is applied to further refine the segmentation results. The DFCN is trained and tested on dataset 1, which contains 570 BUS images from 89 patients. On dataset 2, a support vector machine (SVM) classifier with 10-fold cross-validation is employed to verify diagnostic ability, using 460 features extracted from the segmentation results of the proposed method.

Results: The proposed method was compared with three state-of-the-art networks: FCN-8s, U-Net, and the dilated residual network (DRN). Experimental results on 170 BUS images show that the proposed method achieved a Dice similarity coefficient of 88.97 ± 10.01%, a Hausdorff distance (HD) of 35.54 ± 29.70 pixels, and a mean absolute deviation (MAD) of 7.67 ± 6.67 pixels, the best segmentation performance among the compared methods. On dataset 2, the area under the curve (AUC) of the 10-fold SVM classifier was 0.795, comparable to classification based on manual segmentation results.

Conclusions: The proposed automatic method may be sufficiently accurate, robust, and efficient for medical ultrasound applications.
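The receptive-field benefit of dilated convolutions mentioned in the Methods can be illustrated with a small sketch. For a stack of stride-1 convolutions, the receptive field grows as RF = 1 + Σ (k − 1)·d over the layers, where k is the kernel size and d the dilation rate. The dilation rates below are illustrative only, not the actual DFCN configuration:

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of stacked stride-1 convolutions:
    RF = 1 + sum((k - 1) * d) over all layers."""
    return 1 + sum((k - 1) * d for k, d in zip(kernel_sizes, dilations))

# Three 3x3 layers: plain vs. dilated (rates 1, 2, 4 are hypothetical).
plain = receptive_field([3, 3, 3], [1, 1, 1])
dilated = receptive_field([3, 3, 3], [1, 2, 4])
print(plain, dilated)  # -> 7 15
```

With the same number of parameters, the dilated stack covers more than twice the context of the plain stack, which is why dilation in deeper layers helps separate large tumors from the background.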
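The Dice similarity coefficient reported in the Results measures the overlap between a predicted mask and the ground truth. A minimal sketch, using small hypothetical binary masks rather than real BUS segmentations:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Illustrative 3x3 masks (1 = tumor pixel, 0 = background).
pred = np.array([[0, 1, 1],
                 [0, 1, 1],
                 [0, 0, 0]], dtype=bool)
truth = np.array([[0, 1, 1],
                  [0, 1, 0],
                  [0, 0, 0]], dtype=bool)

print(round(dice(pred, truth), 4))  # -> 0.8571
```

A coefficient of 1.0 means perfect overlap; the Hausdorff distance and mean absolute deviation, by contrast, quantify worst-case and average boundary error in pixels.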