Abstract

Contours are a powerful tool for spatial analysis and have been widely applied in the study of marine background fields. However, efficiently extracting contours from large-scale ocean remote sensing data remains a challenge. In this paper, we propose a parallel approach to contour extraction based on an analysis of the GPU's parallel architecture and CUDA's flexible programmability. A contour tracing algorithm is implemented through block searching, which reduces the number of grid traversals and avoids searching excluded cells. The proposed approach was evaluated on Sea Surface Temperature data of different sizes. The results show the effectiveness and efficiency of the approach, especially for large-scale data, and demonstrate that the GPU offers a clear advantage over the CPU, achieving a high speed-up ratio.
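To make the parallel idea concrete, the sketch below shows the embarrassingly parallel first step such an approach typically relies on: one CUDA thread per grid cell classifies whether the contour crosses that cell, so the subsequent tracing step can skip excluded cells. This is a minimal illustration, not the paper's implementation; the kernel name, parameters, and row-major float layout are assumptions.

```cuda
#include <cuda_runtime.h>

// Hypothetical sketch (not the paper's code): one thread per grid cell
// computes a 4-bit marching-squares-style case for a given isovalue.
// Cells with case 0 or 15 are not crossed by the contour and can be
// excluded from the tracing step that follows.
__global__ void flagContourCells(const float* grid, unsigned char* cellCase,
                                 int width, int height, float isovalue)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width - 1 || y >= height - 1) return;  // cells, not vertices

    // Corner values of the cell (row-major layout assumed).
    float v00 = grid[y * width + x];
    float v10 = grid[y * width + x + 1];
    float v01 = grid[(y + 1) * width + x];
    float v11 = grid[(y + 1) * width + x + 1];

    unsigned char c = 0;
    if (v00 >= isovalue) c |= 1;
    if (v10 >= isovalue) c |= 2;
    if (v11 >= isovalue) c |= 4;
    if (v01 >= isovalue) c |= 8;

    cellCase[y * (width - 1) + x] = c;  // 0 or 15 => no contour segment
}

// Example launch over an SST grid already resident in device memory.
void launchFlagKernel(const float* d_grid, unsigned char* d_case,
                      int width, int height, float isovalue)
{
    dim3 block(16, 16);
    dim3 grid((width - 1 + block.x - 1) / block.x,
              (height - 1 + block.y - 1) / block.y);
    flagContourCells<<<grid, block>>>(d_grid, d_case, width, height, isovalue);
}
```

Because every cell is classified independently, this step maps naturally onto the GPU's parallel architecture; the block-searching tracing stage described in the abstract would then only visit cells whose case is neither 0 nor 15.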

Full Text