Abstract

This paper presents an adaptive coupled variational model accelerated on a graphics processing unit (GPU). First, following Meyer's theory, the model decomposes a given image into the sum of two components, a geometric structure part and an oscillating part, and an adaptive variational model based on a diffusion tensor is used to enhance the structure part. Second, a nonlocal means filter removes noise from the oscillating part while preserving image edges and fine details. The coupled variational model is then processed in parallel using the general-purpose computing capability of the GPU, and a combined strategy of coalesced memory access and shared memory greatly improves performance. Experimental results show that the proposed model preserves edge information and enhances texture features well, and that GPU processing is at least 15 times faster than serial CPU processing.
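As an illustration of the second step, the following is a minimal sketch of a nonlocal means filter in NumPy; it is not the paper's implementation, and the patch size, search-window size, and smoothing parameter `h` are assumed values chosen for demonstration. Each pixel is replaced by a weighted average of pixels in a search window, with weights decaying in the squared distance between the surrounding patches, which is what lets the filter smooth noise while keeping edges.

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=0.15):
    """Minimal nonlocal-means sketch (illustrative, not the paper's code).

    img    : 2-D grayscale array with values in [0, 1]
    patch  : side length of the similarity patch (odd)
    search : side length of the search window (odd)
    h      : smoothing parameter controlling weight decay
    """
    pad, spad = patch // 2, search // 2
    # Pad so every patch and search window stays inside the array.
    padded = np.pad(img, pad + spad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad + spad, j + pad + spad
            ref = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            acc, wsum = 0.0, 0.0
            # Compare the reference patch with every patch in the window.
            for di in range(-spad, spad + 1):
                for dj in range(-spad, spad + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pad:ni + pad + 1,
                                  nj - pad:nj + pad + 1]
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / (h * h))
                    acc += w * padded[ni, nj]
                    wsum += w
            out[i, j] = acc / wsum
    return out
```

The doubly nested pixel loop here is exactly the part that maps naturally onto GPU threads: each output pixel is independent, and the patch comparisons within one search window reuse neighboring data, which is why a shared-memory tiling strategy pays off in the parallel version.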

Full text