Abstract

The visual attention mechanism (VAM) automatically ignores superfluous information and directs attention to the most significant objects when people view images. Numerous bottom-up computational models of visual attention have been proposed to detect the salient regions of an image. In this paper, an improved visual attention computational model based on Itti's model is proposed, which comprises three components. Firstly, the low-level primitive image features are extracted from the CIE L*a*b* color space instead of the RGB color space. Secondly, the feature images are decomposed into wavelet pyramids by a wavelet-based multi-scale transform. Thirdly, a new strategy is used to combine all conspicuity maps into a final saliency map, with weights proportional to the contribution of each conspicuity map. Subjective experiments show that the approach proposed in this paper is more effective than Itti's model.
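To make the three-stage pipeline concrete, the following is a minimal Python sketch, assuming scikit-image for the RGB-to-CIE L*a*b* conversion and PyWavelets for the pyramid decomposition. The function names (`conspicuity_map`, `saliency_map`) are illustrative, and the energy-based weighting is only one plausible reading of "proportional to the contribution"; the paper defines its own contribution measure and fusion rule.

```python
import numpy as np
import pywt                      # PyWavelets, for the wavelet pyramid
from scipy.ndimage import zoom
from skimage import color


def conspicuity_map(channel, wavelet="haar", levels=3):
    """Conspicuity map for one feature channel: summed magnitudes of the
    wavelet detail coefficients, upsampled back to full resolution."""
    coeffs = pywt.wavedec2(channel, wavelet, level=levels)
    acc = np.zeros(channel.shape, dtype=float)
    for detail in coeffs[1:]:                      # (cH, cV, cD) per level
        for band in detail:
            factors = (channel.shape[0] / band.shape[0],
                       channel.shape[1] / band.shape[1])
            acc += np.abs(zoom(band, factors, order=1))
    return acc


def saliency_map(rgb_image):
    # Stage 1: extract low-level features in CIE L*a*b* instead of RGB.
    lab = color.rgb2lab(rgb_image)
    channels = [lab[..., c] for c in range(3)]     # L*, a*, b*

    # Stage 2: decompose each feature image into a wavelet pyramid and
    # collapse it into a conspicuity map.
    maps = [conspicuity_map(ch) for ch in channels]

    # Stage 3: fuse the conspicuity maps with weights proportional to each
    # map's contribution; total energy is used as the contribution measure
    # here (an assumption, not necessarily the paper's definition).
    energies = np.array([m.sum() for m in maps])
    weights = energies / energies.sum()
    saliency = sum(w * m for w, m in zip(weights, maps))
    return saliency / saliency.max()               # normalize to [0, 1]
```

`saliency_map` expects an (H, W, 3) RGB array and returns a single-channel map in [0, 1]; normalizing before fusion keeps any one channel from dominating purely by scale, which is the motivation behind a contribution-weighted combination rather than a uniform average.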