Abstract

Previous change blindness studies have failed to address the importance of balancing low-level visual salience when producing experimental stimuli for a change detection task. Therefore, prior results suggesting that top-down processes influence change detection may be contaminated by low-level saliency differences in the stimuli used. Here we present a novel technique for generating semi-automated, salience-balanced modifications to a scene, handled by a genetic algorithm coupled with a computational model of bottom-up saliency. The saliency model obtains global saliency values for input images by analysing peaks in feature contrast maps. This quantification approach facilitates the generation of experimental stimuli from natural images and extends a recently investigated approach that used only low-level stimuli (Verma & McOwan, 2009). In this exemplar study, subjects were asked to detect changes in a flicker task containing the original scene image (A) and a synthesised modified version (A'). We find that, when global saliency is balanced between A and A' as well as across all modifications (all instantiations of A'), low-level saliency is indeed a reasonable estimator of change detection performance in comparison with high-level measures such as mouse-click densities. When the saliency of the changes is similar, addition/removal changes are detected more readily than colour changes to the scene.
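The following is a minimal, hypothetical sketch of the stimulus-generation idea described above: a global saliency score derived from peaks in a feature contrast map, and a genetic-algorithm-style fitness that favours modified images A' whose global saliency matches the original A. The functions `global_saliency`, `fitness`, and `select_best`, and the toy contrast map, are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical stand-in for the bottom-up saliency model: a real model would
# build feature contrast maps (colour, intensity, orientation) and reduce
# their peaks to a single global saliency score for the image.
def global_saliency(image: np.ndarray) -> float:
    contrast_map = np.abs(image - image.mean())   # toy feature-contrast map
    return float(contrast_map.max())              # score driven by map peaks

# Fitness of a candidate modification A': reward candidates whose global
# saliency stays close to that of the original scene A, so the pair (and all
# instantiations of A') remain balanced in low-level salience.
def fitness(original: np.ndarray, modified: np.ndarray) -> float:
    return -abs(global_saliency(original) - global_saliency(modified))

# A genetic algorithm would mutate and recombine candidate modifications,
# retaining those with the highest fitness (smallest saliency imbalance)
# as experimental stimuli for the flicker task.
def select_best(original: np.ndarray, candidates: list) -> np.ndarray:
    return max(candidates, key=lambda c: fitness(original, c))
```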

  • Publication date: 2010