Abstract

Biological vision systems have evolved over millions of years, resulting in complex neural structures for the representation and processing of visual stimuli. Moreover, biological visual systems are typically far more efficient than current machine vision systems. The present report describes a task-independent image representation scheme that simulates a biological neural vision mechanism in the early visual system. We designed a neural model involving multiple types of computational units to simulate ganglion cells and their non-classical receptive fields, local feedback control circuits, and receptive-field dynamic self-adjustment mechanisms in the retina. Beyond the pixel level, our model is able to represent images self-adaptively and rapidly. In addition, the improved representation was found to substantially facilitate image segmentation, figure-ground separation, saliency detection, and object recognition. We propose that these improvements arise because retinal ganglion cells can resize their receptive fields, providing multi-scale analysis, a neighborhood-referencing function, and a localized synthesis function. The ganglion cell layer is the starting point for a diverse variety of subsequent visual processing: the features and image representation extracted by the ganglion cells are transmitted to higher levels of the visual system for many visual tasks, such as image segmentation, contour detection, and object recognition. The visual representation formed at this early stage of the visual system is therefore universal and independent of the specific visual task.
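The abstract gives no implementation details, so as a rough illustration of the kind of mechanism it describes, the sketch below applies a conventional difference-of-Gaussians (DoG) center-surround ganglion-cell filter whose scale is selected per pixel from local contrast. The function names, the three-scale bank, and the variance-based selection rule are assumptions made here for illustration only, not the authors' model.

```python
# Illustrative sketch only: a classical difference-of-Gaussians (DoG) ganglion-cell
# receptive field whose size is chosen per pixel from local image contrast.
# The scale rule and all parameters are assumptions, not the paper's model.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def dog_response(image, sigma_center, surround_ratio=3.0):
    """Center-surround (DoG) response at one receptive-field size."""
    center = gaussian_filter(image, sigma_center)
    surround = gaussian_filter(image, sigma_center * surround_ratio)
    return center - surround

def adaptive_ganglion_representation(image, sigmas=(1.0, 2.0, 4.0), window=9):
    """Pick, per pixel, a DoG scale from a small bank according to local contrast:
    high local variance -> small receptive field (fine detail),
    low local variance  -> large receptive field (smooth region)."""
    image = image.astype(float)
    # Local variance serves as a crude stand-in for a local feedback signal.
    mean = uniform_filter(image, window)
    var = uniform_filter(image**2, window) - mean**2
    # Map variance to an index into the scale bank (largest scale for flattest areas).
    norm = (var - var.min()) / (np.ptp(var) + 1e-12)
    idx = np.round((1.0 - norm) * (len(sigmas) - 1)).astype(int)

    responses = np.stack([dog_response(image, s) for s in sigmas])  # (scales, H, W)
    rows, cols = np.indices(image.shape)
    return responses[idx, rows, cols]  # one response per pixel at its chosen scale

if __name__ == "__main__":
    img = np.random.rand(64, 64)          # placeholder image
    rep = adaptive_ganglion_representation(img)
    print(rep.shape)                      # (64, 64)
```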