Abstract

Compared with traditional computer animation modeling, artificial life animation focuses on agent behavior modeling, processing the behavioral and motion details of individual agents. This incurs a great computational cost, especially in the communication and computation generated by the perception module, making it difficult to parallelize simulation and rendering in large-scale animation. In this paper, a new Olfactory-Visual Perception Model is proposed for large scenes and parallel architectures. First, the model replaces neighbor-search perception with a combined perception that unifies active visual and passive olfactory sensing. Second, parallel tasks are further divided by sensor. Finally, a Perception-Reduction approach is designed to achieve parallel computing in this system. Experiments show that the model resolves the difficulties restricting parallelization in artificial life animation systems, effectively increases resource utilization in large-scene simulation, and guarantees real-time performance.
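The abstract's core idea, replacing per-agent neighbor search with a passively sensed olfactory field that agents sample locally, can be sketched as follows. This is a minimal illustration only, not the paper's actual algorithm: the grid size, cell size, scent-deposit rule, and 3x3 sampling window are all assumptions chosen for brevity, and a real implementation would run the deposit and perceive phases as parallel tasks.

```python
GRID, CELL = 8, 10.0  # assumed: an 8x8 scent grid of 10-unit cells

def deposit_scent(agents):
    """Passive olfactory phase: each agent deposits scent into its grid cell.
    Later perception reads this shared field instead of scanning all agents,
    which is what makes the per-agent work independent and parallelizable."""
    field = [[0.0] * GRID for _ in range(GRID)]
    for x, y in agents:
        field[int(y // CELL)][int(x // CELL)] += 1.0
    return field

def perceive(agent, field):
    """Reduction phase: an agent samples only its own cell and the 8 cells
    around it -- an O(1) lookup replacing an O(n) neighbor search."""
    cx, cy = int(agent[0] // CELL), int(agent[1] // CELL)
    total = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            gx, gy = cx + dx, cy + dy
            if 0 <= gx < GRID and 0 <= gy < GRID:
                total += field[gy][gx]
    return total

agents = [(5.0, 5.0), (12.0, 8.0), (70.0, 70.0)]
field = deposit_scent(agents)
print(perceive(agents[0], field))  # senses itself and the nearby agent: 2.0
```

Because every call to `perceive` only reads the shared field, the perception step for all agents can be distributed across workers with no inter-agent communication, which is the property the parallelization in the paper depends on.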

Full text