Abstract

In evolutionary optimization, it is important to understand how fast evolutionary algorithms converge to the optimum per generation, that is, their convergence rate. This letter proposes a new measure of the convergence rate, called the average convergence rate. It is a normalized geometric mean of the reduction ratio of the fitness difference per generation. The average convergence rate is simple to calculate and applicable to most evolutionary algorithms on both continuous and discrete optimization problems. A theoretical study of the average convergence rate is conducted for discrete optimization. Lower bounds on the average convergence rate are derived. The limit of the average convergence rate is analyzed, and then the asymptotic average convergence rate is proposed.
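
For concreteness, one way to write such a normalized geometric mean is sketched below; the symbols used here (f^* for the optimal fitness and f_t for the best fitness found by generation t) are illustrative assumptions rather than necessarily the letter's own notation. Taking the per-generation reduction ratio of the fitness difference to be |f^* - f_k| / |f^* - f_{k-1}|, the geometric mean over t generations telescopes, and normalizing it by subtraction from one gives

% A hedged sketch of the average convergence rate over t generations.
% Assumed notation (not taken verbatim from the letter):
%   f^* : optimal fitness value
%   f_k : best fitness found by generation k
\[
  R_t \;=\; 1 - \left( \prod_{k=1}^{t}
      \frac{\lvert f^{*} - f_k \rvert}{\lvert f^{*} - f_{k-1} \rvert}
  \right)^{1/t}
  \;=\; 1 - \left( \frac{\lvert f^{*} - f_t \rvert}{\lvert f^{*} - f_0 \rvert} \right)^{1/t}.
\]

Under this reading, R_t needs only the initial fitness difference and the current one, which is consistent with the claim that the measure is simple to calculate for most evolutionary algorithms.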