Abstract

To improve the performance of standard particle swarm optimization (PSO), which suffers from premature convergence and slow convergence speed, many PSO variants introduce numerous stochastic or aimless strategies to address the convergence problem. However, mutual learning between elite particles is neglected, even though it might accelerate convergence and help prevent premature convergence. In this paper, we introduce DSLPSO, which integrates three novel strategies, namely tabu detecting, shrinking, and local learning, into PSO to overcome the aforementioned shortcomings. In DSLPSO, the search space of each dimension is divided into many equal subregions. The tabu detecting strategy, which has good ergodicity over the search space, helps the global historical best particle detect a more suitable subregion and thus jump out of a local optimum. The shrinking strategy enables DSLPSO to optimize within a smaller search space and thereby achieve a higher convergence speed. In the local learning strategy, a differential between two elite particles is used to increase solution accuracy. The experimental results show that DSLPSO outperforms several other participating PSO variants on most of the tested functions, offering faster convergence speed, higher solution accuracy, and stronger reliability.
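For context, the sketch below shows the canonical PSO velocity and position update that DSLPSO builds on, followed by an equal-subregion partition of the per-dimension search space as described above. All parameter values (inertia weight, acceleration coefficients, bounds, swarm size, number of subregions) and the toy Sphere objective are illustrative assumptions rather than settings from the paper, and the tabu detecting, shrinking, and local learning strategies themselves are not reproduced here.

```python
import numpy as np

def sphere(x):
    """Toy objective (Sphere function), used only for illustration."""
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(0)
dim, swarm_size, iters = 10, 30, 200
lo, hi = -100.0, 100.0                      # per-dimension search bounds (assumed)
w, c1, c2 = 0.729, 1.494, 1.494             # common PSO coefficients (assumed)

x = rng.uniform(lo, hi, (swarm_size, dim))  # particle positions
v = np.zeros_like(x)                        # particle velocities
pbest, pbest_f = x.copy(), sphere(x)        # personal historical bests
gbest = pbest[np.argmin(pbest_f)].copy()    # global historical best particle

for _ in range(iters):
    # Canonical PSO update: inertia + cognitive (pbest) + social (gbest) terms.
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = sphere(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

# Equal-subregion partition of each dimension; DSLPSO's tabu detecting strategy
# works on such subregions. The number of subregions here is an assumption.
n_sub = 10
edges = np.linspace(lo, hi, n_sub + 1)
gbest_subregion = np.minimum(np.digitize(gbest, edges) - 1, n_sub - 1)
print(gbest_subregion)  # subregion index occupied by gbest in each dimension
```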