Abstract

The discrete cross-entropy optimization algorithm iteratively samples solutions according to a probability density on the solution space. The density is adapted to the good solutions observed in the current sample before the next sample is produced. The adaptation is controlled by a so-called smoothing parameter. We generalize this model by introducing a flexible concept of feasibility and desirability into the sampling process. In this way, our model covers several other optimization procedures, in particular the ant-based algorithms. The focus of this paper is on some theoretical properties of these algorithms. We examine the first hitting time τ of an optimal solution and give conditions on the smoothing parameter for τ to be finite with probability one. For a simple test case we show that the runtime can be polynomially bounded in the problem size with a probability converging to one. We then investigate the convergence of the underlying density and of the sampling process. We show, in particular, that a constant smoothing parameter, as is often used, makes the sampling process converge in finite time, freezing the optimization at a single solution that need not be optimal. Moreover, we define a smoothing sequence that makes the density converge without freezing the sampling process and that still guarantees the reachability of optimal solutions in finite time. This settles an open question from the literature.
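The abstract itself gives no pseudocode, so the following is only a minimal sketch of the standard discrete cross-entropy scheme it describes: sample from a product density, adapt the density toward the elite solutions, and damp the update with a smoothing parameter. The test function (OneMax), the parameter names (`rho`, `sample_size`, `elite_size`), and all numerical settings are illustrative assumptions, not the paper's notation or results.

```python
# Sketch of the discrete cross-entropy method on {0,1}^n, assuming OneMax
# (maximize the number of ones) as the objective; all names are hypothetical.
import numpy as np

def cross_entropy_onemax(n=20, sample_size=50, elite_size=10,
                         rho=0.5, max_iters=200, seed=0):
    rng = np.random.default_rng(seed)
    p = np.full(n, 0.5)                      # product density over {0,1}^n
    for _ in range(max_iters):
        # Sample solutions independently according to the current density.
        X = (rng.random((sample_size, n)) < p).astype(int)
        scores = X.sum(axis=1)               # OneMax objective values
        elite = X[np.argsort(scores)[-elite_size:]]
        # Adapt the density toward the good (elite) solutions,
        # damped by the smoothing parameter rho in (0, 1].
        p = (1.0 - rho) * p + rho * elite.mean(axis=0)
        if scores.max() == n:                # optimum hit
            break
    return p, scores.max()

if __name__ == "__main__":
    p, best = cross_entropy_onemax()
    print("best value found:", best)
```

With a constant `rho` the density components are driven toward 0 or 1 and the sampling eventually freezes at a single bit string, which is the behavior the abstract analyzes; a suitably decreasing smoothing sequence avoids this freezing while, as the paper shows, preserving reachability of the optimum.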

  • Publication date: 2014-10