Abstract

The main contribution of this paper is a convergent and flexible deconvolution algorithm based on the well-known Tikhonov-regularized least-squares estimate under non-negativity constraints. The key idea behind the algorithm is to replace the minimization of the cost function at each iteration with the minimization of a surrogate function, which guarantees a decrease in the cost function. The derivation can also be interpreted as an expectation-maximization (EM) process, in which the surrogate function plays the role of the negative conditional expectation, so that minimizing the surrogate is equivalent to maximizing the conditional expectation. The proposed algorithm has several favorable properties: the cost function decreases monotonically, the iterates remain in the feasible region without explicit projection, and no pre-determined step size is required. Although the algorithm can be seen as a special case of Lanteri's method, this paper proves theoretically that the iteration sequence converges to a global solution. Simulation results confirm that Lanteri's method and the proposed algorithm behave similarly, and demonstrate that the proposed algorithm performs comparably to other commonly used methods in terms of restoration quality, convergence speed, and computational cost.
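To make the surrogate-minimization idea concrete, the sketch below shows a generic multiplicative majorization-minimization (MM) update for the Tikhonov-regularized non-negative least-squares cost \(\|Hx-y\|^2 + \lambda\|x\|^2\), in the split-gradient style associated with Lanteri's method. This is an illustrative assumption, not the paper's exact algorithm: the function name, data, and update form are hypothetical, and the update is valid under the assumption that \(H\) and \(y\) have non-negative entries.

```python
import numpy as np

def mm_tikhonov_nnls(H, y, lam=0.1, n_iter=200, eps=1e-12):
    """Multiplicative MM-style iteration for
        min_x ||H x - y||^2 + lam * ||x||^2   subject to x >= 0.

    Illustrative sketch only (assumed form, not the paper's algorithm).
    Assumes H and y are elementwise non-negative, so the multiplicative
    factor is non-negative and iterates stay feasible automatically.
    """
    x = np.ones(H.shape[1])              # strictly positive starting point
    costs = []
    for _ in range(n_iter):
        r = H @ x - y
        costs.append(r @ r + lam * (x @ x))
        num = H.T @ y                    # non-negative numerator
        den = H.T @ (H @ x) + lam * x + eps
        x = x * num / den                # multiplicative update: no step size,
                                         # x >= 0 preserved, cost non-increasing
    return x, costs

# Hypothetical small example: non-negative blur matrix and noiseless data
rng = np.random.default_rng(0)
H = rng.random((8, 5))
y = H @ np.array([1.0, 0.0, 2.0, 0.5, 0.0])
x_hat, costs = mm_tikhonov_nnls(H, y)
```

Because each iterate minimizes a separable surrogate that majorizes the cost at the current point, the cost sequence is non-increasing and no projection onto the feasible region is needed, matching the properties listed above.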