Abstract

In recent studies on sparse modeling, nonconvex penalties have received considerable attention due to their superiority over convex penalties in inducing sparsity. In this paper, we study the convergence of a nonconvex iterative thresholding algorithm for solving a class of sparse regularized optimization problems whose associated thresholding functions have jump discontinuities; we therefore call it the iterative jumping thresholding (IJT) algorithm. We first verify the finite convergence of the support and sign of the IJT iterates by exploiting this jump discontinuity. Together with the introduced restricted Kurdyka-Łojasiewicz property, the global convergence of the entire sequence can then be established. Furthermore, we show that under some additional conditions the IJT algorithm converges to a strict local minimizer at an eventually linear rate. Moreover, we derive an a posteriori computable error estimate, which can be used to design an efficient termination rule. It should be pointed out that the ℓ_q quasi-norm (0 < q < 1) is an important subclass of the nonconvex penalties studied in this paper. In particular, when applied to ℓ_q regularization, the IJT algorithm converges to a local minimizer at an eventually linear rate under certain concentration conditions. We also apply the proposed algorithm to sparse signal recovery and synthetic aperture radar imaging problems; the experimental results demonstrate its effectiveness.
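To illustrate the kind of iteration the abstract describes, the following is a minimal sketch of an iterative thresholding scheme in which the thresholding function has a jump discontinuity. Hard thresholding is used here purely as a stand-in example of such a discontinuous operator; the loss (a quadratic data fit), the step size `mu`, and the threshold `tau` are illustrative assumptions, not the paper's specific penalty class or parameter choices.

```python
import numpy as np

def jump_threshold(v, tau):
    # Hard-thresholding operator: zero out entries with |v_i| <= tau.
    # It jumps from 0 to +/-tau at |v_i| = tau, i.e. it is a
    # thresholding function with a jump discontinuity (illustrative).
    return np.where(np.abs(v) > tau, v, 0.0)

def ijt_sketch(A, y, mu=0.1, tau=0.1, iters=500):
    # Generic iterative thresholding for
    #   min_x 0.5 * ||A x - y||^2 + penalty(x):
    # a gradient step on the smooth loss followed by componentwise
    # thresholding. mu should be below 1 / ||A||^2 for stability.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)          # gradient of the quadratic loss
        x = jump_threshold(x - mu * grad, tau)  # discontinuous shrinkage
    return x
```

Once the iterates' support and signs stop changing (the finite convergence the abstract refers to), the iteration above reduces to a smooth fixed-point map on the frozen support, which is what makes an eventually linear rate plausible.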