Abstract

Algorithms for solving variational regularization of ill-posed inverse problems usually involve operators that depend on a collection of continuous parameters. When the operators enjoy some (local) regularity, these parameters can be selected using the so-called Stein Unbiased Risk Estimator (SURE). While this selection is usually performed by an exhaustive search, we address in this work the problem of using the SURE to efficiently optimize a collection of continuous parameters of the model. When considering nonsmooth regularizers, such as the popular ℓ1-norm corresponding to the soft-thresholding mapping, the SURE is a discontinuous function of the parameters, which prevents the use of gradient-descent optimization techniques. Instead, we focus on an approximation of the SURE based on finite differences, as proposed by Ramani and Unser for the Monte-Carlo SURE approach. Under mild assumptions on the estimation mapping, we show that this approximation is a weakly differentiable function of the parameters and that its weak gradient, coined the Stein Unbiased GrAdient estimator of the Risk (SUGAR), provides an asymptotically (with respect to the data dimension) unbiased estimate of the gradient of the risk. Moreover, in the particular case of soft-thresholding, it is also proved to be a consistent estimator. This gradient estimate can then serve as the basis of a quasi-Newton optimization. The computation of the SUGAR relies on the closed-form (weak) differentiation of the nonsmooth function. We provide its expression for a large class of iterative methods, including proximal splitting methods, and apply our strategy to regularizations involving nonsmooth convex structured penalties. Illustrations on various image restoration and matrix completion problems are given.
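As a rough, self-contained sketch (not the authors' code), the snippet below illustrates the idea in the simplest setting mentioned above: soft-thresholding denoising under Gaussian noise, a single Monte-Carlo probe for the finite-difference divergence (Ramani and Unser), and SciPy's L-BFGS-B as the quasi-Newton solver driven by the SUGAR gradient. The sparse signal model, the probe vector, and the finite-difference step `eps` are assumptions made for this example only.

```python
import numpy as np
from scipy.optimize import minimize

def soft(y, lam):
    """Soft-thresholding estimator and its weak derivative w.r.t. the threshold lam."""
    mask = np.abs(y) > lam                                   # active (unthresholded) set
    x_hat = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
    dx_dlam = -np.sign(y) * mask                              # weak derivative in lam
    return x_hat, dx_dlam

def sure_and_sugar(lam, y, sigma, delta, eps):
    """Finite-difference (Monte-Carlo) SURE and its weak gradient (SUGAR) w.r.t. lam."""
    n = y.size
    f_y, df_y = soft(y, lam)
    f_yp, df_yp = soft(y + eps * delta, lam)
    # Monte-Carlo divergence: delta^T (f(y + eps*delta) - f(y)) / eps
    div = (delta @ (f_yp - f_y)) / eps
    sure = -n * sigma**2 + np.sum((f_y - y) ** 2) + 2 * sigma**2 * div
    # SUGAR: differentiate the finite-difference SURE term by term in lam
    sugar = 2 * (f_y - y) @ df_y + 2 * sigma**2 * (delta @ (df_yp - df_y)) / eps
    return sure, np.array([sugar])

rng = np.random.default_rng(0)
n, sigma = 10_000, 0.5
x = rng.standard_normal(n) * (rng.random(n) < 0.1)            # sparse signal (assumption)
y = x + sigma * rng.standard_normal(n)                        # noisy observation
delta = rng.standard_normal(n)                                # single Monte-Carlo probe
eps = 2 * sigma / n ** 0.3                                    # step-size heuristic (assumption)

res = minimize(sure_and_sugar, x0=np.array([sigma]),
               args=(y, sigma, delta, eps),
               jac=True, method="L-BFGS-B", bounds=[(1e-6, None)])
print("selected threshold:", res.x[0], "estimated risk:", res.fun)
```

The exact divergence of soft-thresholding (the number of entries above the threshold) is a step function of the threshold, which is why the plain SURE is discontinuous; the finite-difference surrogate above is weakly differentiable in the parameter, so its gradient can be fed to a quasi-Newton routine. For the iterative estimators treated in the paper (proximal splitting schemes, structured penalties), the closed-form weak derivatives would replace the one-line derivative used here.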

  • Publication date: 2014