Abstract

This paper revisits several aspects of the dictionary learning problem, while still considering an alternating optimization scheme. Our first contribution is a simple proof of convergence for this numerical scheme under a large class of constraints and regularizers on the dictionary atoms. We also investigate the use of a well-known optimization method, the alternating direction method of multipliers (ADMM), for solving each alternating step of the dictionary learning problem. We show that such an algorithm offers several benefits: it can be more efficient than competing algorithms such as iterative shrinkage-thresholding approaches, and it makes it easy to handle mixed constraints or regularizers on the dictionary atoms or the approximation coefficients. For instance, we induce joint sparsity, positivity, and smoothness on the dictionary atoms by combining total variation and sparsity-inducing regularizers. Our experimental results show that these mixed constraints yield better learned dictionary elements, especially when learning from noisy signals.
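To make the coefficient-update step concrete, the following is a minimal sketch of ADMM applied to the standard ℓ1-regularized sparse coding subproblem min_x ½‖Dx − y‖² + λ‖x‖₁. It illustrates the usual variable-splitting scheme only; it is not the paper's exact algorithm, and the names (`admm_lasso`, `soft_threshold`) and parameter defaults are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(D, y, lam, rho=1.0, n_iter=100):
    """Illustrative ADMM for min_x 0.5 ||D x - y||^2 + lam ||x||_1.

    Splits the problem as min_{x,z} 0.5 ||D x - y||^2 + lam ||z||_1
    subject to x = z, then alternates a quadratic x-update,
    a soft-thresholding z-update, and a dual ascent step.
    """
    n = D.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    A = D.T @ D + rho * np.eye(n)  # cached system matrix (small n assumed)
    Dty = D.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(A, Dty + rho * (z - u))  # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)         # prox of the l1 term
        u = u + x - z                                # dual update
    return z

# Tiny usage example with random data
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
y = rng.standard_normal(20)
codes = admm_lasso(D, y, lam=0.5)
```

Mixed regularizers of the kind discussed in the abstract (e.g., total variation plus sparsity on the atoms) fit the same template: only the proximal step in the z-update changes.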

  • Published: 2013-04-15