Abstract

The authors study an iterative algorithm for learning a linear Gaussian observation model with an exponential power scale mixture (EPSM) prior. This is a generalisation of previous studies based on the Gaussian scale mixture prior. The authors use the principle of majorisation-minimisation to derive a general iterative algorithm that is related to a reweighted ℓ_p-minimisation algorithm. The authors then show that the Gaussian and Laplacian scale mixtures are two special cases of the EPSM, and that the corresponding learning algorithms are related to the reweighted ℓ_2- and ℓ_1-minimisation algorithms, respectively. The authors also study a particular case of the EPSM, which is a Pareto distribution, and discuss Bayesian methods for parameter estimation.
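As a rough illustration of the reweighted-minimisation connection mentioned above (not the authors' own algorithm), the following is a minimal Python sketch of an iteratively reweighted ℓ_2 scheme for a sparse linear model y = Φx + noise. The weight rule w_i ∝ |x_i|^(p-2), the regularisation parameter lam, and the smoothing constant eps are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def irls_reweighted_l2(Phi, y, p=1.0, lam=1e-2, n_iter=50, eps=1e-8):
    """Sketch of iteratively reweighted l2-minimisation for
    min_x ||y - Phi x||^2 + lam * sum_i |x_i|^p.
    Each iteration solves a weighted ridge surrogate in closed form."""
    x = np.linalg.lstsq(Phi, y, rcond=None)[0]  # least-squares initialisation
    for _ in range(n_iter):
        # Majorisation-style weights: w_i = p / (|x_i|^(2-p) + eps)
        w = p / (np.abs(x) ** (2 - p) + eps)
        # Closed-form minimiser of the weighted l2 surrogate
        x = np.linalg.solve(Phi.T @ Phi + lam * np.diag(w), Phi.T @ y)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 100, 40, 5                      # signal length, measurements, sparsity
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = Phi @ x_true + 0.01 * rng.standard_normal(m)
    x_hat = irls_reweighted_l2(Phi, y, p=1.0)  # p=1 corresponds to reweighted l1-type behaviour
    print("largest estimated coefficients:", np.argsort(-np.abs(x_hat))[:k])
```

Setting p = 2 or p = 1 in this sketch mirrors the ℓ_2 and ℓ_1 special cases the abstract associates with the Gaussian and Laplacian scale mixtures.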

  • Publication date: 2011-2

Full text