Abstract

We propose the application of iterative regularization for the development of ensemble methods for solving Bayesian inverse problems. Concretely, we construct (i) a variational iterative regularizing ensemble Levenberg-Marquardt method (IR-enLM) and (ii) a derivative-free iterative ensemble Kalman smoother (IR-ES). The aim of these methods is to provide a robust ensemble approximation of the Bayesian posterior. The proposed methods are based on fundamental ideas from iterative regularization methods that have been widely used for the solution of deterministic inverse problems (Kaltenbacher et al., de Gruyter, Berlin, 2008). In this work, we are interested in the application of the proposed ensemble methods to Bayesian inverse problems that arise in reservoir modeling applications. The proposed ensemble methods use key aspects of the regularizing Levenberg-Marquardt scheme developed by Hanke (Inverse Problems 13, 79-95, 1997), which we recently applied for history matching in Iglesias (Comput. Geosci., 1-21, 2013). Unlike most existing methods, where the stopping criteria and regularization parameters are typically selected heuristically, in the proposed ensemble methods the discrepancy principle is applied for (i) the selection of the regularization parameters and (ii) the early termination of the scheme. The discrepancy principle is key to the theory of iterative regularization, and the purpose of the present work is to apply this principle to the development of ensemble methods defined as iterative updates of solutions to linear ill-posed inverse problems. The regularizing and convergence properties of iterative regularization methods for deterministic inverse problems have long been established. However, the approximation properties of the proposed ensemble methods in the context of Bayesian inverse problems remain an open problem. In the case where the forward operator is linear and the prior is Gaussian, we show that the tunable parameters of the proposed IR-enLM and IR-ES can be chosen so that the resulting schemes coincide with the standard randomized maximum likelihood (RML) method and the ensemble smoother (ES), respectively. Therefore, the proposed methods sample from the posterior in the linear-Gaussian case. As with the RML and ES methods, in the nonlinear case one cannot conclude that the proposed methods produce samples from the posterior. The present work provides a numerical investigation of the performance of the proposed ensemble methods at capturing the posterior. In particular, we aim to understand the role of the tunable parameters that arise from the application of iterative regularization techniques. The numerical framework for our investigations consists of using a state-of-the-art Markov chain Monte Carlo (MCMC) method to resolve the Bayesian posterior from synthetic experiments. The posterior resolved via MCMC then provides a gold standard against which to compare the proposed IR-enLM and IR-ES. Our numerical experiments clearly indicate that the regularizing properties of the regularization methods applied for the computation of each ensemble have a significant impact on the ability of the proposed ensemble methods to capture the Bayesian posterior. Furthermore, we compare the proposed regularizing methods with unregularized methods that have typically been used in the literature. Our numerical experiments showcase the advantage of using iterative regularization for obtaining more robust and stable approximations of the posterior than those provided by unregularized methods.
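To make the role of the discrepancy principle in such ensemble updates concrete, the following is a minimal sketch, not the paper's exact IR-ES formulation, of a derivative-free iterative ensemble smoother in which a Hanke-type condition selects the regularization parameter and the discrepancy principle triggers early termination. The generic forward map G, the constants rho and tau, the doubling rule for alpha, and the perturbed-observation update are illustrative assumptions introduced only for this sketch.

```python
# Minimal sketch (assumption-laden, not the paper's exact IR-ES formulation) of a
# derivative-free iterative ensemble smoother in which the discrepancy principle
# selects the regularization parameter and terminates the iteration early.
import numpy as np

def ir_es_sketch(G, y, Gamma, U0, eta, rho=0.7, tau=None, max_iter=50, rng=None):
    """G: forward map acting column-wise on a (n_param, n_ens) ensemble array.
    y: observed data (n_obs,); Gamma: noise covariance (n_obs, n_obs).
    U0: prior ensemble (n_param, n_ens); eta: estimate of the noise level.
    rho, tau: Hanke-type constants with tau > 1/rho (illustrative defaults)."""
    if rng is None:
        rng = np.random.default_rng(0)
    if tau is None:
        tau = 1.0 / rho + 0.1
    L = np.linalg.cholesky(Gamma)                  # Gamma = L L^T
    U = U0.copy()
    n_obs, n_ens = len(y), U.shape[1]

    for n in range(max_iter):
        W = G(U)                                   # predicted data, (n_obs, n_ens)
        r = y - W.mean(axis=1)                     # residual of the ensemble mean
        misfit = np.linalg.norm(np.linalg.solve(L, r))   # ||Gamma^{-1/2} r||
        if misfit <= tau * eta:                    # discrepancy principle: stop early
            return U, n
        Uc = U - U.mean(axis=1, keepdims=True)
        Wc = W - W.mean(axis=1, keepdims=True)
        Cuw = Uc @ Wc.T / (n_ens - 1)              # parameter-data cross-covariance
        Cww = Wc @ Wc.T / (n_ens - 1)              # data covariance
        # Increase alpha until a Hanke-type condition holds, so that the
        # linearized update does not over-fit the noisy data.
        alpha = 1.0
        while True:
            v = np.linalg.solve(Cww + alpha * Gamma, r)
            if alpha * np.linalg.norm(L.T @ v) >= rho * misfit or alpha > 1e12:
                break
            alpha *= 2.0
        # Update every member against perturbed observations (as in ES/RML).
        Y = y[:, None] + L @ rng.standard_normal((n_obs, n_ens))
        U = U + Cuw @ np.linalg.solve(Cww + alpha * Gamma, Y - W)
    return U, max_iter
```

In the linear-Gaussian setting, a single update of this form with the regularization parameter fixed to one and no early stopping reduces to the standard ES with perturbed observations, consistent with the equivalence discussed above; the sketch is intended only to show where the discrepancy principle enters the parameter selection and the stopping rule.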

  • Published: February 2015