Abstract

The discrepancy between observed measurements and model predictions can be used to improve either the model output alone or both the model output and the parameters that underlie the model. For parameter estimation, methods exist that efficiently calculate the gradient of the discrepancy with respect to changes in the parameters, assuming there are no uncertainties beyond the unknown parameters. In general nonlinear parameter estimation, many different parameter sets locally minimize the discrepancy, so the gradient must be regularized before it can be used by gradient-based minimization algorithms. This article proposes a method for calculating a gradient in the presence of additional model errors through the use of representer expansions. The representers are data-driven basis functions that perform the regularization. All available data can be used during every iteration of the minimization scheme, as in the classical representer method (RM). However, the method proposed here also allows adaptive selection of different portions of the data during different iterations to reduce computation time: the user is free to choose the number of basis functions and to revise this choice at every iteration. The method further differs from the classic RM by introducing measurement representers in addition to state, adjoint, and parameter representers, and by the fact that no correction terms are calculated. Unlike the classic RM, where the minimization scheme is prescribed, the RM proposed here provides a gradient that can be used in any minimization algorithm.
The applicability of the modified method is illustrated with a synthetic example in which permeability values are estimated for an inverted five-spot waterflooding problem.
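The core idea of regularizing a misfit gradient with data-driven basis functions can be illustrated with a minimal sketch. Note that this is not the paper's algorithm: the toy linear forward model, the choice of basis vectors, and all variable names are illustrative assumptions, intended only to show the pattern of projecting a raw gradient onto a small, data-driven subspace before taking a minimization step.

```python
import numpy as np

# Conceptual sketch (assumed setup, not the article's method):
# regularize a least-squares misfit gradient by projecting it onto a
# small set of data-driven basis vectors ("representers") before a
# gradient-descent update.
rng = np.random.default_rng(0)

n_param, n_data = 50, 8
G = rng.standard_normal((n_data, n_param))   # toy linear forward model
m_true = rng.standard_normal(n_param)
d_obs = G @ m_true                           # synthetic observations

def misfit_gradient(m):
    """Gradient of 0.5 * ||G m - d_obs||^2 with respect to m."""
    return G.T @ (G @ m - d_obs)

# One basis vector per measurement; an adaptive scheme could select a
# subset of these columns at each iteration.
R = G.T                                      # shape (n_param, n_data)

step = 1.0 / np.linalg.norm(G, 2) ** 2       # safe step size
m = np.zeros(n_param)
for _ in range(200):
    g = misfit_gradient(m)
    # Regularized gradient: least-squares coefficients b of g in
    # span(R), then the projected direction R @ b.
    b, *_ = np.linalg.lstsq(R, g, rcond=None)
    m -= step * (R @ b)

print(np.linalg.norm(G @ m - d_obs))         # data misfit shrinks toward 0
```

Because only `n_data` basis vectors are used, the search is confined to a low-dimensional, data-driven subspace of the 50-dimensional parameter space, which is the regularizing effect the abstract describes.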

  • Publication date: 2010-3