Abstract

The value of learning an uncertain input in a decision model can be quantified by its partial expected value of perfect information (EVPI). This is commonly estimated via a 2-level nested Monte Carlo procedure in which the parameter of interest is sampled in an outer loop, and then conditional on this sampled value, the remaining parameters are sampled in an inner loop. This 2-level method can be difficult to implement if the joint distribution of the inner-loop parameters conditional on the parameter of interest is not easy to sample from. We present a simple alternative 1-level method for calculating partial EVPI for a single parameter that avoids the need to sample directly from the potentially problematic conditional distributions. We derive the sampling distribution of our estimator and show in a case study that it is both statistically and computationally more efficient than the 2-level method.
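To make the 2-level nested procedure concrete, the following is a minimal sketch of the standard nested Monte Carlo partial-EVPI estimator for a single parameter. The toy net-benefit model, its parameter distributions, and all sample sizes are hypothetical illustrations, not the paper's case study; for simplicity the remaining parameter `psi` is taken to be independent of `theta`, so the conditional inner-loop distribution is trivial to sample (the very difficulty the paper's 1-level method is designed to avoid).

```python
import numpy as np

rng = np.random.default_rng(0)

def net_benefit(theta, psi):
    """Hypothetical net benefits of two decision options (last axis)."""
    return np.stack([1000 * theta + 500 * psi,
                     800 * theta + 700 * psi], axis=-1)

def partial_evpi_2level(n_outer=2000, n_inner=2000, n_base=100_000):
    """2-level nested Monte Carlo estimate of partial EVPI for theta.

    partial EVPI = E_theta[ max_d E_{psi|theta} NB(d, theta, psi) ]
                   - max_d E[ NB(d, theta, psi) ]
    """
    inner_max = np.empty(n_outer)
    for i in range(n_outer):
        theta = rng.normal(1.0, 0.2)               # outer loop: sample theta
        psi = rng.normal(1.0, 0.3, size=n_inner)   # inner loop: psi | theta
        cond_mean = net_benefit(theta, psi).mean(axis=0)  # E[NB_d | theta]
        inner_max[i] = cond_mean.max()             # best option if theta known
    # Baseline: best option under current (unconditional) uncertainty.
    theta_all = rng.normal(1.0, 0.2, size=n_base)
    psi_all = rng.normal(1.0, 0.3, size=n_base)
    baseline = net_benefit(theta_all, psi_all).mean(axis=0).max()
    return inner_max.mean() - baseline
```

The estimator is biased upward for finite inner-loop size because the maximum is taken over noisy conditional means, and its cost grows as the product of the two loop sizes; these are the statistical and computational inefficiencies that motivate the 1-level alternative.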

  • Publication date: 2013-08