Abstract

As a simple and compelling approach for estimating out-of-sample prediction error, cross-validation naturally lends itself to the task of model comparison. However, even with moderate sample sizes, it can be surprisingly difficult to compare multilevel models based on predictive accuracy. Using a hierarchical model fit to large survey data with a battery of questions, we demonstrate that even though cross-validation might give good estimates of pointwise out-of-sample prediction error, it is not always a sensitive instrument for model comparison.
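The difficulty the abstract describes can be illustrated with a small sketch (not the authors' code, using simulated data): when two models are compared by their cross-validated pointwise log predictive densities, the standard error of the summed difference can be of the same order as the difference itself, so the comparison is inconclusive even though each model's prediction error is estimated well.

```python
# Minimal sketch, assuming simulated pointwise log predictive densities
# (lpd) for two hypothetical models A and B; not the paper's actual setup.
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # a moderate sample size

# Hypothetical cross-validated pointwise lpd values; model B is only
# slightly better than model A on each observation.
lpd_a = rng.normal(loc=-1.40, scale=1.0, size=n)
lpd_b = lpd_a + rng.normal(loc=0.02, scale=0.3, size=n)

# Paired comparison: total difference in elpd and its standard error.
diff = lpd_b - lpd_a
elpd_diff = diff.sum()
se_diff = np.sqrt(n * diff.var(ddof=1))

print(f"elpd difference (B - A): {elpd_diff:.1f} +/- {se_diff:.1f}")
# If the standard error is comparable to the difference, cross-validation
# cannot reliably distinguish the two models, even with good pointwise
# estimates of out-of-sample prediction error.
```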

  • Publication date: 2015