Abstract

Concordance measures are frequently used to assess the discriminative ability of risk prediction models. The interpretation of estimated concordance at external validation is difficult if the case-mix differs from the model development setting. We aimed to develop a concordance measure that provides insight into the influence of case-mix heterogeneity and is robust to censoring of time-to-event data. We first derived a model-based concordance (mbc) measure that allows quantification of the influence of case-mix heterogeneity on the discriminative ability of proportional hazards and logistic regression models. The mbc can also be calculated with a regression slope that calibrates the predictions at external validation (c-mbc), thereby assessing the influence of overall regression coefficient validity on discriminative ability. We derived variance formulas for both mbc and c-mbc. We compared the mbc and the c-mbc with commonly used concordance measures in a simulation study and in two external validation settings. The mbc was asymptotically equivalent to a previously proposed resampling-based case-mix-corrected c-index. The c-mbc remained stable at the true value with increasing proportions of censoring, while Harrell's c-index, and to a lesser extent Uno's concordance measure, increased unfavorably. Variance estimates of mbc and c-mbc were in good agreement with the simulated empirical variances. We conclude that the mbc is an attractive closed-form measure that allows straightforward quantification of the expected change in a model's discriminative ability due to case-mix heterogeneity. The c-mbc also reflects regression coefficient validity and is a censoring-robust alternative to the c-index when the proportional hazards assumption holds.
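The abstract does not spell out the computation, but a minimal sketch can illustrate the general idea of a model-based concordance for the logistic regression case: instead of comparing predictions with observed outcomes, the model's own predicted probabilities determine how likely each pair is to be outcome-discordant, and the mbc is the model-implied probability that the predicted case in such a pair has the higher prediction. The function below is an illustrative assumption, not the authors' published implementation; the name `model_based_concordance` and the pair-weighting scheme are hypothetical.

```python
import numpy as np

def model_based_concordance(p):
    """Illustrative sketch of a model-based concordance for a logistic model.

    Assumes the measure is defined as the model-implied probability that,
    among pairs with discordant outcomes (one event, one non-event), the
    subject with the higher predicted risk is the one with the event.
    `p` holds predicted event probabilities for the validation case-mix.
    """
    p = np.asarray(p, dtype=float)
    num, den = 0.0, 0.0
    n = len(p)
    for i in range(n):
        for j in range(i + 1, n):
            w_ij = p[i] * (1.0 - p[j])  # model-implied prob. that i is the case, j the non-case
            w_ji = p[j] * (1.0 - p[i])  # model-implied prob. that j is the case, i the non-case
            den += w_ij + w_ji
            if p[i] > p[j]:
                num += w_ij            # concordant when the predicted case has the higher risk
            elif p[j] > p[i]:
                num += w_ji
            else:
                num += 0.5 * (w_ij + w_ji)  # ties in predictions count as half-concordant
    return num / den

# Example: predictions for a hypothetical external validation case-mix.
predicted_risks = np.array([0.05, 0.12, 0.30, 0.45, 0.70])
print(model_based_concordance(predicted_risks))
```

Because this quantity depends only on the distribution of predicted risks in the validation sample, it isolates the effect of case-mix heterogeneity on discriminative ability, which is the property the abstract attributes to the mbc.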

  • Publication date: 2016-10-15