Abstract

This paper aims to analyze agreement in the assessment of external chest compressions (ECC) by 3 human raters and dedicated feedback software. While 54 volunteer health workers (medical transport technicians), trained and experienced in cardiopulmonary resuscitation (CPR), performed a complete sequence of basic CPR maneuvers on a manikin incorporating feedback software (Laerdal PC v 4.2.1 Skill Reporting Software) (L), 3 expert CPR instructors (A, B, and C) visually assessed ECC, evaluating hand placement, compression depth, chest decompression, and rate. We analyzed the concordance among the raters (A, B, and C) and between the raters and L with Cohen's kappa coefficient (K), intraclass correlation coefficients (ICC), Bland-Altman plots, and survival-agreement plots. The agreement (expressed as Cohen's K and ICC) was ≥0.54 in only 3 instances and was ≤0.45 in more than half of the comparisons. Bland-Altman plots showed significant dispersion of the data. The survival-agreement plot showed a high degree of discordance between pairs of raters (A-L, B-L, and C-L) when the level of tolerance was set low. In visual assessment of ECC, there is a significant lack of agreement among accredited raters and significant dispersion and inconsistency in data, bringing into question the reliability and validity of this method of measurement.
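The agreement statistics named above can be illustrated with a minimal sketch. The data, variable names, and category codings below are hypothetical and are not the authors' dataset or analysis code; the sketch only shows how a pairwise rater-versus-software comparison with Cohen's kappa and Bland-Altman limits of agreement is typically computed.

```python
# Illustrative sketch (hypothetical data, not the study's actual analysis):
# agreement between one human rater (A) and the feedback software (L).
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary judgments (1 = correct, 0 = incorrect hand placement)
# for a handful of volunteers, as rated by A and by L.
rater_a    = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
software_l = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 1])

# Cohen's kappa: chance-corrected agreement for categorical ratings.
kappa = cohen_kappa_score(rater_a, software_l)
print(f"Cohen's kappa (A vs. L): {kappa:.2f}")

# Bland-Altman bias and 95% limits of agreement for a continuous measure,
# e.g. estimated compression depth in mm (values again hypothetical).
depth_a = np.array([48.0, 52.0, 45.0, 55.0, 50.0, 47.0, 53.0, 49.0])
depth_l = np.array([50.0, 51.0, 42.0, 57.0, 49.0, 44.0, 55.0, 48.0])
diff = depth_a - depth_l
bias = diff.mean()
loa  = 1.96 * diff.std(ddof=1)
print(f"Bland-Altman bias: {bias:.1f} mm, limits of agreement: ±{loa:.1f} mm")
```

In a Bland-Altman analysis, a small bias with wide limits of agreement corresponds to the "significant dispersion of the data" reported in the abstract: the two methods agree on average but disagree substantially case by case.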

  • Publication date: 2017-03