Abstract

Background: Many osteopathic educational institutions (OEIs) require students to complete research projects, which normally involve external assurance of academic standards. Different academic factors can lead to tension between teaching approaches and attitudes to criticality as a necessary competency for effective practice in evidence-informed healthcare. Lack of clarity about the different purposes of student research can lead to varying interpretations of assessment criteria and inconsistent marking. Objective: A new card-sorting exercise was designed to enable analysis of opinions about appropriate standards of criticality in student research reports. Methods: Data were obtained from a convenience sample (n = 50) of participants attending four conference workshops. Participants read an abstract from a hypothetical student project, sorted cards containing project extracts into 'unacceptable', 'acceptable' or 'good' examples of criticality, and recorded their scores on marking grids. Results: Scores demonstrated poor inter-rater agreement (κ < 0.20), especially for cards expected to show 'acceptable' levels of criticality, although participants in one workshop achieved 'fair' levels of agreement (κ = 0.22-0.39). Conclusions: The workshops promoted discussion about the challenges of encouraging students to question underlying osteopathic principles, but there was poor inter-rater agreement about appropriate levels of criticality. Heterogeneous workshop groups and anonymised data meant that differences between OEIs and confounding factors such as linguistic variables and levels of experience could not be assessed. Further studies should explore different pedagogical approaches and assessment values to address inequalities in assessments, develop agreed standards for academic practice, enhance research education outcomes and support the long-term development of a credible evidence base for osteopathic practice.
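The κ values reported above are agreement coefficients. As a minimal illustrative sketch only (not taken from the study, which does not specify its exact computation here), the snippet below shows how Cohen's kappa could be calculated for two hypothetical raters who each sorted the same cards into the three categories used in the workshops; the rater lists and labels are invented for illustration.

```python
# Illustrative sketch: Cohen's kappa for two hypothetical raters sorting the
# same cards into three categories ('U' = unacceptable, 'A' = acceptable,
# 'G' = good). The data below are invented, not from the study.
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Return Cohen's kappa for two equal-length lists of category labels."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

rater_1 = ['U', 'A', 'A', 'G', 'U', 'A', 'G', 'A', 'U', 'G']
rater_2 = ['U', 'G', 'A', 'G', 'A', 'U', 'G', 'A', 'A', 'G']
print(f"kappa = {cohen_kappa(rater_1, rater_2):.2f}")  # ~0.39, i.e. 'fair' agreement
```

With more than two raters per workshop, a multi-rater coefficient such as Fleiss' kappa would be the more natural choice; the two-rater version is shown only because it is the simplest to illustrate.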

  • Publication date: 2014-3