Abstract

In a modern computing curriculum, large-project courses are essential to give students hands-on experience of working on a realistic software engineering project. Assessing such projects is, however, extremely challenging. There are various aspects and trade-offs of assessments that can affect course quality. Individual assessments may grade individuals fairly, but may lose sight of the project as a group activity. Extensive teacher involvement is necessary for objective assessment, but may affect the way that students work. Continuous feedback to students can enhance learning, but may be hard to combine with fair assessment. Most previous work focuses on some specific assessment aspect; in this article, we present an assessment model that consists of a collection of assessment activities, each covering different aspects. We have applied, developed, and improved these activities over a seven-year period. To evaluate the usefulness of the model, we perform questionnaire-based surveys over a two-year period. Furthermore, we design and execute an experiment that studies to what extent students can perform fair peer assessment and to what degree the assessments of students and teachers agree. We analyze the results, discuss findings, and summarize lessons learned.

  • Publication date: 2015-12