BMC Med Educ. 2026 Mar 2. doi: 10.1186/s12909-026-08892-7. Online ahead of print.
ABSTRACT
BACKGROUND: Reliable assessment and meaningful feedback are essential to effective learning in medical education. However, conventional unstructured evaluation of essay-type responses is prone to rater bias and inter-rater variability and often yields nonspecific feedback. To address these limitations, we developed an Assessment-cum-Feedback Checklist to provide a structured, criterion-based approach to scoring and feedback. In this study, we evaluated the checklist’s effectiveness in enhancing the reliability, consistency, and clarity of assessment, and explored student and faculty perceptions of its educational value.
METHODS: We used a mixed-methods design. Sixty-two first-year MBBS students and four faculty members (two junior, with < 5 years’ experience, and two senior, with > 10 years’ experience) participated. Two essay-type questions were assessed independently by all four teachers using both the conventional unstructured method and the checklist-based method. Quantitative analyses included descriptive statistics, Wilcoxon signed-rank tests, Levene’s test for equality of variances, intraclass correlation coefficients (ICC), and Bland-Altman analysis to compare variability and agreement across methods. Data were analysed using JASP (version 0.18.3.0) at a 5% significance level. Student perceptions were gathered using a structured questionnaire, and faculty perceptions were explored through in-depth interviews. Qualitative data were analysed using QCAmap (2020). Institutional Ethics Committee approval was obtained.
RESULTS: Checklist-based scoring demonstrated lower standard errors, standard deviations, and coefficients of variation, indicating improved precision and reduced subjective variability compared with the conventional method. Mean scores were lower with the checklist, and Bland-Altman analysis showed a negative bias, reflecting greater scoring stringency due to explicit criteria. ICC values increased notably with the checklist, particularly among senior teachers, demonstrating improved inter-rater reliability and tighter limits of agreement. Teachers reported that the checklist enhanced objectivity, reduced bias, clarified performance expectations, and standardized assessment practices. Students expressed strong support, citing improved clarity, transparency, and usefulness of feedback.
CONCLUSIONS: The Assessment-cum-Feedback Checklist was associated with measurable improvements in the reliability and consistency of essay-type assessment. Both faculty and students perceived the checklist-based approach as enhancing clarity, transparency, and the usefulness of feedback by making assessment criteria explicit. With appropriate faculty orientation and iterative refinement, the checklist represents a promising and potentially adaptable tool for strengthening assessment and feedback practices in constructed-response formats in medical education.
PMID:41772543 | DOI:10.1186/s12909-026-08892-7