
Applying generalizability theory to optimize the quality of high-stakes objective structured clinical examinations for undergraduate medical students: experience from a French medical school

BMC Med Educ. 2025 May 2;25(1):643. doi: 10.1186/s12909-025-07255-y.

ABSTRACT

BACKGROUND: The national OSCE examination has recently been adopted in France as a prerequisite for medical students to enter accredited graduate education programs. However, the reliability and generalizability of OSCE scores have not been well explored in light of the national examination blueprint.

METHOD: To obtain complementary information for monitoring and improving the quality of the OSCE, we performed a pilot study applying generalizability (G-)theory to a sample of 6th-year undergraduate medical students (n = 73) who were assessed by 24 examiner pairs at three stations. Based on the national blueprint, three scoring subunits (a dichotomous task-specific checklist evaluating clinical skills, behaviorally anchored scales evaluating generic skills, and a global performance scale) were used to evaluate students and combined into a station score. A variance component analysis was performed using mixed modelling to identify the impact of different facets (station, student, and the student × station interaction) on the scoring subunits. Generalizability and dependability statistics were calculated.
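The variance-component decomposition described above can be sketched for the simplest fully crossed person × station (p × s) design. This is a minimal illustration with simulated data, not the study's actual mixed-model analysis or its data; the effect sizes, seed, and single-score-per-cell assumption are all hypothetical.

```python
import numpy as np

# Simulated person x station (p x s) G-study with one score per
# student-station cell. Variance components are recovered from the
# ANOVA expected mean squares; all numbers here are made up.
rng = np.random.default_rng(0)
n_p, n_s = 73, 3                              # 73 students, 3 stations (as in the study design)
student = rng.normal(0, 1.0, n_p)[:, None]    # true student effect
station = rng.normal(0, 0.8, n_s)[None, :]    # station difficulty effect
scores = 10 + student + station + rng.normal(0, 1.2, (n_p, n_s))

grand = scores.mean()
ms_p = n_s * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
ms_s = n_p * ((scores.mean(axis=0) - grand) ** 2).sum() / (n_s - 1)
resid = (scores - scores.mean(axis=1, keepdims=True)
                - scores.mean(axis=0, keepdims=True) + grand)
ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_s - 1))

var_ps = ms_res                         # student x station interaction (confounded with error)
var_p = max((ms_p - ms_res) / n_s, 0)   # student: the object of measurement
var_s = max((ms_s - ms_res) / n_p, 0)   # station

# Relative (generalizability) and absolute (dependability) coefficients
g_coef = var_p / (var_p + var_ps / n_s)
phi = var_p / (var_p + (var_s + var_ps) / n_s)
print(f"G = {g_coef:.2f}, Phi = {phi:.2f}")
```

The dependability coefficient Phi is never larger than G, since station variance enters only the absolute error term; the study's mixed-modelling approach would additionally separate examiner-pair facets that this two-facet sketch collapses into the residual.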

RESULTS: There was no significant difference between mean scores attributable to different examiner pairs across the data. The examiner variance component was greater for the clinical skills score (14.4%) than for the generic skills (5.6%) and global performance scores (5.1%). The station variance component was largest for the clinical skills score, accounting for 22.9% of the total score variance, compared to 3% for the generic skills and 13.9% for global performance scores. The variance component related to students represented 12% of the total variance for clinical skills, 17.4% for generic skills and 14.3% for global performance ratings. The combined generalizability coefficients across all the data were 0.59 for the clinical skills score, 0.93 for the generic skills score and 0.75 for global performance.
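A common follow-up to such results is a D-study: projecting how the relative G coefficient would change if more stations were sampled. The sketch below assumes, for illustration only, that the reported G = 0.59 for clinical skills arose from a simple p × s design with 3 stations; it back-solves the interaction-to-student variance ratio and applies G(n') = 1 / (1 + ratio / n').

```python
# Hypothetical D-study projection under a simple p x s design.
# g_observed: reported relative G coefficient; n_obs: stations used;
# n_new: stations in the projected design.
def project_g(g_observed: float, n_obs: int, n_new: int) -> float:
    ratio = n_obs * (1.0 - g_observed) / g_observed  # sigma2_ps / sigma2_p
    return 1.0 / (1.0 + ratio / n_new)

for n in (3, 6, 9, 12):
    print(n, round(project_g(0.59, 3, n), 2))
```

Under these assumptions, doubling the number of stations from 3 to 6 would raise the projected clinical skills G from 0.59 to about 0.74, which is the usual G-theory argument for adding stations rather than items when checklist scores drive the unreliability.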

CONCLUSIONS: The combined estimates of relative reliability across all data are greater for generic skills scores and global performance ratings than for clinical skills scores. This is likely because content-specific tasks evaluated with checklists produce greater variability in scores than scales assessing broader competencies. This work can be valuable to other teaching institutions, as monitoring the sources of error is a principal quality-control strategy for ensuring valid interpretations of students' scores.

PMID:40317009 | DOI:10.1186/s12909-025-07255-y
