
Comparative Analysis of Artificial Intelligence-Generated and Human-Written Personal Statements in Emergency Medicine Applications

Cureus. 2025 Jul 26;17(7):e88818. doi: 10.7759/cureus.88818. eCollection 2025 Jul.

ABSTRACT

Introduction: Personal statements (PSs) have long been part of the Electronic Residency Application Service (ERAS) application; however, only limited guidelines exist for their creation, and even fewer for their role in the application review process. Applicants invest significant time in writing their PSs, yet program directors rank PSs among the least important factors in interview and rank order list decisions. The emergence of generative artificial intelligence (AI), particularly large language models (LLMs) such as ChatGPT, has raised questions of ethics and originality across all aspects of education, especially in the generation of free-form documents such as the PS. This study evaluates whether AI-generated PSs are distinguishable from authentic, applicant-written ones and how each is perceived to influence residency selection.

Methods: Five AI-generated PSs were created using ChatGPT, incorporating applicant location, hobbies, and career background. Five de-identified, authentic PSs randomly selected from incoming emergency medicine (EM) interns were used for comparison. A Qualtrics survey was distributed electronically to the Council of Residency Directors (CORD) community. Respondents rated the PSs on writing quality, ability to convey personal attributes, and perceived influence on interview decisions. Statistical analyses (ANOVA and Wilcoxon tests) were used to assess differences between AI-generated and authentic statements.

Results: A total of 66 responses were collected over a two-month period. Of these, eight respondents did not regularly review ERAS applications, and 28 did not complete the survey beyond the initial question, leaving 30 responses for analysis. There were no statistically significant differences between AI-generated and authentic PSs in grammar and writing style (p = 0.5897), expression of personal attributes (p = 0.6827), overall quality (p = 0.2757), or perceived influence on interview decisions (p = 0.5457). Free-text comments reflected skepticism about the value of the PS in the selection process.

Conclusion: AI-generated PSs performed comparably to authentic ones, further challenging the relevance of PSs in residency applications. These findings suggest an inherent lack of originality in the PS and may support re-evaluating its role, or exploring more meaningful ways to assess applicant fit in the residency selection process. Novel methods, such as structured interviews, standardized video responses, personality inventories, or situational judgment tests, may be considered to supplement the role intended for the PS.
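The group comparisons above rest on standard nonparametric tests. As a minimal illustration only (this is not the study's code, and the ratings below are hypothetical, not the survey data), a two-sided Wilcoxon rank-sum (Mann-Whitney U) comparison of reviewer ratings can be sketched in pure Python using a normal approximation:

```python
import math

def average_ranks(values):
    """Map each value to its average rank (tied values share the mean rank)."""
    ordered = sorted(values)
    ranks = {}
    i = 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and ordered[j] == ordered[i]:
            j += 1
        ranks[ordered[i]] = (i + j + 1) / 2  # 1-based ranks i+1 .. j, averaged
        i = j
    return ranks

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    Returns (U, p). No tie correction is applied, which is adequate
    for illustration but not for publication-grade analysis.
    """
    r = average_ranks(a + b)
    n1, n2 = len(a), len(b)
    u1 = sum(r[x] for x in a) - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = 1 - math.erf(abs(z) / math.sqrt(2))  # equals 2 * Phi(-|z|)
    return u, p

# Hypothetical 1-5 Likert ratings for AI-generated vs. authentic statements
ai_ratings = [4, 3, 5, 4, 3]
human_ratings = [4, 4, 3, 5, 4]
u, p = rank_sum_test(ai_ratings, human_ratings)
print(f"U = {u}, p = {p:.4f}")
```

With small, similar samples such as these, the test yields a large p-value, mirroring the study's finding of no significant difference between the two groups of statements.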

PMID:40735658 | PMC:PMC12305746 | DOI:10.7759/cureus.88818

By Nevin Manimala
