JMIR Form Res. 2026 Mar 9;10:e76838. doi: 10.2196/76838.
ABSTRACT
BACKGROUND: Artificial intelligence (AI) is increasingly influencing medical student education, with AI-driven chatbots, such as ChatGPT, emerging as powerful study tools. While these technologies offer numerous benefits, they also pose challenges that warrant the adaptation of medical school curricula.
OBJECTIVE: This study examines medical students’ perceptions and use of ChatGPT. We hypothesized that ChatGPT is widely used for academic support but that concerns remain regarding reliability and academic integrity.
METHODS: We conducted a cross-sectional study from August 25 to December 10, 2024, in the United States. Students in all years of medical training who were enrolled in accredited allopathic or osteopathic medical schools were eligible to participate. Data were collected using an anonymous online questionnaire, which was distributed through institutional mailing lists. Overall, 188 schools were reached, of which 14 (7.4%) responded and agreed to distribute the survey. A total of 177 participants completed the survey. Survey items consisted primarily of Likert-scale and multiple-choice questions. Primary outcome measures included self-reported frequency of ChatGPT use, perceived usefulness of ChatGPT, and ChatGPT use habits.
RESULTS: Overall, 98.9% (175/177) of participants had heard of ChatGPT, with 88.7% (157/177) reporting having used it; 62.7% (111/177) identified as female, and 52.0% (92/177) had completed at least 1 block of clinical rotations. Medical students most often used ChatGPT to understand complex medical concepts, prepare for exams, and generate study materials. Moreover, 46.5% (73/157) used it to help complete medical school assignments. Medical students also reported using it clinically, with the most common use being to generate differential diagnoses. Notably, 21.0% (33/157) of participants reported having used ChatGPT to help write clinical notes. Additionally, 73.9% (116/157) reported that their experience with ChatGPT improved their overall perception of AI’s potential to assist in medical practice, and 86.6% (135/157) believed that having ChatGPT as a resource would make them more effective physicians. Statistical analyses were performed using the Pearson chi-square test with α=.05. Students who reported moderate or advanced baseline understanding of AI were more likely to practice conscientious use habits, such as cross-checking (odds ratio [OR] 2.31, 95% CI 1.08-4.97) and editing (OR 2.45, 95% CI 1.05-5.71) ChatGPT output before using it, than those who reported a basic or limited understanding.
CONCLUSIONS: Our study is among the few to examine medical student perceptions of ChatGPT at a national level. We examined responsible use habits to identify areas in which reliance on this technology may lead users astray. We found that ChatGPT is being used to complete academic assignments and write clinical notes, raising concerns about information verification, AI literacy, patient confidentiality, and ethical use. Together, these findings highlight the need for structured AI education to help students leverage these technologies effectively while mitigating risks associated with misinformation and overreliance on AI.
PMID:41802232 | DOI:10.2196/76838