JMIR Form Res. 2025 Nov 11;9:e81484. doi: 10.2196/81484.
ABSTRACT
BACKGROUND: Large language models such as ChatGPT offer significant opportunities for medical education. However, empirical data on actual usage patterns, perceived benefits, and limitations among medical students remain limited.
OBJECTIVE: This study aimed to assess how medical students in Germany use ChatGPT, their perceptions of its educational value, and the challenges and concerns associated with its use.
METHODS: A cross-sectional 17-item online survey was conducted between May and August 2024 among medical students from Philipps University Marburg, Germany. A mixed methods approach was applied, combining descriptive and inferential statistical analysis with qualitative content analysis of open-ended responses.
RESULTS: A total of 84 fully completed surveys were included in the analysis (response rate: 26.7%; 315 surveys started). Overall, 76.2% (64/84) of the participants reported having used ChatGPT for medical education, with significantly higher usage during exam periods (P=.003). Preclinical students reported higher overall usage than clinical students (P=.02). ChatGPT was primarily used for summarizing information by 60.7% (51/84) of students, for literature research by 57.7% (49/84), and for clarifying concepts by 47.1% (40/84). A total of 70.2% (59/84) felt that it helped them save time, and 51.2% (43/84) reported an improved understanding of content. In contrast, only 31% (26/84) saw benefits for applying knowledge and 15.5% (13/84) for long-term knowledge retention. Qualitative responses highlighted clear benefits such as time savings and support in exam preparation, pointed to potential applications in clinical documentation, and raised concerns about misinformation and source transparency. In the quantitative responses, 73.3% (55/75) expressed concerns about misinformation, and 72.6% (61/84) reported lacking confidence in their artificial intelligence (AI)-related skills. Only 41.7% (35/84) stated that they trusted ChatGPT’s outputs, and students who used the tool more frequently reported higher levels of trust in its outputs (r=0.374, P<.001). Over 70% of respondents indicated a strong desire for increased integration of AI-related education and practical applications within the medical curriculum.
CONCLUSIONS: ChatGPT was already widely used among medical students, especially during exam preparation and in the early stages of training. Students valued its efficiency and its support for understanding complex material, but its perceived long-term benefit for learning was limited. Concerns about reliability, source transparency, and data privacy remain, and AI-related skills played a key role in shaping usage. These findings underscore the need to integrate structured, practice-oriented AI education into medical training to support the critical, informed, and ethical use of large language models.
PMID:41218187 | DOI:10.2196/81484