Nat Med. 2026 Jan 19. doi: 10.1038/s41591-025-04176-7. Online ahead of print.
ABSTRACT
Patient-facing large language models (LLMs) hold potential to streamline inefficient transitions from primary to specialist care. We developed the preassessment (PreA), an LLM chatbot co-designed with local stakeholders, to conduct the general medical consultations for history-taking, preliminary diagnosis, and test ordering that would normally be performed by primary care providers, and to generate referral reports for specialists. PreA was tested in a randomized controlled trial involving 111 specialists from 24 medical disciplines across two health centers, in which 2,069 patients (1,141 women; 928 men) were randomly assigned to use PreA independently (PreA-only), use it with staff support (PreA-human), or not use it (No-PreA) before specialist consultation. The trial met its primary end points: the PreA-only group showed a significantly reduced physician consultation duration compared to the No-PreA group (3.14 ± 2.25 min versus 4.41 ± 2.77 min, a 28.7% reduction; P < 0.001), alongside significant improvements in physician-perceived care coordination (mean score 3.69 ± 0.90 versus 1.73 ± 0.95, a 113.1% increase; P < 0.001) and patient-reported communication ease (mean score 3.99 ± 0.62 versus 3.44 ± 0.97, a 16.0% increase; P < 0.001). Equivalent outcomes between the PreA-only and PreA-human groups confirmed PreA's capability for autonomous operation. The co-designed PreA outperformed the same model with additional fine-tuning on local dialogues across clinical decision-making domains. Co-design with local stakeholders, compared to passive local data collection, represents a more effective strategy for deploying LLMs to strengthen health systems and enhance patient-centered care in resource-limited settings. Chinese Clinical Trial Registry identifier: ChiCTR2400094159.
PMID:41555035 | DOI:10.1038/s41591-025-04176-7