
Clinical Model Autophagy: The Risk of Interpretative Drift in Recursive Medical AI

JMIR Med Inform. 2026 Apr 21;14:e94813. doi: 10.2196/94813.

ABSTRACT

The rapid integration of large language models into electronic medical record systems introduces a critical theoretical vulnerability. Drawing on foundational computer science proofs of “model collapse,” this viewpoint introduces the concept of “Clinical Model Autophagy”: a systemic degradation of diagnostic integrity that occurs when clinical artificial intelligence (AI) models are recursively trained on unverified, AI-generated synthetic data. As these recursive models may progressively regress toward statistical means, they undergo “Interpretative Drift,” a clinically concerning phenomenon in which rare pathological variances are systematically erased and complex diseases are homogenized into benign averages. To prevent irreversible contamination of health care data ecosystems, the author urgently proposes the Data Purity Standard (DPS). The DPS mandates cryptographic watermarking of all AI-assisted clinical entries for provenance tracking, alongside the establishment of “Human Vaults”: physically segregated repositories of physician-verified heritage data that would serve as immutable biological anchors to safely guide future AI training, ensuring the long-term reliability of digital health infrastructure.
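The collapse mechanism the abstract describes can be illustrated with a toy simulation (not from the paper): each "generation" fits a normal distribution to samples drawn from the previous generation's fit, standing in for a model retrained on its own synthetic output. The spread of the fitted distribution shrinks over generations, so rare tail values, the analog of rare pathological findings, are progressively erased. All names and parameters here are illustrative assumptions.

```python
import random
import statistics

def recursive_fit(mu=0.0, sigma=1.0, n=20, generations=200, seed=42):
    """Repeatedly refit a Gaussian to samples drawn from the previous fit.

    Returns the list of (mu, sigma) per generation. With small sample
    sizes the estimated sigma drifts downward, collapsing the tails:
    a toy analog of recursive training on synthetic data.
    """
    rng = random.Random(seed)
    history = [(mu, sigma)]
    for _ in range(generations):
        # "Synthetic data" generated by the current model.
        samples = [rng.gauss(mu, sigma) for _ in range(n)]
        # "Retraining": fit the next model to those samples only.
        mu = statistics.fmean(samples)
        sigma = statistics.pstdev(samples)
        history.append((mu, sigma))
    return history

history = recursive_fit()
print(f"initial sigma: {history[0][1]:.4f}")
print(f"final sigma:   {history[-1][1]:.4f}")
```

Running this shows the fitted standard deviation decaying across generations; the small per-generation sample (`n=20`) exaggerates the effect for demonstration, but the downward drift itself is the point the abstract's "regression toward statistical means" is making.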
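The provenance-tracking idea behind the proposed DPS can be sketched with a keyed message authentication code: each AI-assisted entry carries a tag that a downstream training pipeline can verify before deciding whether to exclude it. This is a minimal sketch under stated assumptions, a MAC-based provenance label rather than an embedded watermark, and every name here (`tag_entry`, `verify_entry`, the registry-held key) is hypothetical, not from the paper.

```python
import hashlib
import hmac
import json

# Assumed: a signing key held by a trusted registry, not by the EMR vendor.
SECRET_KEY = b"registry-held-signing-key"

def tag_entry(entry: dict, source: str) -> dict:
    """Attach a provenance label (source + keyed MAC) to a clinical entry."""
    payload = json.dumps(entry, sort_keys=True).encode()
    mac = hmac.new(SECRET_KEY, payload + source.encode(), hashlib.sha256).hexdigest()
    return {**entry, "provenance": {"source": source, "mac": mac}}

def verify_entry(tagged: dict) -> bool:
    """Recompute the MAC; False if the entry or its source label was altered."""
    prov = tagged["provenance"]
    entry = {k: v for k, v in tagged.items() if k != "provenance"}
    payload = json.dumps(entry, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload + prov["source"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, prov["mac"])

record = tag_entry({"note": "unremarkable chest x-ray"}, "ai_assisted")
print(verify_entry(record))
```

A training pipeline could then drop any record whose verified `source` is `"ai_assisted"`, keeping only physician-verified entries, which is the filtering role the abstract assigns to provenance tracking and the "Human Vaults."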

PMID:42013455 | DOI:10.2196/94813

By Nevin Manimala
