JMIR Hum Factors. 2022 Jun 21;9(2):e35421. doi: 10.2196/35421.
The health care management and medical practitioner literatures lack a descriptive conceptual framework for understanding the dynamic and complex interactions between clinicians and artificial intelligence (AI) systems. Because most existing studies investigate AI's performance and effectiveness from a statistical (analytical) standpoint, few address AI's ecological validity. In this study, we derived a framework that focuses explicitly on the interaction between AI and clinicians. The proposed framework builds on well-established human factors models, such as the technology acceptance model and expectancy theory. The framework can be used to perform quantitative and qualitative (mixed methods) analyses to capture how clinician-AI interactions vary with human factors such as expectancy, workload, trust, cognitive variables related to absorptive capacity and bounded rationality, and concerns for patient safety. If leveraged, the proposed framework can help identify the factors influencing clinicians' intention to use AI and, consequently, improve AI acceptance and address the lack of AI accountability while safeguarding patients, clinicians, and the AI technology itself. Overall, this paper discusses the concepts, propositions, and assumptions of the multidisciplinary decision-making literature, constituting a sociocognitive approach that extends the theories of distributed cognition and thus accounts for the ecological validity of AI.