JMIR AI. 2026 May 12;5:e84315. doi: 10.2196/84315.
ABSTRACT
BACKGROUND: Regulation of artificial intelligence (AI) has been slow relative to the pace of its integration into health care. Several AI diagnostic tools for diabetic retinopathy (DR) have already received Food and Drug Administration (FDA) clearance, making it a timely and concrete example for exploring public perspectives on regulatory approval. The scope of FDA regulation of AI tools remains under discussion, and public attitudes about regulatory oversight, which this paper explores, should inform those discussions. Prior research suggests that comfort, trust, and political orientation shape views on government regulation and emerging technologies, potentially affecting support for oversight of AI in health care.
OBJECTIVE: This study assessed the perceived importance of FDA approval for AI-supported clinical decision support tools, with DR as the use case. We explored how comfort with AI tool developers, trust in data sharing, political affiliation, and demographic characteristics relate to the importance of FDA approval among US adults.
METHODS: A national survey was conducted in 2023 using the NORC AmeriSpeak Panel, a probability-based sample of 1787 respondents, with a subset of 982 participants answering questions about a use case describing an AI tool for identifying DR. Participants rated the importance of FDA approval for such tools on a 4-point Likert scale, with responses dichotomized into high and low perceived importance. Logistic regression models assessed associations between this outcome and predictors including comfort with AI tool developers, trust in data sharing, political affiliation, and demographic characteristics.
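The dichotomization and odds-ratio logic described in the methods can be sketched as follows. The data, variable names, and cutoff here are hypothetical illustrations, and the study itself fit multivariable logistic regression models rather than the unadjusted 2x2-table calculation shown below.

```python
# Hypothetical sketch (not the study's data or code): collapse a 4-point
# Likert rating into high ("fairly"/"very" important) vs low importance,
# then compute an unadjusted odds ratio for a binary predictor.

def dichotomize(likert_scores, cutoff=3):
    """Map 4-point Likert ratings to 1 (high importance) or 0 (low)."""
    return [1 if s >= cutoff else 0 for s in likert_scores]

def odds_ratio(outcome, exposure):
    """Unadjusted OR from paired binary outcome/exposure lists (2x2 table)."""
    a = sum(1 for o, e in zip(outcome, exposure) if o == 1 and e == 1)
    b = sum(1 for o, e in zip(outcome, exposure) if o == 0 and e == 1)
    c = sum(1 for o, e in zip(outcome, exposure) if o == 1 and e == 0)
    d = sum(1 for o, e in zip(outcome, exposure) if o == 0 and e == 0)
    return (a * d) / (b * c)

# Toy data: importance ratings and a binary "comfortable with AI" flag.
ratings = [4, 3, 2, 1, 4, 2, 3, 1]
comfort = [1, 1, 0, 0, 1, 1, 0, 0]
high = dichotomize(ratings)
print(odds_ratio(high, comfort))  # prints 9.0 for this toy data
```

The study's reported ORs are adjusted estimates from logistic regression; the unadjusted 2x2 version above is shown only to make the dichotomized outcome concrete.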
RESULTS: Among the 982 respondents presented with the DR use case, 658 (67%) indicated that FDA approval was “fairly” or “very” important. Statistically significant factors associated with the outcome (“It is important that the AI tool is approved by the FDA”) included higher comfort with using the tool (odds ratio [OR] 1.44, 95% CI 1.11-1.87; P=.006), comfort with developers from private companies (OR 1.38, 95% CI 1.09-1.76; P=.008), and hospitals (OR 1.60, 95% CI 1.25-2.05; P<.001). Trust in responsible data sharing (OR 1.25, 95% CI 1.05-1.50; P=.01) and higher education (OR 1.64, 95% CI 1.02-2.62; P=.04) also predicted higher support. Lean or strong Republicans (OR 0.43, 95% CI 0.30-0.60; P<.001) and Independents (OR 0.63, 95% CI 0.42-0.96; P=.03) were less likely to view FDA approval as important, as were Black (OR 0.50, 95% CI 0.34-0.77; P<.001) and Hispanic (OR 0.57, 95% CI 0.38-0.86; P=.007) respondents compared with White respondents.
CONCLUSIONS: This study offers insights into public attitudes regarding FDA oversight of AI-based clinical decision support tools. Findings highlight how comfort, trust, and lower confidence among marginalized communities and some political groups shape the perceived importance of FDA approval, with implications that extend to health care AI governance more broadly. These factors should be carefully considered as health systems work to ensure trustworthy implementation of new AI technologies.
PMID:42118568 | DOI:10.2196/84315