J Med Internet Res. 2026 May 4;28:e87158. doi: 10.2196/87158.
ABSTRACT
BACKGROUND: Artificial intelligence (AI) integration in mobile health (mHealth) apps offers opportunities to expand health care access in low-resource settings, yet opaque AI recommendations undermine trust and adoption. Existing explainable AI (XAI) frameworks, designed in Western contexts, fail to address the linguistic, cultural, and infrastructural realities of South Asian populations, creating barriers: users cannot understand AI recommendations, clinicians cannot validate outputs, and developers lack implementation guidance. Understanding explainability requirements among educated, digitally literate populations therefore provides foundational insights for the development of inclusive mHealth technologies.
OBJECTIVE: This study aims to (1) investigate stakeholder perceptions of trust and explainability in AI-driven mHealth in Bangladesh; (2) identify demographic predictors of trust; and (3) develop a context-adapted explainability framework to benefit developers, policymakers, clinicians, and end users in resource-constrained settings.
METHODS: This study used a sequential mixed methods design, combining a quantitative survey (n=137) with a qualitative phase involving 20 stakeholders: developers (n=4), XAI experts (n=6), and clinicians (n=10), who participated through either focus groups or individual interviews. We used inferential statistics, including one-way analysis of variance (ANOVA), to examine demographic predictors of trust and applied thematic analysis to identify explainability needs specific to each stakeholder group.
RESULTS: Education level had a significant effect on trust (F3,133=2.81; P=.042). Participants who had completed undergraduate education reported lower trust (mean 3.14, SD 1.10) than current undergraduates (mean 3.66, SD 0.93), suggesting that completing undergraduate education may foster critical evaluation skills that temper uncritical acceptance of AI systems. Although users recognized AI’s utility for preliminary guidance, they emphasized the necessity of human validation and expressed concerns about understanding AI’s decision-making logic. Interviews with the stakeholder groups revealed critical gaps: developers acknowledged minimal explainability implementation in current mHealth apps, while medical professionals unanimously prioritized clinical judgment over automated outputs and advocated for physician-mediated AI systems. Synthesizing findings across all stakeholder groups yielded 5 core requirements: (1) human-AI collaboration and clinical validation, (2) transparent logic paths, (3) contextual personalization, (4) cultural and linguistic relevance, and (5) trust calibration and ethical safeguards.
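The education-level effect reported above corresponds to a one-way ANOVA across 4 education groups (between-groups df=3, within-groups df=133, N=137). The following is a minimal sketch of such an analysis using scipy; the group labels, per-group sample sizes, and trust scores are hypothetical placeholders, not the study's data.

```python
# Minimal sketch (not from the article): one-way ANOVA of trust scores
# across four education-level groups, matching the reported degrees of
# freedom (F with df = 3, 133 implies 4 groups and N = 137).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical trust scores on a 1-5 scale; the two reported group means
# (3.66 and 3.14) are echoed, everything else is a placeholder.
groups = {
    "secondary": rng.normal(3.5, 1.0, 30),
    "current_undergraduate": rng.normal(3.66, 0.93, 40),
    "completed_undergraduate": rng.normal(3.14, 1.10, 40),
    "postgraduate": rng.normal(3.4, 1.0, 27),  # group sizes sum to 137
}

# f_oneway takes one array per group and returns the F statistic and P value.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F(3, 133) = {f_stat:.2f}, P = {p_value:.3f}")
```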
CONCLUSIONS: The proposed framework bridges misalignments among stakeholders and offers actionable guidance for design, deployment, and policy in resource-constrained environments. By situating explainability within the sociocultural realities of South Asia, this research advances XAI beyond algorithmic transparency toward equity, inclusion, and user empowerment in digital health.
PMID:42081827 | DOI:10.2196/87158