J Med Internet Res. 2026 Apr 7;28:e82564. doi: 10.2196/82564.
ABSTRACT
BACKGROUND: Autism spectrum disorder (ASD) is often underdiagnosed in low- and middle-income countries due to limited specialist access, sociocultural stigma, and fragmented screening systems. Artificial intelligence (AI)-powered screening tools may improve early detection by enabling low-cost, accessible assessments. However, adoption depends on stakeholder trust, ethical safeguards, and alignment with local health system capacities.
OBJECTIVE: This study explored the feasibility, acceptability, and perceived ethical and practical enablers and barriers to implementing AI-powered tools for early ASD screening in Egypt, with attention to urban-rural disparities and integration into existing care pathways.
METHODS: We used a qualitative design with semistructured focus group discussions with 49 participants (21 parents of children with ASD and 28 health care professionals) recruited from urban and rural governorates. Discussions were audio-recorded, transcribed verbatim, and analyzed using Braun and Clarke’s reflexive thematic analysis, supported by NVivo software (Lumivero). Methodological integrity was ensured through reflexivity, triangulation, and peer debriefing. Thematic saturation was monitored across groups, and participant diversity was prioritized across contexts.
RESULTS: Five themes emerged: (1) AI as a supportive tool rather than a replacement for clinicians, emphasizing scalability and assistance for nonspecialists; (2) the need for cultural and contextual adaptation to ensure local relevance; (3) privacy, trust, and transparency concerns, including data security, consent, and algorithmic opacity; (4) reducing diagnostic inequities by addressing urban-rural disparities and strengthening community-based deployment; and (5) a preference for hybrid AI-human models, with conditions for adoption including cultural sensitivity, human oversight, and digital literacy support. Counts (n/N) of parents and health care professionals contributing to each theme were used descriptively as indicators of pattern salience rather than as statistical estimates of prevalence. Participants expressed cautious optimism: parents emphasized accessibility and speed, whereas health care professionals highlighted concerns about reliability, cultural adaptation, and data governance.
CONCLUSIONS: AI-powered ASD screening has potential to advance equitable early detection in underserved areas. Adoption requires transparent data governance, integration into hybrid human-AI models, culturally adaptive design, and targeted digital literacy initiatives. These findings provide an evidence-based roadmap for policymakers, technologists, and health system leaders to implement AI screening tools that are ethically sound, contextually relevant, and equity-focused.
PMID:41945920 | DOI:10.2196/82564