NAR Genom Bioinform. 2026 Feb 11;8(1):lqag012. doi: 10.1093/nargab/lqag012. eCollection 2026 Mar.
ABSTRACT
Synthetic data (SD) has become an increasingly important asset in the life sciences, helping address data scarcity, privacy concerns, and barriers to data access. Creating artificial datasets that mirror the characteristics of real data allows researchers to develop and validate computational methods in controlled environments. Despite its promise, the adoption of SD in the life sciences hinges on rigorous evaluation metrics designed to assess its fidelity and reliability. To explore the current landscape of SD evaluation metrics across distinct life sciences domains, the ELIXIR Machine Learning Focus Group performed a systematic review of the scientific literature following the PRISMA guidelines. Six critical domains were examined to identify current practices for assessing SD. Findings reveal that, while generation methods are rapidly evolving, systematic evaluation is often overlooked, limiting researchers’ ability to compare, validate, and trust synthetic datasets across different domains. This systematic review underscores the urgent need for robust, standardized evaluation approaches that not only bolster confidence in SD but also guide its effective and responsible implementation. By laying the groundwork for establishing domain-specific yet interoperable standards, this review paves the way for future initiatives aimed at enhancing the role of SD in scientific discovery, clinical practice, and beyond.
PMID:41685350 | PMC:PMC12891913 | DOI:10.1093/nargab/lqag012