Nature. 2026 Apr;652(8108):151-156. doi: 10.1038/s41586-026-10251-x. Epub 2026 Apr 1.
ABSTRACT
Science aspires to be cumulative. Reproducibility efforts strengthen science by testing the reliability of published findings, promoting self-correction, and informing policy-making1. Computational reproductions, whereby independent researchers reproduce the results of published studies, are an essential diagnostic tool2-10. Such efforts should have greater visibility11-16. However, few reproduction and robustness efforts in the social sciences have been conducted at scale10,13,17-23. Here we reproduced the original analyses of, and conducted robustness checks on, 110 articles published in leading economics and political science journals with mandatory data and code sharing policies17,18. We found that more than 85% of published claims were computationally reproducible. In robustness checks, our reanalyses showed that 72% of statistically significant estimates remained significant and in the same direction, and the median reproduced effect size was nearly identical to the originally published effect size (that is, 99% of the published effect size). Additionally, 6 independent research teams examined 12 pre-specified hypotheses about determinants of robustness. Research teams with more experience found lower levels of robustness, and robustness did not correlate with author characteristics or data availability.
PMID:41922705 | DOI:10.1038/s41586-026-10251-x