Nat Hum Behav. 2025 Nov 17. doi: 10.1038/s41562-025-02348-6. Online ahead of print.
ABSTRACT
Computational modelling is a powerful tool for uncovering hidden processes in observed data, yet it faces underappreciated challenges. Among these, determining appropriate sample sizes for computational studies remains a critical but overlooked issue, particularly for model selection analyses. Here we introduce a power analysis framework for Bayesian model selection, a method widely used to choose the best model among alternatives. Our framework reveals that while power increases with sample size, it decreases as more models are considered. Using this framework, we empirically demonstrate that psychology and human neuroscience studies often suffer from low statistical power in model selection: 41 of the 52 studies reviewed had less than an 80% probability of correctly identifying the true model. The field also relies heavily on fixed-effects model selection, which we demonstrate has serious statistical issues, including high false-positive rates and pronounced sensitivity to outliers.
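The abstract's two qualitative claims — power rises with sample size and falls with the number of candidate models — can be illustrated with a simple Monte Carlo sketch. This is a hypothetical illustration, not the paper's framework: it estimates the probability that a true effect wins a BIC comparison against spurious alternative predictors. The function name `selection_power` and all parameter values are assumptions made for this example.

```python
import numpy as np

def selection_power(n, n_models, n_sims=300, effect=0.3, seed=0):
    """Monte Carlo estimate of model-selection power: the probability
    that the true model (predictor 0) attains the lowest BIC among
    n_models single-predictor candidates.

    Hypothetical sketch, not the method from the paper.
    """
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(n_sims):
        # One true predictor plus (n_models - 1) spurious ones.
        X = rng.standard_normal((n, n_models))
        y = effect * X[:, 0] + rng.standard_normal(n)
        bics = []
        for k in range(n_models):
            x = X[:, k]
            beta = (x @ y) / (x @ x)              # OLS slope, no intercept
            rss = np.sum((y - beta * x) ** 2)
            # Each candidate has one free parameter, so the BIC penalty
            # k*log(n) is identical and the comparison reduces to fit.
            bics.append(n * np.log(rss / n) + np.log(n))
        wins += int(np.argmin(bics) == 0)
    return wins / n_sims

# Power grows with n, and shrinks as more candidate models compete:
# selection_power(400, 2) > selection_power(25, 2)
# selection_power(40, 2)  > selection_power(40, 12)
```

Holding the effect size fixed, a larger n separates the true model's residual sum of squares from the spurious models' more reliably, while adding candidates gives chance more opportunities to produce a spurious winner — the same qualitative trade-off the framework formalizes.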
PMID:41249814 | DOI:10.1038/s41562-025-02348-6