Nevin Manimala Statistics

Assessing the Evidential Value of Mental Fatigue and Exercise Research

Sports Med. 2023 Sep 8. doi: 10.1007/s40279-023-01926-w. Online ahead of print.


It has often been reported that mental exertion, presumably leading to mental fatigue, can negatively affect exercise performance; however, recent findings have questioned the strength of this effect. Further complicating the issue, an overlooked problem may be the presence of publication bias in studies using underpowered designs, which is known to inflate the false-positive report probability and effect size estimates. Altogether, such bias is likely to reduce the evidential value of the published literature on this topic, although to what extent is unknown. The purpose of the current work was to assess the evidential value of studies published to date on the effect of mental exertion on exercise performance by assessing the presence of publication bias and the observed statistical power achieved by these studies. A traditional meta-analysis revealed a Cohen's dz effect size of -0.54, 95% CI [-0.68, -0.40], p < .001. However, when we applied methods for estimating and correcting for publication bias (based on funnel plot asymmetry and observed p-values), the bias-corrected effect size became negligible under most publication-bias methods and decreased to -0.36 in the most optimistic scenario. A robust Bayesian meta-analysis found strong evidence of publication bias, BFpb > 1000, and inconclusive evidence for the effect, adjusted dz = 0.01, 95% CrI [-0.46, 0.37], BF10 = 0.90. Furthermore, the median observed statistical power, assuming the unadjusted meta-analytic effect size (i.e., -0.54) as the true effect size, was 39% (min = 19%, max = 96%), indicating that, on average, these studies had only a 39% chance of observing a significant result if the true effect was Cohen's dz = -0.54. If the most optimistic adjusted effect size (-0.36) was assumed as the true effect, the median statistical power was just 20%.
We conclude that the current literature is a useful case study for illustrating the dangers of conducting underpowered studies to detect the effect size of interest.
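The power figures quoted in the abstract can be reproduced, in principle, from the effect size and each study's sample size. As a hedged illustration (not the authors' actual code, and with an illustrative sample size of n = 15 rather than any study's real n), the sketch below computes the two-sided power of a paired t-test from the noncentral t distribution, given a standardized effect size dz:

```python
# Hedged sketch: observed power of a paired (one-sample) t-test for a given
# standardized effect size dz and sample size n, via the noncentral t
# distribution. The sample sizes used below are illustrative assumptions.
import numpy as np
from scipy import stats

def paired_t_power(dz: float, n: int, alpha: float = 0.05) -> float:
    """Two-sided power of a paired t-test with standardized effect size dz."""
    df = n - 1
    ncp = dz * np.sqrt(n)                      # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)    # two-sided critical value
    # Probability that |t| exceeds the critical value under the alternative
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

# With the unadjusted meta-analytic effect (|dz| = 0.54), a small sample
# such as n = 15 falls well short of the conventional 80% power target:
print(f"power at dz = 0.54, n = 15: {paired_t_power(0.54, 15):.2f}")
# Under the most optimistic bias-adjusted effect (|dz| = 0.36), power drops further:
print(f"power at dz = 0.36, n = 15: {paired_t_power(0.36, 15):.2f}")
```

This kind of calculation underlies the abstract's point: the smaller the plausible true effect, the larger the sample needed for a tolerable chance of detecting it.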

PMID:37682411 | DOI:10.1007/s40279-023-01926-w

By Nevin Manimala
