Small effect size leads to reproducibility failure in resting-state fMRI studies
Federica Di Nardo,
Brett W. Fling,
Fay B. Horak,
Martin J. McKeown,
Shirley Y. Y. Pang,
Z. Jane Wang,
Xi-Nian Zuo,
Posted 20 Mar 2018
bioRxiv DOI: 10.1101/285171
Thousands of papers using resting-state functional magnetic resonance imaging (RS-fMRI) have been published on brain disorders. The results in each paper may have survived correction for multiple comparisons; however, because there have been no robust results from large-scale meta-analysis, we do not know how many of the published results are true positives. The present meta-analytic work included 60 original studies: 57 studies (4 datasets, 2,266 participants) that used a between-group design and 3 studies (1 dataset, 107 participants) that employed a within-group design. To evaluate the effect size of brain disorders, a very large neuroimaging dataset spanning neurological and psychiatric disorders as well as healthy individuals was analyzed. Parkinson's disease off levodopa (PD-off) included 687 participants from 15 studies, PD on levodopa (PD-on) included 261 participants from 9 studies, and autism spectrum disorder (ASD) included 958 participants from 27 studies. Meta-analyses of a metric named the amplitude of low-frequency fluctuation (ALFF) showed that the effect size (Hedges' g) was 0.19–0.39 for the 4 datasets using a between-group design and 0.46 for the dataset using a within-group design. The effect sizes of PD-off, PD-on, and ASD were 0.23, 0.39, and 0.19, respectively. Taking the meta-analysis results as the robust reference, the between-group results of each individual study showed high false negative rates (median 99%), high false discovery rates (median 86%), and low accuracy (median 1%), regardless of whether stringent or liberal multiple comparison correction was used. The findings were similar for 4 RS-fMRI metrics, including ALFF, regional homogeneity, and degree centrality, as well as for another widely used RS-fMRI metric, seed-based functional connectivity. These observations suggest that multiple comparison correction does not control for false discoveries across multiple studies when effect sizes are relatively small.
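The abstract reports effect sizes as Hedges' g, which is Cohen's d with a small-sample bias correction. As a minimal sketch of how such a between-group effect size is computed (the function name and the example inputs are illustrative, not taken from the paper):

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Hedges' g for two independent groups: Cohen's d scaled by a
    small-sample bias-correction factor J (common approximation)."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp
    # Approximate correction factor J = 1 - 3 / (4*(n1 + n2) - 9)
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# Illustrative example: a mean difference of 0.2 with unit SDs and
# 50 participants per group yields g just under 0.2, i.e. an effect
# in the range the abstract calls "relatively small".
g = hedges_g(1.0, 0.8, 1.0, 1.0, 50, 50)
```

With such small effects, very large samples are needed for adequate power, which is consistent with the high false negative rates the abstract reports for individual studies.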
Meta-analysis of unthresholded t-maps is critical for recovering the ground truth. We recommend that, to achieve high reproducibility through meta-analysis, the neuroimaging research field share raw data or, at a minimum, provide unthresholded statistical images.
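Meta-analyses like the one described here typically pool per-study effect sizes by inverse-variance weighting. The sketch below shows a basic fixed-effect pooled estimate; the effect sizes 0.23, 0.39, and 0.19 are the PD-off, PD-on, and ASD values from the abstract, while the per-study variances are made-up placeholders (the paper's actual meta-analytic model is not specified on this page):

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance weighted) pooled effect size.
    Returns the pooled estimate and its standard error."""
    weights = [1.0 / v for v in variances]          # precision of each study
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))              # SE of the pooled estimate
    return pooled, se

# Effect sizes from the abstract; variances are hypothetical placeholders.
pooled, se = pooled_effect([0.23, 0.39, 0.19], [0.01, 0.02, 0.01])
```

A random-effects model would additionally estimate between-study heterogeneity before weighting; the fixed-effect version is shown only to illustrate why unthresholded statistics are needed: pooling requires each study's full effect estimate, which thresholded maps discard.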
- Downloaded 1,134 times
- Download rankings, all-time:
- Site-wide: 24,656
- In neuroscience: 3,003
- Year to date:
- Site-wide: 29,431
- Since beginning of last month:
- Site-wide: 68,817