A systematic review of sample size and power in leading neuroscience journals

By Alice R Carter, Kate Tilling, Marcus R. Munafò

Posted 23 Nov 2017
bioRxiv DOI: 10.1101/217596

Adequate sample size is key to reproducible research findings: low statistical power can increase the probability that a statistically significant result is a false positive. Journals are increasingly adopting methods to tackle issues of reproducibility, such as introducing reporting checklists. We conducted a systematic review comparing subsequently published articles submitted to Nature Neuroscience in the 3 months before checklists were introduced (n=36) with articles submitted in the 3 months immediately after (n=45), along with articles from a comparison journal, Neuroscience, over the same 3-month period (n=123). We found that although the proportion of studies commenting on sample sizes increased after checklists were introduced (22% vs 53%), the proportion reporting formal power calculations decreased (14% vs 9%). Using sample size calculations for 80% power and a significance level of 5%, we found little evidence that sample sizes were adequate to achieve this level of statistical power, even for large effect sizes. Our analysis suggests that reporting checklists may not improve the use and reporting of formal power calculations.
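To illustrate the abstract's criterion (80% power at a 5% significance level), here is a minimal sketch of the standard normal-approximation sample size formula for a two-sided, two-sample comparison of means, using only the Python standard library. The Cohen-style small/medium/large effect size benchmarks are an assumption for illustration; the paper's exact calculation method may differ.

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample
    comparison of means with standardized effect size d (normal
    approximation; an exact t-test requires slightly more)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# Cohen's conventional benchmarks (assumed here, not taken from the paper)
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(f"{label} (d = {d}): {n_per_group(d)} per group")
# large (d = 0.8) -> 25 per group; medium (d = 0.5) -> 63; small (d = 0.2) -> 393
```

Even under the most favorable assumption of a large effect, this yields roughly 25 participants per group, which is the kind of benchmark against which the reviewed studies' sample sizes can be judged.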

Download data

  • Downloaded 1,025 times
  • Download rankings, all-time:
    • Site-wide: 10,401 out of 85,056
    • In scientific communication and education: 114 out of 591
  • Year to date:
    • Site-wide: 26,784 out of 85,056
  • Since beginning of last month:
    • Site-wide: 21,821 out of 85,056

