
The Eighty Five Percent Rule for Optimal Learning

By Robert C. Wilson, Amitai Shenhav, Mark Straccia, Jonathan D. Cohen

Posted 27 Jan 2018
bioRxiv DOI: 10.1101/255182 (published DOI: 10.1038/s41467-019-12552-4)

Researchers and educators have long wrestled with the question of how best to teach their clients, be they human, animal, or machine. Here we focus on the role of a single variable, the difficulty of training, and examine its effect on the rate of learning. In many situations we find that there is a sweet spot in which training is neither too easy nor too hard, and where learning progresses most quickly. We derive conditions for this sweet spot for a broad class of learning algorithms in the context of binary classification tasks, in which ambiguous stimuli must be sorted into one of two classes. For all of these gradient-descent-based learning algorithms we find that the optimal error rate for training is around 15.87% or, conversely, that the optimal training accuracy is about 85%. We demonstrate the efficacy of this 'Eighty Five Percent Rule' for artificial neural networks used in AI and for biologically plausible neural networks thought to describe human and animal learning.
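The 15.87% figure in the abstract is the lower tail of a standard normal distribution at one standard deviation, i.e. Φ(−1), where Φ is the standard normal CDF. A minimal sketch that reproduces the number (this checks only the quoted value, not the paper's full derivation):

```python
from math import erf, sqrt

def std_normal_cdf(x):
    """CDF of the standard normal distribution, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Per the paper's result, the optimal training error rate for this class
# of gradient-descent learners corresponds to Phi(-1).
optimal_error_rate = std_normal_cdf(-1.0)
optimal_accuracy = 1.0 - optimal_error_rate

print(f"optimal error rate: {optimal_error_rate:.4%}")
print(f"optimal accuracy:   {optimal_accuracy:.4%}")
```

Note that 1 − Φ(−1) ≈ 84.13%, which the paper rounds to the "about 85%" accuracy quoted in the abstract.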

Download data

  • Downloaded 6,827 times
  • Download rankings, all-time:
    • Site-wide: 1,208
    • In animal behavior and cognition: 6
  • Year to date:
    • Site-wide: 27,976
  • Since beginning of last month:
    • Site-wide: 27,976

