
Characterization of deep neural network features by decodability from human brain activity

By Tomoyasu Horikawa, Shuntaro C. Aoki, Mitsuaki Tsukamoto, Yukiyasu Kamitani

Posted 23 Sep 2018
bioRxiv DOI: 10.1101/424168 (published DOI: 10.1038/sdata.2019.12)

The achievement of near human-level performance in object recognition by deep neural networks (DNNs) has triggered a flood of studies comparing the brain and DNNs. Using a DNN as a proxy for hierarchical visual representations, our recent study found that human brain activity patterns measured by functional magnetic resonance imaging (fMRI) can be decoded (translated) into DNN feature values given the same inputs. However, not all DNN features are equally decodable, indicating a gap between DNNs and human vision. Here, we present a dataset derived from these DNN feature decoding analyses, comprising fMRI signals from five human subjects during image viewing, decoded feature values for two DNNs (AlexNet and VGG19), and the decoding accuracies of individual DNN features together with their rankings. The decoding accuracies of individual features were highly correlated between subjects, suggesting systematic differences between the brain and DNNs. We hope this dataset will help to reveal the gap between the brain and DNNs and provide an opportunity to make use of the decoded features in further applications.
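
The central quantity in the dataset is the decoding accuracy of each individual DNN feature, i.e. how well its value can be predicted from fMRI activity across test images. The sketch below illustrates that computation on synthetic data, using ridge regression as a stand-in for the decoding model; the authors' actual pipeline and data format are not reproduced here, and all array sizes and variable names are illustrative assumptions.

    # Minimal sketch: per-feature DNN decoding accuracy on synthetic data.
    # Ridge regression stands in for the paper's decoding model; sizes and
    # names below are illustrative assumptions, not the released dataset.
    import numpy as np
    from sklearn.linear_model import Ridge
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)

    n_train, n_test = 1200, 50        # training / test image presentations
    n_voxels, n_features = 500, 1000  # fMRI voxels and DNN unit activations

    # Synthetic stand-ins for fMRI patterns (X) and DNN feature values (Y)
    W = rng.normal(size=(n_voxels, n_features))
    W *= rng.random((n_voxels, n_features)) < 0.05   # sparse true mapping
    X_train = rng.normal(size=(n_train, n_voxels))
    X_test = rng.normal(size=(n_test, n_voxels))
    Y_train = X_train @ W + rng.normal(scale=5.0, size=(n_train, n_features))
    Y_test = X_test @ W + rng.normal(scale=5.0, size=(n_test, n_features))

    # Decode: predict every DNN feature value from the fMRI patterns
    model = Ridge(alpha=100.0).fit(X_train, Y_train)
    Y_pred = model.predict(X_test)

    # Decoding accuracy of each feature: correlation between decoded and
    # true feature values across test images
    accuracy = np.array([pearsonr(Y_pred[:, i], Y_test[:, i])[0]
                         for i in range(n_features)])

    # Rank features from best- to worst-decoded
    ranking = np.argsort(-accuracy)
    print("best-decoded feature:", ranking[0],
          "r =", round(accuracy[ranking[0]], 3))
    print("median decoding accuracy:", round(np.median(accuracy), 3))

In this framing, correlating the per-feature accuracy vectors obtained from different subjects gives the between-subject consistency reported in the abstract.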

Download data

  • Downloaded 615 times
  • Download rankings, all-time:
    • Site-wide: 22,475 out of 84,639
    • In neuroscience: 3,765 out of 15,079
  • Year to date:
    • Site-wide: 29,840 out of 84,639
  • Since beginning of last month:
    • Site-wide: 26,818 out of 84,639
