Autoencoder networks extract latent variables and encode these variables in their connectomes

By Matthew Farrell, Stefano Recanatesi, R. Clay Reid, Stefan Mihalas, Eric Shea-Brown

Posted 05 Mar 2020
bioRxiv DOI: 10.1101/2020.03.04.977702 (published DOI: 10.1016/j.neunet.2021.03.010)

Spectacular advances in imaging and data processing techniques are revealing a wealth of information about brain connectomes. This raises an exciting scientific opportunity: to infer the underlying circuit function from the structure of its connectivity. A potential roadblock, however, is that, even with well-constrained neural dynamics, there are in principle many different connectomes that could support a given computation. Here, we define a tractable setting in which the problem of inferring circuit function from circuit connectivity can be analyzed in detail: the function of input compression and reconstruction, in an autoencoder network with a single hidden layer. In this setting there is, in general, substantial ambiguity in the weights that can produce the same circuit function, because largely arbitrary changes to "input" weights can be undone by applying the inverse modifications to the "output" weights. However, we use mathematical arguments and simulations to show that adding simple, biologically motivated regularization of connectivity resolves this ambiguity in an interesting way: weights are constrained such that the latent variable structure underlying the inputs can be extracted from the weights by using nonlinear dimensionality reduction methods.
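
To make the setup concrete, below is a minimal sketch, not the authors' code, of the pipeline the abstract describes: train a single-hidden-layer autoencoder with a simple weight penalty, then apply nonlinear dimensionality reduction to the learned connectivity. For intuition on the ambiguity mentioned above: with a linear hidden layer, the reconstruction W_out W_in x equals (W_out A^-1)(A W_in) x for any invertible matrix A, so infinitely many weight pairs implement the same input-output map; regularization breaks this degeneracy. The synthetic circular latent variable, the PyTorch/scikit-learn stack, the L2 (weight decay) regularizer, the use of Isomap, and all hyperparameters here are illustrative assumptions, not the paper's exact choices.

```python
# Minimal sketch (assumed setup, not the authors' code): a single-hidden-layer
# autoencoder with L2 weight regularization, followed by nonlinear
# dimensionality reduction (Isomap) applied to the learned encoder weights.

import numpy as np
import torch
import torch.nn as nn
from sklearn.manifold import Isomap

# Synthetic inputs driven by a 1-D circular latent variable embedded in n_in dims.
rng = np.random.default_rng(0)
n_samples, n_in, n_hidden = 2000, 50, 100
theta = rng.uniform(0, 2 * np.pi, n_samples)            # the latent variable
basis = rng.standard_normal((2, n_in)) / np.sqrt(n_in)  # random embedding
X_np = np.stack([np.cos(theta), np.sin(theta)], axis=1) @ basis
X = torch.tensor(X_np, dtype=torch.float32)

# Single hidden layer: encode, nonlinearity, decode.
enc = nn.Linear(n_in, n_hidden, bias=False)
dec = nn.Linear(n_hidden, n_in, bias=False)
model = nn.Sequential(enc, nn.ReLU(), dec)

# weight_decay imposes an L2 penalty on connection strengths -- one simple,
# biologically motivated regularizer; the paper's exact choice may differ.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), X)  # reconstruction objective
    loss.backward()
    opt.step()

# Treat each hidden unit's incoming weight vector as a point in input space
# and ask whether the latent structure is recoverable from the connectome
# alone via nonlinear dimensionality reduction.
W_in = enc.weight.detach().numpy()            # shape (n_hidden, n_in)
embedding = Isomap(n_components=2).fit_transform(W_in)
print(embedding.shape)                        # (n_hidden, 2)
```

If regularization constrains the weights in the way the abstract describes, the two-dimensional Isomap embedding of the encoder rows should trace out the circle that generated the inputs; without the penalty, the recovered structure is expected to be far noisier, since the degenerate weight changes are no longer penalized.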

Download data

  • Downloaded 1,374 times
  • Download rankings, all-time:
    • Site-wide: 16,599
    • In neuroscience: 1,897
  • Download rankings, year to date:
    • Site-wide: 12,231
  • Download rankings, since beginning of last month:
    • Site-wide: 22,297
