
A deep generative model of 3D single-cell organization

By Rory Donovan-Maiye, Jackson Brown, Caleb Chan, Liya Ding, Calysta Yan, Nathalie Gaudreault, Julie Theriot, Mary Maleckar, Theo Knijnenburg, Gregory Johnson

Posted 09 Jun 2021
bioRxiv DOI: 10.1101/2021.06.09.447725

We introduce a framework for end-to-end integrative modeling of 3D single-cell multi-channel fluorescence image data of diverse subcellular structures. We employ stacked conditional β-variational autoencoders to first learn a latent representation of cell morphology, and then learn a latent representation of subcellular structure localization conditioned on that cell morphology. Our model is flexible and can be trained on images of arbitrary subcellular structures at varying degrees of sparsity and reconstruction fidelity. We train our full model on 3D cell image data and explore design trade-offs in the 2D setting. Once trained, the model can be used to impute structures in cells where they were not imaged, and to quantify variation in the location of all subcellular structures by generating plausible instantiations of each structure in arbitrary cell geometries. We apply the trained model to a small drug perturbation screen to demonstrate its applicability to new data, and show that the latent representations of drugged cells differ from those of unperturbed cells in a manner consistent with the known on-target effects of the drugs.
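The core objective behind the stacked conditional β-VAEs described above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the Gaussian reconstruction term, the function names, and the concatenation-based conditioning are all assumptions made for exposition.

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    # This is the closed-form KL term of the standard VAE ELBO.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def beta_vae_loss(x, x_recon, mu, logvar, beta=1.0):
    # beta-VAE objective: reconstruction error plus a beta-weighted KL
    # penalty; beta > 1 trades reconstruction fidelity for a more
    # disentangled (sparser) latent code.
    recon = np.sum((x - x_recon) ** 2)  # illustrative Gaussian recon term
    return recon + beta * kl_diag_gaussian(mu, logvar)

def condition_on_shape(structure_features, shape_latent):
    # In the stacked setup, the second (structure) VAE is conditioned on
    # the cell-morphology latent, e.g. by concatenating it to the
    # encoder/decoder inputs. Hypothetical helper for illustration.
    return np.concatenate([structure_features, shape_latent])
```

With `mu = 0` and `logvar = 0` the KL term vanishes, and a perfect reconstruction drives the whole loss to zero; increasing `beta` then controls how strongly the latent code is pulled toward the standard normal prior.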

