
Segmentation-Enhanced CycleGAN

By Michal Januszewski, Viren Jain

Posted 13 Feb 2019
bioRxiv DOI: 10.1101/548081

Algorithmic reconstruction of neurons from volume electron microscopy data traditionally requires training machine learning models on dataset-specific ground truth annotations that are expensive and tedious to acquire. We enhanced the training procedure of an unsupervised image-to-image translation method with additional components derived from an automated neuron segmentation approach. We show that this method, Segmentation-Enhanced CycleGAN (SECGAN), enables near-perfect reconstruction accuracy on a benchmark connectomics segmentation dataset despite operating in a "zero-shot" setting in which the segmentation model was trained using only volumetric labels from a different dataset and imaging method. By reducing or eliminating the need for novel ground truth annotations, SECGANs alleviate one of the main practical burdens involved in pursuing automated reconstruction of volume electron microscopy data.
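The SECGAN training objective is not detailed on this page; the following is only a minimal sketch of the general idea the abstract describes — a CycleGAN-style cycle-consistency loss augmented with a consistency term from a pretrained segmentation model. The toy linear "generators," the thresholding `segment` stand-in, the loss weight, and all shapes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generators": linear maps standing in for the CycleGAN translators.
# G: labeled source domain -> unlabeled target domain; F: the reverse.
# The real method uses convolutional networks on 3D EM volumes.
W_g = rng.normal(scale=0.1, size=(16, 16))
W_f = rng.normal(scale=0.1, size=(16, 16))

def G(x):
    return x @ W_g

def F(y):
    return y @ W_f

def segment(x, threshold=0.0):
    # Stand-in for a segmentation model pretrained on the source domain:
    # here, simple thresholding yielding a binary "segmentation" map.
    return (x > threshold).astype(np.float64)

def secgan_losses(x, y):
    """Cycle-consistency plus a segmentation-consistency term (sketch)."""
    # Standard CycleGAN cycle consistency: x -> G(x) -> F(G(x)) should
    # recover x, and symmetrically for y.
    cycle = np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()
    # Segmentation consistency: the pretrained segmenter should respond
    # similarly to an image and to its translation.
    seg_consistency = np.abs(segment(x) - segment(G(x))).mean()
    return cycle, seg_consistency

x = rng.normal(size=(4, 16))  # toy batch from the labeled domain
y = rng.normal(size=(4, 16))  # toy batch from the target domain
cycle_loss, seg_loss = secgan_losses(x, y)
total = cycle_loss + 0.5 * seg_loss  # 0.5 is an arbitrary weight
```

In a full training loop these terms would be minimized alongside the usual adversarial discriminator losses; the sketch above only shows how a segmentation network can supply an extra consistency signal on top of the cycle losses.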

Download data

  • Downloaded 2,753 times
  • Download rankings:
    • All-time:
      • Site-wide: 1,969 out of 83,820
      • In neuroscience: 272 out of 14,934
    • Year to date:
      • Site-wide: 2,147 out of 83,820
    • Since beginning of last month:
      • Site-wide: 2,514 out of 83,820
