Rxivist combines preprints from bioRxiv with data from Twitter to help you find the papers being discussed in your field. Currently indexing 57,506 bioRxiv papers from 264,779 authors.

Most downloaded bioRxiv papers, all time

in category neuroscience

9,916 results found. For more information, click each entry to expand.

1: Deep image reconstruction from human brain activity

Posted to bioRxiv 28 Dec 2017

120,695 downloads neuroscience

Guohua Shen, Tomoyasu Horikawa, Kei Majima, Yukiyasu Kamitani

Machine learning-based analysis of human functional magnetic resonance imaging (fMRI) patterns has enabled the visualization of perceptual content. However, it has been limited to reconstruction with low-level image bases or to matching to exemplars. Recent work showed that visual cortical activity can be decoded (translated) into hierarchical features of a deep neural network (DNN) for the same input image, providing a way to make use of the information from hierarchical visual features. Here, we present a novel image reconstruction method, in which the pixel values of an image are optimized to make its DNN features similar to those decoded from human brain activity at multiple layers. We found that the generated images resembled the stimulus images (both natural images and artificial shapes) and the subjective visual content during imagery. Although our model was trained solely on natural images, our method successfully generalized the reconstruction to artificial shapes, indicating that our model indeed reconstructs or generates images from brain activity, rather than simply matching to exemplars. A natural image prior introduced by another deep neural network effectively rendered semantically meaningful details to reconstructions by constraining reconstructed images to be similar to natural images. Furthermore, human judgment of reconstructions suggests the effectiveness of combining multiple DNN layers to enhance the visual quality of generated images. The results suggest that hierarchical visual information in the brain can be effectively combined to reconstruct perceptual and subjective images.
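
The core optimization loop described above can be sketched in a few lines. This is a toy illustration with a random linear map standing in for a DNN layer and simulated "decoded" features; it is not the authors' actual model or data.

```python
import numpy as np

# Optimize pixel values so the image's features match features "decoded"
# from brain activity. A random linear map W stands in for a DNN layer;
# the decoded features are taken from a hidden target image (both are
# illustrative assumptions).
rng = np.random.default_rng(0)
n_pixels, n_features = 64, 32
W = rng.standard_normal((n_features, n_pixels)) / np.sqrt(n_pixels)

target_image = rng.standard_normal(n_pixels)
decoded_features = W @ target_image        # pretend these came from fMRI

x = np.zeros(n_pixels)                     # start from a blank image
lr = 0.1
for _ in range(5000):
    err = W @ x - decoded_features         # mismatch in feature space
    x -= lr * (W.T @ err)                  # gradient step on the pixels

# After optimization, the image's features reproduce the decoded features.
```

In the paper, the same idea is applied across multiple DNN layers at once, with a deep generator network acting as a natural-image prior.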

2: Could a neuroscientist understand a microprocessor?

Posted to bioRxiv 26 May 2016

100,796 downloads neuroscience

Eric Jonas, Konrad P Kording

There is a popular belief in neuroscience that we are primarily data limited, and that producing large, multimodal, and complex datasets will, with the help of advanced data analysis algorithms, lead to fundamental insights into the way the brain processes information. These datasets do not yet exist, and if they did we would have no way of evaluating whether or not the algorithmically-generated insights were sufficient or even correct. To address this, here we take a classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data. Additionally, we argue that scientists should use complex non-linear dynamical systems with known ground truth, such as the microprocessor, as validation platforms for time-series and structure discovery methods.

3: An integrated brain-machine interface platform with thousands of channels

Posted to bioRxiv 17 Jul 2019

65,513 downloads neuroscience

Elon Musk, Neuralink

Brain-machine interfaces (BMIs) hold promise for the restoration of sensory and motor function and the treatment of neurological disorders, but clinical BMIs have not yet been widely adopted, in part because modest channel counts have limited their potential. In this white paper, we describe Neuralink’s first steps toward a scalable high-bandwidth BMI system. We have built arrays of small and flexible electrode “threads”, with as many as 3,072 electrodes per array distributed across 96 threads. We have also built a neurosurgical robot capable of inserting six threads (192 electrodes) per minute. Each thread can be individually inserted into the brain with micron precision for avoidance of surface vasculature and targeting specific brain regions. The electrode array is packaged into a small implantable device that contains custom chips for low-power on-board amplification and digitization: the package for 3,072 channels occupies less than (23 × 18.5 × 2) mm³. A single USB-C cable provides full-bandwidth data streaming from the device, recording from all channels simultaneously. This system has achieved a spiking yield of up to 70% in chronically implanted electrodes. Neuralink’s approach to BMI has unprecedented packaging density and scalability in a clinically relevant package.
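
The abstract's numbers imply some back-of-the-envelope figures worth making explicit (the time estimate assumes the quoted insertion rate is sustained, which the paper does not claim):

```python
# Figures quoted in the abstract: 3,072 electrodes across 96 threads,
# with robot insertion at 6 threads (192 electrodes) per minute.
electrodes, threads = 3072, 96
per_thread = electrodes // threads             # electrodes on each thread
minutes_for_full_array = electrodes / 192      # minutes to insert all threads
print(per_thread, minutes_for_full_array)      # 32 16.0
```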

4: Towards an integration of deep learning and neuroscience

Posted to bioRxiv 13 Jun 2016

26,682 downloads neuroscience

Adam Henry Marblestone, Greg Wayne, Konrad P Kording

Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) these cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.

5: Prefrontal cortex as a meta-reinforcement learning system

Posted to bioRxiv 06 Apr 2018

24,378 downloads neuroscience

Jane X Wang, Zeb Kurth-Nelson, Dharshan Kumaran, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Demis Hassabis, Matthew Botvinick

Over the past twenty years, neuroscience research on reward-based learning has converged on a canonical model, under which the neurotransmitter dopamine 'stamps in' associations between situations, actions and rewards by modulating the strength of synaptic connections between neurons. However, a growing number of recent findings have placed this standard model under strain. In the present work, we draw on recent advances in artificial intelligence to introduce a new theory of reward-based learning. According to this theory, the dopamine system trains another part of the brain, the prefrontal cortex, to operate as its own free-standing learning system. This new perspective accommodates the findings that motivated the standard model, but also deals gracefully with a wider range of observations, providing a fresh foundation for future research.

6: Sex Differences In The Adult Human Brain: Evidence From 5,216 UK Biobank Participants

Posted to bioRxiv 04 Apr 2017

22,158 downloads neuroscience

Stuart J Ritchie, Simon R Cox, Xueyi Shen, Michael V Lombardo, Lianne M Reus, Clara Alloza, Mathew A Harris, Helen L Alderson, Stuart Hunter, Emma Neilson, David C. M. Liewald, Bonnie Auyeung, Heather C Whalley, Stephen M Lawrie, Catharine R Gale, Mark E Bastin, Andrew M McIntosh, Ian J Deary

Sex differences in the human brain are of interest, for example because of sex differences in the observed prevalence of psychiatric disorders and in some psychological traits. We report the largest single-sample study of structural and functional sex differences in the human brain (2,750 female, 2,466 male participants; 44-77 years). Males had higher volumes, surface areas, and white matter fractional anisotropy; females had thicker cortices and higher white matter tract complexity. There was considerable distributional overlap between the sexes. Subregional differences were not fully attributable to differences in total volume or height. There was generally greater male variance across structural measures. Functional connectome organization showed stronger connectivity for males in unimodal sensorimotor cortices, and stronger connectivity for females in the default mode network. This large-scale study provides a foundation for attempts to understand the causes and consequences of sex differences in adult brain structure and function.

7: Why Does the Neocortex Have Columns, A Theory of Learning the Structure of the World

Posted to bioRxiv 12 Jul 2017

20,418 downloads neuroscience

Jeff Hawkins, Subutai Ahmad, Yuwei Cui

Neocortical regions are organized into columns and layers. Connections between layers run mostly perpendicular to the surface, suggesting a columnar functional organization. Some layers have long-range excitatory lateral connections, suggesting interactions between columns. Similar patterns of connectivity exist in all regions, but their exact role remains a mystery. In this paper, we propose a network model composed of columns and layers that performs robust object learning and recognition. Each column integrates its changing input over time to learn complete predictive models of observed objects. Excitatory lateral connections across columns allow the network to more rapidly infer objects based on the partial knowledge of adjacent columns. Because columns integrate input over time and space, the network learns models of complex objects that extend well beyond the receptive field of individual cells. Our network model introduces a new feature to cortical columns. We propose that a representation of location relative to the object being sensed is calculated within the sub-granular layers of each column. The location signal is provided as an input to the network, where it is combined with sensory data. Our model contains two layers and one or more columns. Simulations show that, using Hebbian-like learning rules, small single-column networks can learn to recognize hundreds of objects, with each object containing tens of features. Multi-column networks recognize objects with significantly fewer movements of the sensory receptors. Given the ubiquity of columnar and laminar connectivity patterns throughout the neocortex, we propose that columns and regions have more powerful recognition and modeling capabilities than previously assumed.

8: The hippocampus as a predictive map

Posted to bioRxiv 28 Dec 2016

14,460 downloads neuroscience

Kimberly Lauren Stachenfeld, Matthew M. Botvinick, Samuel Gershman

A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity, and policy dependence in place cells suggests that the representation is not purely spatial. We approach this puzzle from a reinforcement learning perspective: what kind of spatial representation is most useful for maximizing future reward? We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. Furthermore, we argue that entorhinal grid cells encode a low-dimensional basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.
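
The predictive representation here is the successor representation. A minimal sketch, assuming a fixed policy with transition matrix T on a toy ring of states (the environment is an illustrative assumption, not the paper's task):

```python
import numpy as np

# Successor representation: M[s, s'] is the expected discounted number of
# future visits to s' starting from s, given by M = (I - gamma * T)^-1.
n, gamma = 5, 0.9
T = np.zeros((n, n))
for s in range(n):
    T[s, (s + 1) % n] = 1.0            # deterministic walk around a ring

M = np.linalg.inv(np.eye(n) - gamma * T)

# Because T is stochastic, each row of M sums to 1 / (1 - gamma).
print(M[0].sum())
```

A low-dimensional basis for M, such as its leading eigenvectors, plays the role the authors propose for entorhinal grid cells.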

9: A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex

Posted to bioRxiv 13 Oct 2018

13,541 downloads neuroscience

Jeff Hawkins, Marcus Lewis, Mirko Klukas, Scott Purdy, Subutai Ahmad

How the neocortex works is a mystery. In this paper we propose a novel framework for understanding its function. Grid cells are neurons in the entorhinal cortex that represent the location of an animal in its environment. Recent evidence suggests that grid cell-like neurons may also be present in the neocortex. We propose that grid cells exist throughout the neocortex, in every region and in every cortical column. They define a location-based framework for how the neocortex functions. Whereas grid cells in the entorhinal cortex represent the location of one thing, the body relative to its environment, we propose that cortical grid cells simultaneously represent the location of many things. Cortical columns in somatosensory cortex track the location of tactile features relative to the object being touched and cortical columns in visual cortex track the location of visual features relative to the object being viewed. We propose that mechanisms in the entorhinal cortex and hippocampus that evolved for learning the structure of environments are now used by the neocortex to learn the structure of objects. Having a representation of location in each cortical column suggests mechanisms for how the neocortex represents object compositionality and object behaviors. It leads to the hypothesis that every part of the neocortex learns complete models of objects and that there are many models of each object distributed throughout the neocortex. The similarity of circuitry observed in all cortical regions is strong evidence that even high-level cognitive tasks are learned and represented in a location-based framework.

10: Using DeepLabCut for 3D markerless pose estimation across species and behaviors

Posted to bioRxiv 24 Nov 2018

13,306 downloads neuroscience

Tanmay Nath, Alexander Mathis, An Chi Chen, Amir Patel, Matthias Bethge, Mackenzie W. Mathis

Noninvasive behavioral tracking of animals during experiments is crucial to many scientific pursuits. Extracting the poses of animals without using markers is often essential for measuring behavioral effects in biomechanics, genetics, ethology & neuroscience. Yet, extracting detailed poses without markers in dynamically changing backgrounds has been challenging. We recently introduced an open-source toolbox called DeepLabCut that builds on a state-of-the-art human pose estimation algorithm to allow a user to train a deep neural network with limited training data to precisely track user-defined features, matching human labeling accuracy. Here we provide an updated toolbox, self-contained within a Python package, that includes new features such as graphical user interfaces and active learning-based network refinement. Lastly, we provide a step-by-step guide for using DeepLabCut.

11: The successor representation in human reinforcement learning

Posted to bioRxiv 27 Oct 2016

11,734 downloads neuroscience

Ida Momennejad, Evan M. Russek, Jin H. Cheong, Matthew M. Botvinick, Nathaniel D. Daw, Samuel Gershman

Theories of reward learning in neuroscience have focused on two families of algorithms, thought to capture deliberative vs. habitual choice. Model-based algorithms compute the value of candidate actions from scratch, whereas model-free algorithms make choice more efficient but less flexible by storing pre-computed action values. We examine an intermediate algorithmic family, the successor representation (SR), which balances flexibility and efficiency by storing partially computed action values: predictions about future events. These pre-computation strategies differ in how they update their choices following changes in a task. SR's reliance on stored predictions about future states predicts a unique signature of insensitivity to changes in the task's sequence of events, but flexible adjustment following changes to rewards. We provide evidence for such differential sensitivity in two behavioral studies with humans. These results suggest that the SR is a computational substrate for semi-flexible choice in humans, introducing a subtler, more cognitive notion of habit.
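
The differential sensitivity described above can be demonstrated directly: with a cached SR, a reward change is handled by simple re-weighting, while a transition change leaves stale value estimates until the SR is relearned. The 3-state chain below is an illustrative assumption, not the authors' task.

```python
import numpy as np

gamma = 0.9
T = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 1.]])              # chain: 0 -> 1 -> 2 (absorbing)
M = np.linalg.inv(np.eye(3) - gamma * T)  # cached successor representation

w_old = np.array([0., 0., 1.])            # reward at state 2
w_new = np.array([0., 1., 0.])            # reward moves to state 1
v_after_reward_change = M @ w_new         # instantly correct: just reuse M

T_new = T.copy()
T_new[1] = [1., 0., 0.]                   # transition change: 1 -> 0 now
M_true = np.linalg.inv(np.eye(3) - gamma * T_new)
stale_error = np.abs(M @ w_old - M_true @ w_old).max()
print(stale_error)                        # > 0: the cached M mispredicts
```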

12: Panoptic vDISCO imaging reveals neuronal connectivity, remote trauma effects and meningeal vessels in intact transparent mice

Posted to bioRxiv 23 Jul 2018

11,731 downloads neuroscience

Ruiyao Cai, Chenchen Pan, Alireza Ghasemigharagoz, Mihail I. Todorov, Benjamin Foerstera, Shan Zhao, Harsharan S. Bhatia, Leander Mrowka, Delphine Theodorou, Markus Rempfler, Anna Xavier, Benjamin T. Kress, Corinne Benakis, Arthur Liesz, Bjoern Menze, Martin Kerschensteiner, Maiken Nedergaard, Ali Erturk

Analysis of entire transparent rodent bodies could provide holistic information on biological systems in health and disease. However, it has been challenging to reliably image and quantify signal from endogenously expressed fluorescent proteins in large cleared mouse bodies due to the low signal contrast. Here, we devised a pressure-driven, nanobody-based whole-body immunolabeling technology to enhance the signal of fluorescent proteins by up to two orders of magnitude. This allowed us to image subcellular details in transparent mouse bodies through bones and highly autofluorescent tissues, and perform quantifications. We visualized, for the first time, the whole-body neuronal connectivity of an entire adult mouse and discovered that brain trauma induces degeneration of peripheral axons. We also imaged meningeal lymphatic vessels and immune cells through the intact skull and vertebra in naive animals and trauma models. Thus, our new approach can provide an unbiased, holistic view of biological events affecting the nervous system and the rest of the body.

13: Deep neural networks: a new framework for modelling biological vision and brain information processing

Posted to bioRxiv 26 Oct 2015

10,814 downloads neuroscience

Nikolaus Kriegeskorte

Recent advances in neural network modelling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals and not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build neurobiologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.

14: Deep Neural Networks In Computational Neuroscience

Posted to bioRxiv 04 May 2017

10,371 downloads neuroscience

Tim Christian Kietzmann, Patrick McClure, Nikolaus Kriegeskorte

The goal of computational neuroscience is to find mechanistic explanations of how the nervous system processes information to give rise to cognitive function and behaviour. At the heart of the field are its models, i.e. mathematical and computational descriptions of the system being studied, which map sensory stimuli to neural responses and/or neural to behavioural responses. These models range from simple to complex. Recently, deep neural networks (DNNs) have come to dominate several domains of artificial intelligence (AI). As the term 'neural network' suggests, these models are inspired by biological brains. However, current DNNs neglect many details of biological neural networks. These simplifications contribute to their computational efficiency, enabling them to perform complex feats of intelligence, ranging from perceptual (e.g. visual object and auditory speech recognition) to cognitive tasks (e.g. machine translation), and on to motor control (e.g. playing computer games or controlling a robot arm). In addition to their ability to model complex intelligent behaviours, DNNs excel at predicting neural responses to novel sensory stimuli with accuracies well beyond any other currently available model type. DNNs can have millions of parameters, which are required to capture the domain knowledge needed for successful task performance. Contrary to the intuition that this renders them into impenetrable black boxes, the computational properties of the network units are the result of four directly manipulable elements: input statistics, network structure, functional objective, and learning algorithm. With full access to the activity and connectivity of all units, advanced visualization techniques, and analytic tools to map network representations to neural data, DNNs represent a powerful framework for building task-performing models and will drive substantial insights in computational neuroscience.

15: Spontaneous behaviors drive multidimensional, brain-wide population activity

Posted to bioRxiv 22 Apr 2018

10,347 downloads neuroscience

Carsen Stringer, Marius Pachitariu, Nicholas Steinmetz, Charu Bai Reddy, Matteo Carandini, Kenneth D. Harris

Cortical responses to sensory stimuli are highly variable, and sensory cortex exhibits intricate spontaneous activity even without external sensory input. Cortical variability and spontaneous activity have been variously proposed to represent random noise, recall of prior experience, or encoding of ongoing behavioral and cognitive variables. Here, by recording over 10,000 neurons in mouse visual cortex, we show that spontaneous activity reliably encodes a high-dimensional latent state, which is partially related to the ongoing behavior of the mouse and is represented not just in visual cortex but across the forebrain. Sensory inputs do not interrupt this ongoing signal, but add onto it a representation of visual stimuli in orthogonal dimensions. Thus, visual cortical population activity, despite its apparently noisy structure, reliably encodes an orthogonal fusion of sensory and multidimensional behavioral information.
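
The geometry described above, with behavior and stimuli encoded in orthogonal dimensions of population activity, can be illustrated with a toy simulation (the dimensionalities and linear mixing are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, t = 100, 500
Q, _ = np.linalg.qr(rng.standard_normal((n_neurons, 4)))
B, S = Q[:, :2], Q[:, 2:]                # orthonormal behavior/stimulus axes

behavior = rng.standard_normal((2, t))   # ongoing behavioral latent state
stimulus = rng.standard_normal((2, t))   # visual stimulus drive
activity = B @ behavior + S @ stimulus   # stimuli add onto, rather than
                                         # interrupt, the ongoing signal

# Projecting onto the behavior axes recovers behavior untouched by the
# stimulus, because the two subspaces are orthogonal.
print(np.abs(B.T @ activity - behavior).max())
```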

16: A suite of transgenic driver and reporter mouse lines with enhanced brain cell type targeting and functionality

Posted to bioRxiv 25 Nov 2017

9,176 downloads neuroscience

Tanya L Daigle, Linda Madisen, Travis A Hage, Matthew T Valley, Ulf Knoblich, Rylan S Larsen, Marc M Takeno, Lawrence Huang, Hong Gu, Rachael Larsen, Maya Mills, Alice Bosma-Moody, La'Akea Siverts, Miranda Walker, Lucas T Graybuck, Zizhen Yao, Olivia Fong, Emma Garren, Garreck Lenz, Mariya Chavarha, Julie Pendergraft, James Harrington, Karla E Hirokawa, Julie A Harris, Medea McGraw, Douglas R Ollerenshaw, Kimberly Smith, Baker A Baker, Jonathan T Ting, Susan M Sunkin, Jerome Lecoq, Michael Z Lin, Edward S Boyden, Gabe J Murphy, Nuno da Costa, Jack Waters, Lu Li, Bosiljka Tasic, Hongkui Zeng

Modern genetic approaches are powerful in providing access to diverse types of neurons within the mammalian brain and greatly facilitate the study of their function. Here we report a large set of driver and reporter transgenic mouse lines, including 23 new driver lines targeting a variety of cortical and subcortical cell populations and 26 new reporter lines expressing an array of molecular tools. In particular, we describe the TIGRE2.0 transgenic platform and introduce Cre-dependent reporter lines that enable optical physiology, optogenetics, and sparse labeling of genetically defined cell populations. TIGRE2.0 reporters overcome the transgene expression-level barrier of single-copy targeted-insertion transgenesis in a wide range of neuronal types, with the additional advantage of a simplified breeding strategy compared to our first-generation TIGRE lines. These novel transgenic lines greatly expand the repertoire of high-precision genetic tools available to effectively identify, monitor, and manipulate distinct cell types in the mouse brain.

17: Molecular architecture of the mouse nervous system

Posted to bioRxiv 05 Apr 2018

9,144 downloads neuroscience

Amit Zeisel, Hannah Hochgerner, Peter Lönnerberg, Anna Johnsson, Fatima Memic, Job van der Zwan, Martin Haring, Emelie Braun, Lars E Borm, Gioele La Manno, Simone Codeluppi, Alessandro Furlan, Nathan Skene, Kenneth D. Harris, Jens Hjerling Leffler, Ernest Arenas, Patrik Ernfors, Ulrika Marklund, Sten Linnarsson

The mammalian nervous system executes complex behaviors controlled by specialised, precisely positioned and interacting cell types. Here, we used RNA sequencing of half a million single cells to create a detailed census of cell types in the mouse nervous system. We mapped cell types spatially and derived a hierarchical, data-driven taxonomy. Neurons were the most diverse and were grouped by developmental anatomical units and by the expression of neurotransmitters and neuropeptides. Neuronal diversity was driven by genes encoding cell identity, synaptic connectivity, neurotransmission and membrane conductance. We discovered several distinct, regionally restricted astrocyte types, which obeyed developmental boundaries and correlated with the spatial distribution of key glutamate and glycine neurotransmitters. In contrast, oligodendrocytes showed a loss of regional identity, followed by a secondary diversification. The resource presented here lays a solid foundation for understanding the molecular architecture of the mammalian nervous system and enables genetic manipulation of specific cell types.

18: Suite2p: beyond 10,000 neurons with standard two-photon microscopy

Posted to bioRxiv 30 Jun 2016

8,997 downloads neuroscience

Marius Pachitariu, Carsen Stringer, Mario Dipoppa, Sylvia Schröder, L. Federico Rossi, Henry Dalgleish, Matteo Carandini, Kenneth D. Harris

Two-photon microscopy of calcium-dependent sensors has enabled unprecedented recordings from vast populations of neurons. While the sensors and microscopes have matured over several generations of development, computational methods to process the resulting movies remain inefficient and can give results that are hard to interpret. Here we introduce Suite2p: a fast, accurate and complete pipeline that registers raw movies, detects active cells, extracts their calcium traces and infers their spike times. Suite2p runs on standard workstations, operates faster than real time, and recovers ~2 times more cells than the previous state-of-the-art method. Its low computational load allows routine detection of ~10,000 cells simultaneously with standard two-photon resonant-scanning microscopes. Recordings at this scale promise to reveal the fine structure of activity in large populations of neurons or large populations of subcellular structures such as synaptic boutons.
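
The first stage of such a pipeline is registration of the raw movie. A standard way to estimate rigid motion between a frame and a reference is FFT phase correlation; this is an illustrative method, not Suite2p's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
ref = rng.random((64, 64))                          # reference image
frame = np.roll(ref, shift=(3, -5), axis=(0, 1))    # motion-shifted frame

# Phase correlation: normalize the cross-power spectrum; its inverse FFT
# then peaks at the translation between the two images.
F_ref, F_frame = np.fft.fft2(ref), np.fft.fft2(frame)
cross_power = F_frame * np.conj(F_ref)
cross_power /= np.abs(cross_power)
corr = np.fft.ifft2(cross_power).real

dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
# Indices past half the image size wrap around to negative shifts.
dy = dy - 64 if dy > 32 else dy
dx = dx - 64 if dx > 32 else dx
print(dy, dx)                                        # recovers (3, -5)
```

Undoing the estimated shift with `np.roll(frame, (-dy, -dx), axis=(0, 1))` aligns the frame to the reference.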

19: Bright and photostable chemigenetic indicators for extended in vivo voltage imaging

Posted to bioRxiv 06 Oct 2018

8,693 downloads neuroscience

Ahmed S. Abdelfattah, Takashi Kawashima, Amrita Singh, Ondrej Novak, Hui Liu, Yichun Shuai, Yi-Chieh Huang, Jonathan B. Grimm, Ronak Patel, Johannes Friedrich, Brett D. Mensh, Liam Paninski, John J Macklin, Kaspar Podgorski, Bei-Jung Lin, Tsai-Wen Chen, Glenn C. Turner, Zhe Liu, Minoru Koyama, Karel Svoboda, Misha B Ahrens, Luke D. Lavis, Eric R Schreiter

Imaging changes in membrane potential using genetically encoded fluorescent voltage indicators (GEVIs) has great potential for monitoring neuronal activity with high spatial and temporal resolution. Brightness and photostability of fluorescent proteins and rhodopsins have limited the utility of existing GEVIs. We engineered a novel GEVI, Voltron, that utilizes bright and photostable synthetic dyes instead of protein-based fluorophores, extending the combined duration of imaging and number of neurons imaged simultaneously by more than tenfold relative to existing GEVIs. We used Voltron for in vivo voltage imaging in mice, zebrafish, and fruit flies. In mouse cortex, Voltron allowed single-trial recording of spikes and subthreshold voltage signals from dozens of neurons simultaneously, over 15 minutes of continuous imaging. In larval zebrafish, Voltron enabled the precise correlation of spike timing with behavior.

20: A Single-Cell Atlas of Cell Types, States, and Other Transcriptional Patterns from Nine Regions of the Adult Mouse Brain

Posted to bioRxiv 10 Apr 2018

8,475 downloads neuroscience

Arpiar Saunders, Evan Macosko, Alec Wysoker, Melissa Goldman, Fenna Krienen, Heather de Rivera, Elizabeth Bien, Matthew Baum, Shuyu Wang, Aleks Goeva, James Nemesh, Nolan Kamitaki, Sara Brumbaugh, David Kulp, Steven A McCarroll

The mammalian brain is composed of diverse, specialized cell populations, few of which we fully understand. To more systematically ascertain and learn from cellular specializations in the brain, we used Drop-seq to perform single-cell RNA sequencing of 690,000 cells sampled from nine regions of the adult mouse brain: frontal and posterior cortex (156,000 and 99,000 cells, respectively), hippocampus (113,000), thalamus (89,000), cerebellum (26,000), and all of the basal ganglia - the striatum (77,000), globus pallidus externus/nucleus basalis (66,000), entopeduncular/subthalamic nuclei (19,000), and the substantia nigra/ventral tegmental area (44,000). We developed computational approaches to distinguish biological from technical signals in single-cell data, then identified 565 transcriptionally distinct groups of cells, which we annotate and present through interactive online software we developed for visualizing and re-analyzing these data (DropViz). Comparison of cell classes and types across regions revealed features of brain organization. These included a neuronal gene-expression module for synthesizing axonal and presynaptic components; widely shared patterns in the combinatorial co-deployment of voltage-gated ion channels by diverse neuronal populations; functional distinctions among cells of the brain vasculature; and specialization of glutamatergic neurons across cortical regions to a degree not observed in other neuronal or non-neuronal populations. We describe systematic neuronal classifications for two complex, understudied regions of the basal ganglia, the globus pallidus externus and substantia nigra reticulata. In the striatum, where neuron types have been intensely researched, our data reveal a previously undescribed population of striatal spiny projection neurons (SPNs) comprising 4% of SPNs. The adult mouse brain cell atlas can serve as a reference for analyses of development, disease, and evolution.
