Neural system identification for large populations separating "what" and "where"

11/07/2017
by David A. Klindt, et al.

Neuroscientists classify neurons into different types that perform similar computations at different locations in the visual field. Traditional methods for neural system identification do not capitalize on this separation of 'what' and 'where'. Learning deep convolutional feature spaces that are shared among many neurons provides an exciting path forward, but the architectural design needs to account for data limitations: while new experimental techniques enable recordings from thousands of neurons, experimental time is limited, so one can sample only a small fraction of each neuron's response space. Here, we show that a major bottleneck for fitting convolutional neural networks (CNNs) to neural data is the estimation of the individual receptive field locations, a problem that has only been scratched at the surface thus far. We propose a CNN architecture with a sparse readout layer factorizing the spatial ("where") and feature ("what") dimensions. Our network scales well to thousands of neurons and short recordings and can be trained end-to-end. We evaluate this architecture on ground-truth data to explore the challenges and limitations of CNN-based system identification. Moreover, we show that our network model outperforms current state-of-the-art system identification models of mouse primary visual cortex.
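To illustrate the idea of a factorized readout, the sketch below shows how a per-neuron weight tensor over shared convolutional feature maps can be split into a spatial mask ("where") and a feature-weight vector ("what"). This is a minimal numpy illustration of the factorization concept, not the authors' implementation; all dimensions and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative only).
K, H, W = 8, 16, 16   # feature channels, spatial height, spatial width
N = 5                 # number of recorded neurons

# Shared convolutional feature maps for one stimulus: shape (K, H, W).
features = rng.standard_normal((K, H, W))

# Factorized readout: each neuron gets a spatial mask ("where") over
# (H, W) and a feature vector ("what") over K channels, instead of a
# full (K, H, W) weight tensor.
where = rng.standard_normal((N, H, W))   # spatial masks
what = rng.standard_normal((N, K))       # feature weights

# Predicted response of neuron n:
#   r_n = sum_{k,x,y} what[n, k] * where[n, x, y] * features[k, x, y]
responses = np.einsum('nk,nxy,kxy->n', what, where, features)

# The factorization cuts parameters per neuron from K*H*W to K + H*W,
# which is what makes the readout tractable for thousands of neurons.
params_full = K * H * W     # 2048
params_factored = K + H * W  # 264
```

In the paper's setting, the spatial mask is additionally encouraged to be sparse, reflecting the assumption that each neuron reads out the shared feature space at essentially one receptive field location.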


