Universality and individuality in neural dynamics across large populations of recurrent networks

07/19/2019
by Niru Maheswaranathan, et al.

Task-based modeling with recurrent neural networks (RNNs) has emerged as a popular way to infer the computational function of different brain regions. These models are quantitatively assessed by comparing the low-dimensional neural representations of the model with those of the brain, for example using canonical correlation analysis (CCA). However, the nature of the detailed neurobiological inferences one can draw from such efforts remains elusive. For example, to what extent does training neural networks to solve common tasks uniquely determine the network dynamics, independent of architectural choices? Or alternatively, are the learned dynamics highly sensitive to different model choices? The answers to these questions have strong implications for whether and how we should use task-based RNN modeling to understand brain dynamics. To address these foundational questions, we study populations of thousands of networks, spanning commonly used RNN architectures, trained to solve neuroscientifically motivated tasks, and we characterize their nonlinear dynamics. We find that the geometry of the RNN representations can be highly sensitive to different network architectures, yielding a cautionary tale for measures of similarity that rely on representational geometry, such as CCA. Moreover, we find that while the geometry of neural dynamics can vary greatly across architectures, the underlying computational scaffold (the topological structure of fixed points, transitions between them, limit cycles, and linearized dynamics) often appears universal across all architectures.
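The abstract contrasts two kinds of analysis: comparing representational geometry across networks with CCA, and probing the underlying computational scaffold via fixed points and linearized dynamics. Below is a minimal sketch of both, assuming a toy vanilla-RNN update h_{t+1} = tanh(W h_t + U x_t + b) and a standard SVD-based CCA; the network, its parameters, and all names here are illustrative stand-ins, not the paper's actual models or code.

```python
"""Sketch (not the authors' code): fixed-point finding + CCA on RNN states."""
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, m, T = 32, 2, 500  # hidden units, input dim, timesteps (arbitrary choices)

# Toy vanilla RNN, h_{t+1} = tanh(W h_t + U x_t + b); in practice W, U, b
# would come from training the network on a task.
W = rng.normal(0, 1.0 / np.sqrt(n), (n, n))
U = rng.normal(0, 1.0, (n, m))
b = np.zeros(n)
F = lambda h, x: np.tanh(W @ h + U @ x + b)

def run(h0, xs):
    """Roll out the RNN over inputs xs, returning the (T, n) trajectory."""
    hs, h = [], h0
    for x in xs:
        h = F(h, x)
        hs.append(h)
    return np.array(hs)

# --- Fixed points: minimize q(h) = 0.5 * ||F(h, x*) - h||^2 at constant input.
def find_fixed_point(x_star, h_init):
    q = lambda h: 0.5 * np.sum((F(h, x_star) - h) ** 2)
    res = minimize(q, h_init, method="L-BFGS-B")
    return res.x, res.fun

x_star = np.zeros(m)
h_star, q_val = find_fixed_point(x_star, rng.normal(0, 0.5, n))

# Linearize around the fixed point: for tanh units, J = diag(1 - a^2) @ W.
a = np.tanh(W @ h_star + U @ x_star + b)
J = (1 - a ** 2)[:, None] * W
eigvals = np.linalg.eigvals(J)  # |eig| < 1 => locally stable for this map

# --- CCA between two hidden-state trajectories.
def cca_corrs(H1, H2):
    """Canonical correlations via centering, QR whitening, and SVD."""
    H1 = H1 - H1.mean(0)
    H2 = H2 - H2.mean(0)
    Q1, _ = np.linalg.qr(H1)
    Q2, _ = np.linalg.qr(H2)
    return np.linalg.svd(Q1.T @ Q2, compute_uv=False)

xs = rng.normal(0, 1, (T, m))
H1 = run(np.zeros(n), xs)
# In the paper H2 would come from a second trained architecture; here we just
# reuse the same toy network from a different initial state to stay minimal.
H2 = run(rng.normal(0, 0.1, n), xs)
print("fixed-point residual:", q_val, "| top CCA corr:", cca_corrs(H1, H2)[0])
```

For this discrete-time map, eigenvalues of J inside the unit circle indicate a locally stable fixed point; the paper's claim is that such topological and linearized signatures tend to agree across architectures even when CCA-measured geometry does not.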


