BrainBERT: Self-supervised representation learning for intracranial recordings

02/28/2023
by Christopher Wang, et al.

We create a reusable Transformer, BrainBERT, for intracranial recordings, bringing modern representation-learning approaches to neuroscience. Much as in NLP and speech recognition, pretraining this Transformer in an unsupervised manner on a large corpus of unannotated neural recordings enables it to classify complex concepts, i.e., to decode neural data, with higher accuracy and far less labeled data. Our approach generalizes to new subjects with electrodes in new positions, and to unrelated tasks, showing that the representations robustly disentangle the neural signal. Just as in NLP, where one can study language by investigating what a language model learns, this approach opens the door to investigating the brain through what a model of the brain learns. As a first step along this path, we demonstrate a new analysis of the intrinsic dimensionality of the computations in different brain areas. To construct these representations, we combine a technique for producing super-resolution spectrograms of neural data with a masking approach designed for generating contextual representations of audio. In the future, far more concepts will be decodable from neural recordings using representation learning, potentially unlocking the brain the way language models unlocked language.
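To make the pretraining idea concrete: at its core it is a BERT-style masked-reconstruction objective applied to spectrogram time frames instead of word tokens. The sketch below shows one minimal way to set that up in PyTorch. The module names, masking scheme, and hyperparameters here are illustrative assumptions, not the authors' implementation; positional encodings and BrainBERT's actual masking strategy are omitted for brevity.

    # Minimal sketch of BERT-style masked pretraining on neural spectrograms.
    # All names and hyperparameters are assumptions for illustration only.
    import torch
    import torch.nn as nn

    class MaskedSpectrogramModel(nn.Module):
        def __init__(self, n_freq_bins=40, d_model=256, n_layers=6, n_heads=8):
            super().__init__()
            # Each spectrogram time frame (a vector of frequency bins) is a "token".
            self.input_proj = nn.Linear(n_freq_bins, d_model)
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.recon_head = nn.Linear(d_model, n_freq_bins)

        def forward(self, spec):  # spec: (batch, time, n_freq_bins)
            return self.recon_head(self.encoder(self.input_proj(spec)))

    def masked_reconstruction_loss(model, spec, mask_fraction=0.15):
        # Zero out a random subset of time frames and train the model to
        # reconstruct them from the surrounding (unmasked) context.
        mask = torch.rand(spec.shape[:2], device=spec.device) < mask_fraction
        corrupted = spec.masked_fill(mask.unsqueeze(-1), 0.0)
        recon = model(corrupted)
        return nn.functional.mse_loss(recon[mask], spec[mask])

    model = MaskedSpectrogramModel()
    toy_spec = torch.randn(4, 200, 40)  # toy batch: 4 clips, 200 frames, 40 bins
    loss = masked_reconstruction_loss(model, toy_spec)
    loss.backward()

The abstract also mentions measuring the intrinsic dimensionality of computations in different brain areas from the learned representations. One simple estimator, assumed here for illustration and not necessarily the one used in the paper, is the number of principal components needed to explain a fixed fraction of the embeddings' variance:

    import numpy as np

    def intrinsic_dim(embeddings, var_threshold=0.95):
        # embeddings: (n_samples, d) array of features for one electrode/region.
        centered = embeddings - embeddings.mean(axis=0)
        sing_vals = np.linalg.svd(centered, compute_uv=False)
        var_ratio = np.cumsum(sing_vals**2) / np.sum(sing_vals**2)
        # First component index at which cumulative explained variance
        # crosses the threshold, converted to a component count.
        return int(np.searchsorted(var_ratio, var_threshold) + 1)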

Related research

05/27/2022: Self-supervised models of audio effectively explain human cortical responses to speech
12/15/2022: Joint processing of linguistic properties in brains and language models
08/25/2022: Decoding speech from non-invasive brain recordings
05/28/2019: Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)
05/26/2020: BHN: A Brain-like Heterogeneous Network
04/22/2021: Low Anisotropy Sense Retrofitting (LASeR): Towards Isotropic and Sense Enriched Representations
10/25/2021: Lhotse: a speech data representation library for the modern deep learning ecosystem
