The Multiscale Surface Vision Transformer

03/21/2023
by Simon Dahan, et al.

Surface meshes are a favoured domain for representing structural and functional information on the human cortex, but their complex topology and geometry pose significant challenges for deep learning analysis. While Transformers have excelled as domain-agnostic architectures for sequence-to-sequence learning, notably for domains where translating the convolution operation is non-trivial, the quadratic cost of self-attention remains an obstacle for many dense prediction tasks. Inspired by recent advances in hierarchical modelling with vision transformers, we introduce the Multiscale Surface Vision Transformer (MS-SiT) as a backbone architecture for surface deep learning. Self-attention is applied within local mesh windows to allow high-resolution sampling of the underlying data, while a shifted-window strategy improves the sharing of information between windows. Neighbouring patches are successively merged, allowing the MS-SiT to learn hierarchical representations suitable for any prediction task. Results demonstrate that the MS-SiT outperforms existing surface deep learning methods on neonatal phenotype prediction tasks using the Developing Human Connectome Project (dHCP) dataset. Furthermore, building the MS-SiT backbone into a U-shaped architecture for surface segmentation yields competitive results on cortical parcellation using the UK Biobank (UKB) and manually annotated MindBoggle datasets. Code and trained models are publicly available at https://github.com/metrics-lab/surface-vision-transformers.
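The two core ingredients the abstract describes, self-attention restricted to local windows of mesh patches and successive merging of neighbouring patches into coarser tokens, can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the window size, token count, and embedding dimension below are arbitrary choices for demonstration, and learnable projections are omitted for brevity.

```python
# Illustrative sketch (not the MS-SiT source code): self-attention applied
# independently within local windows of a patch sequence, followed by
# Swin-style patch merging that halves the sequence length.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(tokens, window):
    """Attention computed per window of `window` patches.

    tokens: (N, d) mesh-patch embeddings, N a multiple of `window`.
    Cost scales as O(N * window * d) rather than the O(N^2 * d) of
    global self-attention, which is what makes high-resolution
    sampling of the surface tractable.
    """
    n, d = tokens.shape
    w = tokens.reshape(n // window, window, d)       # partition into windows
    scores = w @ w.transpose(0, 2, 1) / np.sqrt(d)   # per-window similarities
    out = softmax(scores) @ w                        # per-window weighted sum
    return out.reshape(n, d)

def merge_patches(tokens, group=2):
    """Concatenate `group` neighbouring patches: the sequence shortens
    and the channel dimension widens, producing a coarser scale."""
    n, d = tokens.shape
    return tokens.reshape(n // group, group * d)

x = np.random.randn(64, 16)          # 64 mesh patches, 16-dim embeddings
x = window_attention(x, window=8)    # local attention within 8-patch windows
x = merge_patches(x)                 # -> shape (32, 32): fewer, wider tokens
print(x.shape)
```

The shifted-window step mentioned in the abstract can be emulated between successive attention layers by cyclically rolling the token sequence (e.g. `np.roll(tokens, window // 2, axis=0)`) so that patches near window boundaries attend across them in the next layer.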


Related research

- 05/31/2022: Surface Analysis with Vision Transformers
- 04/07/2022: Surface Vision Transformers: Flexible Attention-Based Modelling of Biomedical Surfaces
- 08/10/2023: Surface Masked AutoEncoder: Self-Supervision for Cortical Imaging Data
- 03/30/2022: Surface Vision Transformers: Attention-Based Modelling applied to Cortical Analysis
- 05/26/2022: Green Hierarchical Vision Transformer for Masked Image Modeling
- 04/06/2022: MixFormer: Mixing Features across Windows and Dimensions
- 03/21/2023: Online Transformers with Spiking Neurons for Fast Prosthetic Hand Control
