ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias

06/07/2021
by Yufei Xu, et al.

Transformers have shown great potential in various computer vision tasks owing to their strong capability in modeling long-range dependency using the self-attention mechanism. Nevertheless, vision transformers treat an image as a 1D sequence of visual tokens, lacking an intrinsic inductive bias (IB) in modeling local visual structures and dealing with scale variance. Instead, they require large-scale training data and longer training schedules to learn the IB implicitly. In this paper, we propose a novel Vision Transformer Advanced by Exploring intrinsic IB from convolutions, i.e., ViTAE. Technically, ViTAE has several spatial pyramid reduction modules to downsample and embed the input image into tokens with rich multi-scale context by using multiple convolutions with different dilation rates. In this way, it acquires an intrinsic scale-invariance IB and is able to learn robust feature representations for objects at various scales. Moreover, in each transformer layer, ViTAE has a convolution block in parallel to the multi-head self-attention module, whose features are fused and fed into the feed-forward network. Consequently, it has the intrinsic locality IB and is able to learn local features and global dependencies collaboratively. Experiments on ImageNet as well as downstream tasks prove the superiority of ViTAE over the baseline transformer and concurrent works. Source code and pretrained models will be available at GitHub.
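The two components described in the abstract can be illustrated concretely. Below is a minimal PyTorch sketch of a pyramid-reduction module (parallel convolutions with different dilation rates producing multi-scale tokens) and a transformer layer with a convolution block in parallel to multi-head self-attention; all module names, dimensions, and hyperparameters (ReductionCell, NormalCell, dilations=(1, 2, 3, 4), etc.) are illustrative assumptions, not the authors' reference implementation.

```python
# A rough sketch of the two ViTAE ideas; names and hyperparameters are assumptions.
import torch
import torch.nn as nn


class ReductionCell(nn.Module):
    """Downsample and embed the image with parallel convolutions of
    different dilation rates, then concatenate the multi-scale context."""

    def __init__(self, in_ch, embed_dim, dilations=(1, 2, 3, 4), stride=4):
        super().__init__()
        branch_dim = embed_dim // len(dilations)
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_dim, kernel_size=3, stride=stride,
                      padding=d, dilation=d)
            for d in dilations
        ])

    def forward(self, x):                       # x: (B, C, H, W)
        feats = [b(x) for b in self.branches]   # each: (B, D/k, H/s, W/s)
        x = torch.cat(feats, dim=1)             # fuse multi-scale context
        B, D, H, W = x.shape
        return x.flatten(2).transpose(1, 2), (H, W)   # tokens: (B, H*W, D)


class NormalCell(nn.Module):
    """Transformer layer with a convolution branch in parallel to
    multi-head self-attention; the fused features feed the FFN."""

    def __init__(self, dim, num_heads=4, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x, hw):
        H, W = hw
        y = self.norm1(x)
        attn_out, _ = self.attn(y, y, y)                          # global dependencies
        B, N, D = y.shape
        conv_in = y.transpose(1, 2).reshape(B, D, H, W)
        conv_out = self.conv(conv_in).flatten(2).transpose(1, 2)  # local features
        x = x + attn_out + conv_out                               # fuse both branches
        return x + self.ffn(self.norm2(x))


# Usage: embed a 224x224 image into multi-scale tokens, then apply one layer.
img = torch.randn(1, 3, 224, 224)
tokens, hw = ReductionCell(3, 64)(img)
out = NormalCell(64)(tokens, hw)
print(out.shape)  # torch.Size([1, 3136, 64])
```

In this sketch, the attention branch models long-range dependencies while the depthwise convolution branch injects the locality IB; summing both outputs before the feed-forward network mirrors the fusion described above.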


Related research

07/13/2022 · Pyramid Transformer for Traffic Sign Detection
Traffic sign detection is a vital task in the visual system of self-driv...

12/24/2021 · SimViT: Exploring a Simple Vision Transformer with sliding windows
Although vision Transformers have achieved excellent performance as back...

03/29/2023 · Visually Wired NFTs: Exploring the Role of Inspiration in Non-Fungible Tokens
The fervor for Non-Fungible Tokens (NFTs) attracted countless creators, ...

11/25/2022 · Adaptive Attention Link-based Regularization for Vision Transformers
Although transformer networks are recently employed in various vision ta...

06/17/2022 · CtrlFormer: Learning Transferable State Representation for Visual Control via Transformer
Transformer has achieved great successes in learning vision and language...

10/12/2022 · S4ND: Modeling Images and Videos as Multidimensional Signals Using State Spaces
Visual data such as images and videos are typically modeled as discretiz...

12/09/2022 · Mitigation of Spatial Nonstationarity with Vision Transformers
Spatial nonstationarity, the location variance of features' statistical ...
