Convolutional Bypasses Are Better Vision Transformer Adapters

07/14/2022
by Shibo Jie, et al.

The pretrain-then-finetune paradigm has been widely adopted in computer vision. But as the size of Vision Transformers (ViTs) grows exponentially, full finetuning becomes prohibitive due to its heavy storage overhead. Motivated by parameter-efficient transfer learning (PETL) on language transformers, recent studies insert lightweight adaptation modules (e.g., adapter layers or prompt tokens) into a pretrained ViT and finetune only these modules while the pretrained weights remain frozen. However, these modules were originally designed to finetune language models; although they port reasonably well to ViT, their design lacks prior knowledge about visual tasks. In this paper, we propose to construct Convolutional Bypasses (Convpass) in ViT as adaptation modules, introducing only a small number (less than 0.5%) of trainable parameters to adapt the large ViT. Unlike other PETL methods, Convpass benefits from the hard-coded inductive bias of convolutional layers and is thus better suited to visual tasks, especially in the low-data regime. Experimental results on the VTAB-1k benchmark and few-shot learning datasets show that Convpass outperforms current language-oriented adaptation modules, demonstrating the necessity of tailoring vision-oriented adaptation modules for vision models.
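As a rough illustration of the idea (not the authors' released implementation), the sketch below adds a small convolutional bottleneck in parallel to a frozen ViT block: tokens are projected down to a narrow width, reshaped to the 2D patch grid, passed through a 3x3 convolution, and projected back up. The bottleneck width, activation, and CLS-token handling are assumptions made for the example.

```python
# Minimal sketch of a convolutional bypass adapter in the spirit of Convpass.
# Widths, activation, and CLS handling below are illustrative assumptions.
import torch
import torch.nn as nn

class ConvBypass(nn.Module):
    def __init__(self, dim: int = 768, bottleneck: int = 8, grid: int = 14):
        super().__init__()
        self.grid = grid
        self.down = nn.Linear(dim, bottleneck)          # project tokens to a narrow width
        self.conv = nn.Conv2d(bottleneck, bottleneck, kernel_size=3, padding=1)
        self.up = nn.Linear(bottleneck, dim)            # project back to the ViT width
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 1 + grid*grid, dim), CLS token first
        cls_tok, patches = x[:, :1], x[:, 1:]
        h = self.act(self.down(patches))
        b, n, c = h.shape
        h = h.transpose(1, 2).reshape(b, c, self.grid, self.grid)
        h = self.act(self.conv(h))                      # 3x3 conv: hard-coded local inductive bias
        h = h.flatten(2).transpose(1, 2)
        h = self.up(h)
        cls_out = self.up(self.act(self.down(cls_tok))) # CLS token skips the spatial conv
        return torch.cat([cls_out, h], dim=1)
```

In the PETL setting, only these bypass parameters (and typically the task head) would be trained, with all pretrained ViT weights frozen, e.g. by setting `requires_grad = False` on the backbone before finetuning.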

Related research

04/26/2023 · PVP: Pre-trained Visual Parameter-Efficient Tuning
Large-scale pre-trained transformers have demonstrated remarkable succes...

06/09/2022 · Neural Prompt Search
The size of vision models has grown exponentially over the last few year...

07/06/2023 · Vision Language Transformers: A Survey
Vision language tasks, such as answering questions about or generating c...

12/06/2022 · FacT: Factor-Tuning for Lightweight Adaptation on Vision Transformer
Recent work has explored the potential to adapt a pre-trained vision tra...

03/27/2023 · Learning Expressive Prompting With Residuals for Vision Transformers
Prompt learning is an efficient approach to adapt transformers by insert...

06/26/2023 · Composing Parameter-Efficient Modules with Arithmetic Operations
As an efficient alternative to conventional full finetuning, parameter-e...

05/03/2022 · Mixed-effects transformers for hierarchical adaptation
Language use differs dramatically from context to context. To some degre...
