LGViT: Dynamic Early Exiting for Accelerating Vision Transformer

08/01/2023
by   Guanyu Xu, et al.

Recently, the efficient deployment and acceleration of powerful vision transformers (ViTs) on resource-limited edge devices for providing multimedia services have become attractive tasks. Although early exiting is a feasible solution for accelerating inference, most existing work focuses on convolutional neural networks (CNNs) and transformer models in natural language processing (NLP). Moreover, directly applying early exiting methods to ViTs may result in substantial performance degradation. To tackle this challenge, we systematically investigate the efficacy of early exiting in ViTs and point out that the insufficient feature representations in shallow internal classifiers and the limited ability to capture target semantic information in deep internal classifiers restrict the performance of these methods. We then propose an early exiting framework for general ViTs, termed LGViT, which incorporates heterogeneous exiting heads, namely a local perception head and a global aggregation head, to achieve an efficiency-accuracy trade-off. In particular, we develop a novel two-stage training scheme, comprising end-to-end training and self-distillation with the backbone frozen, to generate early exiting ViTs; this scheme facilitates the fusion of the global and local information extracted by the two types of heads. We conduct extensive experiments using three popular ViT backbones on three vision datasets. Results demonstrate that LGViT achieves competitive performance with an approximately 1.8× speed-up.
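The early exiting mechanism the abstract describes attaches internal classifier heads after intermediate backbone blocks and stops computation once a head is sufficiently confident. The sketch below is a minimal, generic illustration of confidence-based early exiting, not LGViT's actual local/global heads; the function names, the stand-in blocks and heads, and the threshold value are all illustrative assumptions.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of floats.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def early_exit_inference(x, blocks, heads, threshold=0.9):
    """Run backbone blocks sequentially; after each block, a lightweight
    internal head produces class logits. Exit as soon as the head's
    confidence (max softmax probability) clears the threshold.

    Returns (predicted_class, depth_used)."""
    for depth, (block, head) in enumerate(zip(blocks, heads), start=1):
        x = block(x)
        probs = softmax(head(x))
        conf = max(probs)
        if conf >= threshold:
            return probs.index(conf), depth  # early exit at this depth
    # Fell through to the final head without exiting early.
    return probs.index(conf), depth

# Toy stand-ins: each "block" nudges the feature, and every "head" scores
# class 0 more confidently as the feature grows with depth.
blocks = [lambda x: x + 1.0] * 4
heads = [lambda x: [x, 0.0]] * 4

pred, depth = early_exit_inference(0.0, blocks, heads, threshold=0.9)
print(pred, depth)  # exits before the last block once confidence passes 0.9
```

In a real ViT, `blocks` would be transformer encoder layers and `heads` the internal classifiers; the efficiency-accuracy trade-off is then governed by the head designs and the confidence threshold.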

Related research

06/02/2021
Container: Context Aggregation Network
Convolutional neural networks (CNNs) are ubiquitous in computer vision, ...

04/27/2022
DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers
Transformers are successfully applied to computer vision due to their po...

02/14/2022
Handcrafted Histological Transformer (H2T): Unsupervised Representation of Whole Slide Images
Diagnostic, prognostic and therapeutic decision-making of cancer in path...

09/10/2023
DeViT: Decomposing Vision Transformers for Collaborative Inference in Edge Devices
Recent years have witnessed the great success of vision transformer (ViT...

06/09/2021
Zero Time Waste: Recycling Predictions in Early Exit Neural Networks
The problem of reducing processing time of large deep learning models is...

12/29/2020
Accelerating Pre-trained Language Models via Calibrated Cascade
Dynamic early exiting aims to accelerate pre-trained language models' (P...
