Separable Self-attention for Mobile Vision Transformers

06/06/2022
by Sachin Mehta, et al.

Mobile vision transformers (MobileViT) can achieve state-of-the-art performance across several mobile vision tasks, including classification and detection. Though these models have fewer parameters, they have high latency as compared to convolutional neural network-based models. The main efficiency bottleneck in MobileViT is the multi-headed self-attention (MHA) in transformers, which requires O(k^2) time complexity with respect to the number of tokens (or patches) k. Moreover, MHA requires costly operations (e.g., batch-wise matrix multiplication) for computing self-attention, impacting latency on resource-constrained devices. This paper introduces a separable self-attention method with linear complexity, i.e., O(k). A simple yet effective characteristic of the proposed method is that it uses element-wise operations for computing self-attention, making it a good choice for resource-constrained devices. The improved model, MobileViTv2, is state-of-the-art on several mobile vision tasks, including ImageNet object classification and MS-COCO object detection. With about three million parameters, MobileViTv2 achieves a top-1 accuracy of 75.6% on the ImageNet dataset, outperforming MobileViT by about 1% while running 3.2x faster on a mobile device. Our source code is available at: <https://github.com/apple/ml-cvnets>
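
For intuition, below is a minimal PyTorch-style sketch of a separable, linear-complexity self-attention block of the kind the abstract describes: each token gets a scalar context score, the score-weighted keys are summed into a single global context vector, and that vector is broadcast back to every token through element-wise multiplication, so the cost grows as O(k) rather than O(k^2). Module names, dimensions, and layer details here are illustrative assumptions, not the authors' exact implementation; see the linked repository for the official code.

```python
import torch
import torch.nn as nn


class SeparableSelfAttention(nn.Module):
    """Sketch of a linear-complexity, element-wise self-attention block.

    Cost is O(k) in the number of tokens k: no k x k attention matrix
    (and no batch-wise matrix multiplication) is ever materialized.
    """

    def __init__(self, dim: int):
        super().__init__()
        # One projection yields, per token: a scalar context-score logit,
        # a key vector, and a value vector (names are illustrative).
        self.qkv = nn.Linear(dim, 1 + 2 * dim)
        self.out = nn.Linear(dim, dim)
        self.dim = dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, k, dim), where k is the number of tokens/patches.
        scores, key, value = torch.split(
            self.qkv(x), [1, self.dim, self.dim], dim=-1
        )
        # Context scores: softmax over the token axis (one scalar per token).
        scores = torch.softmax(scores, dim=1)              # (batch, k, 1)
        # Global context vector: score-weighted sum of the keys.
        context = (scores * key).sum(dim=1, keepdim=True)  # (batch, 1, dim)
        # Broadcast the context back to every token element-wise.
        return self.out(torch.relu(value) * context)       # (batch, k, dim)


# Example: 196 tokens (14 x 14 patches) of width 64.
x = torch.randn(2, 196, 64)
print(SeparableSelfAttention(64)(x).shape)  # torch.Size([2, 196, 64])
```

Because token mixing reduces to one weighted sum and one broadcast multiply, the block avoids the quadratic attention map and the batch-wise matrix multiplications that the abstract identifies as the latency bottleneck on mobile hardware.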


