LambdaNetworks: Modeling Long-Range Interactions Without Attention

02/17/2021
by Irwan Bello, et al.

We present lambda layers – an alternative framework to self-attention – for capturing long-range interactions between an input and structured contextual information (e.g. a pixel surrounded by other pixels). Lambda layers capture such interactions by transforming available contexts into linear functions, termed lambdas, and applying these linear functions to each input separately. Like linear attention, lambda layers bypass expensive attention maps, but in contrast they model both content-based and position-based interactions, which enables their application to large structured inputs such as images. The resulting neural network architectures, LambdaNetworks, significantly outperform their convolutional and attentional counterparts on ImageNet classification, COCO object detection and COCO instance segmentation, while being more computationally efficient. Additionally, we design LambdaResNets, a family of hybrid architectures across different scales that considerably improves the speed-accuracy tradeoff of image classification models. LambdaResNets reach excellent accuracies on ImageNet while being 3.2-4.4x faster than the popular EfficientNets on modern machine learning accelerators. When training with an additional 130M pseudo-labeled images, LambdaResNets achieve up to a 9.5x speed-up over the corresponding EfficientNet checkpoints.
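To make the mechanism concrete, below is a minimal single-head lambda layer sketch in PyTorch. The class name LambdaLayer, the fixed-size per-position embedding table pos_emb, and the single-head setup are illustrative assumptions, not the paper's exact implementation (which, to our understanding, uses multi-query lambdas and translation-equivariant relative position embeddings).

    import torch
    import torch.nn as nn

    class LambdaLayer(nn.Module):
        # Minimal single-head lambda layer sketch.
        # n = input length, m = context length, d = input dim,
        # k = query/key depth, v = value depth.
        def __init__(self, d, k, v, n, m):
            super().__init__()
            self.to_q = nn.Linear(d, k, bias=False)
            self.to_k = nn.Linear(d, k, bias=False)
            self.to_v = nn.Linear(d, v, bias=False)
            # Hypothetical per-(query, context) position embeddings; the
            # paper instead shares relative embeddings across positions.
            self.pos_emb = nn.Parameter(torch.randn(n, m, k) * 0.02)

        def forward(self, x, context):
            # x: (batch, n, d); context: (batch, m, d)
            q = self.to_q(x)                        # (b, n, k)
            k = self.to_k(context).softmax(dim=1)   # normalize keys over context
            v = self.to_v(context)                  # (b, m, v)

            # Content lambda: one k x v linear map shared by all query positions.
            content_lambda = torch.einsum('bmk,bmv->bkv', k, v)
            # Position lambdas: a distinct k x v linear map per query position.
            position_lambdas = torch.einsum('nmk,bmv->bnkv', self.pos_emb, v)

            # Apply the lambdas to each query separately; the n x m attention
            # map is never materialized.
            content_out = torch.einsum('bnk,bkv->bnv', q, content_lambda)
            position_out = torch.einsum('bnk,bnkv->bnv', q, position_lambdas)
            return content_out + position_out       # (b, n, v)

Because the content lambda is shared by every query position, it costs only O(mkv) per example, and the position embeddings are shared across the batch; the n x m attention map of self-attention is never formed, which is what makes the layer attractive for large structured inputs such as images.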
