Dual-Flattening Transformers through Decomposed Row and Column Queries for Semantic Segmentation

by   Ying Wang, et al.

It is critical to obtain high-resolution features with long-range dependencies for dense prediction tasks such as semantic segmentation. To generate a high-resolution output of size H×W from a low-resolution feature map of size h×w (hw ≪ HW), a naive dense transformer incurs an intractable complexity of 𝒪(hwHW), limiting its application to high-resolution dense prediction. We propose a Dual-Flattening Transformer (DFlatFormer) to enable high-resolution output by reducing the complexity to 𝒪(hw(H+W)), multiple orders of magnitude smaller than that of the naive dense transformer. Decomposed queries are introduced to retrieve row and column attentions tractably through separate transformers, and their outputs are combined to form a dense feature map at high resolution. To this end, the input sequence from the encoder is flattened row-wise and column-wise to align with the decomposed queries, preserving its row and column structures, respectively. The row and column transformers also interact with each other to capture mutual attentions at the spatial crossings between rows and columns. We further propose performing attention through efficient grouping and pooling to reduce model complexity. Extensive experiments on the ADE20K and Cityscapes datasets demonstrate the superiority of the proposed dual-flattening transformer architecture, achieving higher mIoUs.
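The complexity reduction can be illustrated with a minimal sketch: instead of HW dense queries attending over all hw encoder tokens (cost 𝒪(hwHW)), H row queries and W column queries each attend separately (cost 𝒪(hw(H+W))), and their outputs are broadcast-combined into a dense H×W map. This is only an illustrative sketch, not the authors' implementation; the names `row_q`, `col_q`, and the additive combination are assumptions for demonstration.

```python
import numpy as np

def attention(q, k, v):
    # Scaled dot-product attention: (n_q, d) queries over (n_k, d) keys/values.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

h, w, H, W, d = 16, 16, 128, 128, 32
rng = np.random.default_rng(0)
feat = rng.standard_normal((h * w, d))   # low-resolution encoder tokens

row_q = rng.standard_normal((H, d))      # H decomposed row queries (hypothetical)
col_q = rng.standard_normal((W, d))      # W decomposed column queries (hypothetical)

# Each attention costs O(hw*H) or O(hw*W), so the total is O(hw*(H+W))
# rather than the O(hw*H*W) of a naive dense transformer.
row_feat = attention(row_q, feat, feat)  # (H, d): one feature per output row
col_feat = attention(col_q, feat, feat)  # (W, d): one feature per output column

# Broadcast-combine row and column features into a dense H x W feature map.
dense = row_feat[:, None, :] + col_feat[None, :, :]   # (H, W, d)
print(dense.shape)
```

For the sizes above, the naive attention would score hw·HW = 256·16384 ≈ 4.2M query–key pairs per layer, while the decomposed version scores hw·(H+W) = 256·256 ≈ 65K, a 64× reduction that grows with output resolution.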



