FsaNet: Frequency Self-attention for Semantic Segmentation

11/28/2022
by Fengyu Zhang, et al.

Considering the spectral properties of images, we propose a new self-attention mechanism with greatly reduced computational complexity, down to a linear rate. To better preserve edges while promoting similarity within objects, we propose individualized processes over different frequency bands. In particular, we study the case where the process operates only on low-frequency components. Through an ablation study, we show that low-frequency self-attention achieves performance very close to, or better than, full-frequency self-attention, even without retraining the network. Accordingly, we design novel plug-and-play modules and embed them into the head of a CNN, yielding a network we refer to as FsaNet. The frequency self-attention 1) takes low-frequency coefficients as input, 2) is mathematically equivalent to spatial-domain self-attention with linear structures, and 3) simplifies the token-mapping (1×1 convolution) stage and the token-mixing stage simultaneously. We show that frequency self-attention requires 87.29%∼90.04% less memory, 96.13%∼98.07% fewer FLOPs, and 97.56%∼98.18% less run time than regular self-attention. Compared to other ResNet101-based self-attention networks, FsaNet achieves a new state-of-the-art result (83.0% mIoU) on the Cityscapes test set and competitive results on ADE20K and VOCaug.
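To make the idea concrete, below is a minimal, illustrative sketch of self-attention restricted to low-frequency coefficients. It assumes an orthonormal DCT-II transform and, for readability, uses ordinary softmax attention over the small retained token set rather than the linear-attention structure the paper describes; all names (`LowFreqSelfAttention`, `keep`, etc.) are hypothetical and the authors' exact formulation may differ.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def dct_matrix(n: int) -> torch.Tensor:
    """Orthonormal DCT-II matrix of size n x n (its inverse is its transpose)."""
    k = torch.arange(n).unsqueeze(1).float()
    m = torch.arange(n).unsqueeze(0).float()
    c = torch.cos(math.pi * (2 * m + 1) * k / (2 * n)) * math.sqrt(2.0 / n)
    c[0] /= math.sqrt(2.0)
    return c

class LowFreqSelfAttention(nn.Module):
    """Hypothetical sketch: attend only over a keep x keep low-frequency block."""

    def __init__(self, channels: int, keep: int = 8):
        super().__init__()
        self.keep = keep  # number of low-frequency rows/columns to retain
        self.q = nn.Linear(channels, channels)
        self.k = nn.Linear(channels, channels)
        self.v = nn.Linear(channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        b, c, h, w = x.shape
        dh, dw = dct_matrix(h).to(x), dct_matrix(w).to(x)
        # 2D DCT of the feature map, then crop the low-frequency corner.
        freq = dh @ x @ dw.t()
        low = freq[:, :, :self.keep, :self.keep]
        tokens = low.flatten(2).transpose(1, 2)  # (B, keep*keep, C)
        # Standard dot-product attention, but over only keep^2 tokens
        # instead of H*W spatial positions.
        q, k_, v = self.q(tokens), self.k(tokens), self.v(tokens)
        attn = F.softmax(q @ k_.transpose(1, 2) / math.sqrt(c), dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, self.keep, self.keep)
        # Write the refined low-frequency block back and invert the DCT,
        # leaving high-frequency (edge) coefficients untouched.
        freq = freq.clone()
        freq[:, :, :self.keep, :self.keep] = out
        return dh.t() @ freq @ dw
```

In this sketch the attention cost depends on the fixed `keep²` token count rather than on H×W, which is the source of the memory, FLOPs, and run-time savings the abstract reports; leaving the high-frequency coefficients unmodified is one plausible way to preserve edges while smoothing within objects.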
