N-Gram in Swin Transformers for Efficient Lightweight Image Super-Resolution

11/21/2022
by   Haram Choi, et al.

While some studies have shown that the Swin Transformer (SwinT) with window self-attention (WSA) is suitable for single image super-resolution (SR), SwinT ignores broad regions when reconstructing high-resolution images because of its fixed window and shift sizes. In addition, many deep learning SR methods suffer from intensive computation. To address these problems, we introduce the N-Gram context to the image domain for the first time. We define an N-Gram as a group of neighboring local windows in SwinT, unlike text analysis, which views an N-Gram as consecutive characters or words. N-Grams interact with each other through sliding-WSA, expanding the regions seen when restoring degraded pixels. Using the N-Gram context, we propose NGswin, an efficient SR network with an SCDP bottleneck that takes all outputs of the hierarchical encoder. Experimental results show that NGswin achieves competitive performance while keeping an efficient structure, compared with previous leading methods. Moreover, we also improve other SwinT-based SR methods with the N-Gram context, building an enhanced model: SwinIR-NG. Our improved SwinIR-NG outperforms the current best lightweight SR approaches and establishes state-of-the-art results. Code will be available soon.
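To make the core idea concrete, the following is a minimal NumPy sketch of what an N-Gram window context could look like: the feature map is partitioned into SwinT-style non-overlapping windows, each window is summarized by one embedding, and a small sliding neighborhood (the "N-Gram") of window embeddings is averaged so each window gains information from beyond its own borders. The function names, pooling choice, and padding mode here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def window_partition(x, ws):
    # x: (H, W, C) feature map -> (num_windows, ws*ws, C),
    # the standard non-overlapping Swin window split
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, C)

def ngram_window_context(x, ws, n):
    # Hypothetical sketch of the N-Gram idea: summarize each window
    # with one embedding, then slide an (n x n) mean over the grid of
    # window embeddings, so every window sees its neighbors' content.
    H, W, C = x.shape
    gh, gw = H // ws, W // ws
    # one embedding per window (mean over the pixels in that window)
    win = window_partition(x, ws).mean(axis=1).reshape(gh, gw, C)
    # pad the window grid, then average each n x n neighborhood
    p = n // 2
    padded = np.pad(win, ((p, p), (p, p), (0, 0)), mode="edge")
    ctx = np.zeros_like(win)
    for i in range(gh):
        for j in range(gw):
            ctx[i, j] = padded[i:i + n, j:j + n].mean(axis=(0, 1))
    return ctx  # (gh, gw, C): per-window N-Gram context

x = np.random.rand(16, 16, 8)           # toy feature map
ctx = ngram_window_context(x, ws=4, n=2)
print(ctx.shape)                         # (4, 4, 8)
```

In the paper this cross-window interaction is realized with sliding-WSA rather than simple mean pooling, but the shape bookkeeping above shows why the context expands the effective receptive field at negligible cost: it operates on one embedding per window, not per pixel.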


