RFAConv: Innovating Spatial Attention and Standard Convolutional Operation

04/06/2023
by   Xin Zhang, et al.

Spatial attention has been demonstrated to enable convolutional neural networks to focus on critical information and thereby improve network performance, but it still has limitations. In this paper, we explain the effectiveness of spatial attention from a new perspective: the spatial attention mechanism essentially solves the problem of convolutional kernel parameter sharing. However, the information contained in the attention map generated by spatial attention is insufficient for large-size convolutional kernels. We therefore propose a new attention mechanism called Receptive-Field Attention (RFA). The Convolutional Block Attention Module (CBAM) and Coordinate Attention (CA) focus only on spatial features and cannot fully solve the problem of convolutional kernel parameter sharing, whereas RFA not only attends to the receptive-field spatial feature but also provides effective attention weights for large-size convolutional kernels. The Receptive-Field Attention convolutional operation (RFAConv), derived from RFA, can be considered a new way to replace the standard convolution, at an almost negligible cost in computation and parameters. Numerous experiments on ImageNet-1k, MS COCO, and VOC demonstrate the superior performance of our approach on classification, object detection, and semantic segmentation tasks. Importantly, we believe that for current spatial attention mechanisms that focus only on spatial features, it is time to improve network performance by focusing on receptive-field spatial features. The code and pre-trained models for the relevant tasks can be found at https://github.com/Liuchen1997/RFAConv
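To make the core idea concrete, the following is a minimal single-channel NumPy sketch of a receptive-field attention convolution, not the authors' implementation (see their repository for that). It extracts the k*k receptive-field features at every output location, computes per-location attention weights over those k*k positions so that the effective kernel is no longer fully parameter-shared across space, and then applies the shared kernel to the attention-weighted features. The softmax-of-features attention used here is a placeholder assumption; in the paper, the attention weights are produced by a small learned network.

```python
import numpy as np

def softmax(z, axis):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def rfa_conv(x, weight, k=3):
    """Minimal receptive-field attention convolution sketch.

    x:      (H, W) single-channel feature map (batch/channel dims omitted
            for clarity -- an assumption of this sketch, not the paper).
    weight: (k, k) shared convolution kernel.
    Returns an (H, W) output with 'same' zero padding and stride 1.
    """
    H, W = x.shape
    pad = k // 2
    xp = np.pad(x, pad)
    # Gather the k*k receptive-field spatial features per output location.
    patches = np.empty((H, W, k * k))
    for i in range(H):
        for j in range(W):
            patches[i, j] = xp[i:i + k, j:j + k].ravel()
    # Attention over the k*k receptive-field positions. Each location gets
    # its own weights, so the kernel's effect varies across space.
    # (Placeholder: softmax of the features themselves; the paper learns this.)
    att = softmax(patches, axis=-1)
    # Weight the receptive-field features, then apply the shared kernel.
    return (patches * att * weight.ravel()).sum(axis=-1)
```

On a constant input the softmax attention is uniform (1/k^2 everywhere), so the operation reduces to an averaged standard convolution; on non-constant inputs the per-location attention reweights each receptive field individually, which is the parameter-sharing relaxation the abstract describes.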
