TFS-ViT: Token-Level Feature Stylization for Domain Generalization

03/28/2023
by   Mehrdad Noori, et al.

Standard deep learning models such as convolutional neural networks (CNNs) lack the ability to generalize to domains not seen during training. This problem stems largely from the common but often invalid assumption that source and target data are drawn from the same i.i.d. distribution. Recently, Vision Transformers (ViTs) have shown outstanding performance across a broad range of computer vision tasks, yet very few studies have investigated their ability to generalize to new domains. This paper presents a first Token-level Feature Stylization (TFS-ViT) approach for domain generalization, which improves the performance of ViTs on unseen data by synthesizing new domains. Our approach transforms token features by mixing the normalization statistics of images from different domains. We further improve this approach with a novel strategy for attention-aware stylization, which uses the attention maps of the class (CLS) token to compute and mix the normalization statistics of tokens corresponding to different image regions. The proposed method is agnostic to the choice of backbone and can easily be applied to any ViT-based architecture with a negligible increase in computational complexity. Comprehensive experiments show that our approach achieves state-of-the-art performance on five challenging domain-generalization benchmarks and demonstrate its ability to deal with different types of domain shift. The implementation is available at: https://github.com/Mehrdad-Noori/TFS-ViT_Token-level_Feature_Stylization.


