Revisiting Realistic Test-Time Training: Sequential Inference and Adaptation by Anchored Clustering Regularized Self-Training

03/20/2023
by Yongyi Su, et al.

Deploying models on target domain data subject to distribution shift requires adaptation. Test-time training (TTT) has emerged as a solution to this adaptation under a realistic scenario where access to the full source domain data is not available and instant inference on the target domain is required. Despite the many efforts devoted to TTT, there is confusion over the experimental settings, leading to unfair comparisons. In this work, we first revisit TTT assumptions and categorize TTT protocols by two key factors. Among the multiple protocols, we adopt a realistic sequential test-time training (sTTT) protocol, under which we develop a test-time anchored clustering (TTAC) approach to enable stronger test-time feature learning. TTAC discovers clusters in both the source and target domains and matches the target clusters to the source ones to improve adaptation. When source domain information is strictly absent (i.e., source-free), we further develop an efficient method to infer source domain distributions for anchored clustering. Finally, self-training (ST) has demonstrated great success in learning from unlabeled data, yet we empirically find that applying ST alone to TTT is prone to confirmation bias. We therefore introduce a more effective TTT approach by regularizing self-training with anchored clustering, and refer to the improved model as TTAC++. We demonstrate that, under all TTT protocols, TTAC++ consistently outperforms state-of-the-art methods on five TTT datasets, including corrupted target domains, selected hard samples, synthetic-to-real adaptation, and adversarially attacked target domains. We hope this work provides a fair benchmark for TTT methods and that future research is compared within the respective protocols.
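The abstract describes anchored clustering (matching target clusters to per-class source anchors) used as a regularizer for self-training on the test stream. The PyTorch sketch below illustrates one plausible form of that combination, assuming diagonal-Gaussian per-class statistics; all names (gaussian_kl, ttac_style_loss, source_mu, source_var) and the confidence-threshold masking are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of anchored-clustering-regularized self-training on one test
# batch. Assumes per-class diagonal-Gaussian feature statistics; this is an
# illustration of the idea, not the paper's code.
import torch
import torch.nn.functional as F


def gaussian_kl(mu_p, var_p, mu_q, var_q, eps=1e-6):
    """KL(N(mu_p, diag var_p) || N(mu_q, diag var_q)) per class, summed over feature dims."""
    var_p = var_p + eps
    var_q = var_q + eps
    return 0.5 * (
        (var_p / var_q).sum(-1)
        + ((mu_q - mu_p) ** 2 / var_q).sum(-1)
        - mu_p.shape[-1]
        + (var_q.log() - var_p.log()).sum(-1)
    )


def ttac_style_loss(features, logits, source_mu, source_var, conf_thresh=0.9):
    """Anchored clustering + self-training loss for one test batch (sketch).

    features:  [B, D] backbone features of the current test batch
    logits:    [B, C] classifier outputs for the same batch
    source_mu, source_var: [C, D] per-class source ("anchor") statistics,
        either precomputed on source data or inferred in the source-free setting.
    """
    probs = F.softmax(logits, dim=-1)          # soft assignments to target clusters
    conf, pseudo = probs.max(dim=-1)           # confidence and pseudo-labels

    # Target per-class (cluster) statistics, weighted by the soft assignments.
    weights = probs / probs.sum(dim=0, keepdim=True).clamp(min=1e-6)   # [B, C]
    target_mu = weights.t() @ features                                 # [C, D]
    target_var = (weights.t() @ features.pow(2) - target_mu.pow(2)).clamp(min=1e-6)

    # Anchored clustering: pull each target cluster toward its source anchor.
    loss_anchor = gaussian_kl(target_mu, target_var, source_mu, source_var).mean()

    # Self-training: cross-entropy on confident pseudo-labels only; the anchored
    # clustering term acts as the regularizer against confirmation bias.
    mask = conf > conf_thresh
    loss_st = (
        F.cross_entropy(logits[mask], pseudo[mask])
        if mask.any()
        else logits.new_zeros(())
    )
    return loss_st + loss_anchor
```

In a sequential (sTTT) setting the target statistics would typically be accumulated over the test stream rather than re-estimated from a single batch; the per-batch estimate above is only for compactness.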


Related research

- Revisiting Realistic Test-Time Training: Sequential Inference and Adaptation by Anchored Clustering (06/06/2022). "Deploying models on target domain data subject to distribution shift req..."
- Back to the Source: Diffusion-Driven Test-Time Adaptation (07/07/2022). "Test-time adaptation harnesses test inputs to improve the accuracy of a ..."
- Covariance-aware Feature Alignment with Pre-computed Source Statistics for Test-time Adaptation (04/28/2022). "The accuracy of deep neural networks is degraded when the distribution o..."
- On the Robustness of Open-World Test-Time Training: Self-Training with Dynamic Prototype Expansion (08/19/2023). "Generalizing deep learning models to unknown target domain distribution ..."
- Test-Time Style Shifting: Handling Arbitrary Styles in Domain Generalization (06/08/2023). "In domain generalization (DG), the target domain is unknown when the mod..."
- Revisiting Test Time Adaptation under Online Evaluation (04/10/2023). "This paper proposes a novel online evaluation protocol for Test Time Ada..."
- Improving Test-Time Adaptation via Shift-agnostic Weight Regularization and Nearest Source Prototypes (07/24/2022). "This paper proposes a novel test-time adaptation strategy that adjusts t..."
