TTTFlow: Unsupervised Test-Time Training with Normalizing Flow

10/20/2022
by David Osowiechi, et al.

A major problem of deep neural networks for image classification is their vulnerability to domain changes at test time. Recent methods address this problem with test-time training (TTT), where a two-branch model is trained on a main classification task and an auxiliary self-supervised task used to perform test-time adaptation. However, these techniques require defining a proxy task specific to the target application. To tackle this limitation, we propose TTTFlow: a Y-shaped architecture with an unsupervised head based on Normalizing Flows that learns the distribution of source-domain latent features and detects domain shifts in test examples. At inference, keeping the unsupervised head fixed, we adapt the model to domain-shifted examples by maximizing the log-likelihood of the Normalizing Flow. Our results show that this method significantly improves accuracy over previous works.
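
The adaptation step described in the abstract can be illustrated with a short sketch. The code below is not the authors' implementation: it assumes a generic PyTorch feature encoder and a Normalizing Flow head that exposes a log_prob() method (as in common flow libraries such as nflows), and the optimizer, step count, and learning rate are placeholder choices. The flow (unsupervised head) stays frozen, and only the encoder is updated by minimizing the negative log-likelihood of its latent features under the flow.

# Minimal, illustrative sketch of test-time adaptation with a frozen flow head.
# Assumptions: `encoder` and `flow` are torch.nn.Module instances and
# `flow.log_prob(z)` returns per-sample log-likelihoods (hypothetical API).
import torch

def adapt_on_batch(encoder, flow, x, steps=10, lr=1e-3):
    """Adapt the encoder to a domain-shifted batch x by maximizing the
    flow's log-likelihood of the encoder's latent features."""
    for p in flow.parameters():
        p.requires_grad_(False)          # keep the unsupervised head fixed
    optimizer = torch.optim.SGD(encoder.parameters(), lr=lr)
    encoder.train()
    for _ in range(steps):
        optimizer.zero_grad()
        z = encoder(x)                   # latent features of test examples
        loss = -flow.log_prob(z).mean()  # negative log-likelihood under the flow
        loss.backward()                  # maximizing likelihood = minimizing NLL
        optimizer.step()
    return encoder

After this adaptation loop, the classification head is applied to the adapted encoder's features to predict labels for the domain-shifted examples.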


