Unsupervised Domain Adaptation of Black-Box Source Models

01/08/2021
by Haojian Zhang, et al.

Unsupervised domain adaptation (UDA) aims to learn a model for unlabeled data on a target domain by transferring knowledge from a labeled source domain. In the traditional UDA setting, labeled source data are assumed to be available for model adaptation. Due to increasing concerns about data privacy, source-free UDA has gained attention as a new UDA setting, where only a trained source model is assumed to be available, while the labeled source data remain private. However, exposing the details of the trained source model for UDA use leaves it vulnerable to white-box attacks, which poses severe risks to the source tasks themselves. To address this issue, we advocate studying a subtly different setting, named Black-Box Unsupervised Domain Adaptation (B2UDA), where only the input-output interface of the source model is accessible in UDA; in other words, the source model itself is kept as a black box. To tackle the B2UDA task, we propose a simple yet effective method, termed Iterative Noisy Label Learning (IterNLL). IterNLL starts by obtaining noisy labels for the unlabeled target data from the black-box source model. It then alternates between learning improved target models from the target subset with more reliable labels and updating the noisy target labels. Experiments on benchmark datasets confirm the efficacy of our proposed method. Notably, IterNLL performs comparably to methods designed for the traditional UDA setting, where the labeled source data are fully available.
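The sketch below illustrates the iterative noisy-label loop described in the abstract. It is not the authors' implementation: the black-box source model is assumed to be a callable `source_api` returning class probabilities, the target model is stood in for by a scikit-learn `LogisticRegression`, and a fixed confidence threshold is used to pick the "more reliable" subset. The paper's actual selection rule, target architecture, and update schedule may differ.

```python
# Minimal sketch of an iterative noisy-label loop (illustrative, not the paper's code).
import numpy as np
from sklearn.linear_model import LogisticRegression


def iter_nll(source_api, X_target, num_rounds=5, conf_threshold=0.8):
    """Iteratively refine pseudo-labels for the unlabeled target data X_target.

    source_api: callable mapping an (n, d) array to (n, n_classes) class probabilities,
                standing in for the black-box source model's input-output interface.
    """
    # Step 1: obtain initial (noisy) labels by querying the black-box source model.
    probs = source_api(X_target)
    labels = probs.argmax(axis=1)
    confidence = probs.max(axis=1)

    model = None
    for _ in range(num_rounds):
        # Step 2: train a target model on the more reliable (high-confidence) subset.
        reliable = confidence >= conf_threshold
        if reliable.sum() < 2 or len(np.unique(labels[reliable])) < 2:
            break  # not enough reliable, class-diverse samples to fit a model
        model = LogisticRegression(max_iter=1000)
        model.fit(X_target[reliable], labels[reliable])

        # Step 3: update the noisy target labels with the improved target model.
        probs = model.predict_proba(X_target)
        labels = probs.argmax(axis=1)
        confidence = probs.max(axis=1)

    return model, labels
```

The threshold-based split is only one plausible way to select the reliable subset; the criterion actually used by IterNLL, and how it handles label noise during training, are detailed in the full paper.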
