A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation
This work addresses unsupervised domain adaptation, with a focus on the partial scenario in which the class labels of the target domain form a subset of those in the source domain. This partial transfer setting is realistic but challenging, and existing methods typically suffer from two key problems: negative transfer and uncertainty propagation. In this paper, we build on domain adversarial learning and propose a novel domain adaptation method, BA^3US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS). On the one hand, negative transfer causes target samples to be misclassified into classes that are present only in the source domain. To address this issue, BAA pursues balance between the label distributions across domains in a simple manner: it randomly selects a few source samples to augment the smaller target domain during domain alignment, so that the classes covered by the two domains become symmetric. On the other hand, a source sample is considered uncertain if an incorrect class receives a relatively high prediction score. Such uncertainty easily propagates to the unlabeled target data around it during alignment, which severely degrades adaptation performance. AUS therefore emphasizes uncertain samples and exploits an adaptive weighted complement entropy objective that pushes the prediction scores of incorrect classes to be uniform and low. Experimental results on multiple benchmarks demonstrate that BA^3US surpasses state-of-the-art methods on partial domain adaptation tasks.
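To make the two components more concrete, below is a minimal PyTorch sketch of how BAA's batch augmentation might look during adversarial alignment. The function name and batch layout are assumptions for illustration, not the authors' released code.

```python
import torch

def balanced_adversarial_batch(source_feats, target_feats, num_aug):
    """Sketch of Balanced Adversarial Alignment (BAA).

    Randomly borrows a few source samples and appends them to the target
    mini-batch before both batches are fed to the domain discriminator,
    so that the two domains seen by the adversary cover symmetric class sets.
    """
    idx = torch.randperm(source_feats.size(0))[:num_aug]
    augmented_target = torch.cat([target_feats, source_feats[idx]], dim=0)
    # Domain labels for the discriminator: 1 for source, 0 for (augmented) target.
    d_src = torch.ones(source_feats.size(0), 1)
    d_tgt = torch.zeros(augmented_target.size(0), 1)
    return augmented_target, d_src, d_tgt
```

Similarly, the following is a hedged sketch of a weighted complement entropy objective in the spirit of AUS: it renormalizes the predicted probabilities over the incorrect classes and maximizes their entropy, with a per-sample weight that emphasizes uncertain samples. The specific weighting used in BA^3US may differ; the choice below is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def weighted_complement_entropy(logits, labels, alpha=1.0):
    """Sketch of an adaptive weighted complement entropy loss (AUS-style).

    Encourages the probabilities of all incorrect classes to be uniform,
    while the standard cross-entropy term keeps them low overall.
    """
    probs = F.softmax(logits, dim=1)                    # (B, K)
    gt = probs.gather(1, labels.unsqueeze(1))           # p_gt, shape (B, 1)
    # Renormalize probability mass over the incorrect classes only.
    comp = probs / (1.0 - gt + 1e-7)
    # Zero out the ground-truth entry so only incorrect classes contribute.
    mask = torch.ones_like(probs).scatter_(1, labels.unsqueeze(1), 0.0)
    entropy = -(comp * torch.log(comp + 1e-7) * mask).sum(dim=1)
    # Per-sample weight: more mass on incorrect classes -> larger weight (assumption).
    weight = (1.0 - gt.squeeze(1)) ** alpha
    # Maximizing complement entropy == minimizing its negative.
    return -(weight * entropy).mean()
```

In practice such a term would be combined with the usual source cross-entropy loss and the domain adversarial loss; the relative weights are hyperparameters not specified in this abstract.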