Learning Task-oriented Disentangled Representations for Unsupervised Domain Adaptation

by Pingyang Dai, et al.

Unsupervised domain adaptation (UDA) aims to address the domain-shift problem between a labeled source domain and an unlabeled target domain. Many efforts have been made to eliminate the mismatch between the distributions of training and testing data by learning domain-invariant representations. However, the learned representations are usually not task-oriented, i.e., simultaneously class-discriminative and domain-transferable. This drawback limits the flexibility of UDA in complicated open-set tasks where no labels are shared between domains. In this paper, we break the concept of task-orientation into task-relevance and task-irrelevance, and propose a dynamic task-oriented disentangling network (DTDN) that learns disentangled representations end-to-end for UDA. The network disentangles data representations into two components: task-relevant ones that embed critical information associated with the task across domains, and task-irrelevant ones that carry the remaining non-transferable or disturbing information. The two components are regularized by a group of task-specific objective functions across domains. This regularization explicitly encourages disentanglement and avoids the use of generative models or decoders. Experiments in complicated open-set scenarios (retrieval tasks) and on empirical benchmarks (classification tasks) demonstrate that the proposed method captures rich disentangled information and achieves superior performance.
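The core idea described in the abstract, splitting a learned representation into a task-relevant part trained with a discriminative loss and a task-irrelevant part regularized to carry no task information, can be illustrated with a toy sketch. This is not the authors' DTDN; all names (`encode`, `split`, the entropy-based regularizer) are hypothetical stand-ins chosen for illustration, and a linear NumPy model replaces the actual deep network.

```python
import numpy as np

# Toy sketch of task-oriented disentangling (hypothetical, not the DTDN):
# the encoder output z is split into a task-relevant part z_r and a
# task-irrelevant part z_i. z_r receives a class-discriminative loss,
# while z_i is regularized so a probe classifier on it stays uninformative.

rng = np.random.default_rng(0)

def encode(x, W):
    """Linear 'encoder' producing a feature vector (toy stand-in)."""
    return np.tanh(x @ W)

def split(z):
    """Disentangle: first half task-relevant, second half task-irrelevant."""
    d = z.shape[-1] // 2
    return z[..., :d], z[..., d:]

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(p, y):
    """Mean negative log-likelihood of the true classes."""
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def entropy(p):
    """Mean entropy of predicted class distributions."""
    return -(p * np.log(p + 1e-12)).sum(axis=-1).mean()

# Toy labeled source data: 4 samples, 3 input dims, 2 classes.
x = rng.normal(size=(4, 3))
y = np.array([0, 1, 0, 1])
W = rng.normal(size=(3, 8))    # encoder weights
C_r = rng.normal(size=(4, 2))  # classifier head on the task-relevant part
C_i = rng.normal(size=(4, 2))  # probe head on the task-irrelevant part

z_r, z_i = split(encode(x, W))

# Task loss: make z_r class-discriminative.
loss_task = cross_entropy(softmax(z_r @ C_r), y)

# Disentangling regularizer: push the probe on z_i toward a uniform
# prediction, i.e. minimize (log K - entropy), so z_i holds no task info.
loss_disentangle = np.log(2) - entropy(softmax(z_i @ C_i))

total = loss_task + loss_disentangle
```

Because the regularizer is applied directly to the two feature groups, no decoder or generative model is needed to enforce the split, which mirrors the abstract's claim that the task-specific objectives alone encourage disentanglement.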


Domain Consistency Regularization for Unsupervised Multi-source Domain Adaptive Classification

Deep learning-based multi-source unsupervised domain adaptation (MUDA) h...

Unsupervised Domain Adaptation with Similarity Learning

The objective of unsupervised domain adaptation is to leverage features ...

Disentanglement by Cyclic Reconstruction

Deep neural networks have demonstrated their ability to automatically ex...

Domain-Invariant Adversarial Learning for Unsupervised Domain Adaption

Unsupervised domain adaption aims to learn a powerful classifier for the...

Dynamic Weighted Learning for Unsupervised Domain Adaptation

Unsupervised domain adaptation (UDA) aims to improve the classification ...

Unsupervised Reinforcement Adaptation for Class-Imbalanced Text Classification

Class imbalance naturally exists when train and test models in different...

Dynamic Fusion Network for Multi-Domain End-to-end Task-Oriented Dialog

Recent studies have shown remarkable success in end-to-end task-oriented...
