Multi-target Unsupervised Domain Adaptation without Exactly Shared Categories
Unsupervised domain adaptation (UDA) aims to learn an unlabeled target domain by transferring knowledge from a labeled source domain. To date, most existing work focuses on the scenario of one source domain and one target domain (1S1T), and only a few works concern UDA with multiple source domains and one target domain (mS1T), which addresses the insufficient knowledge provided by a single source domain. To the best of our knowledge, however, almost no work concerns the scenario of one source domain and multiple target domains (1SmT). In 1SmT, the unlabeled target domains do not necessarily share the same categories, which makes 1SmT more challenging than mS1T. In this paper, we study this new UDA scenario and propose a framework (PA-1SmT) based on model parameter adaptation among the target domains and the source domain. A key ingredient of our framework is a model parameter dictionary that is shared not only between the source domain and each individual target domain but also among the multiple target domains. We then use this dictionary to sparsely represent each target domain's model parameters, which achieves knowledge transfer among the domains. This form of knowledge transfer differs from existing popular UDA methods such as subspace alignment and distribution matching, and it can be applied directly to privacy-preserving domain adaptation, because knowledge is transferred only via model parameters rather than the data itself. Finally, our experimental results on three domain adaptation benchmark datasets demonstrate the superiority of our framework.
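To make the core idea concrete: a minimal sketch, assuming the "model parameter dictionary" is a matrix of shared atoms and each target domain's parameter vector is approximated as a sparse combination of those atoms. The abstract does not specify the solver, so this sketch uses a generic ISTA (iterative soft-thresholding) loop for the l1-regularized least-squares step; the dictionary `D`, vector `w_target`, and the function names are illustrative, not the paper's actual implementation.

```python
import numpy as np

def soft_threshold(x, t):
    """Element-wise soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(w, D, lam=0.01, n_iter=200):
    """Sparsely represent parameter vector w over dictionary D via ISTA.

    Approximately solves  min_a 0.5*||w - D a||^2 + lam*||a||_1.
    Only the sparse code `a` (and the shared dictionary) need be
    exchanged between domains, never the raw data.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - w)           # gradient of the quadratic term
        a = soft_threshold(a - grad / L, lam / L)
    return a

# Toy demo: 5 shared atoms in a 10-dim parameter space; one target
# domain's parameters are built from only 2 of the atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((10, 5))           # shared parameter dictionary
a_true = np.zeros(5)
a_true[[1, 3]] = [1.5, -2.0]
w_target = D @ a_true                      # a target model's parameters
a_hat = sparse_code(w_target, D)           # sparse code over shared atoms
w_rec = D @ a_hat                          # reconstructed target parameters
```

The privacy-preserving property the abstract mentions follows from the interface: only `D` and the sparse codes cross domain boundaries, so no domain ever exposes its samples.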