Domain Translation via Latent Space Mapping
In this paper, we investigate the problem of multi-domain translation: given an element a of a domain A, we would like to generate a corresponding sample b in another domain B, and vice versa. Acquiring supervision in multiple domains can be tedious, so we propose to learn the translation between domains from paired data (a, b) ∼ A × B when such supervision is available, while also leveraging unpaired data when only a ∼ A or only b ∼ B is available. We introduce a new unified framework called Latent Space Mapping (LSM) that exploits the manifold assumption in order to learn a latent space for each domain. Unlike existing approaches, we further regularize each latent space using the other available domains by learning the dependencies between each pair of domains. We evaluate our approach on three tasks: i) image translation on a synthetic dataset, ii) the real-world task of semantic segmentation of medical images, and iii) the real-world task of facial landmark detection.
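The core idea, stripped to its simplest form, can be sketched as follows. This is a hypothetical toy illustration, not the paper's method: the per-domain encoders are replaced by linear whitening maps (the paper uses learned neural encoders), the cross-domain dependency is fitted by least squares on the paired subset, and all names (`fit_whitener`, `translate_a_to_b`, the rotation used to build domain B) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy domains: B is a fixed rotation of A (the "translation" to recover).
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = rng.normal(size=(200, 2))          # samples a ~ A
B = A @ R.T                            # paired samples b ~ B

def fit_whitener(X):
    """Linear stand-in for a per-domain encoder/decoder pair."""
    mu = X.mean(0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt.T / S * np.sqrt(len(X))     # encode: z = (x - mu) @ W
    W_inv = np.linalg.pinv(W)          # decode: x = z @ W_inv + mu
    return mu, W, W_inv

mu_a, Wa, Wa_inv = fit_whitener(A)
mu_b, Wb, Wb_inv = fit_whitener(B)

Za = (A - mu_a) @ Wa                   # latent space of domain A
Zb = (B - mu_b) @ Wb                   # latent space of domain B

# Cross-domain dependency: a map between the two latent spaces,
# fitted only on the paired subset (here, all pairs are paired).
M, *_ = np.linalg.lstsq(Za, Zb, rcond=None)

def translate_a_to_b(a):
    """Encode in A's latent space, map to B's latent space, decode."""
    z = (a - mu_a) @ Wa
    return z @ M @ Wb_inv + mu_b

b_hat = translate_a_to_b(A)
print(np.abs(b_hat - B).max())  # reconstruction error on the toy data
```

Because the toy relation between the domains is linear, the least-squares latent map recovers it almost exactly; the point of the sketch is only the pipeline shape (encode per domain, map between latents, decode), which the paper realizes with learned nonlinear encoders and regularized latent spaces.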