Learning a Reversible Embedding Mapping using Bi-Directional Manifold Alignment

06/30/2021
by Ashwinkumar Ganesan, et al.

We propose Bi-Directional Manifold Alignment (BDMA), which learns a non-linear mapping between two manifolds by explicitly training it to be bijective. We demonstrate BDMA by training a model for a pair of languages rather than for individual, directed source and target combinations, reducing the number of models by 50%. We show that a model trained with BDMA in the "forward" (source to target) direction can successfully map words in the "reverse" (target to source) direction, yielding performance equivalent to (or better than) standard unidirectional translation models in which the source and target languages are flipped. We also show how BDMA reduces the overall size of the model.
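Since the abstract's key idea is training the cross-lingual mapping explicitly to be bijective so one model covers both directions, the sketch below illustrates one way such a bidirectional alignment could be set up. It is a minimal toy example, not the authors' implementation: the two-layer mappers, the plain mean-squared losses, the cycle-consistency term used as bijectivity pressure, and the random stand-in data are all assumptions.

```python
# Hypothetical sketch of bi-directional embedding alignment (not the paper's code).
# Two small MLPs map between source (X) and target (Y) embedding spaces; cycle
# losses push the pair toward being approximate inverses of each other.
import torch
import torch.nn as nn

def mlp(dim: int) -> nn.Sequential:
    # Simple non-linear mapper; the paper's actual architecture may differ.
    return nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

dim, n_pairs = 300, 1024                      # embedding size, toy dictionary size
f, g = mlp(dim), mlp(dim)                     # f: source -> target, g: target -> source
opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-3)

X = torch.randn(n_pairs, dim)                 # source-language word vectors (toy data)
Y = torch.randn(n_pairs, dim)                 # aligned target-language word vectors

for step in range(100):
    opt.zero_grad()
    fwd = (f(X) - Y).pow(2).mean()            # source -> target supervision
    bwd = (g(Y) - X).pow(2).mean()            # target -> source supervision
    cyc = (g(f(X)) - X).pow(2).mean() + (f(g(Y)) - Y).pow(2).mean()  # bijectivity pressure
    loss = fwd + bwd + cyc
    loss.backward()
    opt.step()
```

Under these assumptions, the same trained pair serves both translation directions: f maps source embeddings into the target space and g handles the reverse, which is the sense in which one bidirectional model replaces two directed ones.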


