Improving Visually Grounded Sentence Representations with Self-Attention

12/02/2017
by Kang Min Yoo, et al.

Sentence representation models trained only on text can suffer from the grounding problem. Recent work has shown promising results in improving the quality of sentence representations by jointly training them with associated image features. However, the grounding capability of such models is limited, because the architecture connects input sentences to image features only distantly. To further close this gap, we propose applying a self-attention mechanism to the sentence encoder to deepen the grounding effect. Our results on transfer tasks show that self-attentive encoders are better suited for visual grounding, as they exploit specific words with strong visual associations.
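
As a rough illustration of what such an encoder might look like, below is a minimal PyTorch sketch of a self-attentive sentence encoder: a BiLSTM whose hidden states are pooled with additive self-attention, so that words with strong associations can receive higher weight. The layer sizes, the attention form, and all names (SelfAttentiveEncoder, attn_dim, and so on) are illustrative assumptions, not the authors' exact architecture; the paper's joint training with image features is omitted.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SelfAttentiveEncoder(nn.Module):
        """Illustrative sketch: encode a sentence into one fixed vector by
        attending over BiLSTM hidden states (all dimensions are assumptions)."""

        def __init__(self, vocab_size, embed_dim=300, hidden_dim=512, attn_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.lstm = nn.LSTM(embed_dim, hidden_dim,
                                batch_first=True, bidirectional=True)
            # Two-layer MLP that scores each time step (additive self-attention).
            self.attn = nn.Sequential(
                nn.Linear(2 * hidden_dim, attn_dim),
                nn.Tanh(),
                nn.Linear(attn_dim, 1),
            )

        def forward(self, token_ids, mask):
            # token_ids: (batch, seq_len) word indices
            # mask:      (batch, seq_len), 1 for real tokens, 0 for padding
            h, _ = self.lstm(self.embed(token_ids))   # (batch, seq_len, 2*hidden)
            scores = self.attn(h).squeeze(-1)         # (batch, seq_len)
            scores = scores.masked_fill(mask == 0, float('-inf'))
            weights = F.softmax(scores, dim=-1)       # attention over words
            # Weighted sum of hidden states -> sentence vector (batch, 2*hidden)
            return torch.bmm(weights.unsqueeze(1), h).squeeze(1)

In such a setup, the resulting sentence vector would typically be trained jointly against image features (for example with a ranking loss), which is where the attention weights can learn to emphasize visually grounded words.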
