Adaptively Aligned Image Captioning via Adaptive Attention Time

09/19/2019
by   Lun Huang, et al.

Recent neural models for image captioning usually employ an encoder-decoder framework with an attention mechanism. However, the attention mechanism in such a framework aligns a single (attended) image feature vector to each caption word, assuming a one-to-one mapping between source image regions and target caption words, which rarely holds in practice. In this paper, we propose a novel attention model, namely Adaptive Attention Time (AAT), which can adaptively align source to target for image captioning. AAT allows the framework to learn how many attention steps to take before outputting a caption word at each decoding step. With AAT, image regions and caption words can be aligned adaptively during decoding: an image region can be mapped to an arbitrary number of caption words, and a caption word can attend to an arbitrary number of image regions. AAT is deterministic and differentiable, and introduces no noise into the parameter gradients. AAT is also generic and can be employed by any sequence-to-sequence learning task. We show empirically that AAT improves over state-of-the-art methods on the task of image captioning.
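The adaptive halting behavior the abstract describes can be sketched with an ACT-style loop: at each decoding step the model runs attention repeatedly, emits a halting probability after each attention step, and stops once the cumulative probability crosses a confidence threshold, weighting the intermediate states by those probabilities so the whole procedure stays deterministic and differentiable. Below is a minimal PyTorch sketch under that assumption; the class name, dimensions, GRU-based state update, and the threshold 1 - eps are illustrative choices, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveAttentionTimeSketch(nn.Module):
    """ACT-style adaptive attention loop (illustrative, not the paper's code)."""

    def __init__(self, hidden_size, feat_size, max_steps=4, eps=0.01):
        super().__init__()
        self.max_steps = max_steps
        self.eps = eps  # halt once cumulative halting prob exceeds 1 - eps
        self.att_query = nn.Linear(hidden_size, feat_size)
        self.halt = nn.Linear(hidden_size, 1)
        self.cell = nn.GRUCell(feat_size, hidden_size)

    def forward(self, h, feats):
        # h: (B, hidden) decoder state for the current caption word
        # feats: (B, R, feat) image region features
        cum_halt = h.new_zeros(h.size(0))  # cumulative halting probability
        out = torch.zeros_like(h)          # halting-weighted mixture of states
        for t in range(self.max_steps):
            # One attention step over the R image regions.
            q = self.att_query(h)                                  # (B, feat)
            scores = torch.bmm(feats, q.unsqueeze(2)).squeeze(2)   # (B, R)
            alpha = F.softmax(scores, dim=1)
            ctx = torch.bmm(alpha.unsqueeze(1), feats).squeeze(1)  # (B, feat)
            h = self.cell(ctx, h)

            # Halting probability for this attention step.
            p = torch.sigmoid(self.halt(h)).squeeze(1)             # (B,)
            if t == self.max_steps - 1:
                halting = torch.ones_like(p, dtype=torch.bool)     # force halt
            else:
                halting = cum_halt + p >= 1.0 - self.eps
            running = (cum_halt < 1.0 - self.eps).float()          # not yet halted
            # Halting examples receive the remainder so the weights sum to 1,
            # keeping the loop deterministic and differentiable end to end.
            w = running * torch.where(halting, 1.0 - cum_halt, p)
            out = out + w.unsqueeze(1) * h
            cum_halt = cum_halt + w
        return out

# Hypothetical usage: two captions, 36 region features per image.
aat = AdaptiveAttentionTimeSketch(hidden_size=512, feat_size=256)
h = torch.randn(2, 512)
feats = torch.randn(2, 36, 256)
ctx = aat(h, feats)  # (2, 512)

Because halted examples receive zero weight on later steps, an example that halts early contributes exactly one-word-worth of attention, while harder words can accumulate up to max_steps attention steps, matching the adaptive alignment the abstract describes.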
