Text with Knowledge Graph Augmented Transformer for Video Captioning

by Xin Gu, et al.

Video captioning aims to describe the content of videos in natural language. Although significant progress has been made, there is still much room to improve performance for real-world applications, mainly due to the long-tail word challenge. In this paper, we propose a text with knowledge graph augmented transformer (TextKG) for video captioning. Notably, TextKG is a two-stream transformer formed by an external stream and an internal stream. The external stream is designed to absorb additional knowledge: it models the interactions between that knowledge, e.g., a pre-built knowledge graph, and the built-in information of videos, e.g., salient object regions, speech transcripts, and video captions, to mitigate the long-tail word challenge. Meanwhile, the internal stream is designed to exploit the multi-modality information in videos (e.g., the appearance of video frames, speech transcripts, and video captions) to ensure the quality of the caption results. In addition, a cross attention mechanism is used between the two streams to share information, so that the two streams can help each other produce more accurate results. Extensive experiments conducted on four challenging video captioning datasets, i.e., YouCookII, ActivityNet Captions, MSRVTT, and MSVD, demonstrate that the proposed method performs favorably against state-of-the-art methods. Specifically, the proposed TextKG method outperforms the best published results on the YouCookII dataset by 18.7 points.
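The core fusion step of the two-stream design, cross attention, lets one stream (e.g., the internal video/transcript stream) query the other (e.g., the external knowledge stream). The sketch below is a minimal single-head, numpy-only illustration of that mechanism, not the paper's actual implementation; the variable names and toy shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_stream, key_value_stream):
    """Single-head scaled dot-product cross attention.

    query_stream:     (T_q, d)  tokens that ask for information
    key_value_stream: (T_kv, d) tokens that provide information
    Returns a (T_q, d) fusion of the key/value stream for each query token.
    """
    d = query_stream.shape[-1]
    scores = query_stream @ key_value_stream.T / np.sqrt(d)  # (T_q, T_kv)
    weights = softmax(scores, axis=-1)                       # rows sum to 1
    return weights @ key_value_stream                        # (T_q, d)

# Toy example: the internal stream (frame/transcript tokens) attends to
# the external stream (knowledge-graph tokens). Shapes are arbitrary.
rng = np.random.default_rng(0)
d = 8
internal = rng.standard_normal((5, d))
external = rng.standard_normal((7, d))
fused = cross_attention(internal, external)
print(fused.shape)  # (5, 8)
```

In a full two-stream transformer this would run in both directions (each stream also serving as keys/values for the other) with learned projections and multiple heads; the sketch keeps only the attention arithmetic.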



Collaborative Three-Stream Transformers for Video Captioning

As the most critical components in a sentence, subject, predicate and ob...

Dual-Stream Transformer for Generic Event Boundary Captioning

This paper describes our champion solution for the CVPR2022 Generic Even...

An Exploration of Captioning Practices and Challenges of Individual Content Creators on YouTube for People with Hearing Impairments

Deaf and Hard-of-Hearing (DHH) audiences have long complained about capt...

MART: Memory-Augmented Recurrent Transformer for Coherent Video Paragraph Captioning

Generating multi-sentence descriptions for videos is one of the most cha...

Object-aware Aggregation with Bidirectional Temporal Graph for Video Captioning

Video captioning aims to automatically generate natural language descrip...

An Integrated Approach for Video Captioning and Applications

Physical computing infrastructure, data gathering, and algorithms have r...

Two-Stream Video Classification with Cross-Modality Attention

Fusing multi-modality information is known to be able to effectively bri...
