
Learning Video-Text Aligned Representations for Video Captioning

Authors :
Yaya Shi
Haiyang Xu
Chunfeng Yuan
Bing Li
Weiming Hu
Zheng-Jun Zha
Source :
ACM Transactions on Multimedia Computing, Communications, and Applications. 19:1-21
Publication Year :
2023
Publisher :
Association for Computing Machinery (ACM), 2023.

Abstract

Video captioning requires that the model has the abilities of video understanding, video-text alignment, and text generation. Because of the semantic gap between vision and language, video-text alignment, which maps representations from the visual domain to the language domain, is a crucial step in reducing this gap. However, existing methods often overlook this step, so the decoder must take the visual representations directly as input, which increases the decoder's workload and limits its ability to generate semantically correct captions. In this paper, we propose a video-text alignment module with a retrieval unit and an alignment unit to learn video-text aligned representations for video captioning. Specifically, we first propose a retrieval unit that retrieves sentences as additional input, serving as semantic anchors between the visual scene and the language description. We then employ an alignment unit that takes the video and the retrieved sentences as input and conducts video-text alignment, aligning the representations of the two modalities in a shared semantic space. The resulting video-text aligned representations are used to generate semantically correct captions. Moreover, the retrieved sentences provide rich semantic concepts that help generate distinctive captions. Experiments on two public benchmarks, i.e., VATEX and MSR-VTT, demonstrate that our method outperforms state-of-the-art methods by a large margin. Qualitative analysis shows that our method generates correct and distinctive captions.
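The retrieval step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes precomputed video and sentence embeddings (the vectors and corpus below are hypothetical) and ranks corpus sentences by cosine similarity to the video embedding, so the top-scoring sentences can serve as semantic anchors for the alignment unit.

```python
# Hedged sketch of a retrieval unit: rank candidate sentences by cosine
# similarity to a video embedding and return the top-k as semantic anchors.
# Embeddings and corpus here are toy values, not from the paper.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_sentences(video_emb, corpus, k=2):
    """Return the k corpus sentences whose embeddings are closest to the
    video embedding; these act as anchors between vision and language."""
    ranked = sorted(corpus, key=lambda item: cosine(video_emb, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy corpus of (sentence, embedding) pairs.
corpus = [
    ("a man is cooking",      [0.9, 0.1, 0.0]),
    ("a dog runs in a park",  [0.1, 0.9, 0.2]),
    ("people play soccer",    [0.0, 0.8, 0.6]),
]
video_emb = [0.85, 0.2, 0.05]  # hypothetical video embedding
print(retrieve_sentences(video_emb, corpus, k=1))  # prints ['a man is cooking']
```

In the paper's pipeline, the retrieved sentences would then be fed, together with the video features, into the alignment unit that projects both modalities into a shared semantic space; the sketch above covers only the retrieval step.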

Details

ISSN :
1551-6865 and 1551-6857
Volume :
19
Database :
OpenAIRE
Journal :
ACM Transactions on Multimedia Computing, Communications, and Applications
Accession number :
edsair.doi...........87f161db6ba3d5a00eb7f297ddd77200
Full Text :
https://doi.org/10.1145/3546828