
Semantic‐meshed and content‐guided transformer for image captioning.

Authors :
Li, Xuan
Zhang, Wenkai
Sun, Xian
Gao, Xin
Source :
IET Computer Vision (Wiley-Blackwell); Aug2022, Vol. 16 Issue 5, p431-444, 14p
Publication Year :
2022

Abstract

The transformer architecture has become the dominant framework for today's image captioning tasks because of its superior performance. However, existing transformer-based methods often lack the integrated use of multi‐level semantic information and are weak at maintaining the relevance of captions to the image. In this paper, a semantic‐meshed and content‐guided transformer network is introduced for image captioning to solve these problems. The semantic‐meshed mechanism allows the model to generate words by adaptively selecting semantic information from multiple interaction levels through attention‐based reconstruction. The content‐guided module steers word generation using attribute features that represent the image content, aiming to keep the generated caption consistent with the main content of the image. Experiments on the MSCOCO captioning dataset validate the authors' model, which achieves superior results compared to other state‐of‐the‐art approaches. [ABSTRACT FROM AUTHOR]
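The "adaptive selection of semantic information from multiple interaction levels" described in the abstract can be illustrated, purely as a hedged sketch and not the authors' actual implementation, as a softmax-weighted fusion of per-level feature vectors, where the weights are relevance scores computed at each decoding step. All names, dimensions, and values below are illustrative assumptions:

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def mesh_levels(level_features, level_scores):
    """Fuse multi-level features with attention weights.

    Hypothetical stand-in for the semantic-meshed mechanism:
    each level's feature vector is combined in proportion to
    its (hypothetical) relevance score for the current word.
    """
    weights = softmax(level_scores)
    dim = len(level_features[0])
    return [sum(w * feat[d] for w, feat in zip(weights, level_features))
            for d in range(dim)]

# Toy example: three interaction levels, each a 2-d feature vector.
levels = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
scores = [2.0, 0.1, 0.1]  # assumed relevance of each level to the next word
fused = mesh_levels(levels, scores)
```

Because the weights form a convex combination, the fused vector stays within the span of the level features; the level scored highest dominates the output, which mirrors the idea of the decoder leaning on the most relevant semantic level per word.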

Details

Language :
English
ISSN :
1751-9632
Volume :
16
Issue :
5
Database :
Complementary Index
Journal :
IET Computer Vision (Wiley-Blackwell)
Publication Type :
Academic Journal
Accession number :
157816262
Full Text :
https://doi.org/10.1049/cvi2.12099