
Self-Annotated Training for Controllable Image Captioning

Authors: Zhu, Zhangzi; Wang, Tianlei; Qu, Hong
Publication Year: 2021

Abstract

The Controllable Image Captioning (CIC) task aims to generate captions conditioned on designated control signals. Several structure-related control signals have been proposed to control the semantic structure of sentences, such as sentence length and Part-of-Speech tag sequences. However, because the accuracy-based reward focuses mainly on content rather than semantic structure, existing reinforcement training methods are not applicable to structure-related CIC models. The lack of reinforcement training leads to exposure bias and an inconsistency between the optimization objective and the evaluation metrics. In this paper, we propose a novel reinforcement training method for structure-related control signals, Self-Annotated Training (SAT), to improve both the accuracy and the controllability of CIC models. In SAT, a recursive annotation mechanism (RAM) is designed to force the input control signal to match the actual output sentence. Moreover, we propose an extra alignment reward to fine-tune the CIC model trained with SAT, which further enhances the controllability of models. On the MSCOCO benchmark, we conduct extensive experiments across different structure-related control signals and different baseline models; the results demonstrate the effectiveness and generalizability of our methods.
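The abstract only sketches SAT at a high level: sample a caption, re-annotate the control signal from that caption so input and output structure agree, and combine the accuracy-based reward with an alignment reward. The snippet below is a minimal, schematic illustration of that idea, assuming sentence length as the structure-related control signal; all names (cic_model, compute_accuracy_reward, etc.) are hypothetical placeholders, not the authors' actual implementation.

```python
def extract_control_signal(caption):
    """Re-annotate the control signal from the generated caption.

    For a length-based control signal this is simply the word count;
    for POS control it would be the POS tag sequence of the caption.
    """
    return len(caption.split())


def self_annotated_training_step(cic_model, image, initial_signal,
                                 compute_accuracy_reward, alignment_weight=1.0):
    """One schematic SAT update with an extra alignment reward (assumed API).

    1. Sample a caption conditioned on the current control signal.
    2. Recursively re-annotate: derive the signal actually realized by the
       sampled caption and feed it back as the conditioning input, so the
       input signal is forced to match the output sentence.
    3. Combine the usual accuracy-based reward with an alignment reward that
       measures how well the caption obeys the requested signal.
    """
    caption = cic_model.sample(image, control_signal=initial_signal)

    # Recursive annotation: the signal the generated sentence actually realizes.
    annotated_signal = extract_control_signal(caption)

    # Accuracy-based reward (e.g., CIDEr against references) -- placeholder call.
    accuracy_reward = compute_accuracy_reward(image, caption)

    # Alignment reward: 1 if the realized signal matches the requested one.
    alignment_reward = float(annotated_signal == initial_signal)

    total_reward = accuracy_reward + alignment_weight * alignment_reward

    # The self-annotated signal replaces the original one in the policy-
    # gradient update, keeping conditioning and output consistent.
    loss = cic_model.policy_gradient_loss(image, caption,
                                          control_signal=annotated_signal,
                                          reward=total_reward)
    return loss
```

This is only a sketch under the stated assumptions; the paper's actual annotation mechanism, reward formulation, and training loop may differ.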

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2110.08446
Document Type: Working Paper