
Whether you can locate or not? Interactive Referring Expression Generation

Authors :
Ye, Fulong
Long, Yuxing
Feng, Fangxiang
Wang, Xiaojie
Publication Year :
2023

Abstract

Referring Expression Generation (REG) aims to generate unambiguous Referring Expressions (REs) for objects in a visual scene, with a dual task of Referring Expression Comprehension (REC) to locate the referred object. Existing methods construct REG models independently, using only the REs as ground truth for model training, without considering the potential interaction between REG and REC models. In this paper, we propose an Interactive REG (IREG) model that can interact with a real REC model, using signals indicating whether the object is located, together with the visual region located by the REC model, to gradually modify REs. Our experimental results on three RE benchmark datasets, RefCOCO, RefCOCO+, and RefCOCOg, show that IREG outperforms previous state-of-the-art methods on popular evaluation metrics. Furthermore, a human evaluation shows that IREG generates better REs through its capability for interaction.

Comment: 10 pages, 7 figures

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2308.09977
Document Type :
Working Paper