
ESREAL: Exploiting Semantic Reconstruction to Mitigate Hallucinations in Vision-Language Models

Authors:
Kim, Minchan
Kim, Minyeong
Bae, Junik
Choi, Suhwan
Kim, Sungkyung
Chang, Buru
Publication Year: 2024

Abstract

Hallucinations in vision-language models pose a significant challenge to their reliability, particularly in the generation of long captions. Current methods fall short of accurately identifying and mitigating these hallucinations. To address this issue, we introduce ESREAL, a novel unsupervised learning framework designed to suppress the generation of hallucinations through accurate localization and penalization of hallucinated tokens. Initially, ESREAL creates a reconstructed image based on the generated caption and aligns its corresponding regions with those of the original image. This semantic reconstruction aids in identifying both the presence and type of token-level hallucinations within the generated caption. Subsequently, ESREAL computes token-level hallucination scores by assessing the semantic similarity of aligned regions based on the type of hallucination. Finally, ESREAL employs a proximal policy optimization algorithm, where it selectively penalizes hallucinated tokens according to their token-level hallucination scores. Our framework notably reduces hallucinations in LLaVA, InstructBLIP, and mPLUG-Owl2 by 32.81%, 27.08%, and 7.46% on the CHAIR metric. This improvement is achieved solely through signals derived from the image itself, without the need for any image-text pairs.

Comment: ECCV 2024
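As a rough illustration of the pipeline the abstract describes, the sketch below scores each caption token by the semantic similarity of its aligned region pair (original image vs. image reconstructed from the caption) and converts the scores into selective PPO-style token penalties. All names, the 0.5 threshold, and the single similarity measure are assumptions for illustration; the paper scores differently by hallucination type, and this is not the authors' implementation.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity with a small epsilon for numerical safety."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def hallucination_scores(region_pairs):
    """Score each caption token by comparing its aligned region in the
    original image against the region reconstructed from the caption.
    Low similarity -> high hallucination score. (The real method uses
    type-specific scoring; this collapses it to one measure.)"""
    return [max(0.0, 1.0 - cosine(orig, recon)) for orig, recon in region_pairs]

def selective_penalties(scores, threshold=0.5, weight=1.0):
    """PPO-style token rewards: only tokens flagged as hallucinated receive
    a negative reward; faithful tokens are left untouched (reward 0)."""
    return [-weight * s if s > threshold else 0.0 for s in scores]

# Toy usage: random vectors stand in for visual-encoder embeddings of the
# aligned regions, one (original, reconstructed) pair per caption token.
rng = np.random.default_rng(0)
tokens = ["a", "red", "car", "parked", "outside"]
pairs = [(rng.normal(size=64), rng.normal(size=64)) for _ in tokens]
rewards = selective_penalties(hallucination_scores(pairs))
print(dict(zip(tokens, [round(r, 3) for r in rewards])))
```

The key design point the abstract emphasizes is selectivity: penalties attach to individual hallucinated tokens rather than to the whole caption, so the reward signal does not discourage faithful text.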

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2403.16167
Document Type: Working Paper