
Detect2Interact: Localizing Object Key Field in Visual Question Answering (VQA) with LLMs

Authors:
Wang, Jialou
Zhu, Manli
Li, Yulei
Li, Honglei
Yang, Longzhi
Woo, Wai Lok
Publication Year:
2024

Abstract

Localization plays a crucial role in enhancing the practicality and precision of VQA systems. By enabling fine-grained identification and interaction with specific parts of an object, it significantly improves the system's ability to provide contextually relevant and spatially accurate responses, crucial for applications in dynamic environments like robotics and augmented reality. However, traditional systems face challenges in accurately mapping objects within images to generate nuanced and spatially aware responses. In this work, we introduce "Detect2Interact", which addresses these challenges by introducing an advanced approach for fine-grained object visual key field detection. First, we use the segment anything model (SAM) to generate detailed spatial maps of objects in images. Next, we use Vision Studio to extract semantic object descriptions. Third, we employ GPT-4's common sense knowledge, bridging the gap between an object's semantics and its spatial map. As a result, Detect2Interact achieves consistent qualitative results on object key field detection across extensive test cases and outperforms the existing VQA system with object detection by providing a more reasonable and finer visual representation.

Comment: Accepted to IEEE Intelligent Systems
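The abstract's three-stage pipeline (SAM for spatial maps, Vision Studio for semantic descriptions, GPT-4 for mapping semantics to space) can be sketched as follows. This is a minimal illustration only: the real SAM, Vision Studio, and GPT-4 calls are replaced by stub functions, and all class names, labels, and coordinates are hypothetical placeholders, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SegmentMask:
    """One object region: a label plus its spatial extent (stand-in for a SAM mask)."""
    label: str                        # filled in by the description stage
    bbox: Tuple[int, int, int, int]   # x, y, w, h in image pixels (illustrative)

def segment_image(image) -> List[SegmentMask]:
    """Stage 1 stub: SAM would return per-object spatial maps; here we fake two regions."""
    return [SegmentMask("", (10, 10, 40, 80)), SegmentMask("", (60, 20, 30, 30))]

def describe_objects(image, masks: List[SegmentMask]) -> List[SegmentMask]:
    """Stage 2 stub: a vision service would attach a semantic label to each region."""
    labels = ["mug", "handle"]        # hypothetical labels for the fake regions
    for mask, label in zip(masks, labels):
        mask.label = label
    return masks

def locate_key_field(question: str, masks: List[SegmentMask]) -> SegmentMask:
    """Stage 3 stub: an LLM would use common sense to pick the region whose
    semantics match the part of the object the question asks about."""
    for mask in masks:
        if mask.label and mask.label in question.lower():
            return mask
    return masks[0]  # fall back to the first region

def detect2interact(image, question: str) -> SegmentMask:
    masks = segment_image(image)              # SAM: spatial maps
    masks = describe_objects(image, masks)    # Vision Studio: semantics
    return locate_key_field(question, masks)  # GPT-4: semantics -> space

result = detect2interact(None, "Where should I grab the handle?")
print(result.label, result.bbox)  # the region matching "handle"
```

The key design point the abstract describes is the bridging step: segmentation alone gives geometry without meaning, and captioning alone gives meaning without geometry; the LLM stage ties a question's intent to a specific spatial region.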

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2404.01151
Document Type:
Working Paper