
Surgical-VQLA++: Adversarial contrastive learning for calibrated robust visual question-localized answering in robotic surgery.

Authors :
Bai, Long
Wang, Guankun
Islam, Mobarakol
Seenivasan, Lalithkumar
Wang, An
Ren, Hongliang
Source :
Information Fusion. Jan 2025, Vol. 113.
Publication Year :
2025

Abstract

Medical visual question answering (VQA) bridges the gap between visual information and clinical decision-making, enabling clinicians to extract insights from clinical images and videos. In particular, surgical VQA can enhance the interpretation of surgical data, aiding accurate diagnosis, effective education, and clinical interventions. However, because VQA models cannot visually indicate the regions of interest that correspond to a given question, their comprehension of the surgical scene remains incomplete. To tackle this, we propose surgical visual question localized-answering (VQLA) for precise and context-aware responses to specific queries about surgical images. Furthermore, to address the strong demand for safety in surgical scenarios and potential corruptions in image acquisition and transmission, we propose a novel Calibrated Co-Attention Gated Vision-Language (C²G-ViL) embedding to integrate and align multimodal information effectively. Additionally, we leverage an adversarial-sample-based contrastive learning strategy to boost performance and robustness. We also extend our EndoVis-18-VQLA and EndoVis-17-VQLA datasets to broaden the scope and application of our data. Extensive experiments on these datasets demonstrate the remarkable performance and robustness of our solution, which effectively combats real-world image corruption. Our proposed approach can therefore serve as an effective tool for supporting surgical education and patient care and for enhancing surgical outcomes. Our code and data will be released at https://github.com/longbai1006/Surgical-VQLAPlus.

• We propose a Surgical-VQLA++ framework to connect answering and localization.
• We incorporate feature calibration and adversarial contrastive training techniques.
• We expand our datasets by including additional queries related to surgical tools.
• Extensive experiments prove the effectiveness and robustness of our solution.

[ABSTRACT FROM AUTHOR]
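The abstract describes the C²G-ViL embedding only at a high level. The following is a minimal, hypothetical sketch of gated co-attention fusion between visual and textual tokens in the spirit of that module; the class name, dimensions, gating form, and pooling are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of gated co-attention vision-language fusion (hypothetical;
# not the paper's actual C²G-ViL implementation).
import torch
import torch.nn as nn

class GatedCoAttentionFusion(nn.Module):
    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        # Cross-attention in both directions: vision attends to text,
        # and text attends to vision.
        self.vis2txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.txt2vis = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Learned sigmoid gates decide how much cross-modal context to admit.
        self.gate_v = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.gate_t = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, vis: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        # vis: (B, Nv, dim) visual tokens; txt: (B, Nt, dim) word tokens.
        vis_ctx, _ = self.vis2txt(vis, txt, txt)   # vision queries text
        txt_ctx, _ = self.txt2vis(txt, vis, vis)   # text queries vision
        g_v = self.gate_v(torch.cat([vis, vis_ctx], dim=-1))
        g_t = self.gate_t(torch.cat([txt, txt_ctx], dim=-1))
        vis = vis + g_v * vis_ctx                  # gated residual fusion
        txt = txt + g_t * txt_ctx
        # Pool both streams into one joint embedding for the answering
        # and localization heads.
        return torch.cat([vis.mean(dim=1), txt.mean(dim=1)], dim=-1)

fused = GatedCoAttentionFusion()(torch.randn(2, 36, 512), torch.randn(2, 20, 512))
print(fused.shape)  # torch.Size([2, 1024])
```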
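Similarly, the adversarial-sample-based contrastive learning strategy might be sketched as below: a single-step FGSM-style perturbation of the visual input provides an adversarial view of each sample, and an InfoNCE-style loss pulls each clean embedding toward its own adversarial view while pushing it away from other samples in the batch. The attack, the loss form, and the hyperparameters `eps` and `tau` are assumptions, not the paper's exact recipe.

```python
# Hedged sketch of adversarial-sample-based contrastive learning
# (illustrative only; the paper's exact formulation may differ).
import torch
import torch.nn.functional as F

def adversarial_contrastive_loss(model, vis, txt, task_loss_fn,
                                 eps: float = 1e-2, tau: float = 0.07):
    emb = model(vis, txt)                        # clean fused embedding (B, D)
    # Build an adversarial view by perturbing the visual input along the
    # sign of the task-loss gradient (single-step, FGSM-style).
    vis_adv = vis.clone().detach().requires_grad_(True)
    loss = task_loss_fn(model(vis_adv, txt))
    grad, = torch.autograd.grad(loss, vis_adv)
    vis_adv = (vis_adv + eps * grad.sign()).detach()
    emb_adv = model(vis_adv, txt)                # adversarial fused embedding

    # InfoNCE: each clean embedding should match its own adversarial view.
    z1 = F.normalize(emb, dim=-1)
    z2 = F.normalize(emb_adv, dim=-1)
    logits = z1 @ z2.t() / tau                   # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```

In the full framework this contrastive term would presumably be combined with the answering and localization losses; the abstract does not specify the weighting.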

Details

Language :
English
ISSN :
1566-2535
Volume :
113
Database :
Academic Search Index
Journal :
Information Fusion
Publication Type :
Academic Journal
Accession number :
179501937
Full Text :
https://doi.org/10.1016/j.inffus.2024.102602