LLM-Assisted Multi-Teacher Continual Learning for Visual Question Answering in Robotic Surgery

Authors :
Du, Yuyang
Chen, Kexin
Zhan, Yue
Low, Chang Han
You, Tao
Islam, Mobarakol
Guo, Ziyu
Jin, Yueming
Chen, Guangyong
Heng, Pheng-Ann
Publication Year :
2024

Abstract

Visual question answering (VQA) is crucial for promoting surgical education. In practice, the needs of trainees are constantly evolving, such as learning more surgical types, adapting to different robots, and learning new surgical instruments and techniques for various surgeries. However, patient data privacy often restricts the availability of old data when updating the model, necessitating an exemplar-free continual learning (CL) setup. Prior CL studies overlooked two vital problems in the surgical domain: 1) large domain shifts from diverse surgical operations collected from multiple sources, and 2) severe data imbalance arising from the uneven presence of surgical instruments or activities. This paper addresses these problems with a multimodal large language model (LLM) and an adaptive weight assignment methodology. We first develop a new multi-teacher CL framework that leverages a multimodal LLM as an additional teacher. The strong generalization ability of the LLM can bridge the knowledge gap when domain shifts and data imbalances occur. We then put forth a novel data processing method that transforms complex LLM embeddings into logits compatible with our CL framework. We further design an adaptive weight assignment approach that balances the generalization ability of the LLM and the domain expertise of the old CL model. Finally, to comprehensively test the effectiveness of our proposed method, we also construct two new surgical VQA datasets that are largely different from existing ones and could be valuable resources for future research. Extensive experimental results on the tested datasets demonstrate the superiority of our method over other advanced CL schemes.

Comment: This paper has been accepted by the 2024 IEEE International Conference on Robotics and Automation (ICRA).
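
To make the abstract's method description concrete, below is a minimal PyTorch sketch of how a multi-teacher distillation objective of this kind might be wired together: a projection that turns LLM embeddings into class logits, a per-sample weighting between the two teachers, and a combined loss. The class and function names, the linear projection, and the confidence-based weighting rule are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of an LLM-assisted multi-teacher continual-learning
# step. All names (EmbeddingToLogits, adaptive_weights, multi_teacher_loss)
# are assumptions for illustration; the paper's formulation may differ.
import torch
import torch.nn.functional as F


class EmbeddingToLogits(torch.nn.Module):
    """Assumed projection from a multimodal LLM's answer embedding to a
    logit vector over the VQA answer classes, so the LLM can serve as a
    distillation teacher alongside the old CL model."""

    def __init__(self, embed_dim: int, num_classes: int):
        super().__init__()
        self.proj = torch.nn.Linear(embed_dim, num_classes)

    def forward(self, llm_embedding: torch.Tensor) -> torch.Tensor:
        return self.proj(llm_embedding)


def adaptive_weights(old_logits, llm_logits, labels):
    """Assumed weighting rule: trust each teacher in proportion to the
    softmax confidence it assigns to the ground-truth class, so the old
    model dominates in-domain and the LLM dominates under domain shift."""
    conf_old = F.softmax(old_logits, dim=-1).gather(1, labels.unsqueeze(1))
    conf_llm = F.softmax(llm_logits, dim=-1).gather(1, labels.unsqueeze(1))
    total = conf_old + conf_llm + 1e-8
    return conf_old / total, conf_llm / total


def multi_teacher_loss(student_logits, old_logits, llm_logits, labels,
                       temperature: float = 2.0, alpha: float = 0.5):
    """Cross-entropy on the new task plus an adaptively weighted
    knowledge-distillation term from two teachers: the frozen old CL
    model and the (projected) multimodal LLM."""
    ce = F.cross_entropy(student_logits, labels)
    w_old, w_llm = adaptive_weights(old_logits, llm_logits, labels)
    log_p = F.log_softmax(student_logits / temperature, dim=-1)
    # Per-sample KL divergence to each teacher's softened distribution.
    kd_old = F.kl_div(log_p, F.softmax(old_logits / temperature, dim=-1),
                      reduction="none").sum(-1, keepdim=True)
    kd_llm = F.kl_div(log_p, F.softmax(llm_logits / temperature, dim=-1),
                      reduction="none").sum(-1, keepdim=True)
    kd = (w_old * kd_old + w_llm * kd_llm).mean() * temperature ** 2
    return (1 - alpha) * ce + alpha * kd
```

In this sketch, an exemplar-free CL update would compute `old_logits` from the frozen previous-task model and `llm_logits` from the projected LLM embeddings, with no access to old training data; only the weighting between the two teachers adapts per sample.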

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2402.16664
Document Type :
Working Paper