
SAM4MLLM: Enhance Multi-Modal Large Language Model for Referring Expression Segmentation

Authors:
Chen, Yi-Chia
Li, Wei-Hua
Sun, Cheng
Wang, Yu-Chiang Frank
Chen, Chu-Song

Publication Year: 2024

Abstract

We introduce SAM4MLLM, an innovative approach that integrates the Segment Anything Model (SAM) with Multi-Modal Large Language Models (MLLMs) for pixel-aware tasks. Our method enables MLLMs to learn pixel-level location information without requiring excessive modifications to the existing model architecture or adding specialized tokens. We introduce an inquiry-based approach that effectively finds prompt points for SAM to perform segmentation based on the MLLM's responses. It combines detailed visual information with the powerful expressive capabilities of large language models in a unified, language-based manner, without additional computational overhead in learning. Experimental results on public benchmarks demonstrate the effectiveness of our approach.

Comment: ECCV 2024
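The following is a minimal sketch of the inquiry-based pipeline the abstract describes, not the authors' implementation: a hypothetical MLLM interface (the `propose_points` and `ask` methods are assumed names) proposes candidate prompt points for a referring expression and verifies each one with a yes/no inquiry, and the verified points are then passed to an off-the-shelf SAM predictor from the `segment_anything` package.

```python
# Sketch of MLLM-guided point prompting for SAM; the MLLM interface is hypothetical.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def ask_mllm_for_points(mllm, image, expression, num_candidates=5):
    """Hypothetical helper: ask the MLLM for candidate prompt points on the
    object named by the referring expression, then keep only the candidates
    the MLLM confirms via a yes/no inquiry. Returns an (N, 2) array of
    (x, y) pixel coordinates."""
    candidates = mllm.propose_points(image, expression, n=num_candidates)  # assumed API
    verified = [p for p in candidates
                if mllm.ask(image, f"Is the point {p} on the {expression}?") == "yes"]
    return np.array(verified, dtype=np.float32)

# Load a standard SAM checkpoint (checkpoint path is a placeholder).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

def refer_segment(mllm, image, expression):
    """Segment the region named by `expression` using MLLM-derived point prompts."""
    points = ask_mllm_for_points(mllm, image, expression)
    predictor.set_image(image)  # image: HxWx3 uint8 RGB array
    masks, scores, _ = predictor.predict(
        point_coords=points,
        point_labels=np.ones(len(points)),  # label 1 marks foreground points
        multimask_output=False,
    )
    return masks[0]  # boolean HxW mask
```

Since the prompt points are exchanged as plain text, this scheme requires no architectural changes or special tokens in the MLLM, which is the design choice the abstract emphasizes.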

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2409.10542
Document Type: Working Paper