
SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models

Authors :
Cheng, An-Chieh
Yin, Hongxu
Fu, Yang
Guo, Qiushan
Yang, Ruihan
Kautz, Jan
Wang, Xiaolong
Liu, Sifei
Publication Year :
2024

Abstract

Vision Language Models (VLMs) have demonstrated remarkable performance in 2D vision and language tasks. However, their ability to reason about spatial arrangements remains limited. In this work, we introduce Spatial Region GPT (SpatialRGPT) to enhance VLMs' spatial perception and reasoning capabilities. SpatialRGPT advances VLMs' spatial understanding through two key innovations: (1) a data curation pipeline that enables effective learning of regional representation from 3D scene graphs, and (2) a flexible plugin module for integrating depth information into the visual encoder of existing VLMs. During inference, when provided with user-specified region proposals, SpatialRGPT can accurately perceive their relative directions and distances. Additionally, we propose SpatialRGPT-Bench, a benchmark with ground-truth 3D annotations encompassing indoor, outdoor, and simulated environments, for evaluating 3D spatial cognition in VLMs. Our results demonstrate that SpatialRGPT significantly enhances performance in spatial reasoning tasks, both with and without local region prompts. The model also exhibits strong generalization capabilities, effectively reasoning about complex spatial relations and functioning as a region-aware dense reward annotator for robotic tasks. Code, dataset, and benchmark will be released at https://www.anjiecheng.me/SpatialRGPT

Comment: Project Page: https://www.anjiecheng.me/SpatialRGPT

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2406.01584
Document Type :
Working Paper