
A Dual Process VLA: Efficient Robotic Manipulation Leveraging VLM

Authors:
Han, ByungOk
Kim, Jaehong
Jang, Jinhyeok
Publication Year:
2024

Abstract

Vision-Language-Action (VLA) models are receiving increasing attention for their ability to enable robots to perform complex tasks by integrating visual context with linguistic commands. However, achieving efficient real-time performance remains challenging due to the high computational demands of existing models. To overcome this, we propose Dual Process VLA (DP-VLA), a hierarchical framework inspired by dual-process theory. DP-VLA utilizes a Large System 2 Model (L-Sys2) for complex reasoning and decision-making, while a Small System 1 Model (S-Sys1) handles real-time motor control and sensory processing. By leveraging Vision-Language Models (VLMs), the L-Sys2 operates at low frequencies, reducing computational overhead, while the S-Sys1 ensures fast and accurate task execution. Experimental results on the RoboCasa dataset demonstrate that DP-VLA achieves faster inference and higher task success rates, providing a scalable solution for advanced robotic applications.
Comment: 10 pages
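The dual-frequency scheme described above can be illustrated with a minimal sketch. This is not the authors' implementation; all class and function names below are hypothetical, and the "models" are stand-ins. It only shows the control pattern the abstract describes: a slow reasoner refreshed every few steps, and a fast policy run at every step.

```python
# Hypothetical sketch of a dual-process control loop in the spirit of DP-VLA.
# Neither class reflects the paper's actual models; they are placeholders.

class LargeSystem2:
    """Stand-in for the slow, VLM-based reasoner (L-Sys2)."""
    def plan(self, observation, instruction):
        # A real L-Sys2 would run a large VLM; here we return a dummy plan.
        return {"goal": instruction, "context": observation}

class SmallSystem1:
    """Stand-in for the fast reactive policy (S-Sys1)."""
    def act(self, observation, plan):
        # A real S-Sys1 would map (observation, plan) to motor commands.
        return f"action_for({observation},{plan['goal']})"

def control_loop(observations, instruction, slow_period=10):
    """Run S-Sys1 every step; refresh the L-Sys2 plan every `slow_period` steps."""
    l_sys2, s_sys1 = LargeSystem2(), SmallSystem1()
    plan, actions = None, []
    for t, obs in enumerate(observations):
        if t % slow_period == 0:               # low-frequency reasoning
            plan = l_sys2.plan(obs, instruction)
        actions.append(s_sys1.act(obs, plan))  # high-frequency control
    return actions
```

The design choice the sketch highlights is that the expensive reasoner amortizes its cost over `slow_period` control steps, so overall inference latency is dominated by the small model.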

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2410.15549
Document Type:
Working Paper