
Robots Can Multitask Too: Integrating a Memory Architecture and LLMs for Enhanced Cross-Task Robot Action Generation

Authors:
Ali, Hassan
Allgeuer, Philipp
Mazzola, Carlo
Belgiovine, Giulia
Kaplan, Burak Can
Gajdošech, Lukáš
Wermter, Stefan
Publication Year: 2024

Abstract

Large Language Models (LLMs) have recently been used in robot applications to ground LLM common-sense reasoning in the robot's perception and physical abilities. In humanoid robots, memory also plays a critical role in fostering real-world embodiment and facilitating long-term interactive capabilities, especially in multi-task setups where the robot must remember previous task states, environment states, and executed actions. In this paper, we address the incorporation of memory processes with LLMs for generating cross-task robot actions while the robot effectively switches between tasks. Our proposed dual-layered architecture features two LLMs, exploiting their complementary skills of reasoning and instruction following, combined with a memory model inspired by human cognition. Our results show a significant improvement in performance over a baseline on five robotic tasks, demonstrating the potential of integrating memory with LLMs to combine the robot's actions and perception for adaptive task execution.
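The abstract outlines a dual-LLM architecture coupled with a memory model for switching between tasks. As a rough illustration only, and not the paper's actual implementation, the sketch below shows one way such a loop might be wired: a "reasoner" LLM plans using recalled task state, an "instruction follower" LLM emits a concrete action, and a shared memory records per-task state and executed actions. All names (Memory, step, reasoner, follower) are hypothetical, and the LLM calls are replaced by stand-in functions so the example runs on its own.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical LLM callables: each takes a prompt string and returns text.
# In practice these would wrap two different models/prompt styles
# (reasoning vs. instruction following); here they are stand-ins.
LLM = Callable[[str], str]

@dataclass
class Memory:
    """Simple cross-task memory: per-task state plus a log of executed actions."""
    task_states: Dict[str, str] = field(default_factory=dict)
    action_log: List[str] = field(default_factory=list)

    def recall(self, task: str) -> str:
        return self.task_states.get(task, "no prior state")

    def update(self, task: str, state: str, action: str) -> None:
        self.task_states[task] = state
        self.action_log.append(f"{task}: {action}")

def step(task: str, observation: str, memory: Memory,
         reasoner: LLM, follower: LLM) -> str:
    """One dual-LLM control step: the reasoner plans with recalled memory,
    the follower turns the plan into a concrete robot action."""
    plan = reasoner(
        f"Task: {task}\nPrior state: {memory.recall(task)}\n"
        f"Observation: {observation}\nDecide the next sub-goal."
    )
    action = follower(f"Sub-goal: {plan}\nEmit one executable robot action.")
    memory.update(task, state=plan, action=action)
    return action

if __name__ == "__main__":
    mem = Memory()
    reasoner = lambda p: "locate the cup"          # placeholder reasoning model
    follower = lambda p: "move_arm(target='cup')"  # placeholder instruction model
    print(step("tidy_table", "cup on table", mem, reasoner, follower))
    print(mem.action_log)
```

Because the memory persists across calls to step, a later call for a different task can recall that task's previous state, which is the cross-task behavior the abstract describes.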

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2407.13505
Document Type: Working Paper