Agent AI: Surveying the Horizons of Multimodal Interaction

Authors:
Durante, Zane
Huang, Qiuyuan
Wake, Naoki
Gong, Ran
Park, Jae Sung
Sarkar, Bidipta
Taori, Rohan
Noda, Yusuke
Terzopoulos, Demetri
Choi, Yejin
Ikeuchi, Katsushi
Vo, Hoi
Fei-Fei, Li
Gao, Jianfeng
Publication Year:
2024

Abstract

Multi-modal AI systems will likely become a ubiquitous presence in our everyday lives. A promising approach to making these systems more interactive is to embody them as agents within physical and virtual environments. At present, systems leverage existing foundation models as the basic building blocks for the creation of embodied agents. Embedding agents within such environments facilitates the ability of models to process and interpret visual and contextual data, which is critical for the creation of more sophisticated and context-aware AI systems. For example, a system that can perceive user actions, human behavior, environmental objects, audio expressions, and the collective sentiment of a scene can be used to inform and direct agent responses within the given environment. To accelerate research on agent-based multimodal intelligence, we define "Agent AI" as a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally grounded data, and can produce meaningful embodied actions. In particular, we explore systems that aim to improve agents based on next-embodied action prediction by incorporating external knowledge, multi-sensory inputs, and human feedback. We argue that by developing agentic AI systems in grounded environments, one can also mitigate the hallucinations of large foundation models and their tendency to generate environmentally incorrect outputs. The emerging field of Agent AI subsumes the broader embodied and agentic aspects of multimodal interactions. Beyond agents acting and interacting in the physical world, we envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
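The abstract describes a perception-action loop: an agent ingests multi-sensory, environmentally grounded inputs, conditions its next embodied action on external knowledge, and refines itself with human feedback. Below is a minimal, hypothetical sketch of how such a loop might be wired together; every name in it (Observation, Agent, retrieve_knowledge, predict_action, incorporate_feedback) is an illustrative placeholder, not an API from the paper.

```python
# Minimal sketch of an Agent AI perceive-act loop, under the assumptions
# named in the lead-in. All types and functions here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Observation:
    """Environmentally grounded inputs available to the agent."""
    visual: str          # e.g., a scene description from a vision model
    language: str        # e.g., a user instruction
    audio: str = ""      # optional audio or sentiment cues

@dataclass
class Agent:
    knowledge: dict = field(default_factory=dict)   # external knowledge store
    feedback: list = field(default_factory=list)    # accumulated human feedback

    def retrieve_knowledge(self, obs: Observation) -> str:
        # Placeholder: look up facts relevant to the observed scene,
        # grounding the agent so it avoids environmentally incorrect outputs.
        return self.knowledge.get(obs.visual, "no prior knowledge")

    def predict_action(self, obs: Observation) -> str:
        # Next-embodied-action prediction, conditioned on multi-sensory
        # input, retrieved external knowledge, and past human feedback.
        context = self.retrieve_knowledge(obs)
        return (f"act(instruction={obs.language!r}, "
                f"scene={obs.visual!r}, context={context!r})")

    def incorporate_feedback(self, signal: str) -> None:
        # Human feedback is stored to refine future action prediction.
        self.feedback.append(signal)

if __name__ == "__main__":
    agent = Agent(knowledge={"kitchen": "cups are in the upper cabinet"})
    obs = Observation(visual="kitchen", language="bring me a cup")
    print(agent.predict_action(obs))
    agent.incorporate_feedback("correct cabinet, gentler grip next time")
```

In a real system each placeholder would be backed by a foundation model (vision, language, or action policy); the sketch only makes the data flow of the definition concrete.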

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2401.03568
Document Type:
Working Paper