
AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model

Authors :
Moon, Seungwhan
Madotto, Andrea
Lin, Zhaojiang
Nagarajan, Tushar
Smith, Matt
Jain, Shashank
Yeh, Chun-Fu
Murugesan, Prakash
Heidari, Peyman
Liu, Yue
Srinet, Kavya
Damavandi, Babak
Kumar, Anuj
Publication Year :
2023

Abstract

We present Any-Modality Augmented Language Model (AnyMAL), a unified model that reasons over diverse input modality signals (i.e., text, image, video, audio, and IMU motion sensor data) and generates textual responses. AnyMAL inherits the powerful text-based reasoning abilities of state-of-the-art LLMs, including LLaMA-2 (70B), and converts modality-specific signals into the joint textual space through a pre-trained aligner module. To further strengthen the multimodal LLM's capabilities, we fine-tune the model with a manually collected multimodal instruction set covering diverse topics and tasks beyond simple QA. We conduct a comprehensive empirical analysis comprising both human and automatic evaluations, and demonstrate state-of-the-art performance on various multimodal tasks.
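To illustrate the aligner idea described in the abstract, the following minimal sketch (in PyTorch) shows how features from a frozen modality encoder could be projected into an LLM's token-embedding space and prepended to the text embeddings. The class name, layer sizes, and number of modality tokens are illustrative assumptions, not the paper's actual configuration.

    # Minimal sketch of a modality aligner: project pooled features from a
    # frozen encoder into the LLM embedding space as a few "modality tokens".
    # All dimensions and names here are assumed for illustration only.
    import torch
    import torch.nn as nn

    class ModalityAligner(nn.Module):
        """Maps frozen-encoder features to a sequence of LLM-space tokens."""

        def __init__(self, encoder_dim: int, llm_dim: int, num_tokens: int = 8):
            super().__init__()
            self.num_tokens = num_tokens
            # A simple MLP projection; the paper's aligner is pre-trained,
            # this only illustrates the structural role it plays.
            self.proj = nn.Sequential(
                nn.Linear(encoder_dim, llm_dim),
                nn.GELU(),
                nn.Linear(llm_dim, llm_dim * num_tokens),
            )

        def forward(self, features: torch.Tensor) -> torch.Tensor:
            # features: (batch, encoder_dim) pooled output of a frozen encoder
            batch = features.shape[0]
            return self.proj(features).view(batch, self.num_tokens, -1)

    # Usage sketch: prepend the projected modality tokens to the text token
    # embeddings before feeding the combined sequence to the LLM.
    encoder_dim, llm_dim = 1024, 4096                 # assumed dimensions
    aligner = ModalityAligner(encoder_dim, llm_dim)
    image_features = torch.randn(2, encoder_dim)      # stand-in encoder output
    text_embeddings = torch.randn(2, 16, llm_dim)     # stand-in text embeddings
    llm_inputs = torch.cat([aligner(image_features), text_embeddings], dim=1)
    print(llm_inputs.shape)                           # torch.Size([2, 24, 4096])

The key design point this sketch captures is that only the small projection module needs to be trained per modality, while the encoder and the LLM can remain largely frozen.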

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2309.16058
Document Type :
Working Paper