
SALM: Speech-augmented Language Model with In-context Learning for Speech Recognition and Translation

Authors :
Chen, Zhehuai
Huang, He
Andrusenko, Andrei
Hrinchuk, Oleksii
Puvvada, Krishna C.
Li, Jason
Ghosh, Subhankar
Balam, Jagadeesh
Ginsburg, Boris
Publication Year :
2023

Abstract

We present a novel Speech Augmented Language Model (SALM) with multitask and in-context learning capabilities. SALM comprises a frozen text LLM, an audio encoder, a modality adapter module, and LoRA layers to accommodate speech input and associated task instructions. The unified SALM not only achieves performance on par with task-specific Conformer baselines for Automatic Speech Recognition (ASR) and Speech Translation (AST), but also exhibits zero-shot in-context learning capabilities, demonstrated through a keyword-boosting task for ASR and AST. Moreover, speech-supervised in-context training is proposed to bridge the gap between LLM training and downstream speech tasks, which further boosts the in-context learning ability of speech-to-text models. The proposed model is open-sourced via the NeMo toolkit.

Comment: Submitted to ICASSP 2024
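
To make the described architecture concrete, below is a minimal, hypothetical PyTorch sketch of a SALM-style forward pass. The module names (ModalityAdapter, SALMSketch) and dimensions are invented for illustration and this is not the NeMo implementation; it only shows the idea from the abstract: audio-encoder features are projected by a modality adapter into the embedding space of a frozen text LLM, concatenated with the embedded task instruction, and decoded to text, with only the adapter (and, in the real model, LoRA layers) left trainable.

```python
# Minimal sketch of a SALM-style forward pass (hypothetical names, toy sizes;
# not the NeMo implementation). The text LLM is frozen; only the modality
# adapter (and LoRA layers in the actual model) would receive gradients.
import torch
import torch.nn as nn


class ModalityAdapter(nn.Module):
    """Projects audio-encoder outputs into the LLM embedding space."""

    def __init__(self, audio_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(audio_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        return self.proj(audio_feats)


class SALMSketch(nn.Module):
    """Toy stand-in: the 'LLM' is an embedding table plus Transformer layers."""

    def __init__(self, vocab_size: int = 32000, llm_dim: int = 512, audio_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, llm_dim)
        self.llm = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(llm_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.adapter = ModalityAdapter(audio_dim, llm_dim)
        self.lm_head = nn.Linear(llm_dim, vocab_size)
        # Freeze the text LLM, mirroring the frozen-LLM design in SALM.
        for p in list(self.embed.parameters()) + list(self.llm.parameters()):
            p.requires_grad = False

    def forward(self, audio_feats: torch.Tensor, instr_ids: torch.Tensor) -> torch.Tensor:
        audio_emb = self.adapter(audio_feats)      # (B, T_audio, D)
        text_emb = self.embed(instr_ids)           # (B, T_text, D)
        inputs = torch.cat([audio_emb, text_emb], dim=1)
        hidden = self.llm(inputs)
        return self.lm_head(hidden)                # per-position token logits


if __name__ == "__main__":
    model = SALMSketch()
    audio = torch.randn(2, 50, 256)                # pretend audio-encoder output
    instr = torch.randint(0, 32000, (2, 8))        # tokenized task instruction
    print(model(audio, instr).shape)               # torch.Size([2, 58, 32000])
```

In this sketch, in-context learning and keyword boosting would amount to placing extra context (e.g. keyword lists) in the instruction tokens, which the frozen LLM attends to alongside the adapted speech features.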

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2310.09424
Document Type :
Working Paper