
Multiple Representation Transfer from Large Language Models to End-to-End ASR Systems

Authors:
Udagawa, Takuma
Suzuki, Masayuki
Kurata, Gakuto
Muraoka, Masayasu
Saon, George
Publication Year:
2023

Abstract

Transferring the knowledge of large language models (LLMs) is a promising technique for incorporating linguistic knowledge into end-to-end automatic speech recognition (ASR) systems. However, existing works transfer only a single representation of the LLM (e.g., the last layer of pretrained BERT), while the representation of a text is inherently non-unique and can be obtained in various ways from different layers, contexts, and models. In this work, we explore a wide range of techniques to obtain and transfer multiple representations of LLMs into a transducer-based ASR system. While conceptually simple, we show that transferring multiple representations of LLMs can be an effective alternative to transferring only a single representation.

Comment: Accepted to ICASSP 2024
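As a rough illustration of the kind of multi-layer representation extraction the abstract describes, the sketch below shows one way to pull every hidden layer from a pretrained BERT and combine them with a learned softmax-weighted sum (an ELMo-style scalar mix). This is a minimal sketch under stated assumptions, not the paper's implementation: the class name, the choice of bert-base-uncased, the output dimension, and the scalar-mix design are all illustrative, and the paper additionally covers representations from different contexts and models, which this sketch omits.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class MultiLayerBertRepresentation(nn.Module):
    """Extract all hidden layers of a frozen pretrained BERT and combine
    them with a learned softmax-weighted sum (ELMo-style scalar mix).
    Hypothetical helper for illustration, not the paper's code."""

    def __init__(self, model_name: str = "bert-base-uncased", out_dim: int = 512):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.bert = AutoModel.from_pretrained(model_name, output_hidden_states=True)
        self.bert.eval()  # the LLM stays frozen; only the mix and projection train
        for p in self.bert.parameters():
            p.requires_grad = False
        # embeddings layer + one hidden state per transformer layer
        num_layers = self.bert.config.num_hidden_layers + 1
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        self.projection = nn.Linear(self.bert.config.hidden_size, out_dim)

    def forward(self, texts):
        batch = self.tokenizer(
            texts, return_tensors="pt", padding=True, truncation=True
        )
        with torch.no_grad():
            outputs = self.bert(**batch)
        # hidden_states: tuple of (num_layers) tensors, each (batch, seq, hidden)
        stacked = torch.stack(outputs.hidden_states, dim=0)
        weights = torch.softmax(self.layer_weights, dim=0).view(-1, 1, 1, 1)
        mixed = (weights * stacked).sum(dim=0)  # (batch, seq, hidden)
        # project to the (assumed) dimension of the ASR encoder/predictor
        return self.projection(mixed)


if __name__ == "__main__":
    extractor = MultiLayerBertRepresentation()
    targets = extractor(["knowledge transfer for speech recognition"])
    print(targets.shape)  # e.g. torch.Size([1, seq_len, 512])
```

In a transfer setup like the one the abstract sketches, outputs of this kind would serve as regression or distillation targets for the ASR model's internal representations; the specific loss and the attachment point in the transducer are details the abstract does not specify.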

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2309.04031
Document Type:
Working Paper