Learning Multilingual Sentence Representations with Cross-lingual Consistency Regularization

Authors:
Gao, Pengzhi
Zhang, Liwen
He, Zhongjun
Wu, Hua
Wang, Haifeng
Publication Year:
2023

Abstract

Multilingual sentence representations are the foundation for similarity-based bitext mining, which is crucial for scaling multilingual neural machine translation (NMT) systems to more languages. In this paper, we introduce MuSR: a one-for-all Multilingual Sentence Representation model that supports more than 220 languages. Leveraging billions of English-centric parallel sentence pairs, we train a multilingual Transformer encoder, coupled with an auxiliary Transformer decoder, by adopting a multilingual NMT framework with CrossConST, a cross-lingual consistency regularization technique proposed in Gao et al. (2023). Experimental results on multilingual similarity search and bitext mining tasks show the effectiveness of our approach. Specifically, MuSR achieves superior performance over LASER3 (Heffernan et al., 2022), which consists of 148 independent multilingual sentence encoders.
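For intuition, the sketch below illustrates the general idea of a cross-lingual consistency regularization term layered on top of a standard NMT loss: the decoder's output distribution should be similar whether the encoder is fed the source sentence or its target-language translation. This is a minimal, hypothetical PyTorch-style illustration, not the authors' implementation; names such as `model`, `src_tokens`, and `tgt_tokens` are placeholders, and the exact KL direction and weighting in CrossConST may differ.

```python
import torch.nn.functional as F

def consistency_regularized_loss(model, src_tokens, tgt_tokens, alpha=1.0):
    """Sketch of an NMT loss with a cross-lingual consistency term.

    `model(encoder_input, decoder_input)` is assumed to return decoder
    logits of shape (batch, tgt_len, vocab); `model.pad_id` is assumed
    to be the padding token id. Both are hypothetical placeholders.
    """
    # Standard translation loss: decode the target from the source sentence.
    logits_src = model(encoder_input=src_tokens, decoder_input=tgt_tokens)
    ce = F.cross_entropy(
        logits_src.view(-1, logits_src.size(-1)),
        tgt_tokens.view(-1),
        ignore_index=model.pad_id,
    )

    # Consistency term: feed the target sentence itself to the encoder and
    # penalize divergence between the two decoder output distributions.
    logits_tgt = model(encoder_input=tgt_tokens, decoder_input=tgt_tokens)
    kl = F.kl_div(
        F.log_softmax(logits_src, dim=-1),
        F.softmax(logits_tgt, dim=-1),
        reduction="batchmean",
    )
    return ce + alpha * kl
```

Pulling the two encodings toward the same decoder behavior is what encourages source and target sentences to share a language-agnostic representation space, which is the property exploited for similarity search and bitext mining.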

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2306.06919
Document Type:
Working Paper