
Improving Multi-lingual Alignment Through Soft Contrastive Learning

Authors:
Park, Minsu
Choi, Seyeon
Choi, Chanyeol
Kim, Jun-Seong
Sohn, Jy-yong
Publication Year:
2024

Abstract

Making decent multi-lingual sentence representations is critical to achieving high performance in cross-lingual downstream tasks. In this work, we propose a novel method to align multi-lingual embeddings based on the similarity of sentences measured by a pre-trained mono-lingual embedding model. Given translation sentence pairs, we train a multi-lingual model such that the similarity between cross-lingual embeddings follows the similarity of sentences measured by the mono-lingual teacher model. Our method can be considered contrastive learning with soft labels defined as the similarity between sentences. Our experimental results on five languages show that our contrastive loss with soft labels far outperforms the conventional contrastive loss with hard labels on various benchmarks for bitext mining and STS tasks. In addition, our method outperforms existing multi-lingual embedding models, including LaBSE, on the Tatoeba dataset. The code is available at https://github.com/YAI12xLinq-B/IMASCL

Comment: 8 pages, 1 figure. Accepted at NAACL SRW 2024
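The abstract describes the method only at a high level. The following is a minimal PyTorch-style sketch of a contrastive loss with soft labels derived from a mono-lingual teacher, written to illustrate the idea rather than reproduce the paper's exact formulation; the function name `soft_contrastive_loss`, the `temperature` hyperparameter, and the choice of cosine similarity with a softmax over the teacher's similarities are assumptions.

```python
import torch
import torch.nn.functional as F

def soft_contrastive_loss(student_src, student_tgt, teacher_emb, temperature=0.05):
    """Illustrative cross-lingual contrastive loss with soft labels.

    student_src: (N, d) embeddings of source-language sentences from the
        multi-lingual student model.
    student_tgt: (N, d) embeddings of their translations.
    teacher_emb: (N, d) mono-lingual teacher embeddings of the source
        sentences, used only to define the soft targets.
    """
    # Pairwise similarities between the student's cross-lingual embeddings.
    sim_student = F.cosine_similarity(
        student_src.unsqueeze(1), student_tgt.unsqueeze(0), dim=-1
    ) / temperature

    # Soft targets: the teacher's mono-lingual similarity distribution.
    with torch.no_grad():
        sim_teacher = F.cosine_similarity(
            teacher_emb.unsqueeze(1), teacher_emb.unsqueeze(0), dim=-1
        ) / temperature
        targets = F.softmax(sim_teacher, dim=-1)

    # Cross-entropy between the student's distribution and the soft targets.
    # A hard-label contrastive loss would use the identity matrix as targets,
    # matching each sentence only to its own translation.
    log_probs = F.log_softmax(sim_student, dim=-1)
    return -(targets * log_probs).sum(dim=-1).mean()
```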

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2405.16155
Document Type:
Working Paper