Distillation for Multilingual Information Retrieval
- Publication Year :
- 2024
Abstract
- Recent work in cross-language information retrieval (CLIR), where queries and documents are in different languages, has shown the benefit of the Translate-Distill framework that trains a cross-language neural dual-encoder model using translation and distillation. However, Translate-Distill only supports a single document language. Multilingual information retrieval (MLIR), which ranks a multilingual document collection, is harder to train than CLIR because the model must assign comparable relevance scores to documents in different languages. This work extends Translate-Distill and proposes Multilingual Translate-Distill (MTD) for MLIR. We show that ColBERT-X models trained with MTD outperform their counterparts trained with Multilingual Translate-Train, the previous state-of-the-art training approach, by 5% to 25% in nDCG@20 and 15% to 45% in MAP. We also show that the model is robust to the way languages are mixed in training batches. Our implementation is available on GitHub.
  Comment: 6 pages, 1 figure, accepted at SIGIR 2024 as a short paper
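- The distillation idea described in the abstract, supervising a student retrieval model with relevance scores produced by a stronger teacher (typically run on translated text), can be illustrated with a minimal sketch. This is not the authors' ColBERT-X or MTD implementation; it only shows the generic KL-divergence distillation objective used in translate-distill-style training, and the function name, tensor shapes, and temperature parameter are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_scores: torch.Tensor,
                      teacher_scores: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """KL-divergence distillation over per-query candidate lists.

    student_scores, teacher_scores: shape (batch, n_candidates), where each
    row holds relevance scores for one query against its candidate passages.
    In an MLIR setting the candidates in a row may be in different languages,
    so the student learns to assign comparable scores across languages.
    """
    student_log_probs = F.log_softmax(student_scores / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_scores / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

# Example usage with random scores standing in for model outputs:
student = torch.randn(4, 8)   # student dual-encoder scores
teacher = torch.randn(4, 8)   # teacher scores (e.g., from a cross-encoder)
loss = distillation_loss(student, teacher)
```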
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2405.00977
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.1145/3626772.3657955