
EmbedDistill: A Geometric Knowledge Distillation for Information Retrieval

Authors:
Kim, Seungyeon
Rawat, Ankit Singh
Zaheer, Manzil
Jayasumana, Sadeep
Sadhanala, Veeranjaneyulu
Jitkrittum, Wittawat
Menon, Aditya Krishna
Fergus, Rob
Kumar, Sanjiv
Publication Year:
2023

Abstract

Large neural models (such as Transformers) achieve state-of-the-art performance for information retrieval (IR). In this paper, we aim to improve distillation methods that pave the way for the resource-efficient deployment of such models in practice. Inspired by our theoretical analysis of the teacher-student generalization gap for IR models, we propose a novel distillation approach that leverages the relative geometry among queries and documents learned by the large teacher model. Unlike existing teacher score-based distillation methods, our proposed approach employs embedding matching tasks to provide a stronger signal to align the representations of the teacher and student models. In addition, it utilizes query generation to explore the data manifold and reduce the discrepancies between the student and the teacher where training data is sparse. Furthermore, our analysis motivates novel asymmetric architectures for student models, which realize better embedding alignment without increasing online inference cost. On standard benchmarks like MSMARCO, we show that our approach successfully distills from both dual-encoder (DE) and cross-encoder (CE) teacher models to 1/10th-size asymmetric students that retain 95-97% of the teacher performance.
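The abstract describes the objective only at a high level: a conventional teacher score-based distillation term augmented with an embedding-matching term that aligns the student's query/document representations with the teacher's. The sketch below is a rough, hypothetical illustration of that combination in PyTorch, not the paper's actual loss; the function name `distillation_loss`, the in-batch KL score matching, the MSE embedding-matching term, and the assumption that teacher embeddings are already in (or projected to) the student's embedding dimension are all illustrative choices.

```python
# Illustrative sketch only; the concrete loss in the paper may differ.
import torch
import torch.nn.functional as F


def distillation_loss(student_q, student_d, teacher_q, teacher_d,
                      temperature=1.0, embed_weight=1.0):
    """Toy combination of score distillation and embedding matching.

    student_q, student_d: student query/document embeddings, shape (B, dim).
    teacher_q, teacher_d: teacher embeddings of the same queries/documents,
        assumed projected to the student's dimension (an assumption here).
    """
    # Score-based distillation: match the student's in-batch query-document
    # score distribution to the teacher's via KL divergence.
    student_scores = student_q @ student_d.t() / temperature
    teacher_scores = teacher_q @ teacher_d.t() / temperature
    score_loss = F.kl_div(
        F.log_softmax(student_scores, dim=-1),
        F.softmax(teacher_scores, dim=-1),
        reduction="batchmean",
    )

    # Embedding-matching term: directly pull student query and document
    # embeddings toward the teacher's, aligning their relative geometry.
    embed_loss = (F.mse_loss(student_q, teacher_q)
                  + F.mse_loss(student_d, teacher_d))

    return score_loss + embed_weight * embed_loss


if __name__ == "__main__":
    B, dim = 4, 32
    loss = distillation_loss(torch.randn(B, dim), torch.randn(B, dim),
                             torch.randn(B, dim), torch.randn(B, dim))
    print(loss.item())
```

Under this reading, the embedding-matching term supplies a per-example alignment signal that is stronger than score matching alone, which only constrains inner products.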

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2301.12005
Document Type: Working Paper