1. Leveraging Estimated Transferability Over Human Intuition for Model Selection in Text Ranking
- Authors
Jun Bai, Zhuofan Chen, Zhenzi Li, Hanhua Hong, Jianfei Zhang, Chen Li, Chenghua Lin, and Wenge Rong
- Subjects
Computer Science - Artificial Intelligence
- Abstract
Text ranking has witnessed significant advancements, driven by dual-encoders enhanced with Pre-trained Language Models (PLMs). Given the proliferation of available PLMs, selecting the most effective one for a given dataset has become a non-trivial challenge. As a promising alternative to human intuition and brute-force fine-tuning, Transferability Estimation (TE) has emerged as an effective approach to model selection. However, current TE methods are primarily designed for classification tasks, and their estimated transferability may not align well with the objectives of text ranking. To address this gap, we propose computing the expected rank as transferability, explicitly reflecting a model's ranking capability. Furthermore, to mitigate anisotropy and incorporate training dynamics, we adaptively scale isotropic sentence embeddings to yield an accurate expected rank score. The resulting method, Adaptive Ranking Transferability (AiRTran), effectively captures subtle differences between models. In challenging model selection scenarios across various text ranking datasets, it demonstrates significant improvements over previous classification-oriented TE methods, human intuition, and ChatGPT, at minor time cost.
- Comment
Accepted by EMNLP 2024 main conference
- Published
2024
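
The abstract's core idea is to score each candidate PLM, without fine-tuning, by the expected rank its embeddings assign to each query's relevant document, after pushing those embeddings toward isotropy. Below is a minimal NumPy sketch of that idea; the whitening in `isotropize`, the sigmoid-smoothed rank, and the fixed `scale` temperature are illustrative assumptions standing in for the paper's exact AiRTran procedure, which is not reproduced here.

```python
import numpy as np

def isotropize(emb, eps=1e-6):
    """Whiten embeddings to counter anisotropy.

    Whitening is one standard isotropization recipe; the paper's
    adaptive scaling of isotropic embeddings is not reproduced here.
    """
    mu = emb.mean(axis=0, keepdims=True)
    cov = np.cov(emb - mu, rowvar=False)
    u, s, _ = np.linalg.svd(cov)
    return (emb - mu) @ u @ np.diag(1.0 / np.sqrt(s + eps))

def expected_rank_transferability(q_emb, d_emb, pos_idx, scale=1.0):
    """Score a frozen PLM by the expected rank of each query's relevant doc.

    q_emb:   (Q, d) query embeddings from the candidate PLM (no fine-tuning)
    d_emb:   (D, d) document embeddings from the same PLM
    pos_idx: (Q,) index into d_emb of each query's relevant document
    scale:   similarity temperature; a fixed stand-in for adaptive scaling
    Returns a scalar where higher means better estimated transferability.
    """
    sims = (q_emb @ d_emb.T) * scale                       # (Q, D)
    pos = sims[np.arange(len(pos_idx)), pos_idx][:, None]  # (Q, 1)
    # Smooth expected rank: each competitor contributes its sigmoid-
    # approximated probability of outranking the relevant document.
    outrank = 1.0 / (1.0 + np.exp(pos - sims))
    exp_rank = 1.0 + outrank.sum(axis=1) - 0.5  # drop the self-comparison
    return -float(exp_rank.mean())  # negate: lower rank -> higher score

# Usage: compare candidate PLMs by computing this score on the same
# held-out query/document pairs (random stand-ins here).
rng = np.random.default_rng(0)
q = isotropize(rng.normal(size=(100, 64)))
d = isotropize(rng.normal(size=(200, 64)))
print(expected_rank_transferability(q, d, rng.integers(0, 200, size=100)))
```

The sigmoid smoothing makes the rank differentiable in the similarity scores, which is a common way to turn a hard rank into an expectation; whichever candidate model yields the higher (less negative) score would be preferred for fine-tuning.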