
Task-level Distributionally Robust Optimization for Large Language Model-based Dense Retrieval

Authors:
Ma, Guangyuan
Ma, Yongliang
Wu, Xing
Su, Zhenpeng
Zhou, Ming
Hu, Songlin
Publication Year:
2024

Abstract

Large Language Model-based Dense Retrieval (LLM-DR) optimizes over numerous heterogeneous fine-tuning collections from different domains. However, the distribution of its training data has received little discussion. Previous studies rely on empirically assigned dataset choices or sampling ratios, which inevitably lead to sub-optimal retrieval performance. In this paper, we propose a new task-level Distributionally Robust Optimization (tDRO) algorithm for LLM-DR fine-tuning, which targets universal domain generalization by reweighting the data distribution of each task end-to-end. tDRO parameterizes the domain weights and updates them with scaled domain gradients. The optimized weights are then transferred to LLM-DR fine-tuning to train more robust retrievers. Experiments with a series of different-sized LLM-DR models show consistent improvements on large-scale retrieval benchmarks while reducing dataset usage by up to 30% after applying our optimization algorithm.

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2408.10613
Document Type:
Working Paper