TRUCE: Private Benchmarking to Prevent Contamination and Improve Comparative Evaluation of LLMs

Authors:
Rajore, Tanmay
Chandran, Nishanth
Sitaram, Sunayana
Gupta, Divya
Sharma, Rahul
Mittal, Kashish
Swaminathan, Manohar
Publication Year:
2024

Abstract

Benchmarking is the de facto standard for evaluating LLMs, due to its speed, replicability, and low cost. However, recent work has pointed out that the majority of open-source benchmarks available today have been contaminated or leaked into LLMs, meaning that LLMs had access to the test data during pretraining and/or fine-tuning. This raises serious concerns about the validity of benchmarking studies conducted so far and about the future of evaluation using benchmarks. To solve this problem, we propose Private Benchmarking, a solution in which test datasets are kept private and models are evaluated without revealing the test data to the model. We describe various scenarios (depending on the trust placed on model owners or dataset owners) and present solutions that avoid data contamination using private benchmarking. For scenarios where the model weights need to be kept private, we describe solutions from confidential computing and cryptography that can aid in private benchmarking. We build an end-to-end system, TRUCE, that enables such private benchmarking, and we show that the overheads introduced to protect the model and the benchmark are negligible (in the case of confidential computing) and tractable (when cryptographic security is required). Finally, we also discuss solutions to the problem of benchmark dataset auditing, to ensure that private benchmarks are of sufficiently high quality.
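
To make the core idea concrete, the following is a minimal sketch (not taken from the paper; the name query_model is a hypothetical stand-in for a model owner's inference endpoint) of the simplest trust scenario described in the abstract: the dataset owner keeps the benchmark private, queries the model at inference time, and publishes only an aggregate score, so the test data never appears in any public corpus the model could later be trained on.

```python
# Illustrative sketch of private benchmarking in the scenario where the dataset
# owner is trusted to hold the benchmark. Only the aggregate score is released.
from typing import Callable, List, Tuple

def private_benchmark(
    test_set: List[Tuple[str, str]],      # (prompt, reference answer) pairs, kept private
    query_model: Callable[[str], str],    # hypothetical inference function exposed by the model owner
) -> float:
    """Evaluate a model on a private test set and return only an aggregate accuracy."""
    correct = 0
    for prompt, reference in test_set:
        prediction = query_model(prompt)
        correct += int(prediction.strip() == reference.strip())
    return correct / len(test_set)        # only this scalar is published
```

The other scenarios in the abstract cover the case where neither party may reveal its asset: when the model weights must also stay private, the same evaluation is run inside a confidential-computing enclave or under a cryptographic protocol instead of through a plain inference API.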

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2403.00393
Document Type: Working Paper