1. Private Benchmarking to Prevent Contamination and Improve Comparative Evaluation of LLMs
- Author
Chandran, Nishanth, Sitaram, Sunayana, Gupta, Divya, Sharma, Rahul, Mittal, Kashish, and Swaminathan, Manohar
- Abstract
Benchmarking is the de facto standard for evaluating LLMs, due to its speed, replicability, and low cost. However, recent work has pointed out that the majority of the open-source benchmarks available today have been contaminated or leaked into LLMs, meaning that LLMs had access to the test data during pretraining and/or fine-tuning. This raises serious concerns about the validity of benchmarking studies conducted so far and about the future of evaluation using benchmarks. To solve this problem, we propose Private Benchmarking, a solution where test datasets are kept private and models are evaluated without revealing the test data to the model. We describe various scenarios (depending on the trust placed in model owners and dataset owners) and present solutions to avoid data contamination using private benchmarking. For scenarios where the model weights need to be kept private, we describe solutions from confidential computing and cryptography that can aid in private benchmarking. Finally, we present solutions to the problem of benchmark dataset auditing, to ensure that private benchmarks are of sufficiently high quality.
- Published
- 2024
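
To make the core idea of the abstract concrete, here is a minimal illustrative sketch (not from the paper) of the simplest trust scenario: a trusted benchmark owner keeps the test set private and publishes only an aggregate score, so the test data never enters any model's training pipeline. All names here (`PrivateBenchmark`, `score_model`, `model_fn`) are hypothetical, chosen only for illustration.

```python
"""Sketch of private benchmarking with a trusted benchmark owner.

The benchmark owner holds the (prompt, answer) pairs privately and
releases only the final aggregate metric, never the raw test data.
"""
from typing import Callable, List, Tuple


class PrivateBenchmark:
    def __init__(self, examples: List[Tuple[str, str]]):
        # (prompt, reference answer) pairs; these never leave this object.
        self._examples = examples

    def score_model(self, model_fn: Callable[[str], str]) -> float:
        # Query the model prompt-by-prompt and compute exact-match
        # accuracy. Only this aggregate score is ever published.
        correct = sum(
            model_fn(prompt).strip() == answer
            for prompt, answer in self._examples
        )
        return correct / len(self._examples)


if __name__ == "__main__":
    bench = PrivateBenchmark([("2+2=", "4"), ("Capital of France?", "Paris")])
    toy_model = lambda prompt: "4" if "2+2" in prompt else "Paris"
    print(f"accuracy = {bench.score_model(toy_model):.2f}")  # -> 1.00
```

Note the limitation this sketch deliberately ignores: at inference time the model still sees the prompts, so it only prevents contamination if the party running `model_fn` cannot log them. For the stricter scenarios the abstract mentions, where neither the model owner nor the dataset owner is trusted, the paper points to confidential computing and cryptographic techniques instead.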