Enterprise Benchmarks for Large Language Model Evaluation
- Author
Bing Zhang, Mikio Takeuchi, Ryo Kawahara, Shubhi Asthana, Md. Maruf Hossain, Guang-Jie Ren, Kate Soule, and Yada Zhu
- Subjects
Computer Science - Computation and Language; Computer Science - Artificial Intelligence; Computer Science - Computational Engineering, Finance, and Science
- Abstract
The advancement of large language models (LLMs) has made rigorous and systematic evaluation of complex tasks increasingly challenging, especially in enterprise applications. LLMs therefore need to be benchmarked on enterprise datasets across a variety of tasks. This work presents a systematic exploration of benchmarking strategies tailored to LLM evaluation, focusing on domain-specific datasets that cover a variety of NLP tasks. The proposed evaluation framework encompasses 25 publicly available datasets from diverse enterprise domains such as financial services, legal, cyber security, and climate and sustainability. The varied performance of 13 models across these enterprise tasks highlights the importance of selecting the right model based on the specific requirements of each task. Code and prompts are available on GitHub.
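Illustrative only: the abstract describes evaluating multiple models over domain-specific datasets, and the minimal Python sketch below shows one way such a multi-domain loop could be wired up. The dataset contents, the `generate` callable, and the exact-match scoring are placeholder assumptions, not the authors' actual framework or datasets.

```python
# Hypothetical sketch of a multi-domain LLM benchmark loop.
# Domain examples, the `generate` callable, and exact-match scoring are
# illustrative assumptions, not the framework described in the paper.
from typing import Callable, Dict, List, Tuple

# Each enterprise domain maps to a list of (prompt, reference answer) pairs.
Example = Tuple[str, str]
DOMAIN_DATASETS: Dict[str, List[Example]] = {
    "financial_services": [("What does EPS stand for?", "earnings per share")],
    "legal": [("What does NDA stand for?", "non-disclosure agreement")],
    "cyber_security": [("What does XSS stand for?", "cross-site scripting")],
    "climate_sustainability": [("What does GHG stand for?", "greenhouse gas")],
}

def evaluate(generate: Callable[[str], str]) -> Dict[str, float]:
    """Score a model (a prompt -> completion callable) per domain via exact match."""
    scores: Dict[str, float] = {}
    for domain, examples in DOMAIN_DATASETS.items():
        correct = sum(
            1 for prompt, reference in examples
            if generate(prompt).strip().lower() == reference.lower()
        )
        scores[domain] = correct / len(examples)
    return scores

if __name__ == "__main__":
    # Trivial stand-in "model" so the sketch runs end to end.
    canned = {p: r for exs in DOMAIN_DATASETS.values() for p, r in exs}
    print(evaluate(lambda prompt: canned.get(prompt, "")))
```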
- Published
2024