DevEval: A Manually-Annotated Code Generation Benchmark Aligned with Real-World Code Repositories

Authors :
Li, Jia
Li, Ge
Zhao, Yunfei
Li, Yongmin
Liu, Huanyu
Zhu, Hao
Wang, Lecheng
Liu, Kaibo
Fang, Zheng
Wang, Lanshen
Ding, Jiazheng
Zhang, Xuanming
Zhu, Yuqi
Dong, Yihong
Jin, Zhi
Li, Binhua
Huang, Fei
Li, Yongbin
Publication Year :
2024

Abstract

How to evaluate the coding abilities of Large Language Models (LLMs) remains an open question. We find that existing benchmarks are poorly aligned with real-world code repositories and are insufficient to evaluate the coding abilities of LLMs. To address this gap, we propose a new benchmark named DevEval, which offers three advances. (1) DevEval aligns with real-world repositories in multiple dimensions, e.g., code distributions and dependency distributions. (2) DevEval is annotated by 13 developers and contains comprehensive annotations (e.g., requirements, original repositories, reference code, and reference dependencies). (3) DevEval comprises 1,874 testing samples from 117 repositories, covering 10 popular domains (e.g., Internet, Database). Based on DevEval, we propose repository-level code generation and evaluate 8 popular LLMs on DevEval (e.g., gpt-4, gpt-3.5, StarCoder 2, DeepSeek Coder, CodeLLaMa). Our experiments reveal these LLMs' coding abilities in real-world code repositories. For example, in our experiments, the highest Pass@1 of gpt-4-turbo is only 53.04%. We also analyze LLMs' failed cases and summarize their shortcomings. We hope DevEval can facilitate the development of LLMs in real code repositories. DevEval, prompts, and LLMs' predictions have been released.

Comment: Accepted by the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024). arXiv admin note: substantial text overlap with arXiv:2404.00599, arXiv:2401.06401
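The abstract reports results in terms of Pass@1. For readers unfamiliar with the metric, the sketch below shows the widely used unbiased pass@k estimator of Chen et al. (2021); the abstract does not state exactly how DevEval computes Pass@1, so this is an illustrative assumption rather than the authors' implementation.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n -- total number of generated samples for a problem
    c -- number of those samples that pass all tests
    k -- evaluation budget (k = 1 gives Pass@1)
    """
    if n - c < k:
        # Every size-k subset contains at least one passing sample.
        return 1.0
    # Probability that a random size-k subset contains >= 1 passing sample.
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 3 passing samples out of 10 generations, budget k = 1.
print(pass_at_k(n=10, c=3, k=1))  # 0.3
```

With k = 1 the estimator reduces to the fraction of generated samples that pass the tests, averaged over problems.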

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.19856
Document Type :
Working Paper