1. On Evaluating the Efficiency of Source Code Generated by LLMs
- Authors
Niu, Changan; Zhang, Ting; Li, Chuanyi; Luo, Bin; and Ng, Vincent
- Subjects
Computer Science - Software Engineering
- Abstract
Recent years have seen the remarkable capabilities of large language models (LLMs) for code generation. Unlike existing work that evaluates the correctness of the code generated by LLMs, we propose to further evaluate its efficiency. More efficient code can improve the performance and execution efficiency of programs and software built with LLM-assisted programming. First, we evaluate the efficiency of the code generated by LLMs on two benchmarks, HumanEval and MBPP. Then, we choose a set of programming problems from the online judge platform LeetCode to conduct a more difficult evaluation. Finally, we explore several prompts that would enable LLMs to generate more efficient code.
- Comment
1st special event of AI Foundation Models and Software Engineering (FORGE 2024)
- Published
2024
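The kind of efficiency evaluation the abstract describes can be sketched as follows: check that a generated solution is correct, then time it against an alternative implementation. This is a minimal illustration, not the paper's actual harness; the two functions below are hypothetical examples of a HumanEval-style task (sum of squares of the first n integers), and the timing method (`timeit`) is an assumption.

```python
import timeit

def generated_naive(n):
    # An O(n) loop, as an LLM might plausibly generate
    total = 0
    for i in range(n):
        total += i * i
    return total

def generated_formula(n):
    # A closed-form O(1) alternative for the same task
    return (n - 1) * n * (2 * n - 1) // 6

# Correctness first: efficiency comparisons only make sense
# between functionally equivalent solutions.
assert generated_naive(10_000) == generated_formula(10_000)

# Then compare wall-clock execution time over repeated runs.
t_naive = timeit.timeit(lambda: generated_naive(10_000), number=200)
t_formula = timeit.timeit(lambda: generated_formula(10_000), number=200)
print(f"naive: {t_naive:.4f}s, formula: {t_formula:.4f}s")
```

In practice, benchmark harnesses also control for warm-up, input scale, and measurement noise, which a single `timeit` call does not.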