
Low-Cost Language Models: Survey and Performance Evaluation on Python Code Generation

Authors:
Espejel, Jessica López
Alassan, Mahaman Sanoussi Yahaya
Bouhandi, Merieme
Dahhane, Walid
Ettifouri, El Hassane
Publication Year:
2024

Abstract

Large Language Models (LLMs) have become a popular choice for many Natural Language Processing (NLP) tasks due to their versatility and ability to produce high-quality results. Specifically, they are increasingly used for automatic code generation to help developers tackle repetitive coding tasks. However, LLMs' substantial computational and memory requirements often make them inaccessible to users with limited resources. This paper focuses on very low-cost models which offer a more accessible alternative to resource-intensive LLMs. We notably: (1) propose a thorough semi-manual evaluation of their performance in generating Python code, (2) introduce a Chain-of-Thought (CoT) prompting strategy to improve model reasoning and code quality, and (3) propose a new dataset of 60 programming problems, with varied difficulty levels, designed to extend existing benchmarks like HumanEval and EvalPlus. Our findings show that some low-cost compatible models achieve competitive results compared to larger models like ChatGPT despite using significantly fewer resources. We will make our dataset and prompts publicly available to support further research.

Comment: Under review at Elsevier's Engineering Applications of Artificial Intelligence
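The Chain-of-Thought prompting strategy the abstract mentions can be sketched roughly as follows. This is an illustrative prompt builder, not the paper's actual prompt: the template wording and the function name `build_cot_prompt` are assumptions for demonstration.

```python
# Illustrative sketch of CoT-style prompting for Python code generation.
# The instruction text below is an assumed example, not the authors' prompt.

def build_cot_prompt(problem: str) -> str:
    """Wrap a programming problem in an instruction that asks the model to
    reason step by step before writing the final Python function."""
    return (
        "You are a Python programming assistant.\n"
        "Problem:\n"
        f"{problem}\n\n"
        "First, reason step by step:\n"
        "1. Restate the inputs and the expected output.\n"
        "2. Outline the algorithm in plain language.\n"
        "3. List edge cases the code must handle.\n"
        "Then write a single, self-contained Python function that solves "
        "the problem.\n"
    )

# Example usage: the resulting string would be sent to the model.
prompt = build_cot_prompt(
    "Return the sum of the even numbers in a list of integers."
)
print(prompt)
```

The idea is that forcing an explicit reasoning phase before code emission tends to improve the correctness of the generated function, which is what the paper evaluates on benchmarks such as HumanEval and EvalPlus.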

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2404.11160
Document Type:
Working Paper