
Studying the Quality of Source Code Generated by Different AI Generative Engines: An Empirical Evaluation.

Authors :
Tosi, Davide
Source :
Future Internet; Jun2024, Vol. 16 Issue 6, p188, 19p
Publication Year :
2024

Abstract

The advent of Generative Artificial Intelligence raises essential questions about whether and when AI will replace human abilities in accomplishing everyday tasks. This question is particularly pressing in the domain of software development, where generative AI appears to have strong skills in solving coding problems and generating software source code. In this paper, an empirical evaluation of AI-generated source code is performed: three complex coding problems (selected from the exams for the Java Programming course at the University of Insubria) are prompted to three different Large Language Model (LLM) engines, and the generated code is evaluated for correctness and quality by means of human-implemented test suites and quality metrics. The experimentation shows that the three evaluated LLM engines are able to solve the three exams, but only under the constant supervision of software experts. Currently, LLM engines need human-expert support to produce running code of good quality. [ABSTRACT FROM AUTHOR]
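The evaluation workflow described in the abstract pairs LLM-generated code with human-written tests. The sketch below illustrates that idea in Java (the language of the course exams); the task, method name, and test cases are all hypothetical placeholders, not the paper's actual exam problems or test suites.

```java
// Illustrative sketch of the evaluation setup: a method standing in for
// LLM-generated code is checked against human-implemented test cases.
// The exam task here (string reversal) is a hypothetical example.
public class GeneratedCodeCheck {

    // Placeholder for code produced by an LLM engine.
    static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }

    public static void main(String[] args) {
        // Human-written test cases: the generated code only "passes the
        // exam" if every assertion holds.
        if (!reverse("abc").equals("cba")) {
            throw new AssertionError("reversal failed");
        }
        if (!reverse("").equals("")) {
            throw new AssertionError("empty-string case failed");
        }
        System.out.println("All test cases passed");
    }
}
```

In the paper's setting, such test suites are complemented by static quality metrics, so a solution can be functionally correct yet still flagged for poor quality.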

Details

Language :
English
ISSN :
1999-5903
Volume :
16
Issue :
6
Database :
Complementary Index
Journal :
Future Internet
Publication Type :
Academic Journal
Accession number :
178186762
Full Text :
https://doi.org/10.3390/fi16060188