
The Potential of LLMs in Automating Software Testing: From Generation to Reporting

Authors:
Sherifi, Betim
Slhoub, Khaled
Nembhard, Fitzroy
Publication Year:
2024

Abstract

High-quality software is essential in software engineering, requiring robust validation and verification processes during testing activities. Manual testing, while effective, can be time-consuming and costly, leading to increased demand for automated methods. Recent advancements in Large Language Models (LLMs) have significantly influenced software engineering, particularly in areas like requirements analysis, test automation, and debugging. This paper explores an agent-oriented approach to automated software testing, using LLMs to reduce human intervention and enhance testing efficiency. The proposed framework integrates LLMs to generate unit tests, visualize call graphs, and automate test execution and reporting. Evaluations across multiple applications in Python and Java demonstrate the system's high test coverage and efficient operation. This research underscores the potential of LLM-powered agents to streamline software testing workflows while addressing challenges in scalability and accuracy.

Comment: 6 pages, 3 figures, 1 table
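The abstract does not describe the framework's implementation, but one of the components it names, call-graph construction, can be sketched with Python's standard ast module. The function name and the name-only call resolution below are illustrative assumptions, not the paper's actual code:

```python
import ast
from collections import defaultdict

def build_call_graph(source: str) -> dict:
    """Map each function defined in `source` to the set of names it calls.

    A deliberately minimal static analysis: only direct calls to plain
    names (e.g. `helper()`) are recorded; methods and attribute calls
    are ignored for brevity.
    """
    tree = ast.parse(source)
    graph = defaultdict(set)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Walk the function body and collect simple call targets.
            for sub in ast.walk(node):
                if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                    graph[node.name].add(sub.func.id)
    return dict(graph)

sample = """
def helper():
    return 1

def main():
    return helper() + helper()
"""
print(build_call_graph(sample))  # {'main': {'helper'}}
```

A graph like this could then be handed to a visualization library or used to decide which callees a generated unit test must stub out.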

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2501.00217
Document Type:
Working Paper