1. Enhancing Code Annotation Reliability: Generative AI's Role in Comment Quality Assessment Models
- Author
Killivalavan, Seetharam and Thenmozhi, Durairaj
- Subjects
Computer Science - Software Engineering
- Abstract
This paper explores a novel method for enhancing binary classification models that assess code comment quality, leveraging Generative Artificial Intelligence to elevate model performance. By integrating 1,437 newly generated code-comment pairs, labeled as "Useful" or "Not Useful" and sourced from various GitHub repositories, into an existing C-language dataset of 9,048 pairs, we demonstrate substantial model improvements. Using an advanced Large Language Model, our approach yields a 5.78% precision increase in the Support Vector Machine (SVM) model, improving from 0.79 to 0.8478, and a 2.17% recall boost in the Artificial Neural Network (ANN) model, rising from 0.731 to 0.7527. These results underscore Generative AI's value in advancing code comment classification models, offering significant potential for enhanced accuracy in software development and quality control. This study provides a promising outlook on the integration of generative techniques for refining machine learning models in practical software engineering settings.
- Comment
Code Comment Quality Classification, Generative Artificial Intelligence, Support Vector Machines, Artificial Neural Networks, Natural Language Processing
- Published
- 2024
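To make the setup concrete, the following is a minimal sketch of the kind of SVM-based binary "Useful" / "Not Useful" comment-quality classifier the abstract describes. This is not the authors' implementation: the (code, comment) pairs, labels, and TF-IDF-plus-LinearSVC pipeline below are illustrative assumptions standing in for their dataset and feature pipeline.

```python
# Illustrative sketch only: classify code-comment pairs as
# "Useful" (1) or "Not Useful" (0) with an SVM, in the spirit of
# the paper's SVM baseline. All data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical C code snippets with their comments.
pairs = [
    ("int sum(int a, int b)", "adds two integers and returns the result"),
    ("for (i = 0; i < n; i++)", "loop"),
    ("free(ptr); ptr = NULL;", "release the buffer and clear the dangling pointer"),
    ("x = x + 1;", "increment x"),  # merely restates the code
    ("if (fd < 0) return -1;", "bail out early when the file failed to open"),
    ("return 0;", "return zero"),
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = "Useful", 0 = "Not Useful"

# Concatenate code and comment so the vectorizer sees both views of the pair.
texts = [code + " " + comment for code, comment in pairs]

# TF-IDF features feeding a linear-kernel SVM.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)

# Classify an unseen pair.
pred = model.predict(["if (buf == NULL) return; guard against passing a null buffer"])
print("Useful" if pred[0] == 1 else "Not Useful")
```

In the paper's setting, the training set would instead be the 9,048-pair C dataset augmented with the 1,437 LLM-generated pairs, and precision/recall would be measured on a held-out split.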