
A Systematic Evaluation of Large Code Models in API Suggestion: When, Which, and How

Authors:
Wang, Chaozheng
Gao, Shuzheng
Gao, Cuiyun
Wang, Wenxuan
Chong, Chun Yong
Gao, Shan
Lyu, Michael R.
Publication Year:
2024

Abstract

API suggestion is a critical task in modern software development, assisting programmers by predicting and recommending third-party APIs based on the current context. Recent advancements in large code models (LCMs) have shown promise in the API suggestion task. However, they mainly focus on suggesting which APIs to use, ignoring that programmers may demand further assistance when using APIs in practice, including when to use the suggested APIs and how to use them. To mitigate this gap, we conduct a systematic evaluation of LCMs for the API suggestion task in this paper. To facilitate our investigation, we first build a benchmark that contains a diverse collection of code snippets, covering 176 APIs used in 853 popular Java projects. Three distinct scenarios in the API suggestion task are then considered for evaluation: (1) "when to use", which aims at determining the desired position and timing for API usage; (2) "which to use", which aims at identifying the appropriate API from a given library; and (3) "how to use", which aims at predicting the arguments for a given API. Considering the three scenarios allows for a comprehensive assessment of LCMs' capabilities in suggesting APIs for developers. During the evaluation, we choose nine popular LCMs with varying model sizes for the three scenarios. We also perform an in-depth analysis of the influence of context selection on model performance ...

Comment: This paper is accepted at ASE 2024.
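
To make the three evaluation scenarios concrete, the following is a minimal, hypothetical Java snippet (it is not drawn from the paper's benchmark) annotated with what each scenario asks a model to predict:

    // Hypothetical illustration of the three API suggestion scenarios;
    // this example is not from the paper's benchmark.
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class ApiSuggestionExample {
        public static String readFirstLine(String path) throws IOException {
            // "When to use": decide that an API call is needed at this point.
            // "Which to use": pick BufferedReader among the java.io candidates.
            // "How to use": predict the argument, i.e., new FileReader(path).
            try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
                return reader.readLine();
            }
        }
    }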

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2409.13178
Document Type:
Working Paper