
Measuring Emergent Capabilities of LLMs for Software Engineering: How Far Are We?

Authors :
O'Brien, Conor
Rodriguez-Cardenas, Daniel
Velasco, Alejandro
Palacio, David N.
Poshyvanyk, Denys
Publication Year :
2024

Abstract

The adoption of Large Language Models (LLMs) across multiple contexts has sparked interest in understanding how scaling model size might lead to behavioral changes, as LLMs can exhibit behaviors not observed in their smaller counterparts. Understanding these emergent capabilities is essential for advancing LLM development and improving their interpretability across diverse tasks. However, whether LLMs exhibit true emergence in the context of Software Engineering (SE) remains an unexplored topic, as most research has focused on NLP tasks. In this paper, we investigate the emergence of capabilities in the context of SE. We propose a model-agnostic pipeline for evaluating this phenomenon across three SE tasks: bug fixing, code translation, and commit message generation. More precisely, for each task, we present a case study instantiating our pipeline to analyze the emergence of capabilities in CodeGen1-multi across four scales ranging from 350M to 16.1B parameters. Our findings do not provide evidence to support the idea of emergent capabilities resulting from scaling the model size in the selected set of tasks. We hope our results can pave the way to a more nuanced understanding of emergent capabilities of LLMs within the SE domain, guiding future research to focus on task-specific evaluations and the identification of alternative factors contributing to this phenomenon. Our work underscores the importance of task diversity in examining model behaviors and highlights potential limitations in transferring prior understandings of and approaches to emergence from NLP to Software Engineering.

Comment: Submitted for RENE ICPC 2025
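
The record only summarizes the pipeline, but the kind of scale sweep the abstract describes can be illustrated roughly as follows: prompt each publicly released CodeGen-multi checkpoint (350M through 16B parameters) with the same task input and compare the outputs across scales. This is a minimal sketch assuming the Hugging Face transformers API and an arbitrary bug-fixing-style prompt; the prompt, decoding settings, and scoring are not taken from the paper.

    # Illustrative sketch (not the authors' pipeline): query CodeGen-multi
    # checkpoints at increasing scales with the same prompt to inspect
    # scale-dependent behavior. Checkpoint names are the public HF Hub ones;
    # the largest requires substantial memory.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    CHECKPOINTS = [
        "Salesforce/codegen-350M-multi",
        "Salesforce/codegen-2B-multi",
        "Salesforce/codegen-6B-multi",
        "Salesforce/codegen-16B-multi",
    ]

    # Hypothetical bug-fixing style prompt, for illustration only.
    prompt = "# Fix the bug in the following function:\ndef add(a, b):\n    return a - b\n"

    for name in CHECKPOINTS:
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModelForCausalLM.from_pretrained(name)
        inputs = tokenizer(prompt, return_tensors="pt")
        # Greedy decoding keeps outputs comparable across model scales.
        output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
        completion = tokenizer.decode(output[0], skip_special_tokens=True)
        print(f"=== {name} ===\n{completion}\n")

In the paper's setting, such per-scale outputs would then be scored with task-specific metrics to test whether performance jumps discontinuously with model size rather than improving smoothly.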

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2411.17927
Document Type :
Working Paper