PassSum: Leveraging paths of abstract syntax trees and self‐supervision for code summarization.
- Source :
- Journal of Software: Evolution & Process; Jun 2024, Vol. 36 Issue 6, p1-27, 27p
- Publication Year :
- 2024
Abstract
- Code summarization aims to provide a high‐level comment for a code snippet, typically describing the function and intent of the given code. Recent years have seen the successful application of data‐driven code summarization. To improve model performance, numerous approaches use abstract syntax trees (ASTs) to represent the structural information of code, which most researchers consider the main factor distinguishing code from natural language. Such data‐driven methods are then trained on large‐scale labeled datasets to obtain models with strong generalization that can be applied to new examples. Nevertheless, we argue that state‐of‐the‐art approaches suffer from two key weaknesses: (1) inefficient encoding of ASTs and (2) reliance on a large labeled corpus for model training. These drawbacks lead to (1) oversized models, slow training, information loss, and instability, and (2) inability to handle programming languages with only a small amount of labeled data. In light of these weaknesses, we propose PassSum, a code summarization approach that addresses them via (1) a novel input representation built on an efficient AST encoding method and (2) three pretraining objectives used to pretrain the model on a large amount of easily obtained unlabeled data through self‐supervised learning. Experimental results on code summarization for Java, Python, and Ruby methods demonstrate the superiority of PassSum over state‐of‐the‐art methods. Further experiments show that our input representation offers both time and space advantages in addition to its performance lead. Pretraining is also shown to make the model more generalizable with less labeled data and to speed up convergence during training.
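This record does not include PassSum's actual encoding details, so the following is only a rough, hypothetical illustration of the path-based AST representations the title refers to: a minimal Python sketch that extracts root-to-leaf paths of node-type labels using the standard ast module. The function name leaf_paths and the choice of root-to-leaf (rather than leaf-to-leaf) paths are assumptions made for illustration, not the paper's method.

    import ast

    def leaf_paths(node, path=()):
        # Illustrative sketch, not PassSum's encoding: walk the AST,
        # accumulating node-type names from the root, and emit the full
        # path whenever a leaf (childless node) is reached.
        children = list(ast.iter_child_nodes(node))
        label = type(node).__name__
        if not children:
            yield path + (label,)
        for child in children:
            yield from leaf_paths(child, path + (label,))

    snippet = "def add(a, b):\n    return a + b"
    for p in leaf_paths(ast.parse(snippet)):
        print(" -> ".join(p))

Approaches in this family (e.g., code2vec-style path contexts) typically extract leaf-to-leaf paths and learn embeddings for them; pairing such a structural encoding with self-supervised pretraining objectives is, at a high level, the recipe the abstract describes.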
Details
- Language :
- English
- ISSN :
- 2047-7473
- Volume :
- 36
- Issue :
- 6
- Database :
- Complementary Index
- Journal :
- Journal of Software: Evolution & Process
- Publication Type :
- Academic Journal
- Accession number :
- 177677269
- Full Text :
- https://doi.org/10.1002/smr.2620