
Code Representation Learning At Scale

Authors :
Zhang, Dejiao
Ahmad, Wasi
Tan, Ming
Ding, Hantian
Nallapati, Ramesh
Roth, Dan
Ma, Xiaofei
Xiang, Bing
Source :
ICLR 2024
Publication Year :
2024

Abstract

Recent studies have shown that code language models at scale demonstrate significant performance gains on downstream tasks, e.g., code generation. However, most existing works on code representation learning train models at the hundred-million-parameter scale using very limited pretraining corpora. In this work, we fuel code representation learning with a vast amount of code data via a two-stage pretraining scheme. We first train the encoders via a mix that leverages both the randomness of masked language modeling and the structural aspects of programming languages. We then enhance the representations via contrastive learning with hard negatives and hard positives constructed in an unsupervised manner. We establish an off-the-shelf encoder model that persistently outperforms existing models on a wide variety of downstream tasks by large margins. To understand the factors contributing to successful code representation learning, we conduct detailed ablations and share our findings on (i) a customized and effective token-level denoising scheme for source code; (ii) the importance of hard negatives and hard positives; (iii) how the proposed bimodal contrastive learning boosts cross-lingual semantic search performance; and (iv) how the pretraining scheme determines the way downstream task performance scales with model size.
Comment: 10 pages
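
The second pretraining stage described in the abstract is a contrastive objective over encoder embeddings with hard negatives and hard positives. The record does not include the authors' implementation, so the following is only a minimal, generic sketch of an InfoNCE-style contrastive loss; the use of in-batch negatives, the temperature value, and the embedding dimension are illustrative assumptions rather than the paper's actual setup.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor_emb, positive_emb, temperature=0.05):
    """InfoNCE-style loss: each anchor is pulled toward its positive and
    pushed away from every other in-batch example (treated as a negative).

    anchor_emb, positive_emb: (batch_size, dim) encoder outputs.
    temperature: illustrative value; the paper's setting may differ.
    """
    anchor = F.normalize(anchor_emb, dim=-1)
    positive = F.normalize(positive_emb, dim=-1)

    # Cosine similarity between every anchor and every candidate positive.
    logits = anchor @ positive.t() / temperature      # (B, B)

    # The matching pair sits on the diagonal.
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)


# Toy usage with random tensors standing in for encoder outputs.
if __name__ == "__main__":
    torch.manual_seed(0)
    a = torch.randn(8, 768)   # e.g., embeddings of code snippets
    p = torch.randn(8, 768)   # e.g., embeddings of their positives
    print(contrastive_loss(a, p).item())
```

In this sketch, harder negatives would simply correspond to off-diagonal candidates that are more similar to the anchor; how such negatives and positives are constructed unsupervisedly is the contribution the abstract refers to, not shown here.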

Details

Database :
arXiv
Journal :
ICLR 2024
Publication Type :
Report
Accession number :
edsarx.2402.01935
Document Type :
Working Paper