
Co-training and Co-distillation for Quality Improvement and Compression of Language Models

Authors:
Lee, Hayeon
Hou, Rui
Kim, Jongpil
Liang, Davis
Zhang, Hongbo
Hwang, Sung Ju
Min, Alexander
Publication Year:
2023

Abstract

Knowledge Distillation (KD) compresses computationally expensive pre-trained language models (PLMs) by transferring their knowledge to smaller models, allowing their use in resource-constrained or real-time settings. However, most smaller models fail to surpass the performance of the original larger model, sacrificing performance for inference speed. To address this issue, we propose Co-Training and Co-Distillation (CTCD), a novel framework that improves performance and inference speed together by co-training two models while mutually distilling knowledge between them. The CTCD framework achieves this based on two significant findings: 1) Distilling knowledge from the smaller model to the larger model during co-training improves the performance of the larger model. 2) The enhanced performance of the larger model further boosts the performance of the smaller model. The CTCD framework shows promise because it can be combined with existing techniques such as architecture design or data augmentation, replacing one-way KD methods, to achieve further performance gains. Extensive ablation studies demonstrate the effectiveness of CTCD, and the small model distilled by CTCD outperforms the original larger model by a significant margin of 1.66 on the GLUE benchmark.

Comment: Findings of EMNLP 2023
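To make the co-training-with-mutual-distillation idea concrete, below is a minimal PyTorch-style sketch of one training step. It is an illustration under assumptions, not the authors' exact formulation: the function names (`ctcd_step`, `kd_loss`), the loss weight `alpha`, the temperature `T`, and the use of a single shared optimizer are all hypothetical choices for the sketch.

```python
# Sketch: co-training two models with two-way (mutual) knowledge distillation.
# All hyperparameters and names are illustrative assumptions.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label KL distillation loss; the teacher side is detached so
    gradients only flow into the student side of this direction."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits.detach() / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

def ctcd_step(large_model, small_model, optimizer, x, y, alpha=0.5):
    """One co-training step: both models fit the hard labels, and each
    model additionally distills from the other's predictions."""
    logits_l = large_model(x)
    logits_s = small_model(x)

    # Supervised task loss (hard labels) for both models.
    task = F.cross_entropy(logits_l, y) + F.cross_entropy(logits_s, y)

    # Mutual distillation: large -> small AND small -> large.
    distill = kd_loss(logits_s, logits_l) + kd_loss(logits_l, logits_s)

    loss = (1 - alpha) * task + alpha * distill
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design point the abstract emphasizes is the `kd_loss(logits_l, logits_s)` term: unlike one-way KD, the larger model also learns from the smaller model's predictions, and detaching the "teacher" logits in each direction keeps the two distillation signals from interfering with one another.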

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2311.02849
Document Type:
Working Paper