Compressing Large-Scale Transformer-Based Models: A Case Study on BERT
- Publication Year :
- 2020
Abstract
- Pre-trained Transformer-based models have achieved state-of-the-art performance for various Natural Language Processing (NLP) tasks. However, these models often have billions of parameters, and, thus, are too resource-hungry and computation-intensive to suit low-capability devices or applications with strict latency requirements. One potential remedy for this is model compression, which has attracted a lot of research attention. Here, we summarize the research in compressing Transformers, focusing on the especially popular BERT model. In particular, we survey the state of the art in compression for BERT, we clarify the current best practices for compressing large-scale Transformer models, and we provide insights into the workings of various methods. Our categorization and analysis also shed light on promising future research directions for achieving lightweight, accurate, and generic NLP models.
- To appear in TACL 2021. The arXiv version is a pre-MIT Press publication version.
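- The abstract groups many techniques under "model compression" without naming them here. As a minimal, hedged sketch of one technique that falls within the survey's scope (post-training dynamic quantization), the snippet below compresses a BERT checkpoint with PyTorch and Hugging Face Transformers. The checkpoint name `bert-base-uncased`, the size-comparison helper, and the sample input are illustrative assumptions, not details taken from the paper.

```python
import os
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative checkpoint; the paper does not prescribe a specific model.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Post-training dynamic quantization: store Linear-layer weights as int8
# and dequantize on the fly, keeping activations in fp32 at runtime.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def approx_size_mb(m: torch.nn.Module) -> float:
    """Rough on-disk size of the model's state dict, in megabytes."""
    path = "tmp_model.pt"
    torch.save(m.state_dict(), path)
    size = os.path.getsize(path) / 1e6
    os.remove(path)
    return size

print(f"fp32 size: {approx_size_mb(model):.1f} MB")
print(f"int8 size: {approx_size_mb(quantized):.1f} MB")

# The quantized model is used exactly like the original one.
inputs = tokenizer(
    "Model compression makes BERT deployable on small devices.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = quantized(**inputs).logits
```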
- Subjects :
- FOS: Computer and information sciences
- Linguistics and Language
- Computer Science - Machine Learning
- Computer science
- Communication
- Scale (chemistry)
- Latency (audio)
- Machine Learning (stat.ML)
- Industrial engineering
- Computer Science Applications
- Machine Learning (cs.LG)
- Human-Computer Interaction
- Categorization
- Artificial Intelligence
- Model compression
- Statistics - Machine Learning
- State (computer science)
- Transformer (machine learning model)
Details
- Language :
- English
- Database :
- OpenAIRE
- Accession number :
- edsair.doi.dedup.....39b679ba592b6286a31a0e6669af0009