HMC-TRAN
- Source :
- ACM Great Lakes Symposium on VLSI
- Publication Year :
- 2021
- Publisher :
- ACM, 2021.
Abstract
- Although Transformer-based deep learning models have been widely used in many natural language processing (NLP) tasks as well as computer vision, they suffer from gigantic model size and long latency. Network pruning can reduce the computational cost and model size. However, existing works mainly focus on irregular (sparse) pruning, which often causes irregular computation and requires an extra index per remaining weight. In this work, we propose a Tensor-core inspired hierarchical model compression method to push the performance limit on modern GPUs. We present two modes of the two-step process. In the first mode, we use a Tensor-core aware block-based weight pruning method to exploit model sparsity in a coarse-grained manner and then use low-rank decomposition [33] to further reduce the weight storage in a fine-grained manner. In the second mode, we first use irregular pruning to achieve a highly sparse model and then apply the Tensor-core aware weight constraint on the sparse model to decompose the sparse matrix into several smaller but Tensor-core friendly sub-matrices. Experiments on Transformer and BERT-base models show that the proposed method outperforms the state of the art.
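The first mode described above pairs coarse-grained block pruning with a fine-grained low-rank factorization. The following is a minimal NumPy sketch of that two-step idea, not the paper's implementation: the 16x16 block geometry, keep ratio, rank, and function names (block_prune, low_rank) are illustrative assumptions chosen only to mirror Tensor-core-friendly tiling; the second mode (irregular pruning followed by a Tensor-core aware re-decomposition) would be sketched analogously.

```python
import numpy as np

def block_prune(weight, block=(16, 16), keep_ratio=0.5):
    """Coarse-grained step (assumed form): zero out whole blocks with the
    smallest L2 norm, so the surviving structure stays tile-aligned."""
    rows, cols = weight.shape
    br, bc = block
    assert rows % br == 0 and cols % bc == 0
    # Group the matrix into (rows/br) x (cols/bc) blocks and score each block.
    blocks = weight.reshape(rows // br, br, cols // bc, bc)
    norms = np.sqrt((blocks ** 2).sum(axis=(1, 3)))
    k = max(1, int(norms.size * keep_ratio))
    threshold = np.sort(norms, axis=None)[-k]
    mask = (norms >= threshold).astype(weight.dtype)
    # Broadcast the block-level mask back to element granularity.
    full_mask = np.repeat(np.repeat(mask, br, axis=0), bc, axis=1)
    return weight * full_mask

def low_rank(weight, rank=128):
    """Fine-grained step (assumed form): truncated SVD, W ~= U @ V."""
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    u_r = u[:, :rank] * s[:rank]
    v_r = vt[:rank, :]
    return u_r, v_r

# Mode 1 of the two-step process: block pruning, then low-rank decomposition.
w = np.random.randn(768, 768).astype(np.float32)
w_pruned = block_prune(w, block=(16, 16), keep_ratio=0.5)
u, v = low_rank(w_pruned, rank=128)
print("dense params:", w.size,
      "nonzero after block pruning:", int((w_pruned != 0).sum()),
      "low-rank params:", u.size + v.size)
```

Keeping the pruning granularity at whole blocks is what makes the result GPU-friendly: the remaining weights form dense tiles that map onto Tensor-core matrix-multiply fragments, avoiding the per-weight index overhead of irregular sparsity mentioned in the abstract.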
- Subjects :
- Computer science
Deep learning
Computation
Process (computing)
Hierarchical database model
Pruning (decision trees)
Artificial intelligence
Algorithm
Sparse matrix
Block (data storage)
Transformer (machine learning model)
Details
- Database :
- OpenAIRE
- Journal :
- Proceedings of the 2021 on Great Lakes Symposium on VLSI
- Accession number :
- edsair.doi...........6f951c3b2413f1ccefc7f0599997acf6