一种基于信息瓶颈的神经网络混合压缩方法 (A Hybrid Compression Method for Neural Networks Based on the Information Bottleneck).
- Source :
- Application Research of Computers / Jisuanji Yingyong Yanjiu. May 2021, Vol. 38, Issue 5, p1463-1467. 5p.
- Publication Year :
- 2021
Abstract
- How to deploy neural networks on mobile or embedded devices with limited computing and storage capability is a problem that the development of neural networks must confront. To compress model size and reduce computational load, this paper proposes a hybrid neural-network compression scheme based on the information bottleneck. The scheme uses the information bottleneck to locate redundant information between adjacent network layers, prunes the redundant neurons on that basis, and then applies ternary quantization to the remaining neurons to further reduce the model's storage footprint. Experimental results show that, compared with similar algorithms on the MNIST and CIFAR-10 datasets, the proposed method achieves a higher compression rate and a lower computational cost. [ABSTRACT FROM AUTHOR]
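- The two-stage pipeline described in the abstract (redundancy-guided neuron pruning followed by ternary quantization) can be illustrated with a minimal sketch. The importance score, keep ratio, and threshold heuristic below are placeholders assumed for illustration only; the paper's actual information-bottleneck redundancy criterion is not reproduced here.

```python
import numpy as np

def prune_neurons(weights, importance, keep_ratio=0.5):
    # Keep the most "important" output neurons (rows) of a layer.
    # `importance` is a stand-in for the information-bottleneck redundancy
    # score used in the paper, which is not detailed in this record.
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    keep = np.sort(np.argsort(importance)[::-1][:n_keep])
    return weights[keep], keep

def ternarize(weights, delta_scale=0.7):
    # Ternary quantization: map each weight to {-alpha, 0, +alpha}.
    # delta = delta_scale * mean(|w|) is a common TWN-style heuristic,
    # assumed here; it is not necessarily the paper's threshold.
    delta = delta_scale * np.mean(np.abs(weights))
    mask = np.abs(weights) > delta
    alpha = np.abs(weights[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(weights) * mask

# Toy usage: one fully connected layer with 8 output neurons and 16 inputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
score = np.abs(W).sum(axis=1)                 # placeholder importance score
W_pruned, kept = prune_neurons(W, score, keep_ratio=0.5)
W_ternary = ternarize(W_pruned)
print("kept neurons:", kept)
print("distinct weight values:", np.unique(W_ternary))
```

- In this sketch, pruning shrinks the layer's dimensions (and hence its computation), while ternarization leaves only three distinct weight values per layer, so the surviving weights can be stored in roughly two bits each plus one scale factor; this is the kind of storage saving the abstract refers to.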
- Subjects :
- *NEURAL development
*NEURONS
*STORAGE
*MEMORY
*ALGORITHMS
Details
- Language :
- Chinese
- ISSN :
- 1001-3695
- Volume :
- 38
- Issue :
- 5
- Database :
- Academic Search Index
- Journal :
- Application Research of Computers / Jisuanji Yingyong Yanjiu
- Publication Type :
- Academic Journal
- Accession Number :
- 150306851
- Full Text :
- https://doi.org/10.19734/j.issn.1001-3695.2020.01.0009