
Hybrid Deep Learning Model Based on Sparse Recurrent Architecture.

Authors :
Wu, Yutao
Liu, Min
Source :
Journal of Circuits, Systems & Computers. 5/15/2024, Vol. 33 Issue 7, p1-17. 17p.
Publication Year :
2024

Abstract

Deep neural networks have achieved remarkable results in natural language processing, image pattern classification and recognition, and other domains in recent years. However, they remain difficult to deploy on hardware-constrained or mobile equipment because of their huge number of parameters and high storage and computing costs. In this paper, a new sparse iterative neural network architecture is proposed. First, a pruning method is used to compress the model size and make the network sparse. Then the architecture is iterated on the sparse network model, and network performance is improved without adding extra parameters. Finally, the hybrid deep learning model is evaluated on CV and NLP tasks with ANN, CNN, and Transformer backbones. Compared with the sparse network architecture, the accuracy on the MNIST, CIFAR10, PASCAL VOC 2012, and SQuAD datasets is improved by 0.47%, 0.64%, 3.75%, and 15.06%, respectively. [ABSTRACT FROM AUTHOR]
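The abstract's first step, compressing the model by pruning, is commonly done with magnitude-based pruning, where the smallest-magnitude weights are zeroed out. The sketch below illustrates that idea in plain Python; the paper's exact pruning criterion is not given in the abstract, so the threshold rule and function name here are assumptions for illustration only.

```python
# Minimal sketch of magnitude-based pruning (an assumed criterion;
# the abstract does not specify the paper's exact pruning method).
def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the `sparsity` fraction of weights with smallest |w|.

    `weights` is a 2-D list representing one layer's weight matrix.
    """
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)          # number of weights to remove
    threshold = flat[k - 1] if k > 0 else float("-inf")
    # Keep only weights whose magnitude exceeds the cutoff.
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]

# Example: prune half of a tiny 2x3 weight matrix.
weights = [[0.1, -0.8, 0.05],
           [0.9, -0.02, 0.4]]
pruned = prune_by_magnitude(weights, sparsity=0.5)
# The three smallest-magnitude entries (0.1, 0.05, -0.02) are zeroed.
```

In the paper's pipeline, the pruned (sparse) network would then be iterated, reusing the same sparse weights across steps, which is how performance improves without adding parameters.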

Details

Language :
English
ISSN :
0218-1266
Volume :
33
Issue :
7
Database :
Academic Search Index
Journal :
Journal of Circuits, Systems & Computers
Publication Type :
Academic Journal
Accession number :
176812600
Full Text :
https://doi.org/10.1142/S0218126624501202