
Thick-Net: Parallel Network Structure for Sequential Modeling

Authors:
Li, Yu-Xuan
Liu, Jin-Yuan
Li, Liang
Guan, Xiang
Publication Year:
2019

Abstract

Recurrent neural networks have been widely used in sequence learning tasks. In previous studies, model performance has typically been improved by making the structure either wider or deeper. However, the former becomes more prone to overfitting, while the latter is difficult to optimize. In this paper, we propose a simple new model named Thick-Net, which expands the network along another dimension: thickness. Multiple parallel values are computed in each hidden state via additional sets of parameters, and the maximum among these parallel intermediate outputs is selected as the final output. Notably, Thick-Net efficiently avoids overfitting and is easier to optimize than the vanilla structures, owing to the large dropout associated with it. Our model is evaluated on four sequential tasks: the adding problem, permuted sequential MNIST, text classification, and language modeling. The results of these tasks demonstrate that our model not only improves accuracy with faster convergence but also generalizes better.

Comment: 11 pages, 4 figures
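The abstract only sketches the mechanism, so the following is a minimal illustrative sketch, not the authors' code: a PyTorch RNN cell where a "thickness" of k parameter sets produces k parallel candidate hidden states per step, and an elementwise max selects the final output. The class name ThickRNNCell, the tanh nonlinearity, and the default thickness of 4 are assumptions for illustration.

```python
import torch
import torch.nn as nn


class ThickRNNCell(nn.Module):
    """Hypothetical 'thick' RNN cell: k parallel candidates, elementwise max."""

    def __init__(self, input_size, hidden_size, thickness=4):
        super().__init__()
        # One set of input/recurrent weights per parallel "slice" (the thickness).
        self.input_maps = nn.ModuleList(
            [nn.Linear(input_size, hidden_size) for _ in range(thickness)]
        )
        self.recurrent_maps = nn.ModuleList(
            [nn.Linear(hidden_size, hidden_size, bias=False) for _ in range(thickness)]
        )

    def forward(self, x, h_prev):
        # Compute k parallel intermediate outputs for this time step.
        candidates = torch.stack(
            [torch.tanh(wi(x) + wh(h_prev))
             for wi, wh in zip(self.input_maps, self.recurrent_maps)],
            dim=0,
        )
        # Keep the elementwise maximum as the next hidden state.
        h_next, _ = candidates.max(dim=0)
        return h_next


if __name__ == "__main__":
    cell = ThickRNNCell(input_size=8, hidden_size=16, thickness=4)
    h = torch.zeros(2, 16)               # batch of 2
    for _ in range(5):                   # unroll over a short random sequence
        h = cell(torch.randn(2, 8), h)
    print(h.shape)                       # torch.Size([2, 16])
```

Since only one of the k candidates contributes to each output unit at each step, the remaining candidates receive no gradient there, which is one plausible reading of the "large dropout" effect mentioned in the abstract.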

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1911.08074
Document Type:
Working Paper