
The Propensity for Density in Feed-forward Models

Authors: Schoots, Nandi; Jackson, Alex; Kholmovaia, Ali; McBurney, Peter; Shanahan, Murray
Source: ECAI 2024
Publication Year: 2024

Abstract

Does the process of training a neural network to solve a task tend to use all of the available weights even when the task could be solved with fewer weights? To address this question we study the effects of pruning fully connected, convolutional and residual models while varying their widths. We find that the proportion of weights that can be pruned without degrading performance is largely invariant to model size. Increasing the width of a model has little effect on the density of the pruned model relative to the increase in absolute size of the pruned network. In particular, we find substantial prunability across a large range of model sizes, where our biggest model is 50 times as wide as our smallest model. We explore three hypotheses that could explain these findings.
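The abstract describes a width sweep in which models are pruned and the fraction of weights that can be removed without degrading performance is measured. The sketch below is a minimal, hypothetical illustration of that kind of experiment using global magnitude pruning in PyTorch; it is not the authors' code, and the width sweep, tolerance, dummy validation data, and helper names are assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's protocol): magnitude-prune a
# fully connected model at increasing sparsities and record the largest
# fraction of weights that can be removed while the validation metric
# stays within a tolerance of the unpruned model.
import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def build_mlp(width: int) -> nn.Module:
    # Width-varying two-hidden-layer MLP, standing in for the paper's
    # fully connected models of different widths (hypothetical sizes).
    return nn.Sequential(
        nn.Linear(784, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, 10),
    )

def accuracy(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> float:
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def max_prunable_fraction(model, x_val, y_val, tol=0.01, step=0.05):
    """Largest global magnitude-pruning fraction whose accuracy drop stays
    within `tol` of the unpruned model (a simple sweep; the tolerance and
    step size are arbitrary placeholders)."""
    base_acc = accuracy(model, x_val, y_val)
    best, frac = 0.0, step
    while frac < 1.0:
        pruned = copy.deepcopy(model)
        params = [(m, "weight") for m in pruned.modules()
                  if isinstance(m, nn.Linear)]
        prune.global_unstructured(params,
                                  pruning_method=prune.L1Unstructured,
                                  amount=frac)
        if base_acc - accuracy(pruned, x_val, y_val) <= tol:
            best = frac
        frac += step
    return best

if __name__ == "__main__":
    # Dummy validation data, purely to make the sketch runnable.
    x_val = torch.randn(256, 784)
    y_val = torch.randint(0, 10, (256,))
    for width in (64, 256, 1024):  # hypothetical width sweep
        model = build_mlp(width)   # in practice this would be a trained model
        frac = max_prunable_fraction(model, x_val, y_val)
        print(f"width={width:5d}  prunable fraction ~ {frac:.2f}")
```

If the paper's finding holds, the reported prunable fraction would stay roughly constant as the width grows, which is the invariance the abstract refers to; the sketch only shows how such a fraction could be measured, not the training or evaluation setup used in the paper.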

Details

Database: arXiv
Journal: ECAI 2024
Publication Type: Report
Accession number: edsarx.2410.14461
Document Type: Working Paper
Full Text: https://doi.org/10.3233/FAIA240819