
Impact of Disentanglement on Pruning Neural Networks

Authors :
Shneider, Carl
Rostami, Peyman
Kacem, Anis
Sinha, Nilotpal
Shabayek, Abd El Rahman
Aouada, Djamila
Publication Year :
2023

Abstract

Deploying deep learning neural networks on edge devices, to accomplish task-specific objectives in the real world, requires a reduction in their memory footprint, power consumption, and latency. This can be realized via efficient model compression. Disentangled latent representations produced by variational autoencoder (VAE) networks are a promising approach for achieving model compression because they mainly retain task-specific information, discarding information that is useless for the task at hand. We make use of the Beta-VAE framework combined with a standard criterion for pruning to investigate the impact of forcing the network to learn disentangled representations on the pruning process for the task of classification. In particular, we perform experiments on the MNIST and CIFAR-10 datasets, examine disentanglement challenges, and propose a path forward for future work.

Comment: Presented in ISCS23
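The abstract combines two standard components: the Beta-VAE objective, which up-weights the KL term to encourage disentangled latents, and a conventional pruning criterion. The paper does not specify its criterion in the abstract, so the sketch below assumes magnitude-based pruning for illustration; the loss and function names are illustrative, not taken from the authors' code.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Beta-VAE objective: reconstruction error plus a beta-weighted KL term."""
    # Reconstruction term: squared error summed over features.
    recon = np.sum((x - x_recon) ** 2)
    # KL divergence between q(z|x) = N(mu, diag(exp(logvar))) and the prior N(0, I).
    kl = -0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar))
    # beta > 1 penalizes the KL term more strongly, which is what
    # pushes the encoder toward disentangled latent factors.
    return recon + beta * kl

def magnitude_prune(weights, sparsity=0.5):
    """Standard magnitude criterion: zero the fraction `sparsity`
    of weights with the smallest absolute value."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask
```

With beta = 1 this reduces to the ordinary VAE evidence lower bound; raising beta trades reconstruction fidelity for disentanglement, which is the knob whose effect on pruning the paper studies.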

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2307.09994
Document Type :
Working Paper