
LayerCollapse: Adaptive compression of neural networks

Authors:
Shabgahi, Soheil Zibakhsh
Shariff, Mohammad Sohail
Koushanfar, Farinaz
Publication Year:
2023

Abstract

Handling the ever-increasing scale of contemporary deep learning and transformer-based models poses a significant challenge. Overparameterized Transformer networks outperform prior art in Natural Language Processing and Computer Vision. These models contain hundreds of millions of parameters, demanding significant computational resources and making them prone to overfitting on downstream tasks. In this work, we present LayerCollapse, a novel structured pruning method to reduce the depth of fully connected layers. We propose an innovative regularizer that promotes shallow fully connected layers, compressing the model with minimal performance impact. This regularizer enables post-training compression without fine-tuning while preserving performance. LayerCollapse controls model expressiveness by regularizing the activation functions between fully connected layers, modulating them toward linearity. A linear activation function collapses the rank of a transformation to the rank of the corresponding linear transformation, which demands fewer hardware resources. We demonstrate the effectiveness of LayerCollapse by showing its compression capabilities on sentiment analysis, text generation, and image classification benchmarks.
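To make the collapse step concrete, the sketch below shows how two consecutive fully connected layers can be merged into one once the activation between them has become linear: y = W2(W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2). This is a minimal PyTorch illustration written from the abstract's description; the function name, the merging helper, and the usage are assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

def collapse_linear_pair(fc1: nn.Linear, fc2: nn.Linear) -> nn.Linear:
    """Merge fc2(fc1(x)) into a single Linear layer.

    Valid only when the activation between fc1 and fc2 is the identity,
    i.e. after the regularizer has driven it to linearity (assumption).
    """
    merged = nn.Linear(fc1.in_features, fc2.out_features, bias=True)
    with torch.no_grad():
        # W = W2 W1
        merged.weight.copy_(fc2.weight @ fc1.weight)
        b1 = fc1.bias if fc1.bias is not None else torch.zeros(fc1.out_features)
        b2 = fc2.bias if fc2.bias is not None else torch.zeros(fc2.out_features)
        # b = W2 b1 + b2
        merged.bias.copy_(fc2.weight @ b1 + b2)
    return merged

# Illustrative usage: replace the pair with the merged layer and check equivalence.
fc1, fc2 = nn.Linear(512, 2048), nn.Linear(2048, 512)
merged = collapse_linear_pair(fc1, fc2)
x = torch.randn(4, 512)
assert torch.allclose(merged(x), fc2(fc1(x)), atol=1e-4)
```

The merged layer replaces one hidden dimension's worth of weights and activations with a single matrix, which is where the claimed reduction in depth and resource demand comes from.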

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2311.17943
Document Type:
Working Paper