
FlexiGPT: Pruning and Extending Large Language Models with Low-Rank Weight Sharing

Authors:
Smith, James Seale
Lin, Chi-Heng
Tuli, Shikhar
Jeelani, Haris
Gao, Shangqian
Shen, Yilin
Jin, Hongxia
Hsu, Yen-Chang
Publication Year:
2025

Abstract

The rapid proliferation of large language models (LLMs) in natural language processing (NLP) has created a critical need for techniques that enable efficient deployment on memory-constrained devices without compromising performance. We present a method for pruning LLMs that selectively removes model blocks according to an importance score and replaces them with a low-parameter substitute. Specifically, we propose a principled metric for replacing each pruned block with a weight-sharing mechanism that leverages unpruned counterparts from the model together with block-specific low-rank adapters. Furthermore, we facilitate the learning of these replacement blocks with output feature normalization and an adapter initialization scheme built on low-rank SVD reconstructions. Empirical evaluations demonstrate substantial gains over existing methods, achieving state-of-the-art performance on 5/6 benchmarks at a 30% compression rate and on 6/6 benchmarks at a 40% compression rate. We also demonstrate that our approach can extend smaller models, boosting performance on 6/6 benchmarks using only ~0.3% of tokens in extended training, with minimal additional parameter cost.

Comment: Accepted to NAACL 2025 - Main Conference
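The replacement strategy described in the abstract can be illustrated with a minimal sketch. The following PyTorch snippet is not the authors' implementation: it assumes a pruned block's weight is approximated by a frozen weight shared from an unpruned counterpart plus a trainable block-specific low-rank adapter initialized from a truncated SVD of the weight residual, with LayerNorm standing in for the output feature normalization. The class name LowRankSharedBlock, the rank, and all tensor shapes are illustrative.

```python
# Hypothetical sketch, not the FlexiGPT implementation: a pruned block's weight
# W_pruned is approximated by a frozen shared weight W_shared (reused from an
# unpruned counterpart) plus a trainable low-rank adapter B @ A, initialized
# from a truncated SVD of the residual W_pruned - W_shared.
import torch
import torch.nn as nn


class LowRankSharedBlock(nn.Module):
    def __init__(self, w_shared: torch.Tensor, w_pruned: torch.Tensor, rank: int = 8):
        super().__init__()
        # Shared weight is reused from the unpruned block and kept frozen.
        self.register_buffer("w_shared", w_shared)
        # Best rank-r approximation of the residual via truncated SVD.
        u, s, vh = torch.linalg.svd(w_pruned - w_shared, full_matrices=False)
        self.b = nn.Parameter(u[:, :rank] * s[:rank])  # (out_features, rank)
        self.a = nn.Parameter(vh[:rank, :])            # (rank, in_features)
        # Stand-in for the paper's output feature normalization.
        self.norm = nn.LayerNorm(w_shared.shape[0])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W_shared^T + x (B A)^T, then normalize the output features.
        y = x @ self.w_shared.T + (x @ self.a.T) @ self.b.T
        return self.norm(y)


# Illustrative usage with random matrices standing in for real block weights.
w_keep, w_drop = torch.randn(64, 64), torch.randn(64, 64)
block = LowRankSharedBlock(w_keep, w_drop, rank=4)
out = block(torch.randn(2, 64))  # shape: (2, 64)
```

Under these assumptions, only the two adapter matrices and the normalization are trained per replaced block, which is what keeps the added parameter cost small relative to a full transformer block.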

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2501.14713
Document Type:
Working Paper