
Derivative-Free Optimization for Low-Rank Adaptation in Large Language Models

Authors :
Jin, Feihu
Liu, Yin
Tan, Ying
Publication Year :
2024

Abstract

Parameter-efficient tuning methods such as LoRA can achieve performance comparable to full model tuning while updating only a small fraction of the parameters. However, substantial computational resources are still required, because the process involves computing gradients and back-propagating through the entire model. Much recent effort has been devoted to derivative-free optimization methods, which avoid gradient computation and have shown greater robustness in few-shot settings. In this paper, we prepend low-rank modules to each self-attention layer of the model and employ two derivative-free optimization methods to optimize these low-rank modules at each layer alternately. Extensive results across various tasks and language models demonstrate that our proposed method achieves substantial improvements and exhibits clear advantages in memory usage and convergence speed over existing gradient-based parameter-efficient tuning and derivative-free optimization methods in few-shot settings.

Comment: 14 pages, 4 figures, 5 tables
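
The sketch below is only a toy illustration of the idea described in the abstract: low-rank (LoRA-style) modules attached to each layer of a frozen model, optimized one layer at a time with a gradient-free search that relies solely on forward-pass evaluations of the loss. The simple (1+1)-style accept/reject update, the layer structure, and all names and hyperparameters (`rank`, `n_layers`, `sigma`) are assumptions for illustration; the paper's actual derivative-free optimizers and attention-layer placement are not reproduced here.

```python
# Toy sketch (not the paper's algorithm): alternating, layer-wise,
# derivative-free optimization of low-rank adapter factors.
import numpy as np

rng = np.random.default_rng(0)

d, rank, n_layers = 16, 2, 4                              # hidden size, LoRA rank, layer count
W = [rng.normal(size=(d, d)) for _ in range(n_layers)]    # frozen base weights (never updated)
A = [np.zeros((d, rank)) for _ in range(n_layers)]        # low-rank factors to be tuned
B = [rng.normal(scale=0.01, size=(rank, d)) for _ in range(n_layers)]

X = rng.normal(size=(32, d))                              # toy inputs
Y = rng.normal(size=(32, d))                              # toy targets

def forward(A, B):
    """Toy 'model': each layer applies its frozen weight plus a low-rank update."""
    h = X
    for l in range(n_layers):
        h = np.tanh(h @ (W[l] + A[l] @ B[l]))
    return h

def loss(A, B):
    """Black-box objective: evaluated by forward passes only, no gradients."""
    return float(np.mean((forward(A, B) - Y) ** 2))

sigma = 0.05                                              # perturbation scale
best = loss(A, B)
for step in range(200):
    l = step % n_layers                                   # alternate over layers, one at a time
    # Propose a random perturbation of this layer's low-rank factors only.
    A_try = [a.copy() for a in A]
    B_try = [b.copy() for b in B]
    A_try[l] = A[l] + rng.normal(scale=sigma, size=A[l].shape)
    B_try[l] = B[l] + rng.normal(scale=sigma, size=B[l].shape)
    trial = loss(A_try, B_try)
    if trial < best:                                      # keep the proposal only if the loss improves
        A, B, best = A_try, B_try, trial

print(f"final loss: {best:.4f}")
```

Because only one layer's two small factor matrices are perturbed and evaluated per step, memory use stays close to that of inference, which is the practical advantage the abstract highlights over gradient-based tuning.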

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2403.01754
Document Type :
Working Paper