
A Survey of Small Language Models

Authors:
Van Nguyen, Chien
Shen, Xuan
Aponte, Ryan
Xia, Yu
Basu, Samyadeep
Hu, Zhengmian
Chen, Jian
Parmar, Mihir
Kunapuli, Sasidhar
Barrow, Joe
Wu, Junda
Singh, Ashish
Wang, Yu
Gu, Jiuxiang
Dernoncourt, Franck
Ahmed, Nesreen K.
Lipka, Nedim
Zhang, Ruiyi
Chen, Xiang
Yu, Tong
Kim, Sungchul
Deilamsalehy, Hanieh
Park, Namyong
Rimer, Mike
Zhang, Zhehao
Yang, Huanrui
Rossi, Ryan A.
Nguyen, Thien Huu
Publication Year:
2024

Abstract

Small Language Models (SLMs) have become increasingly important due to their efficiency and strong performance on a variety of language tasks with minimal computational resources, making them well suited to many settings, including on-device, mobile, and edge deployments. In this article, we present a comprehensive survey of SLMs, focusing on their architectures, training techniques, and model compression techniques. We propose a novel taxonomy for categorizing the methods used to optimize SLMs, including model compression, pruning, and quantization techniques. We summarize the benchmark datasets useful for evaluating SLMs along with the evaluation metrics commonly used. Additionally, we highlight key open challenges that remain to be addressed. Our survey aims to serve as a valuable resource for researchers and practitioners interested in developing and deploying small yet efficient language models.
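As a toy illustration of one of the compression techniques the survey covers, the sketch below shows per-tensor symmetric 8-bit weight quantization. The function names and NumPy-based setup are illustrative assumptions, not code or methods from the paper itself.

```python
# Minimal sketch of per-tensor symmetric int8 weight quantization
# (illustrative only; not taken from the surveyed paper).
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 using a single symmetric scale factor."""
    scale = np.max(np.abs(weights)) / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(4, 4).astype(np.float32)
    q, scale = quantize_int8(w)
    w_hat = dequantize(q, scale)
    print("max absolute reconstruction error:", np.abs(w - w_hat).max())
```

In practice, per-channel scales, asymmetric zero points, or quantization-aware training reduce the reconstruction error further; this per-tensor variant is shown only because it is the simplest to state.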

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2410.20011
Document Type:
Working Paper