
Can Small Language Models Learn, Unlearn, and Retain Noise Patterns?

Authors:
Scaria, Nicy
Kennedy, Silvester John Joseph
Subramani, Deepak
Publication Year:
2024

Abstract

Small Language Models (SLMs) are generally considered more compact versions of large language models (LLMs), typically having fewer than 7 billion parameters. This study investigates the ability of SLMs to learn, retain, and subsequently eliminate noise patterns that are typically not found on the internet, where most pretraining datasets are sourced. Four pre-trained SLMs were used: Olmo 1B, Qwen1.5 1.8B, Gemma 2B, and Phi2 2.7B. The models were first instruction-tuned on noise-free data and tested for task execution with in-context learning. Noise patterns were then introduced to evaluate the models' learning and unlearning capabilities, and performance was assessed at various levels of training. Phi consistently excelled with word-level noise but performed the worst with character-level noise. Despite being the smallest, with approximately 1 billion parameters, Olmo performed consistently well on the tasks.
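The abstract distinguishes character-level from word-level noise but does not specify how the noise was injected. The sketch below is only an illustration of that distinction, not the paper's actual procedure; the function names, noise operations (random character substitution vs. within-word letter shuffling), and the noise rate are assumptions for demonstration.

```python
import random

random.seed(0)

def char_level_noise(text: str, rate: float = 0.1) -> str:
    """Illustrative character-level noise: randomly replace letters."""
    chars = list(text)
    for i, c in enumerate(chars):
        if c.isalpha() and random.random() < rate:
            chars[i] = random.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

def word_level_noise(text: str, rate: float = 0.1) -> str:
    """Illustrative word-level noise: shuffle the letters of randomly chosen words."""
    words = text.split()
    for i, w in enumerate(words):
        if random.random() < rate and len(w) > 1:
            letters = list(w)
            random.shuffle(letters)
            words[i] = "".join(letters)
    return " ".join(words)

# Example: corrupt a sentence at both granularities.
sentence = "the quick brown fox jumps over the lazy dog"
print(char_level_noise(sentence, rate=0.3))
print(word_level_noise(sentence, rate=0.3))
```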

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.00996
Document Type:
Working Paper