
Benchmarking Pre-trained Large Language Models' Potential Across Urdu NLP tasks

Authors :
Tahir, Munief Hassan
Shams, Sana
Fiaz, Layba
Adeeba, Farah
Hussain, Sarmad
Publication Year :
2024

Abstract

Large Language Models (LLMs) pre-trained on multilingual data have revolutionized natural language processing research by shifting from language- and task-specific model pipelines to a single model adapted to a variety of tasks. However, the majority of existing multilingual NLP benchmarks for LLMs provide evaluation data in only a few languages with little linguistic diversity. In addition, these benchmarks lack quality assessment against the respective state-of-the-art models. This study presents an in-depth examination of prominent LLMs, namely GPT-3.5-turbo, Llama2-7B-Chat, Bloomz 7B1 and Bloomz 3B, across 14 tasks using 15 Urdu datasets in a zero-shot setting, and compares and analyses their performance against state-of-the-art (SOTA) models. Our experiments show that SOTA models surpass all the encoder-decoder pre-trained language models on all Urdu NLP tasks with zero-shot learning. Our results further show that LLMs with fewer parameters but more language-specific data in the base model perform better than larger models trained on less language-specific data.
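
To illustrate the zero-shot setting referred to in the abstract, the sketch below shows how a single Urdu classification prompt might be issued to a decoder-only model without any task-specific fine-tuning. It assumes the Hugging Face transformers library and the small public bigscience/bloomz-560m checkpoint as a stand-in; the prompt template, task, and example sentence are illustrative assumptions, not the paper's actual experimental protocol.

```python
# Minimal zero-shot prompting sketch (assumed setup, not the paper's exact pipeline).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "bigscience/bloomz-560m"  # small stand-in for Bloomz 7B1 / 3B

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def zero_shot_sentiment(urdu_sentence: str) -> str:
    """Label an Urdu sentence via a zero-shot prompt, with no fine-tuning."""
    prompt = (
        "Classify the sentiment of the following Urdu sentence as "
        f"positive or negative.\nSentence: {urdu_sentence}\nSentiment:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=5)
    # Decode only the tokens generated after the prompt.
    generated = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(generated, skip_special_tokens=True).strip()

if __name__ == "__main__":
    # Example sentence: "This film was very good."
    print(zero_shot_sentiment("یہ فلم بہت اچھی تھی۔"))
```

The same pattern extends to the other tasks in the benchmark by swapping the prompt template and post-processing the generated text into the task's label space.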

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.15453
Document Type :
Working Paper