
Pushing The Limit of LLM Capacity for Text Classification

Authors:
Zhang, Yazhou
Wang, Mengyao
Ren, Chenyu
Li, Qiuchi
Tiwari, Prayag
Wang, Benyou
Qin, Jing
Publication Year: 2024

Abstract

The future research value of text classification has come into question, given the extraordinary efficacy demonstrated by large language models (LLMs) across numerous downstream NLP tasks. In this era of open-ended language modeling, where task boundaries are gradually fading, an urgent question emerges: have we made significant advances in text classification by taking full advantage of LLMs? To answer this question, we propose RGPT, an adaptive boosting framework tailored to produce a specialized text classification LLM by recurrently ensembling a pool of strong base learners. The base learners are constructed by adaptively adjusting the distribution of training samples and iteratively fine-tuning LLMs on them. These base learners are then ensembled into a specialized text classification LLM by recurrently incorporating the historical predictions of the previous learners. Through a comprehensive empirical comparison, we show that RGPT significantly outperforms 8 SOTA PLMs and 7 SOTA LLMs on four benchmarks, by 1.36% on average. Further evaluation experiments show that RGPT clearly surpasses human classification performance.
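The abstract describes an AdaBoost-style recipe: fine-tune a base learner, re-weight the training samples toward the examples it got wrong, fine-tune the next learner on the new distribution, and combine the pool into one classifier. The sketch below illustrates that general loop only, under stated assumptions; it is not the paper's RGPT implementation. In particular, `fine_tune_llm(texts, labels, weights)` is a hypothetical callable standing in for one round of weighted LLM fine-tuning, and the sketch uses a SAMME-style weighted vote rather than RGPT's recurrent incorporation of previous learners' predictions.

```python
# Minimal sketch of adaptive boosting over fine-tuned LLM base learners.
# `fine_tune_llm` is a hypothetical placeholder, not the paper's API: it should
# fine-tune one base learner on the weighted data and return a predict function.
import numpy as np

def boost_llm_classifiers(texts, labels, num_classes, fine_tune_llm, rounds=4):
    """Train `rounds` base learners, re-weighting samples after each round."""
    labels = np.asarray(labels)
    n = len(texts)
    weights = np.full(n, 1.0 / n)          # start from a uniform sample distribution
    learners, alphas = [], []

    for _ in range(rounds):
        # Fine-tune one base learner on the current sample distribution
        # (e.g., by weighted sampling of the training set).
        predict = fine_tune_llm(texts, labels, weights)
        preds = np.array([predict(t) for t in texts])

        # Weighted training error of this round's learner.
        miss = (preds != labels).astype(float)
        err = np.clip(np.dot(weights, miss), 1e-10, 1.0 - 1e-10)

        # Multi-class (SAMME-style) learner weight.
        alpha = np.log((1.0 - err) / err) + np.log(num_classes - 1)

        # Up-weight misclassified samples so the next learner focuses on them.
        weights *= np.exp(alpha * miss)
        weights /= weights.sum()

        learners.append(predict)
        alphas.append(alpha)

    def ensemble_predict(text):
        # Weighted vote over the pool of base learners.
        votes = np.zeros(num_classes)
        for alpha, predict in zip(alphas, learners):
            votes[predict(text)] += alpha
        return int(np.argmax(votes))

    return ensemble_predict
```

The re-weighting step is what makes the ensemble adaptive: each new learner is fine-tuned on a distribution that emphasizes the samples its predecessors misclassified, while the per-learner weight `alpha` rewards learners whose weighted error is low.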

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2402.07470
Document Type: Working Paper