1. SLM-Mod: Small Language Models Surpass LLMs at Content Moderation
- Authors
Zhan, Xianyang; Goyal, Agam; Chen, Yilun; Chandrasekharan, Eshwar; Saha, Koustuv
- Subjects
Computer Science - Computation and Language
- Abstract
Large language models (LLMs) have shown promise in many natural language understanding tasks, including content moderation. However, these models can be expensive to query in real time and do not allow for a community-specific approach to content moderation. To address these challenges, we explore the use of open-source small language models (SLMs) for community-specific content moderation tasks. We fine-tune and evaluate SLMs (fewer than 15B parameters), comparing their performance against much larger open- and closed-source models. Using 150K comments from 15 popular Reddit communities, we find that SLMs outperform LLMs at content moderation, with 11.5% higher accuracy and 25.7% higher recall on average across all communities. We further show the promise of cross-community content moderation, which has implications for new communities and the development of cross-platform moderation techniques. Finally, we outline directions for future work on language-model-based content moderation. Code and links to HuggingFace models can be found at https://github.com/AGoyal0512/SLM-Mod.
- Comment
Preprint: 15 pages, 8 figures
- Published
2024
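
The abstract's core recipe, fine-tuning a sub-15B open-source model as a per-community comment classifier, can be sketched with the HuggingFace Transformers API. This is a minimal, hypothetical illustration: the base model, toy data, and hyperparameters are assumptions, not the authors' released configuration (see the linked repository for that).

```python
# Hedged sketch of the abstract's setup: fine-tune a small (<15B) open
# model as a binary moderation classifier for one Reddit community.
# The model name, toy data, and hyperparameters are illustrative
# assumptions; see https://github.com/AGoyal0512/SLM-Mod for the
# authors' actual code and models.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Toy community-specific data: comment text with a binary label
# (1 = removed by moderators, 0 = kept).
train_data = Dataset.from_dict({
    "text": ["a comment the mods kept", "a comment the mods removed"],
    "label": [0, 1],
})

model_name = "Qwen/Qwen2.5-0.5B"  # assumed stand-in for any sub-15B SLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2
)

# Decoder-only SLMs often ship without a padding token; reuse EOS.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
    model.config.pad_token_id = tokenizer.pad_token_id

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_data = train_data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="slm-mod-sketch",
        per_device_train_batch_size=8,
        num_train_epochs=3,
    ),
    train_dataset=train_data,
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```

Once fine-tuned, the checkpoint can be served locally per community, which is what lets a small model sidestep both the real-time query cost and the one-size-fits-all behavior the abstract attributes to large hosted LLMs.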