Hate Speech Detection Using Large Language Models: A Comprehensive Review
- Source :
- IEEE Access, Vol. 13, pp. 20871-20892 (2025)
- Publication Year :
- 2025
- Publisher :
- IEEE, 2025.
Abstract
- The widespread use of social media and other online platforms has facilitated unprecedented communication and information exchange. However, it has also enabled the spread of hate speech, posing serious challenges to societal harmony and individual well-being. Traditional methods for detecting hate speech, such as keyword matching, rule-based systems, and machine learning algorithms, often struggle to capture the subtle and context-dependent nature of hateful content. This paper provides a comprehensive review of the application of large language models (LLMs) such as GPT-3, BERT, and their successors to hate speech detection. We analyze the evolution of LLMs in natural language processing and examine their strengths and limitations in identifying hate speech. Additionally, we discuss the significant challenges in this area and explore how LLM-based methods can affect the accuracy and fairness of hate speech detection systems. By synthesizing recent research, this review aims to offer a holistic understanding of current state-of-the-art methods for hate speech detection using LLMs and to suggest directions for future research that could enhance the efficacy and equity of these systems.
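- Illustration (not from the review itself): a minimal sketch of how a BERT-style classifier might be applied to hate speech detection via the Hugging Face transformers pipeline. The model identifier below is hypothetical and stands in for any transformer fine-tuned on a hate speech corpus; it is not a checkpoint named by the paper.

```python
# Minimal sketch: scoring posts with a fine-tuned BERT-style hate speech classifier.
# Assumes the Hugging Face `transformers` library is installed; the model id is
# hypothetical and only illustrates the general LLM-based classification approach.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="example-org/bert-hate-speech",  # hypothetical fine-tuned checkpoint
)

posts = [
    "Thanks everyone for the warm welcome!",
    "People like you don't belong in this country.",
]

for post in posts:
    result = classifier(post)[0]  # e.g. {"label": "hate", "score": 0.97}
    print(f"{result['label']:>10}  {result['score']:.3f}  {post}")
```

- Unlike keyword matching or rule-based filters, such a classifier conditions on the full sentence, which is what lets it pick up the context-dependent cues the abstract highlights.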
Details
- Language :
- English
- ISSN :
- 2169-3536
- Volume :
- 13
- Database :
- Directory of Open Access Journals
- Journal :
- IEEE Access
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.bb1e5c114f7845fba621e09c5cbaba93
- Document Type :
- article
- Full Text :
- https://doi.org/10.1109/ACCESS.2025.3532397