Enhancing Misinformation Detection in Spanish Language with Deep Learning: BERT and RoBERTa Transformer Models.
- Source :
- Applied Sciences (2076-3417); Nov2024, Vol. 14 Issue 21, p9729, 27p
- Publication Year :
- 2024
Abstract
- This paper presents an approach to identifying political fake news in Spanish using Transformer architectures. Current methodologies often overlook political news due to the lack of quality datasets, especially in Spanish. To address this, we created a synthetic dataset of 57,231 Spanish political news articles, gathered via automated web scraping and enhanced with generative large language models. This dataset is used for fine-tuning and benchmarking Transformer models like BERT and RoBERTa for fake news detection. Our fine-tuned models showed outstanding performance on this dataset, with accuracy ranging from 97.4% to 98.6%. However, testing with a smaller, independent hand-curated dataset, including statements from political leaders during Spain's July 2023 electoral debates, revealed a performance drop to 71%. Although this suggests that the model needs additional refinements to handle the complexity and variability of real-world political discourse, achieving over 70% accuracy seems a promising result in the under-explored domain of Spanish political fake news detection. [ABSTRACT FROM AUTHOR]
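The abstract describes fine-tuning BERT-family Transformer models for binary fake-news classification and evaluating accuracy on both the synthetic benchmark and an independent hand-curated set. The sketch below shows what such a fine-tuning and evaluation pipeline could look like with the Hugging Face `transformers` library; the checkpoint name (`dccuchile/bert-base-spanish-wwm-cased`, i.e. BETO) and all hyperparameters are assumptions, since the record does not specify them.

```python
# Hedged sketch of the fine-tuning/evaluation setup described in the abstract.
# The checkpoint and hyperparameters are illustrative assumptions, not the
# authors' actual configuration.
import numpy as np


def compute_metrics(eval_pred):
    """Accuracy over a labeled evaluation set (0 = real, 1 = fake)."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}


def fine_tune(train_texts, train_labels, eval_texts, eval_labels):
    # Heavy imports are kept inside the function so the sketch can be
    # inspected without downloading model weights.
    import torch
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    model_name = "dccuchile/bert-base-spanish-wwm-cased"  # assumed checkpoint
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=2)

    class NewsDataset(torch.utils.data.Dataset):
        """Wraps tokenized articles and labels for the Trainer."""
        def __init__(self, texts, labels):
            self.enc = tok(texts, truncation=True, padding=True,
                           max_length=512)
            self.labels = labels

        def __len__(self):
            return len(self.labels)

        def __getitem__(self, i):
            item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
            item["labels"] = torch.tensor(self.labels[i])
            return item

    args = TrainingArguments(output_dir="out", num_train_epochs=3,
                             per_device_train_batch_size=16)
    trainer = Trainer(model=model, args=args,
                      train_dataset=NewsDataset(train_texts, train_labels),
                      eval_dataset=NewsDataset(eval_texts, eval_labels),
                      compute_metrics=compute_metrics)
    trainer.train()
    return trainer.evaluate()  # includes "eval_accuracy"
```

Evaluating the same fine-tuned model against a second, hand-curated dataset (as the authors do with the July 2023 electoral-debate statements) would reuse `compute_metrics` on that held-out set, making the in-distribution vs. real-world accuracy gap directly comparable.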
- Subjects :
- LANGUAGE models
TRANSFORMER models
SPANISH language
FAKE news
POLITICIANS
Details
- Language :
- English
- ISSN :
- 2076-3417
- Volume :
- 14
- Issue :
- 21
- Database :
- Complementary Index
- Journal :
- Applied Sciences (2076-3417)
- Publication Type :
- Academic Journal
- Accession number :
- 180782742
- Full Text :
- https://doi.org/10.3390/app14219729