Leveraging Readability and Sentiment in Spam Review Filtering Using Transformer Models.
- Source :
- Computer Systems Science & Engineering; 2023, Vol. 45 Issue 2, p1439-1454, 16p
- Publication Year :
- 2023
-
Abstract
- Online reviews significantly influence decision-making in many aspects of society. The integrity of internet evaluations is crucial for both consumers and vendors. This concern necessitates the development of effective fake review detection techniques. The goal of this study is to identify fraudulent text reviews. A comparison is made between shill reviews and genuine reviews on sentiment and readability features, using semi-supervised language processing methods with a labeled and balanced Deceptive Opinion dataset. We analyze textual features accessible in internet reviews by merging sentiment mining approaches with readability. Overall, the research improves fake review screening by using various transformer models such as Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT (RoBERTa), XLNet (Transformer-XL based) and XLM-RoBERTa (Cross-lingual Language Model-RoBERTa). This proposed research extracts and classifies features from product reviews to increase the effectiveness of review filtering. As evidenced by the investigation, the application of transformer models improves the performance of spam review filtering compared with existing machine learning and deep learning models. [ABSTRACT FROM AUTHOR]
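- The abstract describes merging sentiment and readability features for review filtering. As a minimal sketch of that idea, the snippet below computes a Flesch Reading Ease score and a crude lexicon-based sentiment rate for a review text. The tiny sentiment lexicon and the specific feature choices here are illustrative assumptions, not the feature set or resources used by the authors, whose classifiers are fine-tuned transformer models rather than hand-built features.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count runs of vowels, minimum one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Tiny illustrative sentiment lexicon (an assumption, not the paper's resource).
POSITIVE = {"great", "excellent", "amazing", "wonderful", "perfect", "best"}
NEGATIVE = {"bad", "terrible", "awful", "worst", "dirty", "rude"}

def sentiment_score(text: str) -> float:
    """Net (positive - negative) lexicon hits per word, in [-1, 1]."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

review = "The hotel was amazing. Best stay ever. Perfect staff, wonderful rooms."
features = {
    "readability": flesch_reading_ease(review),
    "sentiment": sentiment_score(review),
}
print(features)
```

Features like these can be concatenated with a transformer's pooled output before the classification layer; the paper reports that the transformer-based classifiers outperform conventional machine learning and deep learning baselines on this task.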
Details
- Language :
- English
- ISSN :
- 0267-6192
- Volume :
- 45
- Issue :
- 2
- Database :
- Supplemental Index
- Journal :
- Computer Systems Science & Engineering
- Publication Type :
- Academic Journal
- Accession number :
- 161541176
- Full Text :
- https://doi.org/10.32604/csse.2023.029953