Enhancement of text categorization results via an ensemble learning technique.
- Source :
- AIP Conference Proceedings. 2023, Vol. 2457 Issue 1, p1-9. 9p.
- Publication Year :
- 2023
Abstract
- Despite considerable success in knowledge discovery, conventional machine learning algorithms may fail to achieve satisfactory performance when dealing with imbalanced, complex, noisy, and high-dimensional data. In this context, it is essential to consider how to efficiently build an adequate knowledge and mining model. Ensemble learning aims to consolidate classical machine learning (ML) algorithms, data modeling, and data mining into a unified framework. Text categorization is a critical application that uses the unified ensemble learning framework to detect a new article's class. This paper develops a two-layer stacking ensemble model containing different ML algorithms. Since a stacking model consists of stacked layers, each built from multiple ML algorithms, we constructed the first layer of our stacking model with three ML classifiers (Multinomial Naïve Bayes (MNB), logistic regression (LR), and k-Nearest Neighbor (k-NN)), while the second layer applies a random forest classification algorithm. The proposed stacking ensemble model is compared with the classical ML algorithms (MNB, LR, and k-NN) in terms of accuracy and error measures. The results show that the stacking model outperforms the MNB and k-NN algorithms, which reach accuracies of 89.72% and 89.75%, respectively, while LR achieves an accuracy of 91.5%, close to the proposed model's accuracy of 91.66%. [ABSTRACT FROM AUTHOR]
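The two-layer architecture described in the abstract (MNB, LR, and k-NN base classifiers feeding a random forest meta-classifier) can be sketched with scikit-learn's StackingClassifier. This is a minimal illustration under assumptions not stated in the abstract: the 20 Newsgroups corpus stands in for the authors' article data set, and the TF-IDF features, hyperparameters, and train/test split are illustrative defaults, not the paper's configuration.

```python
# Minimal sketch of a two-layer stacking ensemble for text categorization.
# Corpus, features, and hyperparameters are illustrative assumptions only.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical corpus: 20 Newsgroups stands in for the article collection.
data = fetch_20newsgroups(subset="all", remove=("headers", "footers", "quotes"))
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42)

# Layer 1: the three base classifiers named in the abstract.
base_learners = [
    ("mnb", MultinomialNB()),
    ("lr", LogisticRegression(max_iter=1000)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
]

# Layer 2: a random forest meta-classifier trained on the base-level
# class-probability outputs produced via internal cross-validation.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=RandomForestClassifier(n_estimators=100, random_state=42),
    stack_method="predict_proba",
)

# TF-IDF vectorization feeds all base learners with the same sparse features.
model = make_pipeline(TfidfVectorizer(stop_words="english"), stack)
model.fit(X_train, y_train)
print("Stacking accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Using predicted class probabilities (rather than hard labels) as the second-layer input is one common design choice for stacking; the abstract does not specify which variant the authors used.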
Details
- Language :
- English
- ISSN :
- 0094-243X
- Volume :
- 2457
- Issue :
- 1
- Database :
- Academic Search Index
- Journal :
- AIP Conference Proceedings
- Publication Type :
- Conference
- Accession number :
- 161652281
- Full Text :
- https://doi.org/10.1063/5.0122942