1. An Iterative Missing Value Handling Technique for Mitigating Performance Degradation of Machine Learning Models.
- Author
- 이종관 and 이민우
- Abstract
Machine learning models find extensive application across diverse domains, and their performance depends heavily on the quality of the data used during training. Real-world datasets, however, contain missing values due to limitations and errors in data collection methods, incomplete or inconsistent data-gathering processes, and human error during processing. Effective handling of missing values is therefore essential for maintaining model performance. A common approach is either to delete the records containing missing values or to impute them appropriately. Deletion is straightforward but comes at the cost of information loss, while imputation can reduce the variability of the dataset and skew the correlations between variables. The proposed scheme reduces dimensionality using the variables without missing values and uses the resulting representation to estimate the missing values. Experimental validation confirms that the proposed scheme mitigates the performance degradation of various machine learning models compared with existing methods.
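The abstract describes the method only at a high level. The sketch below is a rough illustration of that general idea, not the authors' algorithm: it assumes PCA as the dimensionality-reduction step and linear regression as the estimator, and all function and variable names are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression


def impute_via_complete_columns(X, n_components=2, n_iter=5):
    """Fill np.nan entries of a 2-D float array X (hypothetical helper)."""
    X = np.asarray(X, dtype=float).copy()
    missing_mask = np.isnan(X)
    complete_cols = ~missing_mask.any(axis=0)
    incomplete_cols = np.where(missing_mask.any(axis=0))[0]
    if complete_cols.sum() == 0 or incomplete_cols.size == 0:
        return X  # nothing to work with, or nothing to fill

    use_cols = complete_cols.copy()
    for _ in range(n_iter):
        # 1) Dimensionality reduction on columns that currently have no gaps.
        n_comp = min(n_components, int(use_cols.sum()))
        Z = PCA(n_components=n_comp).fit_transform(X[:, use_cols])

        # 2) Regress each incomplete column on the low-dimensional
        #    representation and fill its gaps with the predictions.
        #    (Assumes every incomplete column has at least one observed value.)
        for j in incomplete_cols:
            miss = missing_mask[:, j]
            reg = LinearRegression().fit(Z[~miss], X[~miss, j])
            X[miss, j] = reg.predict(Z[miss])

        # After the first pass every column is filled, so later passes can
        # use all columns in the dimensionality-reduction step.
        use_cols = ~np.isnan(X).any(axis=0)
    return X


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    X[::7, 3] = np.nan                     # introduce gaps in one column
    X_filled = impute_via_complete_columns(X)
    print(int(np.isnan(X_filled).sum()))   # -> 0
```

Repeating the fill step is what makes the approach iterative: once the gaps have been filled, later passes can include the previously incomplete columns in the dimensionality-reduction step and refine the estimates.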
- Published
- 2024