Adversarial training-based robust lifetime prediction system for power transformers.
- Author
- Tusher, Animesh Sarkar; Rahman, M.A.; Islam, Md. Rashidul; Hossain, M.J.
- Subjects
- *REMAINING useful life, *MACHINE learning, *SMART devices, *POWER transformers, *BOOSTING algorithms, *ELECTRIC power distribution grids, *RANDOM forest algorithms
- Abstract
Predictive maintenance, facilitated by smart devices and cyber infrastructure, is used for essential equipment such as power transformers, enhancing power grid stability and reducing operating costs. As part of predictive maintenance, machine learning (ML) methods are employed to predict the remaining useful life (RUL) of power transformers; these methods can be vulnerable to cyber-attacks, especially data contamination attacks. Hence, this work introduces false data injection (FDI) attacks into ML-based RUL prediction and investigates their impact on lifetime prediction. Three different FDI attack templates are implemented to corrupt the input data of extreme gradient boosting (XGBoost), extra trees (ETs), and random forest (RF)-based lifetime predictor models, where the individual attack templates prove more severe than a mixed attack template of FDI attacks. Adversarial training is then presented as a countermeasure, and the adversarially trained XGBoost model outperforms the other two models under both normal conditions and cyber-attacks. Experimental results indicate that the lifetime prediction errors of the proposed model can be maintained at about 6 RMSE and 3 MAE across all scenarios.
• Introducing cyber-attacks on ML-based power transformer lifetime predictors.
• Investigating the severity of single and mixed models of FDI attacks on predictors.
• Presenting a comparative analysis of predictors under normal cases and cyber-attacks.
• Proposing an adversarial training-based countermeasure to mitigate the severity.
[ABSTRACT FROM AUTHOR]
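The abstract describes corrupting predictor inputs with single and mixed FDI attack templates. The paper's exact templates are not given here, so the following is a minimal sketch with hypothetical templates (constant scaling, constant bias, and Gaussian noise injection) applied to a feature matrix of transformer health indicators; the adversarial-training countermeasure would then augment the clean training set with such attacked copies before fitting the regressor.

```python
import numpy as np

# Hypothetical FDI attack templates for a feature matrix X
# (rows = samples, columns = transformer health indicators).
# These are illustrative, not the paper's actual templates.

def scaling_attack(X, factor=1.3):
    """Single template: multiply every feature by a constant factor."""
    return X * factor

def bias_attack(X, offset=0.5):
    """Single template: add a constant offset to every feature."""
    return X + offset

def noise_attack(X, sigma=0.2, rng=None):
    """Single template: inject zero-mean Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    return X + rng.normal(0.0, sigma, size=X.shape)

def mixed_attack(X, rng=None):
    """Mixed template: apply one randomly chosen single template per sample."""
    if rng is None:
        rng = np.random.default_rng(1)
    templates = [scaling_attack, bias_attack, noise_attack]
    X_adv = X.copy()
    for i in range(X.shape[0]):
        attack = templates[rng.integers(len(templates))]
        X_adv[i] = attack(X[i:i + 1])
    return X_adv
```

In an adversarial-training setup along these lines, the attacked matrices would be stacked with the clean data (and the corresponding RUL targets duplicated) before training the XGBoost, ET, or RF model, so the predictor sees contaminated inputs during fitting.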
- Published
- 2024