1. Optimized Deep Stacked Long Short-Term Memory Network for Long-Term Load Forecasting
- Authors
- Ehab E. Elattar and Tamer Ahmed Farrag
- Subjects
General Computer Science, Computer Engineering, Electrical Engineering, load forecasting, deep learning, machine learning, artificial intelligence, data modeling, electric power systems, genetic algorithm, hyperparameter, recurrent neural network, stacked long short-term memory network - Abstract
Long-term load forecasting (LTLF) is an essential process for the strategic planning of future extensions needed in the power system of any country. Deep learning has become central to the machine learning paradigm; it is widely used in many fields and is driving the current revolution in Artificial Intelligence (AI). In this paper, an optimized deep learning model based on a Stacked Long Short-Term Memory Network (SLSTMN) is proposed. The architecture of the model is optimized using a Genetic Algorithm (GA) to find the best configuration, and the hyperparameters of the network are tuned using several deep learning techniques. During the optimization process, hundreds of model configurations are tested. The accuracy of this model is compared with that of several deep learning models and with related work in the field of LTLF. The dataset of the South Australia (SA) power system is used to test the compared models. It includes the maximum daily load, daily maximum temperature, daily minimum temperature, weekday, month, and holidays for 12 years, from 2005 to 2016. SLSTMN achieves excellent accuracy and the lowest error (almost 1%) compared with other models on the same dataset and with related-work models on different datasets.
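The abstract describes using a Genetic Algorithm to search over SLSTMN configurations. The paper's actual search space and fitness function are not given here, so the following is a minimal, self-contained sketch of that idea: a GA evolving hypothetical architecture hyperparameters (layer count, units, dropout) against a synthetic surrogate fitness standing in for the validation forecasting error of a trained stacked LSTM. All names and values are illustrative assumptions, not taken from the paper.

```python
import random

# Hypothetical search space for a stacked-LSTM architecture (illustrative only):
# number of stacked LSTM layers, units per layer, and dropout rate.
SEARCH_SPACE = {
    "layers": [1, 2, 3, 4],
    "units": [32, 64, 128, 256],
    "dropout": [0.0, 0.1, 0.2, 0.3],
}

def random_config():
    """Sample one candidate architecture from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(cfg):
    # Placeholder surrogate: in the paper, fitness would be the (negated)
    # validation error of a stacked LSTM trained with this configuration.
    # Here we simply favour a fixed "sweet spot" so the GA has something to find.
    return (-abs(cfg["layers"] - 3)
            - abs(cfg["units"] - 128) / 64
            - abs(cfg["dropout"] - 0.2))

def crossover(a, b):
    """Uniform crossover: each gene is inherited from either parent."""
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(cfg, rate=0.2):
    """Resample each gene independently with the given probability."""
    return {k: (random.choice(v) if random.random() < rate else cfg[k])
            for k, v in SEARCH_SPACE.items()}

def genetic_search(pop_size=20, generations=30, seed=0):
    """Evolve configurations; elitist truncation selection keeps the top half."""
    random.seed(seed)
    population = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = genetic_search()
print(best)
```

Because the top half of each generation survives unchanged, the best configuration found never regresses, which is a common elitism choice when each fitness evaluation (here, a full model training run) is expensive.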
- Published
- 2021