A novel ensemble deep reinforcement learning model for short‐term load forecasting based on Q‐learning dynamic model selection.
- Source:
- Journal of Engineering; Jul2024, Vol. 2024 Issue 7, p1-20, 20p
- Publication Year:
- 2024
Abstract
- Short‐term load forecasting is critical for power system planning and operations, and ensemble forecasting methods for electricity loads have been shown to be effective in obtaining accurate forecasts. However, the weights in ensemble prediction models are usually preset based on overall performance after training, which prevents the model from adapting to different scenarios and limits the improvement of prediction performance. To further improve the accuracy and validity of the ensemble prediction method, this paper proposes an ensemble deep reinforcement learning approach that uses Q‐learning dynamic weight assignment to account for local behaviours caused by changes in the external environment. First, variational mode decomposition is used to reduce the non‐stationarity of the original data by decomposing the load sequence. Then, a recurrent neural network (RNN), long short‐term memory (LSTM), and gated recurrent unit (GRU) are selected as the basic power load predictors. Finally, the predictions of the three sub‐predictors are combined using the optimal weights generated by the Q‐learning algorithm to obtain the final result. The results show that the forecasting capability of the proposed method outperforms all sub‐models and several baseline ensemble forecasting methods. [ABSTRACT FROM AUTHOR]
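
The sketch below is a minimal, illustrative take on the Q‐learning dynamic weight assignment described in the abstract, not the authors' implementation. A tabular agent chooses among a small set of candidate weight vectors for the three sub‐predictors and is rewarded with the negative ensemble error. The synthetic data, the state definition (index of the most recently best sub‐model), and names such as `candidate_weights` are assumptions for illustration; the paper's actual state, action, and reward design may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the true load and the three sub-model forecasts
# (RNN, LSTM, GRU in the paper); in practice these would be real predictions.
T = 500
y_true = np.sin(np.linspace(0, 20, T)) + 0.1 * rng.standard_normal(T)
preds = np.stack([y_true + 0.2 * rng.standard_normal(T) for _ in range(3)])

# Discrete action space: candidate weight vectors over the three sub-models
# (a hypothetical grid; the paper may use a different action design).
candidate_weights = np.array([
    [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
    [0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.3, 0.2, 0.5],
    [1/3, 1/3, 1/3],
])
n_states, n_actions = 3, len(candidate_weights)  # state = index of recently best sub-model
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1            # learning rate, discount, exploration rate

def state_at(t):
    """State: which sub-model had the smallest absolute error at step t."""
    return int(np.argmin(np.abs(preds[:, t] - y_true[t])))

ensemble = np.zeros(T)
s = state_at(0)
for t in range(1, T):
    # Epsilon-greedy choice of a weight vector.
    a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
    w = candidate_weights[a]
    ensemble[t] = w @ preds[:, t]
    reward = -abs(ensemble[t] - y_true[t])       # smaller ensemble error -> larger reward
    s_next = state_at(t)
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print("ensemble MAE:", np.mean(np.abs(ensemble[1:] - y_true[1:])))
```

In the paper, the load series is first decomposed by variational mode decomposition and the three sub‐predictors are trained before this combination step; that preprocessing and training are omitted here for brevity.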
Details
- Language:
- English
- ISSN:
- 20513305
- Volume:
- 2024
- Issue:
- 7
- Database:
- Complementary Index
- Journal:
- Journal of Engineering
- Publication Type:
- Academic Journal
- Accession number:
- 178648164
- Full Text:
- https://doi.org/10.1049/tje2.12409