
Deep stochastic reinforcement learning-based energy management strategy for fuel cell hybrid electric vehicles.

Authors :
Jouda, Basel
Jobran Al-Mahasneh, Ahmad
Mallouh, Mohammed Abu
Source :
Energy Conversion & Management. Feb 2024, Vol. 301.
Publication Year :
2024

Abstract

• A deep stochastic reinforcement learning-based approach addresses epistemic uncertainty in a midsize fuel cell hybrid electric vehicle.
• The performance of the proposed approach is benchmarked against a Double Deep Q-Network (DDQN), a Power Follower Controller (PFC), and a Fuzzy Logic Controller (FLC).
• Using the New York City cycle as a validation drive cycle, the approach improves fuel economy by 7.68%, 13.53%, and 10% compared to DDQN, PFC, and FLC, respectively.
• Under a second validation cycle, the Amman cycle, the deep REINFORCE approach improves fuel economy by 5.31%, 9.78%, and 9.93% compared to DDQN, PFC, and FLC, respectively.
• The proposed method requires 38% less training time than the DDQN approach.

Fuel cell hybrid electric vehicles offer a promising solution for sustainable and environmentally friendly transportation, but they require efficient energy management strategies (EMSs) to optimize their fuel economy. Designing an optimal learning-based EMS becomes challenging, however, when training data are limited. This paper presents a deep stochastic reinforcement learning-based approach to address this epistemic uncertainty in a midsize fuel cell hybrid electric vehicle. The approach introduces a deep REINFORCE framework with a deep neural network baseline and entropy regularization to learn a stochastic policy for the EMS. The performance of the proposed approach is benchmarked against three EMSs: (i) a state-of-the-art deep deterministic reinforcement learning technique, the Double Deep Q-Network (DDQN); (ii) a Power Follower Controller (PFC); and (iii) a Fuzzy Logic Controller (FLC). Using the New York City cycle as a validation drive cycle, the deep REINFORCE approach improves fuel economy by 7.68%, 13.53%, and 10% compared to DDQN, PFC, and FLC, respectively. Under another validation cycle, the Amman cycle, it improves fuel economy by 5.31%, 9.78%, and 9.93% compared to DDQN, PFC, and FLC, respectively. Moreover, the training results show that the proposed algorithm reduces training time by 38% compared to the DDQN approach. The proposed deep REINFORCE-based EMS shows superiority not only in terms of fuel economy, but also in terms of dealing with epistemic uncertainty. [ABSTRACT FROM AUTHOR]
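For readers unfamiliar with the algorithm family the abstract names, the following is a minimal Python/PyTorch sketch of REINFORCE with a neural-network baseline and entropy regularization. It is not the paper's implementation: the state definition, action discretization, network sizes, and coefficients below are illustrative assumptions only.

# Minimal REINFORCE sketch with a value-network baseline and entropy
# regularization. All dimensions and hyperparameters are assumptions,
# not the values used in the paper.
import torch
import torch.nn as nn

STATE_DIM = 3      # assumed EMS state, e.g., (battery SOC, speed, power demand)
N_ACTIONS = 11     # assumed discretization of the fuel cell power command
GAMMA = 0.99       # discount factor (assumption)
ENT_COEF = 0.01    # entropy-regularization weight (assumption)

policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                       nn.Linear(64, N_ACTIONS))      # logits of stochastic policy
baseline = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, 1))            # state-value baseline
opt = torch.optim.Adam(list(policy.parameters()) +
                       list(baseline.parameters()), lr=1e-3)

def update(states, actions, rewards):
    """One REINFORCE update from a single episode (e.g., one drive cycle)."""
    # Monte Carlo returns: G_t = r_t + gamma * G_{t+1}
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + GAMMA * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)), dtype=torch.float32)

    states = torch.as_tensor(states, dtype=torch.float32)
    actions = torch.as_tensor(actions, dtype=torch.long)

    dist = torch.distributions.Categorical(logits=policy(states))
    values = baseline(states).squeeze(-1)
    advantages = returns - values.detach()   # baseline reduces gradient variance

    policy_loss = -(dist.log_prob(actions) * advantages).mean()
    entropy_bonus = dist.entropy().mean()    # keeps the policy stochastic
    value_loss = (returns - values).pow(2).mean()

    loss = policy_loss - ENT_COEF * entropy_bonus + value_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

The baseline subtracts an estimate of the state value from the Monte Carlo return, cutting gradient variance, while the entropy bonus discourages premature collapse to a deterministic policy; these are the two ingredients the abstract attributes to the proposed framework.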

Details

Language :
English
ISSN :
0196-8904
Volume :
301
Database :
Academic Search Index
Journal :
Energy Conversion & Management
Publication Type :
Academic Journal
Accession number :
175243493
Full Text :
https://doi.org/10.1016/j.enconman.2023.117973