
Temporal Consistency-Based Loss Function for Both Deep Q-Networks and Deep Deterministic Policy Gradients for Continuous Actions.

Authors :
Kim, Chayoung
Source :
Symmetry (20738994); Dec2021, Vol. 13 Issue 12, p2411-2411, 1p
Publication Year :
2021

Abstract

Artificial intelligence (AI) techniques for power grid control and for energy management in building automation rely on both deep Q-networks (DQNs) and deep deterministic policy gradients (DDPGs), the off-policy algorithms of deep reinforcement learning (DRL). Most studies on improving the stability of DRL address it with replay buffers and a target network updated by a delayed temporal difference (TD) backup, which minimizes a loss function at every iteration. Separate loss functions have been developed for DQN and DDPG, but few studies have tried to improve the loss functions used in both algorithms. We therefore modified the loss function based on a temporal consistency (TC) loss and adapted the proposed TC loss function to the target network update in both DQN and DDPG. The proposed TC loss function proved effective, particularly for the critic network in DDPG. On the OpenAI Gym "cart-pole" and "pendulum" tasks, we demonstrate that the proposed TC loss function substantially improves convergence speed and performance, especially for the critic network in DDPG. [ABSTRACT FROM AUTHOR]
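As a rough illustration of the idea summarized in the abstract, the sketch below augments a standard DQN TD loss with a temporal-consistency penalty that discourages the online network's next-state value from drifting away from the target network's value between updates. The network interfaces, the `tc_weight` coefficient, and the exact form of the penalty are assumptions made for illustration only; they are not taken from the paper's implementation.

```python
# Minimal sketch (assumptions noted above): TD loss + temporal-consistency (TC) penalty.
import torch
import torch.nn.functional as F

def dqn_tc_loss(q_net, target_net, batch, gamma=0.99, tc_weight=1.0):
    """Standard DQN TD loss plus a hypothetical TC penalty.

    q_net, target_net: modules mapping states -> per-action Q-values.
    batch: (states, actions, rewards, next_states, dones) tensors.
    """
    s, a, r, s_next, done = batch

    # Q-value of the action actually taken.
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)

    with torch.no_grad():
        # Bootstrapped target from the delayed target network.
        q_next_target = target_net(s_next).max(dim=1).values
        td_target = r + gamma * (1.0 - done) * q_next_target

    # Usual TD error term.
    td_loss = F.mse_loss(q_sa, td_target)

    # TC penalty: keep the online network's next-state value close to the
    # target network's, reducing the "moving target" effect of the TD backup.
    a_next = q_net(s_next).argmax(dim=1, keepdim=True)
    q_next_online = q_net(s_next).gather(1, a_next).squeeze(1)
    tc_loss = F.mse_loss(q_next_online, q_next_target)

    return td_loss + tc_weight * tc_loss
```

The same pattern can in principle be applied to the critic loss in DDPG, where the abstract reports the largest gains, by computing the next-state action with the target actor instead of an argmax.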

Details

Language :
English
ISSN :
20738994
Volume :
13
Issue :
12
Database :
Complementary Index
Journal :
Symmetry (20738994)
Publication Type :
Academic Journal
Accession number :
154346144
Full Text :
https://doi.org/10.3390/sym13122411