
Deep reinforcement learning for approximate policy iteration: convergence analysis and a post-earthquake disaster response case study.

Authors:
Gosavi, A.
Sneed, L. H.
Spearing, L. A.
Source:
Optimization Letters; Dec 2024, Vol. 18, Issue 9, p2033-2050, 18p
Publication Year:
2024

Abstract

Approximate policy iteration (API) is a class of reinforcement learning (RL) algorithms that seek to solve the long-run discounted reward Markov decision process (MDP) via the policy iteration paradigm, without learning the transition model in the underlying Bellman equation. Unfortunately, these algorithms suffer from a defect known as chattering, in which the solution (policy) delivered in each iteration of the algorithm oscillates between improved and worsened policies, leading to sub-optimal behavior. Two causes of this defect, both traced to the crucial policy improvement step, are: (i) inaccuracies in the policy improvement function and (ii) the exploration/exploitation tradeoff integral to this step, which generates variability in performance. Both defects are amplified by simulation noise. Deep RL belongs to a newer class of algorithms in which the resolution of the learning process is refined via mechanisms such as experience replay and/or deep neural networks for improved performance. In this paper, a new deep learning approach to API is developed that employs a more accurate policy improvement function, via an enhanced-resolution Bellman equation, thereby reducing chattering and eliminating the need for exploration in the policy improvement step. Versions of the new algorithm are presented for both the long-run discounted MDP and the semi-MDP. Convergence properties of the new algorithm are studied mathematically, and a post-earthquake disaster response case study demonstrates the algorithm's efficacy numerically.
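The evaluate-and-improve loop that the abstract critiques can be made concrete with a short sketch. The following is a minimal, generic approximate policy iteration routine on a small tabular MDP; it is not the authors' deep RL algorithm, and the random MDP, sample counts, learning rate, and discount factor are all illustrative assumptions. The simulator hides the transition model from the learner, which estimates Q-values purely from sampled transitions, matching the model-free setting described in the abstract.

```python
import numpy as np

# Minimal sketch of generic approximate policy iteration (API) on a
# small tabular MDP. Illustrative only: the random MDP, sample budget,
# learning rate, and discount factor are assumptions, and this is NOT
# the deep RL variant proposed in the paper.

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.95

# Random transition kernel P[s, a] (a distribution over next states)
# and rewards R[s, a]; used only inside the simulator below, never
# directly by the learner.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

def step(s, a):
    """Simulate one transition without exposing the model P, R."""
    s_next = rng.choice(n_states, p=P[s, a])
    return R[s, a], s_next

def evaluate(policy, n_samples=2000, alpha=0.1):
    """Approximate Q^pi from simulated transitions (policy evaluation)."""
    Q = np.zeros((n_states, n_actions))
    s = 0
    for _ in range(n_samples):
        a = rng.integers(n_actions)              # sample all actions
        r, s_next = step(s, a)
        # Bootstrap target follows the policy being evaluated.
        target = r + gamma * Q[s_next, policy[s_next]]
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next
    return Q

policy = np.zeros(n_states, dtype=int)
for it in range(20):
    Q = evaluate(policy)
    new_policy = Q.argmax(axis=1)                # greedy policy improvement
    if np.array_equal(new_policy, policy):       # stop when the greedy
        break                                    # policy is stable
    policy = new_policy

print("greedy policy:", policy)
```

Because `evaluate` returns only a noisy estimate of Q^pi, the greedy improvement step can flip between policies from one iteration to the next; on larger problems this is exactly the chattering behavior the abstract describes, and it is the noise in this improvement step that the paper's enhanced-resolution Bellman equation targets.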

Details

Language:
English
ISSN:
1862-4472
Volume:
18
Issue:
9
Database:
Complementary Index
Journal:
Optimization Letters
Publication Type:
Academic Journal
Accession Number:
180268742
Full Text:
https://doi.org/10.1007/s11590-023-02062-0