
Distributional offline continuous-time reinforcement learning with neural physics-informed PDEs (SciPhy RL for DOCTR-L).

Authors :
Halperin, Igor
Source :
Neural Computing & Applications. Mar 2024, Vol. 36, Issue 9, p4643-4659. 17p.
Publication Year :
2024

Abstract

This paper addresses distributional offline continuous-time reinforcement learning (DOCTR-L) with stochastic policies for high-dimensional optimal control. A soft distributional version of the classical Hamilton–Jacobi–Bellman (HJB) equation is given by a semilinear partial differential equation (PDE). This 'soft HJB equation' can be learned from offline data without assuming that the data correspond to a previously optimal or near-optimal policy. A data-driven solution of the soft HJB equation uses methods of Neural PDEs and Physics-Informed Neural Networks developed in the field of Scientific Machine Learning (SciML). The suggested approach, dubbed 'SciPhy RL', thus reduces DOCTR-L to solving neural PDEs from data. Our algorithm, called Deep DOCTR-L, converts offline high-dimensional data into an optimal policy in one step by reducing the problem to supervised learning, instead of relying on value iteration or policy iteration methods. The method enables a computable approach to quality control of the obtained policies in terms of both expected returns and uncertainties about their values. [ABSTRACT FROM AUTHOR]
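
To make the abstract's idea concrete, the sketch below illustrates the generic physics-informed residual-loss technique that such SciML approaches rely on: a neural value function V(t, x) is fitted so that a PDE residual vanishes on offline sample points, which turns PDE solution into supervised learning. This is a minimal illustrative sketch only; the residual used here (V_t + 0.5 tr V_xx + r) is a placeholder and is not the paper's soft HJB equation, and all network sizes, names, and data shapes are assumptions.

```python
# Minimal PINN-style sketch (assumption: a placeholder semilinear residual,
# NOT the paper's soft HJB equation, whose exact form is not given in this record).
import torch
import torch.nn as nn

class ValueNet(nn.Module):
    """Neural value function V(t, x) for a state of dimension state_dim."""
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, t, x):
        return self.net(torch.cat([t, x], dim=-1))

def pde_residual(model, t, x, reward):
    """Placeholder residual V_t + 0.5 * tr(V_xx) + r, evaluated by autodiff."""
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    v = model(t, x)
    ones = torch.ones_like(v)
    v_t = torch.autograd.grad(v, t, ones, create_graph=True)[0]
    v_x = torch.autograd.grad(v, x, ones, create_graph=True)[0]
    # Laplacian (sum of second derivatives) via a second autodiff pass.
    lap = torch.zeros_like(v)
    for i in range(x.shape[-1]):
        v_xi = v_x[..., i:i + 1]
        v_xixi = torch.autograd.grad(v_xi, x, torch.ones_like(v_xi),
                                     create_graph=True)[0][..., i:i + 1]
        lap = lap + v_xixi
    return v_t + 0.5 * lap + reward

# Training reduces to supervised minimization of the squared residual on
# offline samples (random stand-ins here for observed states and rewards).
model = ValueNet(state_dim=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
t = torch.rand(256, 1)
x = torch.randn(256, 3)
r = torch.randn(256, 1)
for _ in range(100):
    opt.zero_grad()
    loss = pde_residual(model, t, x, r).pow(2).mean()
    loss.backward()
    opt.step()
```

The design point the abstract emphasizes is visible here: no value iteration or policy iteration loop appears; the only training step is a supervised fit of the PDE residual to the offline data.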

Details

Language :
English
ISSN :
0941-0643
Volume :
36
Issue :
9
Database :
Academic Search Index
Journal :
Neural Computing & Applications
Publication Type :
Academic Journal
Accession number :
175529895
Full Text :
https://doi.org/10.1007/s00521-023-09300-7