Nonlinear Monte Carlo Methods with Polynomial Runtime for Bellman Equations of Discrete Time High-Dimensional Stochastic Optimal Control Problems.
- Source :
- Applied Mathematics & Optimization; Feb 2025, Vol. 91, Issue 1, p1-42, 42p
- Publication Year :
- 2025
Abstract
- Discrete-time stochastic optimal control problems and Markov decision processes (MDPs) serve as fundamental models for problems involving sequential decision making under uncertainty and as such constitute the theoretical foundation of reinforcement learning. In this article we study the numerical approximation of MDPs with infinite time horizon, finite control set, and general state spaces. In particular, our set-up covers infinite-horizon optimal stopping problems for discrete-time Markov processes. Key tools for solving MDPs are Bellman equations, which characterize the value functions of MDPs and determine the optimal control strategies. By combining ideas from the full-history recursive multilevel Picard approximation method, which was recently introduced to solve certain nonlinear partial differential equations, with ideas from Q-learning, we introduce a class of suitable nonlinear Monte Carlo methods and prove that the proposed methods do not suffer from the curse of dimensionality in the numerical approximation of the solutions of Bellman equations and the associated discrete-time stochastic optimal control problems. [ABSTRACT FROM AUTHOR]
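- To make the abstract's description concrete, the following is a minimal illustrative sketch (not the authors' actual scheme, whose precise form is given in the article) of how a full-history recursive multilevel Picard (MLP) estimator can be combined with the Q-learning form of the Bellman equation, Q(x, a) = r(x, a) + gamma * E[max_{a'} Q(X', a')], for a toy MDP with a one-dimensional state and two actions. The reward function, transition kernel, and all parameter values below are hypothetical placeholders.

  import math
  import random

  GAMMA = 0.9          # discount factor of the toy MDP (illustrative)
  ACTIONS = (0, 1)     # finite control set, as in the paper's setting

  def reward(x, a):
      # Hypothetical running reward r(x, a); any bounded reward works here.
      return math.exp(-x * x) if a == 0 else 0.5 * math.cos(x)

  def next_state(x, a):
      # Hypothetical transition kernel: draws one sample of the next state X'.
      drift = -0.1 * x if a == 0 else 0.1
      return x + drift + random.gauss(0.0, 0.3)

  def mlp_q(x, a, n, M):
      # MLP-style estimate of the fixed point of the Bellman operator
      #   (T q)(x, a) = r(x, a) + GAMMA * E[max_{a'} q(X', a')].
      # The telescoping term at level l is averaged over M**(n - l) samples,
      # and each sample reuses the same next state xp at levels l and l - 1
      # so that the level differences stay small.
      if n == 0:
          return 0.0  # base Picard iterate Q_0 := 0
      total = reward(x, a)  # deterministic part of the Bellman operator
      for l in range(n):
          num_samples = M ** (n - l)
          acc = 0.0
          for _ in range(num_samples):
              xp = next_state(x, a)
              acc += max(mlp_q(xp, b, l, M) for b in ACTIONS)
              if l > 0:
                  acc -= max(mlp_q(xp, b, l - 1, M) for b in ACTIONS)
          total += GAMMA * acc / num_samples
      return total

  random.seed(0)
  print(mlp_q(0.0, 0, n=3, M=3))  # rough Monte Carlo estimate of Q(0, 0)

  In expectation the level differences telescope to one Picard iteration applied to the level-(n-1) approximation, while most samples are spent on the cheap low levels; this sample allocation is what underlies polynomial-runtime guarantees for MLP-type methods.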
Details
- Language :
- English
- ISSN :
- 0095-4616
- Volume :
- 91
- Issue :
- 1
- Database :
- Complementary Index
- Journal :
- Applied Mathematics & Optimization
- Publication Type :
- Academic Journal
- Accession number :
- 182808603
- Full Text :
- https://doi.org/10.1007/s00245-024-10213-7