
Deep backward multistep schemes for nonlinear PDEs and approximation error analysis

Authors :
Germain, Maximilien
Pham, Huyen
Warin, Xavier
Laboratoire de Probabilités, Statistique et Modélisation (LPSM, UMR 8001), Université Paris Diderot - Paris 7 / Université de Paris, Sorbonne Université, CNRS
EDF R&D
Laboratoire de Finance des Marchés de l'Energie (FiME Lab), EDF, CREST, Université Paris Dauphine-PSL, Université Paris sciences et lettres (PSL)
FiME, Laboratoire de Finance des Marchés de l'Energie, and the 'Finance and Sustainable Development' EDF - CACIB Chair.
Publication Year :
2020
Publisher :
HAL CCSD, 2020.

Abstract

42 pages. We develop multistep machine learning schemes for solving nonlinear partial differential equations (PDEs) in high dimension. The method is based on a probabilistic representation of PDEs by backward stochastic differential equations (BSDEs) and their iterated time discretization. In the case of semilinear PDEs, our algorithm simultaneously estimates the solution and its gradient by backward induction, using neural networks trained through sequential minimizations of suitable quadratic loss functions performed by stochastic gradient descent. The approach is extended to the more challenging case of fully nonlinear PDEs, and we propose different approximations of the Hessian of the PDE solution, i.e., the $\Gamma$-component of the BSDE, by combining Malliavin weights and neural networks. Extensive numerical tests are carried out on various examples of semilinear PDEs, including the viscous Burgers equation, and of fully nonlinear PDEs, such as Hamilton-Jacobi-Bellman equations arising in portfolio selection problems with stochastic volatilities, or Monge-Ampère equations, in dimension up to 15. The performance and accuracy of our numerical results are compared with those of other recent machine learning algorithms in the literature, see \cite{HJE17}, \cite{HPW19}, \cite{BEJ19}, \cite{BBCJN19} and \cite{phawar19}. Furthermore, we provide a rigorous approximation error analysis of the deep backward multistep scheme as well as of the deep splitting method for semilinear PDEs, which yields a convergence rate in terms of the number of neurons for shallow neural networks.
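The abstract describes the semilinear algorithm only in prose; the following is a minimal, illustrative PyTorch sketch of a deep backward multistep scheme of this type, not the authors' implementation. The forward process (taken here as a plain Brownian motion), the driver f, the terminal condition g, the network sizes, the number of gradient iterations and the optimiser are all assumptions chosen for the example; only the overall structure follows the abstract: backward induction over time steps, with the already-trained later-step networks frozen inside a quadratic loss that is minimised by stochastic gradient steps.

```python
# Hedged sketch of a deep backward multistep scheme for a semilinear PDE via
# its BSDE representation.  Toy problem: X is a d-dimensional Brownian motion,
# and f, g, the architectures and optimiser settings are illustrative choices.
import torch
import torch.nn as nn

torch.manual_seed(0)
d, N, T, batch = 10, 20, 1.0, 512            # dimension, time steps, horizon, batch size
dt = T / N

def f(t, x, y, z):                           # assumed BSDE driver (toy choice)
    return -0.5 * y * (1.0 - y ** 2)

def g(x):                                    # assumed terminal condition u(T, .) = g
    return torch.tanh(x.sum(dim=1, keepdim=True) / d ** 0.5)

def make_net(out_dim):                       # small feedforward network for U_i or Z_i
    return nn.Sequential(nn.Linear(d, 32), nn.Tanh(),
                         nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, out_dim))

U = [make_net(1) for _ in range(N)]          # U[i] approximates u(t_i, .)
Z = [make_net(d) for _ in range(N)]          # Z[i] approximates sigma^T D_x u(t_i, .)

for i in reversed(range(N)):                 # backward induction over time steps
    opt = torch.optim.Adam(list(U[i].parameters()) + list(Z[i].parameters()), lr=1e-3)
    for _ in range(200):                     # stochastic gradient iterations at step i
        dW = torch.randn(batch, N, d) * dt ** 0.5
        X = torch.cumsum(torch.cat([torch.zeros(batch, 1, d), dW], dim=1), dim=1)
        # multistep target: terminal condition plus the contributions of the
        # later time steps j = i+1, ..., N-1, computed with frozen trained nets
        target = g(X[:, N])
        with torch.no_grad():
            for j in range(i + 1, N):
                yj, zj = U[j](X[:, j]), Z[j](X[:, j])
                target = target + f(j * dt, X[:, j], yj, zj) * dt \
                                - (zj * dW[:, j]).sum(dim=1, keepdim=True)
        # quadratic loss at step i, minimised over the current networks U[i], Z[i]
        yi, zi = U[i](X[:, i]), Z[i](X[:, i])
        loss = ((target + f(i * dt, X[:, i], yi, zi) * dt
                 - (zi * dW[:, i]).sum(dim=1, keepdim=True) - yi) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

print(float(U[0](torch.zeros(1, d))))        # approximation of u(0, 0)
```

Adam is used above simply as a convenient stand-in for the stochastic gradient descent step mentioned in the abstract; the Malliavin-weight approximations of the $\Gamma$-component needed for fully nonlinear PDEs are not sketched here.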

Details

Language :
English
Database :
OpenAIRE
Accession number :
edsair.dedup.wf.001..87119266867f0327fb30530da1e6c76d