Tensor neural networks for high-dimensional Fokker-Planck equations
- Publication Year :
- 2024
-
Abstract
- We solve high-dimensional steady-state Fokker-Planck equations on the whole space by applying tensor neural networks. The tensor networks are a linear combination of tensor products of one-dimensional feedforward networks, or a linear combination of several selected radial basis functions. The use of tensor feedforward networks allows us to exploit auto-differentiation (in the physical variables) efficiently in major Python packages, while using radial basis functions avoids auto-differentiation entirely, which is rather expensive in high dimensions. We then use physics-informed neural networks and stochastic gradient descent methods to learn the tensor networks. One essential step is to determine a proper bounded domain, or numerical support, for the Fokker-Planck equation. To better train the tensor radial basis function networks, we impose constraints on the parameters, which leads to relatively high accuracy. We demonstrate numerically that tensor neural networks in physics-informed machine learning are efficient for steady-state Fokker-Planck equations from two to ten dimensions.
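- The abstract describes the solution ansatz as a linear combination of tensor products of one-dimensional feedforward networks, i.e. u(x) = sum_r c_r * prod_k phi_{r,k}(x_k). A minimal NumPy sketch of evaluating such a rank-R tensor network is given below; the network sizes, rank, and all names (`make_1d_net`, `tensor_network`) are illustrative assumptions, not the paper's exact architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_1d_net(hidden=16):
    """A small one-dimensional feedforward network x -> tanh(x W1 + b1) W2 + b2.
    (Hypothetical component; the paper's layer sizes are not specified here.)"""
    W1 = rng.normal(size=hidden)
    b1 = rng.normal(size=hidden)
    W2 = rng.normal(size=(hidden, 1))
    b2 = rng.normal(size=1)
    def phi(x):  # x: (n,) -> (n,)
        h = np.tanh(np.outer(x, W1) + b1)
        return (h @ W2)[:, 0] + b2[0]
    return phi

def tensor_network(d=4, rank=3):
    """Rank-R tensor network: u(x) = sum_r c_r * prod_k phi_{r,k}(x_k)."""
    nets = [[make_1d_net() for _ in range(d)] for _ in range(rank)]
    coeffs = rng.normal(size=rank)
    def u(X):  # X: (n, d) -> (n,)
        out = np.zeros(X.shape[0])
        for c, row in zip(coeffs, nets):
            prod = np.ones(X.shape[0])
            for k, phi in enumerate(row):
                prod *= phi(X[:, k])  # separable structure: product over dimensions
            out += c * prod
        return out
    return u

u = tensor_network(d=4, rank=3)
X = rng.normal(size=(5, 4))
print(u(X).shape)  # one value per sample point
```

- Because each factor depends on a single coordinate, derivatives in a physics-informed residual reduce to one-dimensional derivatives of the factors, which is what makes auto-differentiation (or the closed-form derivatives of radial basis functions) tractable in high dimensions.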
- Subjects :
- Mathematics - Numerical Analysis
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2404.05615
- Document Type :
- Working Paper