1. Neural networks catching up with finite differences in solving partial differential equations in higher dimensions.
- Author
- Avrutskiy, Vsevolod I.
- Subjects
- *PARTIAL differential equations, *FINITE differences, *BOUNDARY value problems, *FINITE difference method, *DIRECTIONAL derivatives, *DIMENSIONS
- Abstract
Solving partial differential equations using neural networks is mostly a proof-of-concept approach. In the case of direct function approximation, a single neural network is constructed to be the solution of a particular boundary value problem. Independent variables are fed into the input layer, and a single output is considered the solution's value. The network is substituted into the equation, and the residual is then minimized with respect to the weights of the network using a gradient-based method. Our previous work showed that by minimizing all derivatives of the residual up to the third order, one can obtain a machine-precise solution to a 2D boundary value problem using very sparse grids. The goal of this paper is to use this grid-complexity advantage to obtain a solution faster than finite differences. However, the number of all possible high-order derivatives (and therefore the training time) increases with the number of dimensions, and it was unclear whether this goal could be achieved. Here, we demonstrate that this increase can be compensated for by using random directional derivatives instead. In the 2D case neural networks are slower than finite differences, but for each additional dimension the complexity increases approximately 4 times for neural networks and 125 times for finite differences. This allows neural networks to catch up in the 3D case for memory complexity and in the 5D case for time complexity. For the first time, a machine-precise solution was obtained with a neural network faster than with the finite difference method. [ABSTRACT FROM AUTHOR]
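The direct-approximation scheme the abstract describes can be illustrated with a short sketch. The following JAX example is a hypothetical reconstruction, not the author's code: the network size, learning rate, 5x5 grid, and test problem u(x, y) = x*y (a harmonic function, so the Laplace residual of the exact solution is zero) are all illustrative assumptions, and a single first-order random directional derivative of the residual stands in for the paper's fuller treatment.

```python
# Hypothetical sketch (not the paper's code): direct function approximation
# for a 2D Laplace boundary value problem. The loss minimizes the PDE
# residual, a random directional derivative of that residual, and the
# boundary mismatch, as the abstract outlines.
import jax
import jax.numpy as jnp

def init_params(key, sizes=(2, 32, 32, 1)):
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, k = jax.random.split(key)
        params.append((jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n)))
    return params

def net(params, x):
    # x: point in R^2; returns the scalar network output u(x)
    h = x
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def residual(params, x):
    # Laplace residual r(x) = u_xx + u_yy for the PDE  \u0394u = 0
    return jnp.trace(jax.hessian(net, argnums=1)(params, x))

def loss(params, interior, boundary, g_vals, dirs):
    r = jax.vmap(lambda x: residual(params, x))(interior)
    # Directional derivative of the residual along a random unit vector v,
    # computed as a JVP in x (a cheap proxy for "all high-order derivatives")
    dir_res = lambda x, v: jax.jvp(lambda y: residual(params, y), (x,), (v,))[1]
    dr = jax.vmap(dir_res)(interior, dirs)
    u_b = jax.vmap(lambda x: net(params, x))(boundary)
    return jnp.mean(r**2) + jnp.mean(dr**2) + jnp.mean((u_b - g_vals)**2)

@jax.jit
def step(params, interior, boundary, g_vals, dirs, lr=1e-3):
    grads = jax.grad(loss)(params, interior, boundary, g_vals, dirs)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
params = init_params(key)

# Very sparse 5x5 collocation grid on the unit square
t = jnp.linspace(0.0, 1.0, 5)
X, Y = jnp.meshgrid(t, t)
interior = jnp.stack([X.ravel(), Y.ravel()], axis=1)
zeros, ones = jnp.zeros_like(t), jnp.ones_like(t)
boundary = jnp.concatenate([jnp.stack([t, zeros], 1), jnp.stack([t, ones], 1),
                            jnp.stack([zeros, t], 1), jnp.stack([ones, t], 1)])
g_vals = boundary[:, 0] * boundary[:, 1]  # Dirichlet data of u = x*y

for i in range(5000):
    key, kd = jax.random.split(key)
    dirs = jax.random.normal(kd, interior.shape)  # fresh random directions
    dirs = dirs / jnp.linalg.norm(dirs, axis=1, keepdims=True)
    params = step(params, interior, boundary, g_vals, dirs)

err = jax.vmap(lambda x: net(params, x))(interior) - interior[:, 0] * interior[:, 1]
print("max error on grid:", jnp.abs(err).max())
```

The JVP evaluates the residual's derivative along one direction without forming all partial derivatives, which is the kind of per-dimension saving the abstract credits for the roughly 4x (versus 125x for finite differences) growth in cost per added dimension; the direction is resampled every step so all directions are covered on average.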
- Published
- 2020