
Convergence of Implicit Gradient Descent for Training Two-Layer Physics-Informed Neural Networks

Authors :
Xu, Xianliang
Du, Ting
Kong, Wang
Li, Ye
Huang, Zhongyi
Publication Year :
2024

Abstract

Optimization algorithms are crucial in training physics-informed neural networks (PINNs), as unsuitable methods may lead to poor solutions. Implicit gradient descent (IGD) outperforms the common gradient descent (GD) algorithm in handling certain multi-scale problems. In this paper, we provide a convergence analysis for IGD in training over-parameterized two-layer PINNs. We first demonstrate the positive definiteness of the Gram matrices for a class of general smooth activation functions, including the sigmoid, softplus, and tanh functions, among others. Over-parameterization then allows us to prove that randomly initialized IGD converges to a globally optimal solution at a linear convergence rate. Moreover, owing to the distinct training dynamics of IGD compared to GD, the learning rate can be selected independently of the sample size and the smallest eigenvalue of the Gram matrix. Additionally, the novel approach used in our convergence analysis imposes a milder requirement on the network width. Finally, empirical results validate our theoretical findings.
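To make the setting concrete, the following is a minimal sketch (not the authors' implementation) of IGD training for a two-layer PINN on an assumed test problem, the 1D Poisson equation -u''(x) = f(x) with zero boundary conditions. The implicit update theta_{k+1} = theta_k - eta * grad L(theta_{k+1}) is approximated via its equivalent proximal formulation, solved here with a few inner gradient steps; the tanh activation, network width, inner solver, and all hyperparameters are illustrative choices, not values from the paper.

```python
# Hedged sketch of implicit gradient descent (IGD) for a two-layer PINN.
# Implicit step: theta_{k+1} = argmin_theta L(theta) + ||theta - theta_k||^2 / (2*eta),
# approximated below by a short run of explicit gradient steps on this proximal objective.
import jax
import jax.numpy as jnp

m = 256                      # network width (over-parameterized regime); illustrative
eta = 1.0                    # IGD learning rate; chosen freely for illustration
key = jax.random.PRNGKey(0)
kw, ka = jax.random.split(key)
params = {
    "w": jax.random.normal(kw, (m,)),   # hidden-layer weights
    "a": jax.random.normal(ka, (m,)),   # output-layer weights
}

def u(params, x):
    # Two-layer network with tanh activation and 1/sqrt(m) output scaling.
    return jnp.dot(params["a"], jnp.tanh(params["w"] * x)) / jnp.sqrt(m)

def residual(params, x):
    # PDE residual -u''(x) - f(x) with f(x) = pi^2 sin(pi x),
    # so the true solution of the toy problem is u(x) = sin(pi x).
    u_xx = jax.grad(jax.grad(u, argnums=1), argnums=1)(params, x)
    return -u_xx - jnp.pi**2 * jnp.sin(jnp.pi * x)

xs = jnp.linspace(0.05, 0.95, 20)      # interior collocation points (assumed)
xb = jnp.array([0.0, 1.0])             # boundary points

def loss(params):
    r = jax.vmap(lambda x: residual(params, x))(xs)
    b = jax.vmap(lambda x: u(params, x))(xb)
    return 0.5 * jnp.mean(r**2) + 0.5 * jnp.mean(b**2)

def prox_objective(params, anchor):
    # Proximal objective whose minimizer is the implicit (backward) step from `anchor`.
    sq = sum(jnp.sum((params[k] - anchor[k])**2) for k in params)
    return loss(params) + sq / (2.0 * eta)

@jax.jit
def igd_step(params):
    # Approximate the implicit update with a fixed number of inner gradient steps;
    # the inner learning rate and step count are arbitrary illustrative choices.
    anchor = params
    inner_lr = 0.1
    def body(p, _):
        g = jax.grad(prox_objective)(p, anchor)
        return {k: p[k] - inner_lr * g[k] for k in p}, None
    params, _ = jax.lax.scan(body, params, None, length=50)
    return params

for k in range(100):
    params = igd_step(params)
    if k % 20 == 0:
        print(k, float(loss(params)))
```

In this sketch the outer learning rate eta appears only inside the proximal objective, which is one way to see the abstract's point that IGD's step size need not be tied to the sample size or the conditioning of the Gram matrix.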

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2407.02827
Document Type :
Working Paper