PDE Models for Deep Neural Networks: Learning Theory, Calculus of Variations and Optimal Control
- Authors
- Markowich, Peter and Portaro, Simone
- Subjects
- Mathematics - Optimization and Control
- Abstract
We propose a partial integro-differential equation (PDE) framework for deep neural networks (DNNs) and their associated learning problem, obtained by taking the continuum limits of both network width and depth. The proposed model captures the complex interactions among hidden nodes, overcoming limitations of traditional discrete and ordinary differential equation (ODE)-based models. We explore the well-posedness of the forward propagation problem, analyze the existence and properties of minimizers for the learning task, and provide a detailed examination of necessary and sufficient conditions for the existence of critical points. Controllability and optimality conditions for the learning task with its associated PDE forward problem are established using the calculus of variations, the Pontryagin Maximum Principle, and the Hamilton-Jacobi-Bellman (HJB) equation, framing the deep learning process as a PDE-constrained optimization problem. In this context, we prove the existence of viscosity solutions of the HJB equation and establish optimal feedback controls based on the value functional. This approach facilitates the development of new network architectures and numerical methods that improve upon traditional layer-by-layer gradient descent techniques by introducing forward-backward PDE discretizations. The paper provides a mathematical foundation connecting neural networks, PDE theory, variational analysis, and optimal control, partly building on and extending the results of \cite{liu2020selection}, where the main focus was the analysis of the forward evolution. By integrating these fields, we offer a robust framework that enhances the stability, efficiency, and interpretability of deep learning models.
- Published
- 2024
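
As a hedged illustration of the objects named in the abstract (not quoted from the paper), the continuum-limit forward model and the associated learning problem can be written as follows. The domain \(\Omega\), activation \(\sigma\), kernel \(w\), bias \(b\), projection \(\mathcal{P}\), target \(y\), and the quadratic costs are assumed notation, chosen in the spirit of \cite{liu2020selection}:

```latex
% Hedged sketch, assumed notation: depth t in (0,T) replaces the layer
% index, s in Omega indexes the continuum of hidden nodes, and the
% kernel w couples nodes nonlocally through the integral term.
\partial_t x(t,s) \;=\; \sigma\!\Big(\int_\Omega w(t,s,q)\,x(t,q)\,\mathrm{d}q + b(t,s)\Big),
\qquad x(0,s) = x_{\mathrm{in}}(s).

% The learning task as PDE-constrained optimization over the controls (w,b):
\min_{w,b}\; \tfrac12\,\big\|\mathcal{P}\,x(T,\cdot)-y\big\|^2
\;+\; \tfrac{\lambda}{2}\int_0^T\!\big(\|w(t,\cdot,\cdot)\|^2+\|b(t,\cdot)\|^2\big)\,\mathrm{d}t
\quad\text{subject to the forward dynamics above.}

% Dynamic programming: the value functional V solves, in the viscosity
% sense, a Hamilton--Jacobi--Bellman equation whose minimizing control
% yields optimal feedback; f and ell abbreviate the dynamics and the
% running cost, and the gradient is understood variationally:
\partial_t V \;+\; \inf_{(w,b)}\Big\{\big\langle \nabla_x V,\; f(x,w,b)\big\rangle
 + \ell(x,w,b)\Big\} \;=\; 0,
\qquad V(T,x) = g(x).
```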
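
The abstract also points to numerical methods based on forward-backward PDE discretization. Below is a minimal, self-contained sketch of that idea under simple assumptions (forward Euler in depth, a quadrature of the width integral, a quadratic loss on a single datum, plain gradient descent); every name, parameter, and step size here is illustrative, not taken from the paper:

```python
# Hedged sketch (not the authors' code): a forward-backward discretization
# of the continuum model. Forward sweep solves the state equation layer by
# layer; backward sweep solves the adjoint equation and assembles gradients.
import numpy as np

rng = np.random.default_rng(0)

n = 32        # nodes discretizing the width variable s in Omega
L = 20        # layers discretizing the depth variable t in (0, T)
T = 1.0
dt = T / L
ds = 1.0 / n  # quadrature weight for the integral over Omega

sigma = np.tanh
def dsigma(z):
    return 1.0 - np.tanh(z) ** 2

# Controls: integral kernel w(t, s, q) and bias b(t, s), one slice per layer.
w = 0.1 * rng.standard_normal((L, n, n))
b = np.zeros((L, n))

x_in = rng.standard_normal(n)  # input datum x(0, s)
y = rng.standard_normal(n)     # target y(s)

def forward(w, b, x_in):
    """Forward Euler in depth: x_{k+1} = x_k + dt * sigma(ds * w_k @ x_k + b_k)."""
    xs, zs = [x_in], []
    x = x_in
    for k in range(L):
        z = ds * w[k] @ x + b[k]
        x = x + dt * sigma(z)
        zs.append(z)
        xs.append(x)
    return xs, zs

def backward(w, xs, zs):
    """Adjoint sweep: p_k = p_{k+1} + dt * ds * w_k^T (sigma'(z_k) * p_{k+1}),
    terminal condition p_L = x_L - y for the quadratic loss."""
    p = xs[-1] - y
    grads_w = np.zeros_like(w)
    grads_b = np.zeros_like(b)
    for k in reversed(range(L)):
        s_p = dsigma(zs[k]) * p                      # sigma'(z_k) * p_{k+1}
        grads_w[k] = dt * ds * np.outer(s_p, xs[k])  # dJ/dw_k
        grads_b[k] = dt * s_p                        # dJ/db_k
        p = p + dt * ds * w[k].T @ s_p               # adjoint update
    return grads_w, grads_b

# Gradient descent on the discretized PDE-constrained problem.
lr = 0.5
for it in range(200):
    xs, zs = forward(w, b, x_in)
    gw, gb = backward(w, xs, zs)
    w -= lr * gw
    b -= lr * gb

loss = 0.5 * np.sum((forward(w, b, x_in)[0][-1] - y) ** 2)
print(f"final loss: {loss:.3e}")
```

The design point of the sketch is that both sweeps discretize PDEs of the same continuum structure, so refining the depth and width grids refines the optimization problem itself rather than changing the architecture.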