Operator learning with Gaussian processes.
- Source :
- Computer Methods in Applied Mechanics & Engineering, Feb 2025, Vol. 434
- Publication Year :
- 2025
Abstract
- Operator learning focuses on approximating mappings $\mathcal{G}^\dagger : \mathcal{U} \to \mathcal{V}$ between infinite-dimensional spaces of functions, such as $u : \Omega_u \to \mathbb{R}$ and $v : \Omega_v \to \mathbb{R}$. This makes it particularly suitable for solving parametric nonlinear partial differential equations (PDEs). Recent advancements in machine learning (ML) have brought operator learning to the forefront of research. While most progress in this area has been driven by variants of deep neural networks (NNs), recent studies have demonstrated that Gaussian process (GP)/kernel-based methods can also be competitive. These methods offer advantages in terms of interpretability and provide theoretical and computational guarantees. In this article, we introduce a hybrid GP/NN-based framework for operator learning that leverages the strengths of both deep neural networks and kernel methods. Instead of directly approximating the function-valued operator $\mathcal{G}^\dagger$, we use a GP to approximate its associated real-valued bilinear form $\widetilde{\mathcal{G}}^\dagger : \mathcal{U} \times \mathcal{V}^* \to \mathbb{R}$. This bilinear form is defined by the dual pairing $\widetilde{\mathcal{G}}^\dagger(u, \varphi) \coloneqq [\varphi, \mathcal{G}^\dagger(u)]$, which allows us to recover the operator $\mathcal{G}^\dagger$ through $\mathcal{G}^\dagger(u)(y) = \widetilde{\mathcal{G}}^\dagger(u, \delta_y)$. The mean function of the GP can be set to zero or parameterized by a neural operator; for each setting we develop a robust and scalable training mechanism based on maximum likelihood estimation (MLE) that can optionally leverage the physics involved. Numerical benchmarks demonstrate our method's scope, scalability, efficiency, and robustness, showing that (1) it enhances the performance of a base neural operator by using it as the mean function of a GP, and (2) it enables the construction of zero-shot data-driven models that can make accurate predictions without any prior training. Additionally, our framework (a) naturally extends to cases where $\mathcal{G}^\dagger : \mathcal{U} \to \prod_{s=1}^{S} \mathcal{V}_s$ maps into a vector of functions, and (b) benefits from computational speed-ups achieved through product kernel structures and Kronecker product representations of the underlying kernel matrices. (GitHub repository: https://github.com/Bostanabad-Research-Group/GP-for-Operator-Learning)
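To make the delta-functional construction concrete, below is a minimal, self-contained Python sketch of zero-mean GP operator learning in the spirit of the abstract; it is not the authors' implementation (see their GitHub repository for that). It assumes all input and output functions are observed on shared grids, uses an RBF kernel on the discretized inputs with a median-heuristic lengthscale in place of the paper's MLE training, and illustrates the product-kernel speed-up: with a product kernel $k((u,y),(u',y')) = k_U(u,u')\,k_Y(y,y')$ and noise-free observations on one common output grid, the Gram matrix factors as $K_U \otimes K_Y$ and the $K_Y$ factor cancels in the posterior mean, so only a small $N \times N$ system in $K_U$ is ever solved. The antiderivative operator is a hypothetical toy benchmark chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, ell):
    """Squared-exponential kernel between rows of A (N, m) and B (M, m)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# Toy operator (hypothetical benchmark): G(u)(y) = integral of u from 0 to y.
m = 64                                   # shared grid size for u and v
x = np.linspace(0.0, 1.0, m)

def sample_u(n_funcs):
    """Random band-limited input functions sampled on the grid x."""
    k = np.arange(1, 6)[None, :, None]                  # frequencies 1..5
    a = rng.normal(size=(n_funcs, 5, 1))
    b = rng.normal(size=(n_funcs, 5, 1))
    return (a * np.sin(np.pi * k * x) + b * np.cos(np.pi * k * x)).sum(1)

def antiderivative(U):
    """Cumulative trapezoid rule along the grid (rows = functions)."""
    mid = 0.5 * (U[:, 1:] + U[:, :-1]) * (x[1] - x[0])
    return np.concatenate([np.zeros((len(U), 1)), np.cumsum(mid, 1)], 1)

U_tr, U_te = sample_u(200), sample_u(20)
V_tr, V_te = antiderivative(U_tr), antiderivative(U_te)

# Median-heuristic lengthscale: a stand-in for the MLE training in the paper.
d2 = ((U_tr[:, None, :] - U_tr[None, :, :]) ** 2).sum(-1)
ell = np.sqrt(np.median(d2[d2 > 0]))

# Zero-mean GP posterior mean: since K_Y cancels on a shared grid, the
# (N*n x N*n) Kronecker system K_U (x) K_Y reduces to one N x N solve.
K = rbf(U_tr, U_tr, ell) + 1e-6 * np.eye(len(U_tr))   # small nugget for stability
V_pred = rbf(U_te, U_tr, ell) @ np.linalg.solve(K, V_tr)

rel_err = np.linalg.norm(V_pred - V_te) / np.linalg.norm(V_te)
print(f"relative L2 test error: {rel_err:.3e}")
```

Replacing the zero mean with a base neural operator and fitting the kernel hyperparameters by MLE, as the paper proposes, are the natural extensions this sketch omits.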
Details
- Language :
- English
- ISSN :
- 0045-7825
- Volume :
- 434
- Database :
- Academic Search Index
- Journal :
- Computer Methods in Applied Mechanics & Engineering
- Publication Type :
- Academic Journal
- Accession number :
- 181540015
- Full Text :
- https://doi.org/10.1016/j.cma.2024.117581