
Operator Compression with Deep Neural Networks

Authors:
Kröpfl, Fabian
Maier, Roland
Peterseim, Daniel
Publication Year:
2021

Abstract

This paper studies the compression of partial differential operators using neural networks. We consider a family of operators parameterized by a potentially high-dimensional space of coefficients that may vary on a large range of scales. Building on existing methods that compress such a multiscale operator to a finite-dimensional sparse surrogate model on a given target scale, we propose to directly approximate the coefficient-to-surrogate map with a neural network. We emulate the local assembly structure of the surrogates and thus only require a moderately sized network that can be trained efficiently in an offline phase. This enables large compression ratios, and the online computation of a surrogate, requiring only simple forward passes through the network, is substantially faster than classical numerical upscaling approaches. We apply the abstract framework to a family of prototypical second-order elliptic heterogeneous diffusion operators as a demonstrative example.
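The idea of emulating local assembly structure can be sketched as follows. This is an illustrative toy example, not the authors' implementation: a small multilayer perceptron (with hypothetical, untrained placeholder weights) maps each local coefficient patch to the entries of a local surrogate stiffness block, and the blocks are assembled into a sparse global operator in the style of classical finite element assembly.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8           # number of coarse elements (1D mesh for simplicity)
patch_dim = 4   # local coefficient samples per patch (hypothetical)
out_dim = 4     # entries of a 2x2 local surrogate block

# Hypothetical network weights; in practice these would be trained
# offline on sampled coefficient/surrogate pairs.
W1 = rng.standard_normal((patch_dim, 16))
b1 = np.zeros(16)
W2 = rng.standard_normal((16, out_dim))
b2 = np.zeros(out_dim)

def local_surrogate(patch):
    """Forward pass: local coefficient patch -> 2x2 surrogate block."""
    h = np.maximum(W1.T @ patch + b1, 0.0)   # ReLU hidden layer
    return (W2.T @ h + b2).reshape(2, 2)

# Online phase: one forward pass per element, then standard assembly
# of the local blocks into the (N+1) x (N+1) global surrogate matrix.
coeff = rng.uniform(0.1, 1.0, size=(N, patch_dim))  # sampled coefficient
A = np.zeros((N + 1, N + 1))
for e in range(N):
    A[e:e + 2, e:e + 2] += local_surrogate(coeff[e])

print(A.shape)  # tridiagonal sparsity pattern, as in the target scale
```

Because each block only couples neighboring degrees of freedom, the assembled surrogate inherits the sparsity of the coarse-scale discretization; only the network, not a full fine-scale operator, needs to be stored.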

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2105.12080
Document Type:
Working Paper