
Query languages for neural networks

Authors :
Grohe, Martin
Standke, Christoph
Steegmans, Juno
Van den Bussche, Jan
Publication Year :
2024

Abstract

We lay the foundations for a database-inspired approach to interpreting and understanding neural network models by querying them using declarative languages. Towards this end we study different query languages, based on first-order logic, that mainly differ in their access to the neural network model. First-order logic over the reals naturally yields a language which views the network as a black box; only the input-output function defined by the network can be queried. This is essentially the approach of constraint query languages. On the other hand, a white-box language can be obtained by viewing the network as a weighted graph, and extending first-order logic with summation over weight terms. The latter approach is essentially an abstraction of SQL. In general, the two approaches are incomparable in expressive power, as we will show. Under natural circumstances, however, the white-box approach can subsume the black-box approach; this is our main result. We prove the result concretely for linear constraint queries over real functions definable by feedforward neural networks with a fixed number of hidden layers and piecewise linear activation functions.

Comment: To appear at ICDT 2025
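
To make the contrast between the two access modes concrete, below is a minimal Python sketch of a hypothetical one-hidden-layer ReLU network, queried once in black-box style (only through the input-output function it computes) and once in white-box style (by summing over its stored weights, in the spirit of SQL). All names and numbers are illustrative assumptions, not taken from the paper, and the black-box check is approximated by sampling rather than expressed declaratively over the reals.

import numpy as np

# Hypothetical one-hidden-layer ReLU network over a single real input x.

# White-box view: the network as a weighted graph, i.e., its stored parameters.
w_in  = np.array([1.0, -2.0])   # weights input -> hidden (two hidden units)
b_hid = np.array([0.5, 1.0])    # hidden biases
w_out = np.array([1.0, 0.5])    # weights hidden -> output
b_out = -0.25                   # output bias

# Black-box view: only the input-output function f defined by the network.
def f(x: float) -> float:
    h = np.maximum(w_in * x + b_hid, 0.0)   # piecewise linear (ReLU) hidden layer
    return float(h @ w_out + b_out)

# Black-box-style query, in the spirit of constraint query languages:
# "is there an x in [0, 1] with f(x) >= 1?"
# (Here checked by sampling; the paper's languages express this declaratively.)
blackbox = any(f(x) >= 1.0 for x in np.linspace(0.0, 1.0, 1001))

# White-box-style query, in the spirit of first-order logic with summation:
# "is the total absolute weight entering the output unit at most 2?"
whitebox = float(np.sum(np.abs(w_out))) <= 2.0

print("black-box query:", blackbox)
print("white-box query:", whitebox)

The black-box query never mentions weights, so it is insensitive to how the network is parameterized, while the white-box query reads the weighted graph directly and cannot be phrased through f alone.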

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2408.10362
Document Type :
Working Paper