
What perceptron neural networks are (not) good for?

Authors :
Calude, Cristian S.
Heidari, Shahrokh
Sifakis, Joseph
Source :
Information Sciences. Apr 2023, Vol. 621, p. 844-857. 14 p.
Publication Year :
2023

Abstract

Perceptron Neural Networks (PNNs) are essential components of intelligent systems because they produce efficient solutions to problems of overwhelming complexity for conventional computing methods. Many papers show that PNNs can approximate a wide variety of functions, but comparatively few discuss their limitations, which are the scope of this paper. To this aim, we define two classes of Boolean functions, sensitive and robust, and prove that an exponentially large set of sensitive functions is exponentially difficult to compute by multi-layer PNNs (hence incomputable by single-layer PNNs). A comparatively large set of functions in the second class, but not all, is computable by single-layer PNNs. Finally, we use polynomial threshold PNNs to compute all Boolean functions with quantum annealing and present in detail a QUBO computation on the D-Wave Advantage. These results confirm that the successes of PNNs, or lack thereof, are partly determined by properties of the learned data sets, and suggest that sensitive functions may not be (efficiently) computable by PNNs. [ABSTRACT FROM AUTHOR]
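The abstract's claim that sensitive functions are incomputable by single-layer PNNs has a classic concrete instance: XOR (two-bit parity), which is maximally sensitive and not linearly separable. The sketch below is illustrative only, not code from the paper; it applies the standard Rosenblatt perceptron learning rule to two Boolean data sets and shows that training converges on the robust, linearly separable AND function but never converges on XOR.

```python
# Illustrative sketch (not from the paper): a single-layer perceptron
# learns the linearly separable AND function, but no weight setting can
# realize XOR, the archetypal sensitive Boolean function.

def train_perceptron(samples, epochs=100, lr=1.0):
    """Rosenblatt perceptron rule on 2-input Boolean samples.

    Returns (weights, bias, converged); converged is True iff some epoch
    classifies every sample correctly.
    """
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), y in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            if out != y:
                errors += 1
                w[0] += lr * (y - out) * x1
                w[1] += lr * (y - out) * x2
                b += lr * (y - out)
        if errors == 0:  # a separating hyperplane was found
            return w, b, True
    return w, b, False   # no convergence: data not linearly separable

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

_, _, and_ok = train_perceptron(AND)
_, _, xor_ok = train_perceptron(XOR)
print(and_ok, xor_ok)  # → True False
```

By the perceptron convergence theorem, training is guaranteed to terminate on separable data such as AND; for XOR the four threshold inequalities are jointly infeasible (summing the constraints for inputs 01 and 10 contradicts those for 00 and 11), so no epoch count helps. The paper's multi-layer and polynomial-threshold constructions address exactly this gap.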

Details

Language :
English
ISSN :
0020-0255
Volume :
621
Database :
Academic Search Index
Journal :
Information Sciences
Publication Type :
Periodical
Accession number :
161726837
Full Text :
https://doi.org/10.1016/j.ins.2022.11.083