Stronger separation of analog neuron hierarchy by deterministic context-free languages.
- Source :
- Neurocomputing. Jul 2022, Vol. 493, p605-612. 8p.
- Publication Year :
- 2022
Abstract
- The computational power of discrete-time recurrent neural networks (NNs) with the saturated-linear activation function depends on the descriptive complexity of their weight parameters encoding the NN program. In order to study the power of increasing analogicity in NNs between integer weights (finite automata) and arbitrary rational weights (Turing machines), we have established the analog neuron hierarchy 0ANNs ⊂ 1ANNs ⊂ 2ANNs ⊆ 3ANNs, where an αANN is a binary-state NN extended with α ≥ 0 extra analog-state neurons with rational weights. In our previous work, we compared this hierarchy to the traditional Chomsky hierarchy and separated its first two levels. The separation 1ANNs ⫋ 2ANNs was witnessed by the non-regular deterministic context-free language (DCFL) L_# = { 0^n 1^n | n ≥ 1 }, which cannot be recognized by any 1ANN even with real weights, while any DCFL is accepted by some 2ANN with rational weights. In this paper, we strengthen this separation by showing that no non-regular DCFL (DCFL') can be recognized by a 1ANN with real weights, which means DCFL's ⊂ (2ANNs \ 1ANNs), implying 1ANNs ∩ DCFLs = 0ANNs. For this purpose, we show that any 1ANN recognizing a DCFL' could be augmented to a larger 1ANN recognizing L_#, which does not exist. [ABSTRACT FROM AUTHOR]
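As a concrete anchor for two notions in the abstract, the Python sketch below defines the saturated-linear activation function of the NN model and a deterministic one-counter recognizer for the witness language L_# = { 0^n 1^n | n ≥ 1 }, illustrating why L_# is a non-regular DCFL. This is a minimal illustration only, not the paper's 1ANN/2ANN construction; the function names saturated_linear and recognizes_L_hash are our own.

def saturated_linear(x: float) -> float:
    """Saturated-linear activation: identity on [0, 1], clipped outside."""
    return min(1.0, max(0.0, x))

def recognizes_L_hash(word: str) -> bool:
    """Deterministic one-counter recognizer for L_# = { 0^n 1^n | n >= 1 }.

    Reads the input once, left to right, with no nondeterminism, so L_# is
    a deterministic CFL; the unbounded counter is why no finite automaton
    (and hence no regular language) suffices.
    """
    counter = 0
    seen_one = False          # becomes True after the first '1'
    for symbol in word:
        if symbol == '0':
            if seen_one:      # a '0' after a '1' can never occur in L_#
                return False
            counter += 1      # count leading zeros
        elif symbol == '1':
            seen_one = True
            counter -= 1      # match each '1' against one '0'
            if counter < 0:   # more ones than zeros so far
                return False
        else:
            return False      # alphabet is {0, 1}
    return seen_one and counter == 0

if __name__ == "__main__":
    assert saturated_linear(-0.5) == 0.0 and saturated_linear(2.0) == 1.0
    assert recognizes_L_hash("01") and recognizes_L_hash("000111")
    assert not recognizes_L_hash("00111") and not recognizes_L_hash("0101")
    print([w for w in ["01", "0011", "0101", "001", ""] if recognizes_L_hash(w)])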
- Subjects :
- *RECURRENT neural networks
- *TURING machines
- *NEURONS
- *FINITE state machines
Details
- Language :
- English
- ISSN :
- 0925-2312
- Volume :
- 493
- Database :
- Academic Search Index
- Journal :
- Neurocomputing
- Publication Type :
- Academic Journal
- Accession number :
- 156810307
- Full Text :
- https://doi.org/10.1016/j.neucom.2021.12.107