
The Self-Trapping Attractor Neural Network—Part II: Properties of a Sparsely Connected Model Storing Multiple Memories

Authors:
R. Pavloski
M. Karimi
Source:
IEEE Transactions on Neural Networks, 16:1427-1439
Publication Year:
2005
Publisher:
Institute of Electrical and Electronics Engineers (IEEE)

Abstract

In a previous paper, the self-trapping network (STN) was introduced as a more biologically realistic alternative to attractor neural networks (ANNs) based on the Ising model. This paper extends the previous analysis of a one-dimensional (1-D) STN storing a single memory to a model that stores multiple memories and possesses generalized sparse connectivity. The energy, Lyapunov function, and partition function derived for the 1-D model are generalized to the case of an attractor network with only near-neighbor synapses, coupled to a system that computes memory overlaps. Simulations reveal that 1) the STN dramatically reduces intra-ANN connectivity without severely affecting the size of basins of attraction, with fast self-trapping able to sustain attractors even in the absence of intra-ANN synapses; 2) the basins of attraction can be controlled by a single free parameter, providing natural attention-like effects; 3) the same parameter determines the memory capacity of the network, which is much less dependent than that of a standard ANN on the noise level of the system; 4) the STN serves as a useful memory for some correlated memory patterns for which the standard ANN totally fails; 5) the STN can store a large number of sparse patterns; and 6) a Monte Carlo procedure, a competitive neural network, and binary neurons with thresholds can be used to induce self-trapping.
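To make the described architecture concrete, the following is a minimal Python sketch of the kind of dynamics the abstract outlines: a ring of binary neurons with Hebbian couplings restricted to near neighbors, plus a field proportional to the memory overlaps that stands in for the self-trapping system, relaxed by Glauber (Monte Carlo) updates. This is an assumption-laden illustration rather than the authors' model; the neighbor radius K, coupling strength LAM, and inverse noise BETA are hypothetical parameter names, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200      # number of neurons on the 1-D ring
P = 5        # number of stored memory patterns
K = 2        # near-neighbor radius (controls intra-ANN sparsity); hypothetical name
LAM = 0.8    # strength of the overlap-coupled field; hypothetical name
BETA = 5.0   # inverse noise level for the Monte Carlo updates; hypothetical name

# Random +/-1 memory patterns
xi = rng.choice([-1, 1], size=(P, N))

# Hebbian couplings J_ij = (1/N) sum_mu xi_i^mu xi_j^mu,
# masked so that only near neighbors on the ring are connected
J = (xi.T @ xi) / N
mask = np.zeros((N, N), dtype=bool)
idx = np.arange(N)
for d in range(1, K + 1):
    mask[idx, (idx + d) % N] = True
    mask[idx, (idx - d) % N] = True
J *= mask

def overlaps(s):
    """Memory overlaps m_mu = (1/N) sum_i xi_i^mu s_i."""
    return xi @ s / N

def sweep(s):
    """One Monte Carlo sweep of Glauber dynamics. The local field is the
    sparse intra-ANN input plus an overlap-coupled term of strength LAM,
    which here plays the role of the self-trapping field."""
    for i in rng.permutation(N):
        h = J[i] @ s + LAM * (xi[:, i] @ overlaps(s))
        p_up = 1.0 / (1.0 + np.exp(-2.0 * BETA * h))
        s[i] = 1 if rng.random() < p_up else -1
    return s

# Start near pattern 0 (flip 15% of its bits) and let the network relax
s = xi[0].copy()
s[rng.random(N) < 0.15] *= -1
for _ in range(20):
    s = sweep(s)
print("final overlaps with stored patterns:", np.round(overlaps(s), 2))
```

In this sketch, setting LAM = 0 leaves a heavily diluted Hopfield-style network, while raising LAM strengthens the overlap field, loosely mirroring the abstract's single free parameter that controls basin size and memory capacity.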

Details

ISSN:
1045-9227
Volume:
16
Database:
OpenAIRE
Journal:
IEEE Transactions on Neural Networks
Accession number:
edsair.doi.dedup.....b719ebb76d867eb321a833653f478ea5