
An Analysis of Catastrophic Interference

Authors :
Sharkey, Noel E.
Sharkey, Amanda J. C.
Source :
Connection Science; Sep95, Vol. 7 Issue 3/4, p301-330, 30p, 21 Diagrams, 1 Chart, 6 Graphs
Publication Year :
1995

Abstract

A number of recent simulation studies have shown that when feedforward neural nets are trained, using backpropagation, to memorize sets of items in sequential blocks and without negative exemplars, severe retroactive interference or catastrophic forgetting results. Both formal analysis and simulation studies are employed here to show why and under what circumstances such retroactive interference arises. The conclusion is that, on the one hand, approximations to 'ideal' network geometries can entirely alleviate interference if the training data sets have been generated from a learnable function (not arbitrary pattern associations). All that is required is either a representative training set or enough sequential memory sets. However, this elimination of interference comes at the cost of a breakdown in discrimination between input patterns that have been learned and those that have not: catastrophic remembering. On the other hand, localized geometries for subfunctions eliminate the discrimination problem but are easily disrupted by new training sets and thus cause catastrophic interference. The paper concludes with a formally guaranteed solution to the problems of interference and discrimination. This is the Hebbian Autoassociative Recognition Memory (HARM) model, which is essentially a neural net implementation of a simple look-up table. Although it requires considerable memory resources, when used as a yardstick with which to evaluate other proposed solutions, it uses the same or fewer resources than they do. [ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
09540091
Volume :
7
Issue :
3/4
Database :
Complementary Index
Journal :
Connection Science
Publication Type :
Academic Journal
Accession number :
9603043035
Full Text :
https://doi.org/10.1080/09540099550039264