Encoding Words into a Potts Attractor Network
- Authors
Pirmoradian, Sahar; Treves, Alessandro (SISSA, Cognitive Neuroscience Sector)
- Subjects
Vocabulary, Potts Attractor Network, Grammar, Artificial neural network, Computer science, Artificial Language, Word Representation, BLISS, Attractor, Encoding (semiotics), Artificial intelligence, Sentence, Attractor network, Natural language processing
- Abstract
To understand the brain mechanisms underlying language phenomena, and sentence construction in particular, a number of approaches based on artificial neural networks have been followed, in which words are encoded as distributed patterns of activity. Still, issues such as the distinct encoding of semantic versus syntactic features, word binding, and the learning processes through which words come to be encoded in this way remain challenging. We explore a novel approach to these challenges, which focuses first on encoding the words of an artificial language of intermediate complexity (BLISS) into a Potts attractor network. Such a network can spontaneously latch between attractor states, offering a simplified cortical model of sentence production. The network stores the BLISS vocabulary, and hopefully its grammar, in its semantic and syntactic subnetworks. Function and content words are encoded differently across the two subnetworks, as suggested by neuropsychological findings. We propose that a next step might describe the self-organization of a comparable representation of words through a model of a learning process.
- Published
2013
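
The abstract describes the model only verbally. As a rough illustration of the basic machinery it assumes, the sketch below shows Hebbian storage and cued retrieval in a small Potts attractor network: each unit takes one of S active states or a quiescent state, patterns (stand-ins for BLISS words) are stored with a covariance rule, and units are updated toward the state with the strongest field. All parameters here (N, S, a, P, the threshold U) are illustrative assumptions, not the paper's values, and the adaptation dynamics that make the network latch from one attractor to the next are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 120   # Potts units (illustrative; not the paper's network size)
S = 5     # active states per unit; state 0 is the quiescent state
a = 0.25  # sparsity: fraction of units active in each stored pattern
P = 8     # number of stored patterns, standing in for BLISS words

# Random sparse Potts patterns: each pattern activates a*N units,
# each in one of the S active states.
patterns = np.zeros((P, N), dtype=int)
for mu in range(P):
    active = rng.choice(N, size=int(a * N), replace=False)
    patterns[mu, active] = rng.integers(1, S + 1, size=active.size)

# Covariance (Hebbian) storage: J[i, k, j, l] couples state k of
# unit i to state l of unit j; the mean activity a/S is subtracted
# so the weights have zero baseline.
J = np.zeros((N, S, N, S))
for mu in range(P):
    v = np.zeros((N, S))
    for i in range(N):
        if patterns[mu, i] > 0:
            v[i, patterns[mu, i] - 1] = 1.0
    v -= a / S
    J += np.einsum('ik,jl->ikjl', v, v)
J /= N * a * (1 - a / S)
for i in range(N):
    J[i, :, i, :] = 0.0  # no self-coupling

def overlap(state, mu):
    """Fraction of pattern mu's active units correctly reproduced."""
    match = (state == patterns[mu]) & (patterns[mu] > 0)
    return match.sum() / (a * N)

# Cued retrieval: start from a corrupted copy of pattern 0 and update
# units asynchronously toward the state with the strongest field,
# falling back to quiescence when no field exceeds the threshold U.
state = patterns[0].copy()
noise = rng.choice(N, size=N // 4, replace=False)
state[noise] = rng.integers(0, S + 1, size=noise.size)

U = 0.4  # threshold favouring the quiescent state
for sweep in range(5):
    for i in rng.permutation(N):
        field = np.zeros(S)
        for j in range(N):
            if state[j] > 0:
                field += J[i, :, j, state[j] - 1]
        k = int(np.argmax(field))
        state[i] = k + 1 if field[k] > U else 0
    print(f"sweep {sweep}: overlap with pattern 0 = {overlap(state, 0):.2f}")
```

In the paper's setting, the stored patterns are not random but carry correlated semantic and syntactic features split across two subnetworks, and an adaptation mechanism destabilizes each retrieved attractor so the network latches to the next one; this sketch shows only the static retrieval step those dynamics build on.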