32 results for "Models of neural computation"
Search Results
2. Putting distributed representations into context
- Author
-
John E. Hummel
- Subjects
Cognitive science, Linguistics and Language, Theoretical computer science, Computer science, Cognitive Neuroscience, Computation, Experimental and Cognitive Psychology, Context (language use), Language and Linguistics, Code (semiotics), Models of neural computation, Connectionism - Abstract
The merits of distributed representations are widely discussed but, I believe, largely misunderstood. My purpose in this paper is to put the issue of distributed representations into context. I will argue that the term distributed is only meaningful with respect to that which is being represented, and that the degree to which a localist or a distributed code is more desirable depends on the goals of the computation to be performed. I will also argue that localist codes play an essential role in symbolic neural computation.
- Published
- 2016
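The localist-versus-distributed contrast in the abstract above can be made concrete with a toy example; the items and feature names below are invented for illustration, not taken from the paper:

```python
import numpy as np

items = ["dog", "cat", "car", "bus"]

# Localist code: one dedicated unit per item (one-hot).
localist = np.eye(len(items))

# Distributed code: each item is a pattern over shared features
# (hypothetical features: [animate, has_fur, has_wheels, large]).
distributed = np.array([
    [1, 1, 0, 0],   # dog
    [1, 1, 0, 0],   # cat
    [0, 0, 1, 0],   # car
    [0, 0, 1, 1],   # bus
], dtype=float)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A localist code carries no similarity structure between items, while a
# distributed code makes dog and cat similar but dog and car dissimilar.
print(cosine(localist[0], localist[1]))        # 0.0
print(cosine(distributed[0], distributed[1]))  # 1.0
print(cosine(distributed[0], distributed[2]))  # 0.0
```

Whether the built-in similarity structure of the distributed code is a feature or a bug depends on the computation being performed, which is the abstract's point.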
3. Erosion Wear Response of Glass Microsphere Coatings: Parametric Appraisal and Prediction Using Taguchi Method and Neural Computation
- Author
-
Alok Satapathy and Gaurav Gupta
- Subjects
Materials science ,Borosilicate glass ,Mechanical Engineering ,Atmospheric-pressure plasma ,Surfaces and Interfaces ,Surfaces, Coatings and Films ,Microsphere ,Glass microsphere ,Taguchi methods ,Models of neural computation ,Mechanics of Materials ,Erosion ,Composite material ,Parametric statistics - Abstract
This article presents an analysis of the erosion wear response of borosilicate glass microsphere (BGM)-coated metal specimens subjected to reproducible erosive situations. The coatings are deposited on metal substrates by a plasma spraying route using an atmospheric plasma spray setup working on a nontransferred arc mode. The response of these coatings to solid particle erosion for different test parameters is studied. The erosion test schedule is planned as per Taguchi's experimental design and is carried out under controlled laboratory conditions using an air jet–type erosion tester. The analysis of test results reveals that the impact velocity is the most significant among various factors influencing the erosion wear rate of these coatings. A prediction tool based on artificial neural networks (ANNs) is then implemented to predict the triboperformance of such coatings in regard to their erosion rates under different test conditions. ANN is a technique that takes into account the training, testing, and ...
- Published
- 2014
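Taguchi analysis of wear data ranks factor settings by a signal-to-noise ratio; for an erosion rate, the standard "smaller-the-better" S/N applies. A minimal sketch (the formula is the standard Taguchi one; the sample erosion values are made up):

```python
import math

def sn_smaller_the_better(ys):
    """Taguchi 'smaller-the-better' signal-to-noise ratio in dB:
    S/N = -10 * log10(mean of y^2). Higher S/N means less wear."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

# Illustrative (made-up) erosion-rate replicates for two impact velocities:
low_velocity = [0.8, 0.9, 0.85]
high_velocity = [2.1, 2.4, 2.2]

# Lower erosion yields the higher S/N, so the low-velocity setting is preferred.
assert sn_smaller_the_better(low_velocity) > sn_smaller_the_better(high_velocity)
```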
4. Comment: Cell Motility Models and Inference for Dynamic Systems
- Author
-
Edward L. Ionides
- Subjects
Statistics and Probability ,Discrete mathematics ,Mathematical and theoretical biology ,Models of neural computation ,Series (mathematics) ,Tweedie distribution ,Bayesian probability ,Statistics ,Computational statistics ,Radial basis function ,Statistics, Probability and Uncertainty ,Bayesian inference - Abstract
Chng, E. S., Chen, S., and Mulgrew, B. (1996), “Gradient Radial Basis Function Networks for Nonlinear and Nonstationary Time Series Prediction,” IEEE Transactions on Neural Networks, 7, 190–194. [858] Fruehwirth-Schnatter, S. (1994), “Data Augmentation and Dynamic Linear Models,” Journal of Time Series Analysis, 15, 183–202. [858] Gardiner, C. W. (1985), Handbook of Stochastic Methods, Berlin: Springer. [863] Golightly, A., and Wilkinson, D. J. (2008), “Bayesian Inference for Nonlinear Multivariate Diffusion Models Observed With Error,” Computational Statistics & Data Analysis, 52, 1674–1693. [863] Helmchen, F., and Denk, W. (2005), “Deep Tissue Two-Photon Microscopy,” Nature Methods, 2, 932–940. [855] Holmes, C. C., and Mallick, B. K. (1998), “Bayesian Radial Basis Functions of Variable Dimension,” Neural Computation, 10, 1217–1233. [857] Ionides, E., Fang, K., Rivkah Isseroff, R., and Oster, G. (2004), “Stochastic Models for Cell Motion and Taxis,” Journal of Mathematical Biology, 48(1), 23–37. [855] Miller, M. M., Wei, S. H., Parker, I., and Cahalan, M. D. (2002), “Two-Photon Imaging of Lymphocyte Motility and Dynamic Antigen Responses in Intact Lymph Node,” Science, 296, 1869–1873. [855] Niemi, J., and West, M. (2010), “Adaptive Mixture Modelling Metropolis Methods for Bayesian Analysis of Non-linear State-space Models,” Journal of Computational and Graphical Statistics, 19, 260–280. [858] Okada, T., Miller, M. J., Parker, I., Krummel, M. F., Neighbors, M., Hartley, S. B., O’Garra, A., Cahalan, M. D., and Cyster, J. G. (2005), “Antigen-Engaged B Cells Undergo Chemotaxis Toward the T Zone and Form Motile Conjugates With Helper T Cells,” PLoS Biology, 3, e150. [855] Powell, M. J. D. (1987), “Radial Basis Functions for Multivariable Interpolation: A Review,” in Algorithms for Approximation, eds. J. C. Mason and M. G. Cox, Oxford: Clarendon Press, pp. 143–167. [857] Prado, R., and West, M. (2010), Time Series: Modelling, Computation, and Inference, Boca Raton, FL: Chapman & Hall/CRC Press, Taylor & Francis Group. [856,858] Roberts, G. O., and Stramer, O. (2001), “On Inference for Partially Observed Nonlinear Diffusion Models Using the Metropolis-Hastings Algorithm,” Biometrika, 88, 603. [856] ——— (2002), “Langevin Diffusions and Metropolis-Hastings Algorithms,” Methodology and Computing in Applied Probability, 4, 337–357. [863] Roberts, G. O., and Tweedie, R. L. (1996), “Exponential Convergence of Langevin Distributions and Their Discrete Approximations,” Bernoulli, 2, 341–363. [856] Schienbein, M., and Gruler, H. (1993), “Langevin Equation, Fokker-Planck Equation and Cell Migration,” Bulletin of Mathematical Biology, 55, 585–608. [856] Smith, J. T., Tomfohr, J. K., Wells, M. C., Beebe, T. P., Kepler, T. B., and Reichert, W. M. (2004), “Measurement of Cell Migration on Surface-Bound Fibronectin Gradients,” Langmuir, 20, 8279–8286. [855] West, M., and Harrison, P. J. (1997), Bayesian Forecasting and Dynamic Models (2nd ed.), New York: Springer-Verlag. [856,858] Wynn, W. K. (1981), “Tropic and Taxic Responses of Pathogens to Plants,” Annual Review of Phytopathology, 19, 237–255. [855]
- Published
- 2012
5. Getting symbols out of a neural architecture
- Author
-
John E. Hummel
- Subjects
Cognitive science, Artificial neural network, Computer science, Symbolic communication, Human-Computer Interaction, Models of neural computation, Connectionism, Artificial Intelligence, Perception, Mental representation, Artificial intelligence, Architecture, Software, Coding - Abstract
Traditional connectionist networks are sharply limited as general accounts of human perception and cognition because they are unable to represent relational ideas such as loves (John, Mary) or bigger-than (Volkswagen, breadbox) in a way that allows them to be manipulated as explicitly relational structures. This paper reviews and critiques the four major responses to this problem in the modelling community: (1) reject connectionism (in any form) in favour of traditional symbolic approaches to modelling the mind; (2) reject the idea that mental representations are symbolic (i.e. reject the idea that we can represent relations); and (3) attempt to represent symbolic structures in a connectionist/neural architecture by finding a way to represent role-filler bindings. Approach (3) is further subdivided into (3a) approaches based on varieties of conjunctive coding and (3b) approaches based on dynamic role-filler binding. I will argue that (3b) is necessary to get symbolic processing out of a neural computing architecture. Specifically, I will argue that vector addition is both the best way to accomplish dynamic binding and an essential part of the proper treatment of symbols in a neural architecture.
- Published
- 2011
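The role-filler binding problem the abstract describes can be sketched for the conjunctive-coding family it labels (3a): bind each role to its filler with an outer product and superpose the bindings. The vectors, dimensions, and unbinding operator below are arbitrary illustrative choices, not the paper's proposal:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 64

# Hypothetical random codes for roles and fillers (dimension is arbitrary).
lover, beloved = rng.standard_normal(dim), rng.standard_normal(dim)
john, mary = rng.standard_normal(dim), rng.standard_normal(dim)

# Conjunctive (tensor-product) coding: superpose role-filler outer products
# into a single matrix representing the whole proposition.
loves_john_mary = np.outer(lover, john) + np.outer(beloved, mary)
loves_mary_john = np.outer(lover, mary) + np.outer(beloved, john)

def unbind(binding, role):
    # Project back through a role vector to retrieve (approximately) its filler.
    return binding.T @ role / (role @ role)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# The two propositions are distinct, and each role recovers its own filler:
# loves(John, Mary) and loves(Mary, John) are no longer confusable.
assert corr(unbind(loves_john_mary, lover), john) > 0.8
assert corr(unbind(loves_mary_john, lover), mary) > 0.8
```

The residual crosstalk between bindings in the superposition is one reason the abstract argues conjunctive coding alone is not enough and dynamic binding (3b) is needed.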
6. Towards a cognitive robotics methodology for reward-based decision-making: dynamical systems modelling of the Iowa Gambling Task
- Author
-
Robert Lowe and Tom Ziemke
- Subjects
Human-Computer Interaction, Cognitive science, Models of neural computation, Dynamical systems theory, Artificial Intelligence, Neural substrate, Alternative hypothesis, Cognitive robotics, Psychology, Somatic marker hypothesis, Iowa Gambling Task, Software - Abstract
The somatic marker hypothesis (SMH) posits that the role of emotions and mental states in decision-making manifests through bodily responses to stimuli of import to the organism's welfare. The Iowa Gambling Task (IGT), proposed by Bechara and Damasio in the mid-1990s, has provided the major source of empirical validation to the role of somatic markers in the service of flexible and cost-effective decision-making in humans. In recent years the IGT has been the subject of much criticism concerning: (1) whether measures of somatic markers reveal that they are important for decision-making as opposed to behaviour preparation; (2) the underlying neural substrate posited as critical to decision-making of the type relevant to the task; and (3) aspects of the methodological approach used, particularly on the canonical version of the task. In this paper, a cognitive robotics methodology is proposed to explore a dynamical systems approach as it applies to the neural computation of reward-based learning and issues concerning embodiment. This approach is particularly relevant in light of a strongly emerging alternative hypothesis to the SMH, the reversal learning hypothesis, which links, behaviourally and neurocomputationally, a number of more or less complex reward-based decision-making tasks, including the 'A-not-B' task, already subject to dynamical systems investigations with a focus on neural activation dynamics. It is also suggested that the cognitive robotics methodology may be used to extend systematically the IGT benchmark to more naturalised, but nevertheless controlled, settings that might better explore the extent to which the SMH, and somatic states per se, impact on complex decision-making.
- Published
- 2010
7. Visual Rhetoric: Primary Metaphors and Symmetric Object Alignment
- Author
-
Maria J. Ortiz
- Subjects
Linguistics and Language, Corpus analysis, Computer science, Communication, Experimental and Cognitive Psychology, Visual rhetoric, Models of neural computation, Artificial intelligence, Natural language processing - Abstract
Verbal corpus analysis in various languages and advances in the study of neural computation have revealed that we make use of primary metaphors in our reasoning processes. Very few studies have been carried out, however, regarding manifestations of primary metaphors in visual corpora. In this article the author proposes to observe how primary metaphors might generate visual constructs often used in advertising and characterized by the symmetric alignment of objects. The objectives are twofold: first, to contribute to the existing knowledge of how this pattern functions, and, second, to study the presence of primary metaphors in pictorial advertising.
- Published
- 2010
8. Multilinear models of single cell responses in the medial nucleus of the trapezoid body
- Author
-
Sandra Tolnai, Misha B. Ahrens, Jürgen Jost, Maneesh Sahani, Rudolf Rübsamen, and Bernhard Englitz
- Subjects
Sound localization, Multilinear map, Auditory Pathways, Speech recognition, Models, Neurological, Neuroscience (miscellaneous), Action Potentials, Olivary Nucleus, Stimulus (physiology), Synaptic Transmission, Cochlear nucleus, Models of neural computation, Predictive Value of Tests, Animals, Auditory system, Trapezoid body, Computer Simulation, Neurons, Neural Inhibition, Receptive field, Linear Models, Gerbillinae, Psychology, Neuroscience - Abstract
The representation of acoustic stimuli in the brainstem forms the basis for higher auditory processing. While some characteristics of this representation (e.g. tuning curve) are widely accepted, it remains a challenge to predict the firing rate at high temporal resolution in response to complex stimuli. In this study we explore models for in vivo, single cell responses in the medial nucleus of the trapezoid body (MNTB) under complex sound stimulation. We estimate a family of models, the multilinear models, encompassing the classical spectrotemporal receptive field and allowing arbitrary input-nonlinearities and certain multiplicative interactions between sound energy and its short-term auditory context. We compare these to models of more traditional type, and also evaluate their performance under various stimulus representations. Using the context model, 75% of the explainable variance could be predicted based on a cochlear-like, gamma-tone stimulus representation. The presence of multiplicative contextual interactions strongly reduces certain inhibitory/suppressive regions of the linear kernels, suggesting an underlying nonlinear mechanism, e.g. cochlear or synaptic suppression, as the source of the suppression in MNTB neuronal responses. In conclusion, the context model provides a rich and still interpretable extension over many previous phenomenological models for modeling responses in the auditory brainstem at submillisecond resolution.
- Published
- 2010
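The step from a classical linear STRF to a multiplicative "context" extension can be sketched as follows. The spectrogram is a toy random array and the gain rule (scaling each input by energy in the preceding bin) is a hand-picked stand-in for the paper's fitted context terms:

```python
import numpy as np

rng = np.random.default_rng(0)
T, F, L = 400, 8, 5   # time bins, frequency channels, filter lags

spec = rng.random((T, F))            # toy spectrogram standing in for sound energy
w = rng.standard_normal((L, F))      # spectrotemporal receptive field (STRF)

def strf_rate(spec, w):
    """Classical linear STRF: r(t) = sum over lags/frequencies of w * past input."""
    T = spec.shape[0]
    L = w.shape[0]
    r = np.zeros(T)
    for t in range(L, T):
        r[t] = np.sum(w * spec[t - L:t][::-1])   # w[0] weights the most recent bin
    return r

def context_rate(spec, w, g):
    """Toy multiplicative-context extension: the gain on each input bin is
    modulated by the energy in the immediately preceding bin (suppressive
    for g < 0, as in the abstract's suppression result)."""
    prev = np.vstack([np.zeros((1, spec.shape[1])), spec[:-1]])
    return strf_rate(spec * (1.0 + g * prev), w)

r_linear = strf_rate(spec, w)
r_context = context_rate(spec, w, g=-0.5)
```

With g = 0 the context model collapses exactly to the linear STRF, which is why the paper can treat the classical model as a special case of the multilinear family.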
9. Clustered cortical organization and the enhanced probability of intra-areal functional integration
- Author
-
C. Garret Cooper and Benjamin M. Ramsden
- Subjects
Models, Neurological ,Neuroscience (miscellaneous) ,Domain (software engineering) ,Random Allocation ,Models of neural computation ,Joint probability distribution ,Neural Pathways ,medicine ,Animals ,Cluster Analysis ,Humans ,Visual Pathways ,Functional integration ,Projection (set theory) ,Cell Aggregation ,Probability ,Visual Cortex ,Mathematics ,Network model ,Neurons ,business.industry ,Computational Biology ,Pattern recognition ,Visual cortex ,medicine.anatomical_structure ,Macaca ,Probability distribution ,Artificial intelligence ,Nerve Net ,business - Abstract
Similarly responsive neurons organize into submillimeter-sized clusters (domains) across many neocortical areas, notably in Areas V1 and V2 of primate visual cortex. While this clustered organization may arise from wiring minimization or from self-organizing development, it could potentially support important neural computation benefits. Here, we suggest that domain organization offers an efficient computational mechanism for intra-areal functional integration in certain cortical areas and hypothesize that domain proximity could support a higher-than-expected spatial correlation of their respective terminals, yielding higher probabilities of integration of differing domain preferences. To investigate this hypothesis we devised a spatial model inspired by known parameters of V2 functional organization, where neighboring domains prefer either colored or oriented stimuli. Preference-selective joint probabilities were calculated for model terminal co-occurrence with configurations encompassing diverse domain proximity, shape, and projection. Compared to random distributions, paired neighboring domains (≤1200 μm apart) yielded significantly enhanced coincidence of terminals converging from each domain. Using this reference data, a second larger-scale model indicated that V2 domain organization may accommodate relatively complete sets of intra-areal color/orientation integrations. Together, these data indicate that domain organization could support significant and efficient intra-areal integration of different preferences and suggest further experiments investigating prevalence and mechanisms of domain-mediated intra-areal integration.
- Published
- 2010
10. On the occurrence of stable heteroclinic channels in Lotka–Volterra models
- Author
-
Christian Bick and Mikhail I. Rabinovich
- Subjects
Coupling ,Models of neural computation ,Dynamical systems theory ,Control theory ,General Mathematics ,Structure (category theory) ,Complex system ,Applied mathematics ,Transient (oscillation) ,Computer Science Applications ,Mathematics ,Communication channel - Abstract
The Lotka–Volterra (LV) equations can be used to model the behaviour of complex systems in nature. Trajectories in a stable heteroclinic channel (SHC) describe transient dynamics according to the winnerless competition principle in such a system. The existence of an SHC is guaranteed if the parameters of the LV equations satisfy a number of conditions. We study under what conditions a heteroclinic channel arises in a system where the coupling strengths are chosen randomly. The results describe how the overall structure of the system depends on the length of the channel, which gives an estimate of the possible length of sequences of states in naturally occurring systems.
- Published
- 2009
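The winnerless-competition dynamics the abstract builds on can be simulated directly from the LV equations. The asymmetric inhibition matrix below is the classic three-species heteroclinic-cycle construction; the particular coupling values are illustrative, not the paper's random ensembles:

```python
import numpy as np

# Generalized Lotka-Volterra: dx_i/dt = x_i * (sigma_i - (rho @ x)_i).
sigma = np.ones(3)
rho = np.array([[1.0, 1.5, 0.6],
                [0.6, 1.0, 1.5],
                [1.5, 0.6, 1.0]])   # asymmetric inhibition -> heteroclinic cycle

def euler_step(x, dt=0.01):
    return x + dt * x * (sigma - rho @ x)

x = np.array([0.9, 0.05, 0.05])
traj = np.empty((20000, 3))
for i in range(traj.shape[0]):
    x = euler_step(x)
    traj[i] = x

# Along a stable heteroclinic channel each state transiently dominates in turn,
# and the trajectory switches winners in a fixed sequence.
winners = np.argmax(traj, axis=1)
switches = int(np.sum(np.diff(winners) != 0))
```

The slowdown near each saddle (time spent per winner grows cycle by cycle) is the signature of the heteroclinic structure, as opposed to an ordinary limit cycle.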
11. Neural Computation in Authorship Attribution: The Case of Selected Tamil Articles
- Author
-
M. Bagavandas, Abdul Hameed, and G. Manimannan
- Subjects
Linguistics and Language, Learning vector quantization, Artificial neural network, Computer science, Quantization (signal processing), Codebook, Machine learning, Language and Linguistics, Models of neural computation, Tamil, Artificial intelligence, Attribution - Abstract
Neural networks regard author attribution as a problem of pattern recognition and the proven results of their applications make them promising techniques for the future. Several neural networks are being applied for authorship determination. Learning vector quantization (LVQ) is a neural network technique that develops a codebook of quantization vectors and makes use of these vectors to encode any input vector. In this article an attempt is made to attribute authorship to disputed articles using LVQ and verify them with the results obtained by traditional canonical discriminant analysis. This study demonstrates that statistical methods of attributing authorship can be paired effectively with neural networks to produce a powerful classification tool. Comparisons are made using means of 24 function words identified from the 32 articles written in the Tamil language by three contemporary scholars of great repute to determine the authorship of 23 unattributed articles pertaining to the same period. T...
- Published
- 2009
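The LVQ codebook idea in the abstract can be sketched with LVQ1, the simplest variant: one codebook vector per class, nudged toward same-class inputs and away from different-class inputs. The features below are synthetic stand-ins for function-word rates, not the study's Tamil data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "stylometric" features: two authors with different feature means.
X = np.vstack([rng.normal(0.0, 0.5, (40, 3)), rng.normal(2.0, 0.5, (40, 3))])
y = np.array([0] * 40 + [1] * 40)

# LVQ1 training: move the winning codebook vector toward the input if the
# class matches, away from it otherwise.
codebook = np.array([[0.5, 0.5, 0.5], [1.5, 1.5, 1.5]])
labels = np.array([0, 1])
lr = 0.05
for epoch in range(20):
    for xi, yi in zip(X, y):
        k = int(np.argmin(np.linalg.norm(codebook - xi, axis=1)))
        sign = 1.0 if labels[k] == yi else -1.0
        codebook[k] += sign * lr * (xi - codebook[k])

def classify(x):
    # Assign the label of the nearest codebook vector.
    return int(labels[np.argmin(np.linalg.norm(codebook - x, axis=1))])

accuracy = float(np.mean([classify(xi) == yi for xi, yi in zip(X, y)]))
```

A disputed document would be classified by the same nearest-codebook rule, which is what makes the learned quantization vectors a compact "author profile".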
12. Neural Computation Scheme of Compound Control: Tacit Learning for Bipedal Locomotion
- Author
-
Shingo Shimoda and Hidenori Kimura
- Subjects
Models of neural computation, Artificial neural network, Computer science, Robot, Control engineering, Artificial intelligence, Bipedalism - Abstract
The growing need for controlling complex behaviors of versatile robots working in unpredictable environment has revealed the fundamental limitation of model-based control strategy that requires pre...
- Published
- 2008
13. Inferring input nonlinearities in neural encoding models
- Author
-
Maneesh Sahani, Liam Paninski, and Misha B. Ahrens
- Subjects
Neurons, Artificial neural network, Models, Neurological, Neuroscience (miscellaneous), Linear model, Covariance, Synthetic data, Nonlinear system, Models of neural computation, Nonlinear Dynamics, Control theory, Neural Networks, Computer, Algorithm, Linear filter, Mathematics - Abstract
We describe a class of models that predict how the instantaneous firing rate of a neuron depends on a dynamic stimulus. The models utilize a learnt pointwise nonlinear transform of the stimulus, followed by a linear filter that acts on the sequence of transformed inputs. In one case, the nonlinear transform is the same at all filter lag-times. Thus, this "input nonlinearity" converts the initial numerical representation of stimulus value to a new representation that provides optimal input to the subsequent linear model. We describe algorithms that estimate both the input nonlinearity and the linear weights simultaneously; and present techniques to regularise and quantify uncertainty in the estimates. In a second approach, the model is generalized to allow a different nonlinear transform of the stimulus value at each lag-time. Although more general, this model is algorithmically more straightforward to fit. However, it has many more degrees of freedom than the first approach, thus requiring more data for accurate estimation. We test the feasibility of these methods on synthetic data, and on responses from a neuron in rodent barrel cortex. The models are shown to predict responses to novel data accurately, and to recover several important neuronal response properties.
- Published
- 2008
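The first model class in the abstract (a shared pointwise input nonlinearity followed by a linear filter) can be fitted by alternating least squares once the nonlinearity is expressed on a basis. The sketch below generates synthetic data from the model class and recovers it; the true nonlinearity, filter, and tent-function basis are all illustrative choices, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
T, L, K = 2000, 5, 9

# Synthetic ground truth: rate = linear filter applied to a transformed stimulus.
s = rng.uniform(-1.0, 1.0, T)
f_true = lambda u: np.maximum(u, 0.0)            # made-up input nonlinearity
w_true = np.array([1.0, 0.5, 0.2, 0.1, 0.05])    # made-up linear filter

def lag_stack(M, L):
    """out[t, k, ...] = M[t - k]; the first L (wrapped) rows are dropped."""
    return np.stack([np.roll(M, k, axis=0) for k in range(L)], axis=1)[L:]

r = np.einsum('tk,k->t', lag_stack(f_true(s), L), w_true)
r += 0.05 * rng.standard_normal(T - L)           # observation noise

# Represent the unknown nonlinearity on a tent-function basis, then alternate
# least squares between filter weights w and basis coefficients c.
centers = np.linspace(-1.0, 1.0, K)
width = centers[1] - centers[0]
B = np.maximum(0.0, 1.0 - np.abs(s[:, None] - centers) / width)   # (T, K)
Bl = lag_stack(B, L)                                              # (T-L, L, K)

w = np.ones(L) / L
for _ in range(20):
    c = np.linalg.lstsq(np.einsum('k,tkj->tj', w, Bl), r, rcond=None)[0]
    w = np.linalg.lstsq(np.einsum('tkj,j->tk', Bl, c), r, rcond=None)[0]

pred = np.einsum('tkj,j,k->t', Bl, c, w)
```

The model is bilinear in (w, c) up to an overall scale, so each half-step is an ordinary least-squares problem; this is the structural trick that makes simultaneous estimation tractable.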
14. Inferring the capacity of the vector Poisson channel with a Bernoulli model
- Author
-
Don H. Johnson and Ilan N. Goodman
- Subjects
Artificial neural network, Models, Neurological, Population, Neuroscience (miscellaneous), Poisson distribution, Point process, Binomial Distribution, Channel capacity, Models of neural computation, Statistics, Neural Networks, Computer, Bernoulli process, Neural coding, Algorithm, Mathematics - Abstract
The capacity defines the ultimate fidelity limits of information transmission by any system. We derive the capacity of parallel Poisson process channels to judge the relative effectiveness of neural population structures. Because the Poisson process is equivalent to a Bernoulli process having small event probabilities, we infer the capacity of multi-channel Poisson models from their Bernoulli surrogates. For neural populations wherein each neuron has individual innervation, inter-neuron dependencies increase capacity, the opposite behavior of populations that share a single input. We use Shannon's rate-distortion theory to show that for Gaussian stimuli, the mean-squared error of the decoded stimulus decreases exponentially in both the population size and the maximal discharge rate. Detailed analysis shows that population coding is essential for accurate stimulus reconstruction. By modeling multi-neuron recordings as a sum of a neural population, we show that the resulting capacity is much less than the population's, reducing it to a level that can be less than that provided by two separate neural responses. This result suggests that attempting neural control without spike sorting greatly reduces the achievable fidelity. In contrast, single-electrode neural stimulation does not incur any capacity deficit in comparison to stimulating individual neurons.
- Published
- 2008
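The Bernoulli-surrogate step in the abstract is easy to check numerically: a Poisson process observed in small bins is a Bernoulli process with event probability rate×dt, and its total counts match the Poisson prediction (mean = variance = rate×T). The rate and bin width below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
rate, duration, dt = 40.0, 1.0, 1e-4      # spikes/s, seconds, bin width
n_bins = int(duration / dt)
p = rate * dt                              # small per-bin event probability

# Bernoulli surrogate of a Poisson process: independent small-p events per bin.
trials = rng.random((2000, n_bins)) < p
counts = trials.sum(axis=1)

# For small p, totals are approximately Poisson: mean and variance both near
# rate * duration = 40 (the binomial variance is rate*T*(1-p), negligibly less).
mean, var = counts.mean(), counts.var()
```

This equivalence is what lets the paper compute capacities for the tractable Bernoulli model and carry them over to the Poisson channel.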
15. Hebbian learning in a model with dynamic rate-coded neurons: An alternative to the generative model approach for learning receptive fields from natural scenes
- Author
-
Fred H. Hamker and Jan Wiltschut
- Subjects
Neurons, Computational model, Artificial neural network, Models, Neurological, Neuroscience (miscellaneous), Feed forward, Models, Psychological, Visual processing, Generative model, Hebbian theory, Models of neural computation, Nonlinear Dynamics, Visual Perception, Animals, Humans, Learning, Visual Pathways, Artificial intelligence, Psychology, Photic Stimulation, Network model - Abstract
Most computational models of coding are based on a generative model according to which the feedback signal aims to reconstruct the visual scene as close as possible. We here explore an alternative model of feedback. It is derived from studies of attention and thus, probably more flexible with respect to attentive processing in higher brain areas. According to this model, feedback implements a gain increase of the feedforward signal. We use a dynamic model with presynaptic inhibition and Hebbian learning to simultaneously learn feedforward and feedback weights. The weights converge to localized, oriented, and bandpass filters similar as the ones found in V1. Due to presynaptic inhibition the model predicts the organization of receptive fields within the feedforward pathway, whereas feedback primarily serves to tune early visual processing according to the needs of the task.
- Published
- 2007
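The Hebbian feedforward learning in the abstract can be illustrated with the simplest normalized Hebbian rule, Oja's rule. This is a generic stand-in: the paper's rule uses presynaptic inhibition and also learns feedback weights, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D input ensemble with one dominant direction of variance.
cov = np.array([[3.0, 1.0], [1.0, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

# Oja's rule: Hebbian growth lr*y*x with an implicit normalization term -lr*y^2*w.
w = rng.standard_normal(2)
lr = 0.01
for x in X:
    y = w @ x
    w += lr * y * (x - y * w)

# The weight vector converges to the leading eigenvector of the input
# covariance: the neuron's "receptive field" captures the input's main structure.
principal = np.linalg.eigh(cov)[1][:, -1]
alignment = abs(w @ principal) / np.linalg.norm(w)
```

Pure Hebbian growth without the normalization term diverges; some competitive or normalizing mechanism (here Oja's decay, in the paper presynaptic inhibition) is what makes the learned filters stable and selective.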
16. Dopamine, prediction error and associative learning: A model-based account
- Author
-
Andrew J. A. Smith, Ming Li, Shitij Kapur, and Sue Becker
- Subjects
Dopamine, Models, Neurological, Neuroscience (miscellaneous), Action Potentials, Models, Psychological, Latent inhibition, Models of neural computation, Predictive Value of Tests, Salience (neuroscience), Learning rule, Animals, Humans, Reinforcement learning, Neurons, Motivation, Association Learning, Associative learning, Electrophysiology, Incentive salience, Artificial intelligence, Temporal difference learning, Psychology, Reinforcement, Psychology, Algorithms, Cognitive psychology - Abstract
The notion of prediction error has established itself at the heart of formal models of animal learning and current hypotheses of dopamine function. Several interpretations of prediction error have been offered, including the model-free reinforcement learning method known as temporal difference learning (TD), and the important Rescorla-Wagner (RW) learning rule. Here, we present a model-based adaptation of these ideas that provides a good account of empirical data pertaining to dopamine neuron firing patterns and associative learning paradigms such as latent inhibition, Kamin blocking and overshadowing. Our departure from model-free reinforcement learning also offers: 1) a parsimonious distinction between tonic and phasic dopamine functions; 2) a potential generalization of the role of phasic dopamine from valence-dependent "reward" processing to valence-independent "salience" processing; 3) an explanation for the selectivity of certain dopamine manipulations on motivation for distal rewards; and 4) a plausible link between formal notions of prediction error and accounts of disturbances of thought in schizophrenia (in which dopamine dysfunction is strongly implicated). The model distinguishes itself from existing accounts by offering novel predictions pertaining to the firing of dopamine neurons in various untested behavioral scenarios.
- Published
- 2006
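The prediction-error idea at the core of the abstract is easiest to see in the Rescorla-Wagner rule it cites, where Kamin blocking falls out directly (the learning rate and trial counts below are arbitrary):

```python
# Rescorla-Wagner: each cue present on a trial is updated by
# lr * (reward - total prediction from all present cues).
lr = 0.1
V = {"A": 0.0, "B": 0.0}

# Phase 1: cue A alone is paired with reward; V["A"] approaches 1.
for _ in range(100):
    V["A"] += lr * (1.0 - V["A"])

# Phase 2: the compound A+B is paired with the same reward.
for _ in range(100):
    error = 1.0 - (V["A"] + V["B"])   # A already predicts the reward
    V["A"] += lr * error
    V["B"] += lr * error

# Blocking: B acquires almost no associative strength, because the shared
# prediction error in phase 2 is already near zero.
print(round(V["A"], 3), round(V["B"], 3))  # 1.0 0.0
```

Temporal difference learning generalizes this same error signal across time steps; the paper's model-based account keeps the error signal but changes how the prediction is computed.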
17. The Brain's concepts: the role of the Sensory-motor system in conceptual knowledge
- Author
-
George Lakoff and Vittorio Gallese
- Subjects
Sensory motor system ,Cognitive science ,Structure (mathematical logic) ,Cognitive Neuroscience ,Inference ,Experimental and Cognitive Psychology ,Cognition ,Neuropsychology and Physiological Psychology ,Models of neural computation ,Arts and Humanities (miscellaneous) ,Action (philosophy) ,Developmental and Educational Psychology ,Philosophical theory ,Psychology ,Cognitive linguistics - Abstract
Concepts are the elementary units of reason and linguistic meaning. They are conventional and relatively stable. As such, they must somehow be the result of neural activity in the brain. The questions are: Where? and How? A common philosophical position is that all concepts, even concepts about action and perception, are symbolic and abstract, and therefore must be implemented outside the brain's sensory-motor system. We will argue against this position using (1) neuroscientific evidence; (2) results from neural computation; and (3) results about the nature of concepts from cognitive linguistics. We will propose that the sensory-motor system has the right kind of structure to characterise both sensory-motor and more abstract concepts. Central to this picture are the neural theory of language and the theory of cogs, according to which, brain structures in the sensory-motor regions are exploited to characterise the so-called "abstract" concepts that constitute the meanings of grammatical constructions and general inference patterns.
- Published
- 2005
18. A DISTRIBUTED NEURAL APPROACH FOR CAUSAL REASONING USING COOPERATIVE AND COMPETITIVE NEURAL COMPUTATIONS
- Author
-
Lotfi Ben Romdhane and B. Ayeb
- Subjects
Computer science, Computation, Models of neural computation, Artificial Intelligence, Causal reasoning, Artificial intelligence, Gradient descent, Energy function - Abstract
In this work, we develop a neural model to solve causal reasoning problems (also called abduction) in the open, independent, and incompatibility classes. We model the reasoning process by a single, global energy function using cooperative and competitive neural computation. The update rules of the distinct connections of the network are derived from its energy function, using gradient descent techniques. Simulation results reveal good performance of the model.
- Published
- 2005
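The "single global energy function minimized by gradient descent" scheme in the abstract can be sketched on a toy abduction problem. The causal network, the penalty weight, and the quadratic energy below are invented for illustration; the paper's cooperative/competitive formulation is more elaborate:

```python
import numpy as np

# Toy abduction: choose hypothesis activations h (in [0,1]) explaining effects.
# M[i, j] = 1 if hypothesis j can cause effect i (an invented causal network).
M = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0]])
observed = np.array([1.0, 1.0, 0.0])

lam = 0.5   # competition term penalizing superfluous hypotheses

def grad(h):
    # Gradient of E(h) = 0.5 * ||observed - M h||^2 + lam * sum(h).
    return -M.T @ (observed - M @ h) + lam

h = np.full(3, 0.5)
for _ in range(500):
    h = np.clip(h - 0.1 * grad(h), 0.0, 1.0)   # projected gradient descent

# Hypothesis 0 explains both observed effects by itself, so it wins;
# hypothesis 1 is suppressed because it would predict the absent effect 2.
```

The reconstruction term plays the cooperative role (hypotheses jointly cover the observations) and the penalty plays the competitive one (parsimonious explanations beat redundant ones), mirroring the cooperative/competitive split the abstract describes.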
19. Fixational instability and natural image statistics: Implications for early visual representations
- Author
-
Antonino Casile and Michele Rucci
- Subjects
Visual perception, Eye Movements, Computer science, Models, Neurological, Neuroscience (miscellaneous), Fixation, Ocular, Visual system, Lateral geniculate nucleus, Models of neural computation, Statistics, Animals, Humans, Learning, Contrast (vision), Computer Simulation, Visual Pathways, Computer vision, Neurons, Models, Statistical, Geniculate Bodies, Gaze, Saccade, Artificial intelligence, Nerve Net, Photic Stimulation - Abstract
Under natural viewing conditions, small movements of the eye, head and body prevent the maintenance of a steady direction of gaze. It is known that stimuli tend to fade when they are stabilized on the retina for several seconds. However, it is unclear whether the physiological motion of the retinal image serves a visual purpose during the brief periods of natural visual fixation. This study examines the impact of fixational instability on the statistics of the visual input to the retina and on the structure of neural activity in the early visual system. We show that fixational instability introduces a component in the retinal input signals that, in the presence of natural images, lacks spatial correlations. This component strongly influences neural activity in a model of the LGN. It decorrelates cell responses even if the contrast sensitivity functions of simulated cells are not perfectly tuned to counter-balance the power-law spectrum of natural images. A decorrelation of neural activity at the early stages of the visual system has been proposed to be beneficial for discarding statistical redundancies in the input signals. The results of this study suggest that fixational instability might contribute to the establishment of efficient representations of natural stimuli.
- Published
- 2005
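The decorrelation claim in the abstract can be reproduced in one dimension: a power-law-correlated "image" is strongly correlated across space, while the motion-driven component introduced by small jitter is nearly white. The 1-D construction, shift size, and lag are illustrative choices, not the study's retinal model:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096

# 1-D stand-in for a natural image: random phases with a 1/f amplitude
# spectrum, giving the long-range correlations typical of natural scenes.
freqs = np.fft.rfftfreq(N, 1.0)
amp = np.zeros_like(freqs)
amp[1:] = 1.0 / freqs[1:]
image = np.fft.irfft(amp * np.exp(2j * np.pi * rng.random(len(freqs))), n=N)

def autocorr(x, lag):
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

# Fixational jitter: the retina sees small displacements of the image; the
# motion-driven part of the input is the difference between successive views.
shift = 3
motion_component = np.roll(image, shift) - image

lag = 50
c_image = autocorr(image, lag)            # strongly correlated
c_motion = autocorr(motion_component, lag)  # nearly decorrelated
```

Differencing over a small shift acts as a spatial high-pass filter, which is exactly what flattens the power-law spectrum and removes the spatial correlations, as the abstract argues.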
20. How does Hebb both ‘divide’ and ‘conquer’ speech perception and production?
- Author
-
Kristof Strijkers, Laboratoire de psychologie cognitive (LPC), Centre National de la Recherche Scientifique (CNRS)-Aix Marseille Université (AMU)
- Subjects
Cognitive science, Linguistics and Language, Speech production, Speech perception, Computer science, Cognitive Neuroscience, Experimental and Cognitive Psychology, Phonology, Semantics, Language and Linguistics, Models of neural computation, Hebbian theory, Artificial intelligence, Natural language processing - Abstract
A particular merit of Hickok's integrated (H)SFC model lies in the mechanistic detail with which psycholinguistic, motor control and neuroscientific perspectives on speech production are bridged, offering a much-welcomed alternative to traditional views on how concepts are translated to speech in the human mind. However, there seems to be an inconsistency between the dynamics engendering the mature (H)SFC model and those which allow the model to develop this architecture. Concretely, I question how the proposed (H)SFC model, which becomes functionally connected through Hebbian-like learning, provides the adult speaker with a brain network where ventral and dorsal streams dissociate.
- Published
- 2013
21. Neural network model to generate head swing in locomotion of Caenorhabditis elegans
- Author
-
Ryuzo Shingai and Kazumi Sakata
- Subjects
Time Factors ,Models, Neurological ,Neural Conduction ,Neuroscience (miscellaneous) ,Inhibitory postsynaptic potential ,Synaptic Transmission ,Ion Channels ,Membrane Potentials ,Synapse ,Models of neural computation ,medicine ,Animals ,Premovement neuronal activity ,Computer Simulation ,Caenorhabditis elegans ,Neurons ,Membrane potential ,Chemistry ,Muscles ,Neural Inhibition ,Forward locomotion ,Electric Stimulation ,medicine.anatomical_structure ,nervous system ,Synapses ,Excitatory postsynaptic potential ,Neural Networks, Computer ,Neuron ,Head ,Mechanoreceptors ,Neuroscience ,Locomotion - Abstract
Computer simulation of the neural network composed of the head neurons of Caenorhabditis elegans was performed to reconstruct the realistic changes in the membrane potential of motoneurons in swinging the head for coordinated forward locomotion. The model neuron had ion channels for calcium and potassium, whose parameters were obtained by fitting the experimental data. Transmission properties of the chemical synapses were set as graded. The neural network involved in forward movement was extracted by tracing the neuronal activity flow upstream from the motoneurons connected to the head muscles. Simulations were performed with datasets that included all combinations of the excitatory and inhibitory properties of the neurons. In this model, a pulse input entered only from motoneuron VB1, and activation of the stretch receptors on SAA neurons was necessary for the periodic bending. The synaptic output property of each neuron was estimated for the alternate contraction of the dorsal and ventral muscles. The AIB neuron was excitatory, the RIV and SMD neurons seemed to be excitatory, and the RMD and SAA neurons seemed to be inhibitory. With datasets violating Dale's principle for the SMB neuron, the AIB neuron was excitatory and the RMD neuron was inhibitory. RIA, RIV and SMD neurons seemed to be excitatory.
- Published
- 2004
22. Detecting dynamical changes within a simulated neural ensemble using a measure of representational quality
- Author
-
A. David Redish and Jadin C. Jackson
- Subjects
Models of neural computation ,Quantitative Biology::Neurons and Cognition ,Neural ensemble ,Artificial neural network ,Computation ,Neuroscience (miscellaneous) ,Dynamical system (definition) ,Representation (mathematics) ,Algorithm ,Measure (mathematics) ,Network model ,Mathematics - Abstract
Technological advances allowing simultaneous recording of neuronal ensembles have led to many developments in our understanding of how the brain performs neural computations. One key technique for extracting information from neural populations has been population reconstruction. While reconstruction is a powerful tool, it only provides a value and gives no indication of the quality of the representation itself. In this paper, we present a mathematically and statistically justified measure for assessing the quality of a representation in a neuronal ensemble. Using a simulated neural network, we show that this measure can distinguish between system states and identify moments of dynamical change within the system. While the examples used in this paper all derive from a standard network model, the measure itself is very general. It requires only a representational space, measured tuning curves, and neural ensembles.
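A simplified stand-in for such a quality measure (not the statistic derived in the paper): decode the ensemble by maximum likelihood against measured tuning curves, then score how well the observed firing pattern matches the firing expected at the decoded value. Cell count, tuning widths, and rates below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells = 32
space = np.linspace(0, 1, 101)             # 1-D represented variable
centers = np.linspace(0, 1, n_cells)

def tuning(x):
    # Gaussian tuning curves, peak rate 10 spikes per time bin
    return 10 * np.exp(-(x - centers) ** 2 / (2 * 0.05 ** 2))

def decode(counts):
    # Poisson maximum-likelihood reconstruction over the space
    ll = [np.sum(counts * np.log(tuning(x) + 1e-9) - tuning(x)) for x in space]
    return space[int(np.argmax(ll))]

def quality(counts):
    # Agreement between observed counts and the counts expected at the
    # decoded value; high only when the ensemble is self-consistent
    expected = tuning(decode(counts))
    return np.corrcoef(counts, expected)[0, 1]

coherent = rng.poisson(tuning(0.4))        # ensemble encoding x = 0.4
scrambled = rng.permutation(coherent)      # same counts, wrong cells
print(quality(coherent), quality(scrambled))
```

Reconstruction alone returns a value for both ensembles; only the quality score distinguishes the coherent state from the scrambled one, which is the gap the paper's measure is designed to fill.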
- Published
- 2003
23. High-frequency, depressing inhibition facilitates synchronization in globally inhibitory networks
- Author
-
A Bose and S Kunec
- Subjects
Models of neural computation ,medicine.anatomical_structure ,Interneuron ,Chemistry ,Ripple ,Neuroscience (miscellaneous) ,Extracellular ,medicine ,Excitatory postsynaptic potential ,Hippocampus ,Inhibitory postsynaptic potential ,Neuroscience ,Positive feedback - Abstract
Motivated by the study of sharp wave-associated ripples, high-frequency (∼200 Hz) extracellular field oscillations observed in the CA1 region of the rat hippocampus during slow-wave sleep and periods of behavioural immobility, we consider a single inhibitory neuron synapsing onto a network of uncoupled, excitatory neurons. The inhibitory synapse is depressing and has a small synaptic delay. Each excitatory cell provides instantaneous, positive feedback to the inhibitory cell. We show that the interneuron can rapidly synchronize the action potentials of the pyramidal cells if the frequency of inhibitory input is increased in a ramp-like manner as occurs during the ripple. We show that the basin of attraction of the synchronous solution is larger when the inhibition frequency is gradually increased as opposed to remaining constant.
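The general mechanism at work, shared inhibition contracting the dispersion of uncoupled cells, can be illustrated with a toy calculation. This is not the paper's conductance-based model with depression and ramping frequency; the reversal potential and pulse strength below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
# Fifty uncoupled excitatory cells at random points of their firing
# cycle, summarised by their membrane potentials (arbitrary units)
v = rng.uniform(0.0, 1.0, 50)
E_inh = -0.2      # inhibitory reversal potential (assumed)
g = 1.5           # effective strength of one inhibitory pulse (assumed)

spread_before = v.max() - v.min()
# A pulse shared by all cells pulls every potential toward E_inh,
# contracting the differences between cells by a factor exp(-g)
for pulse in range(5):
    v = E_inh + (v - E_inh) * np.exp(-g)
spread_after = v.max() - v.min()
print(spread_before, spread_after)
```

Because the contraction is multiplicative, a handful of pulses collapses the initial dispersion, which is why a common inhibitory input can synchronize cells that never interact directly.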
- Published
- 2003
24. Measuring linear and quadratic contributions to neuronal response
- Author
-
Duane Q. Nykamp
- Subjects
Quantitative Biology::Neurons and Cognition ,Linear system ,Neuroscience (miscellaneous) ,Linear model ,Sensory system ,Stimulus (physiology) ,Complex cell ,Visual cortex ,medicine.anatomical_structure ,Models of neural computation ,medicine ,Orthonormal basis ,Algorithm ,Neuroscience ,Mathematics - Abstract
We present a method to dissociate the sign-dependent (linear or odd-order) response from the sign-independent (quadratic or even-order) response of a neuron to sequences of random orthonormal stimulus elements. The method is based on a modification of the classical linear–nonlinear model of neural response. The analysis produces estimates of the stimulus features to which the neuron responds in a sign-dependent manner, the stimulus features to which the neuron responds in a sign-independent manner and the relative weight of the sign-independent response. We propose that this method could be used to characterize simple and complex cells in the primary visual cortex.

A highly idealized model of neuronal response to a stimulus is the linear–nonlinear model. In this model, spiking probability is a linear function of the stimulus, composed with a sigmoidal nonlinearity to ensure nonnegative probabilities. The linear–nonlinear neuron behaves essentially like a linear system. It has virtually opposite responses to stimuli with opposite signs. Its response to a sum of stimuli can be largely predicted by its response to each stimulus individually. Clearly most neurons, even in primary sensory regions, are not well represented by a linear–nonlinear model. In the primary visual cortex, for example, only simple cells respond similarly to a linear–nonlinear model. Complex cell response is more fundamentally nonlinear and cannot be captured by a linear–nonlinear model. One feature of complex cells is their indifference to the contrast sign of the visual stimulus. For example, an idealized complex cell responds similarly to a black or a white bar on a grey background. We extend the linear–nonlinear framework to capture this sign-independent response. We allow the neuron’s response to be a linear function not only of the stimulus values
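The sign-dependent versus sign-independent distinction the abstract draws can be made concrete with a minimal sketch: a linear–nonlinear (simple-cell-like) response flips with stimulus contrast, while an even-order (complex-cell-like) response does not. The filter, stimulus dimension, and sigmoid below are illustrative, not the paper's estimation procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 16
w = rng.standard_normal(d)          # hypothetical preferred stimulus feature

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def ln_response(s):
    # Linear-nonlinear model: sign-dependent (odd-order) response
    return sigmoid(w @ s)

def quadratic_response(s):
    # Even-order response: indifferent to the contrast sign of s
    return (w @ s) ** 2

s = rng.standard_normal(d)          # a random stimulus
print(ln_response(s), ln_response(-s))                # sign-dependent
print(quadratic_response(s), quadratic_response(-s))  # identical
```

Flipping the stimulus sign mirrors the LN response around its midpoint but leaves the quadratic response unchanged, which is exactly the property the proposed method exploits to separate the two contributions.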
- Published
- 2003
25. Network capacity analysis for latent attractor computation
- Author
-
Ali A. Minai and Simona Doboli
- Subjects
Signal processing ,Quantitative Biology::Neurons and Cognition ,Artificial neural network ,business.industry ,Model of computation ,Computation ,MathematicsofComputing_NUMERICALANALYSIS ,Neuroscience (miscellaneous) ,Content-addressable memory ,Models of neural computation ,Hebbian theory ,Attractor ,Artificial intelligence ,business ,Mathematics - Abstract
Attractor networks have been one of the most successful paradigms in neural computation, and have been used as models of computation in the nervous system. Recently, we proposed a paradigm called ‘latent attractors’, in which attractors embedded in a recurrent network via Hebbian learning are used to channel network response to external input rather than becoming manifest themselves. This allows the network to generate context-sensitive internal codes in complex situations. Latent attractors are particularly helpful in explaining computations within the hippocampus, a brain region of fundamental significance for memory and spatial learning. Latent attractor networks are a special case of associative memory networks. The model studied here consists of a two-layer recurrent network with attractors stored in the recurrent connections using a clipped Hebbian learning rule. Firing in both layers is competitive (K-winners-take-all): the number of neurons allowed to fire, K, is smaller than the size of the ac...
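The storage scheme the abstract names, clipped Hebbian weights combined with K-winners-take-all firing, can be sketched for a single-layer associative memory. The two-layer latent-attractor architecture itself is not reproduced here, and the network sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, K, P = 200, 20, 5            # neurons, winners per step, stored patterns

# Random K-sparse binary patterns
patterns = np.zeros((P, n))
for p in patterns:
    p[rng.choice(n, K, replace=False)] = 1.0

# Clipped Hebbian rule: a weight is 1 if the pair ever co-fired, else 0
W = np.clip(patterns.T @ patterns, 0, 1)
np.fill_diagonal(W, 0)

def kwta(x, k=K):
    # K-winners-take-all: only the k most activated units fire
    out = np.zeros_like(x)
    out[np.argsort(x)[-k:]] = 1.0
    return out

# Degrade a stored pattern, then let the attractor dynamics clean it up
cue = patterns[0].copy()
cue[rng.choice(np.flatnonzero(cue), 5, replace=False)] = 0.0
state = kwta(W @ cue)
for _ in range(5):
    state = kwta(W @ state)
print(int(state @ patterns[0]))  # overlap with the stored pattern
```

The competitive K-WTA step is what bounds activity at each iteration; in the latent-attractor setting the same stored structure biases responses to input rather than being recalled outright.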
- Published
- 2003
26. Analysis of the elastic net model applied to the formation of ocular dominance and orientation columns
- Author
-
Andrei Cimponeriu and Geoffrey J. Goodhill
- Subjects
Elastic net regularization ,Orientation column ,business.industry ,Orientation (computer vision) ,Mathematical analysis ,Neuroscience (miscellaneous) ,Function (mathematics) ,Ocular dominance ,Optics ,Bifurcation theory ,Models of neural computation ,Visual cortex ,medicine.anatomical_structure ,medicine ,business ,Mathematics - Abstract
The development and structure of orientation (OR) and ocular dominance (OD) maps in the primary visual cortex of cats and monkeys can be modelled using the elastic net algorithm, which attempts to find an 'optimal' cortical representation of the input features. Here we analyse this behaviour in terms of parameters of the feature space. We derive expressions for the OR periodicity, and the first bifurcation point as a function of the annealing parameter using the methods of Durbin et al (Durbin R, Szeliski R and Yuille A 1989 Neural Computation 1 348-58). We also investigate the effect of the relative order of OR and OD development on overall map structure. This analysis suggests that developmental order can be predicted from the final OR and OD periodicities. In conjunction with experimentally measured values for these periodicities, the model predicts that (i) in normal macaques OD develops first, (ii) in normal cats OR develops first and (iii) in strabismic cats OD develops first.
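A minimal 1-D version of the elastic net update analysed in the paper (soft matching of feature points plus a tension term, with gradual annealing) can be sketched as follows. The feature space, point counts, and parameter values are illustrative assumptions, not the cortical-map parameterization of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 40, 8                     # cortical points, feature points
x = rng.uniform(0, 1, N)         # feature points in a 1-D feature space
y = np.linspace(0, 1, M)         # cortical sheet, initialised as a line
kappa = 0.2                      # annealing parameter
alpha, beta = 0.2, 2.0           # matching and elasticity weights

for step in range(200):
    # Soft assignment of each feature point to nearby cortical points
    d2 = (x[None, :] - y[:, None]) ** 2
    w = np.exp(-d2 / (2 * kappa ** 2))
    w /= w.sum(axis=0, keepdims=True)
    # Elastic-net update: attraction to matched features plus tension
    match = (w * (x[None, :] - y[:, None])).sum(axis=1)
    tension = np.roll(y, -1) - 2 * y + np.roll(y, 1)
    tension[0] = tension[-1] = 0.0          # free end points
    y = y + alpha * match + beta * kappa * tension
    kappa *= 0.99                            # gradual annealing

# Every feature point should end up close to some cortical point
err = np.abs(x[None, :] - y[:, None]).min(axis=0).max()
print(err)
```

The annealing parameter kappa plays the role analysed in the paper: the first bifurcation of the smooth solution occurs as kappa is lowered, which is where map structure begins to form.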
- Published
- 2000
27. Competitive Learning and its Application in Adaptive Vision for Autonomous Mobile Robots
- Author
-
Howard C. Card and Dean K. McNeill
- Subjects
Signal processing ,business.industry ,Computer science ,Competitive learning ,Mobile robot ,Robotics ,Machine learning ,computer.software_genre ,Motion (physics) ,Task (project management) ,Human-Computer Interaction ,Models of neural computation ,Artificial Intelligence ,Pattern recognition (psychology) ,Artificial intelligence ,business ,computer ,Software - Abstract
The task of providing robust vision for autonomous mobile robots is a complex signal processing problem that cannot be solved using traditional deterministic computing techniques. In this article we investigate four unsupervised neural learning algorithms, known collectively as competitive learning, in order to assess both their theoretical operation and their ability to learn to represent a basic robotic vision task. This task involves the ability of a modest robotic system to identify the components of basic motion and to generalize from that learned knowledge to correctly classify novel visual experiences. This investigation shows that standard competitive learning and the DeSieno version of frequency-sensitive competitive learning (FSCL) are unsuitable for solving this problem. Soft competitive learning, while capable of producing an appropriate solution, is too computationally expensive in its present form to be used under the constraints of this application. However, the Krishnamurthy version of FS...
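The standard competitive learning rule the article evaluates (winner-take-all, with only the winning unit's weights moving toward the sample) can be sketched on synthetic data. The clusters, unit count, and learning rate are illustrative assumptions; on hard problems this rule suffers from the dead-unit issue that motivates the FSCL variants discussed above:

```python
import numpy as np

rng = np.random.default_rng(5)
# Three well-separated 2-D clusters as stand-ins for visual feature classes
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
data = np.vstack([c + 0.3 * rng.standard_normal((100, 2)) for c in centers])
rng.shuffle(data)

def qerror(W):
    # Mean squared distance from each sample to its nearest unit
    d = ((data[:, None, :] - W[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).mean()

# Standard competitive learning: only the winning unit's weights move
W = rng.uniform(0, 5, (3, 2))
before = qerror(W)
eta = 0.05
for epoch in range(20):
    for s in data:
        winner = np.argmin(((W - s) ** 2).sum(axis=1))
        W[winner] += eta * (s - W[winner])
after = qerror(W)
print(before, after)   # quantization error drops during training
```

Frequency-sensitive variants modify only the winner-selection step (penalizing units that win too often), leaving the weight update above unchanged.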
- Published
- 1999
28. A neural network based system for predicting earthmoving production
- Author
-
Jonathan Shi
- Subjects
Engineering ,Artificial neural network ,business.industry ,Time delay neural network ,Computation ,Sample (statistics) ,Building and Construction ,Machine learning ,computer.software_genre ,Industrial and Manufacturing Engineering ,Backpropagation ,Management Information Systems ,Models of neural computation ,Artificial intelligence ,User interface ,business ,computer ,Bitwise operation - Abstract
An artificial neural-network-based system (NN earth) is developed for construction practitioners as a simple tool for predicting earthmoving operations, which are modelled by backpropagation neural networks with four expected parameters and seven affecting factors. These networks are then trained using the data patterns obtained from simulation because there are insufficient data available from industrial sources. The trained network is then incorporated as the computation engine of NN earth. To engender confidence in the results of neural computation, a validation function is implemented in NN earth to allow the user to apply the engine to historic cases prior to applying it to a new project. An equipment database is also implemented in NN earth to provide default information, such as internal cost rate, fuel cost, and operator's cost. User interfaces are developed to facilitate inputting project information and manipulating the system. The major functions and use of NN earth are illustrated in a sample...
- Published
- 1999
29. Neural computation for optimum power hybrid circuit design
- Author
-
Andrzej Kos and G. De Mey
- Subjects
Engineering ,Artificial neural network ,business.industry ,Circuit design ,Hardware_PERFORMANCEANDRELIABILITY ,computer.software_genre ,Power (physics) ,Models of neural computation ,Distribution (mathematics) ,Control theory ,Power electronics ,Hardware_INTEGRATEDCIRCUITS ,Electronic engineering ,Computer Aided Design ,Electrical and Electronic Engineering ,business ,Gradient method ,computer - Abstract
The optimal placement of power components on a hybrid circuit is considered. The classical gradient method is compared to a neural net approach, in order to get an optimal temperature distribution on the substrate.
- Published
- 1994
30. ACOUSTIC COMMUNICATION AND AUDITORY NEURAL COMPUTATION IN SOUND-PRODUCING FISH
- Author
-
John D. Crawford
- Subjects
geography ,Models of neural computation ,geography.geographical_feature_category ,Ecology ,Computer science ,Acoustics ,Speech recognition ,Fish ,Ecology, Evolution, Behavior and Systematics ,Sound (geography) - Published
- 2002
31. The many facets of adaptation in fly visual motion processing
- Author
-
Rafael Kurtz
- Subjects
Computer science ,business.industry ,Visually guided ,Motion detection ,Visual motion processing ,Stimulus (physiology) ,Article Addendum ,Models of neural computation ,Obstacle avoidance ,Structure from motion ,Computer vision ,Artificial intelligence ,Neuronal adaptation ,General Agricultural and Biological Sciences ,business - Abstract
Neuronal adaptation has been studied extensively in visual motion-sensitive neurons of the fly Calliphora vicina, a model system in which the computational principles of visual motion processing are accessible at the single-cell level. As evidenced by several recent papers, the original idea that motion adaptation adjusts velocity coding to the current stimulus range by a simple parameter change in the motion detection scheme had to be dismissed. On the contrary, linear encoding of velocity modulations and total information rates may even decrease in the course of adaptation. Thus it seems that, rather than improving absolute velocity encoding, motion adaptation may facilitate the efficient extraction of those features of the visual input signal that are most relevant for visually guided course control and obstacle avoidance.
- Published
- 2009
32. Visual display units versus visual computation
- Author
-
Arnold J. Wilkins
- Subjects
Brightness ,Spectral power distribution ,Cathode ray tube ,business.industry ,Astrophysics::High Energy Astrophysical Phenomena ,Computation ,General Social Sciences ,Eye movement ,Phosphor ,law.invention ,Image (mathematics) ,Human-Computer Interaction ,Models of neural computation ,Arts and Humanities (miscellaneous) ,law ,Developmental and Educational Psychology ,Computer vision ,Artificial intelligence ,Psychology ,business - Abstract
Vision is the result of complex neural computation. It is argued that cathode ray tube displays make the neural computation more complex than it needs to be because (1) they pulsate in brightness; (2) they present a visual image which is spatially periodic but which demands precise control of eye movement; and (3) the spectral power distribution of light emitted by the phosphor is uneven.
- Published
- 1991