11 results for "Cottrell, Garrison W."
Search Results
2. A probabilistic model of eye movements in concept formation
- Author: Nelson, Jonathan D. and Cottrell, Garrison W.
- Published: 2007
3. Phase space learning in an autonomous dynamical neural network
- Author: Inazawa, Hiroshi and Cottrell, Garrison W.
- Published: 2006
4. DeePr-ESN: A deep projection-encoding echo-state network.
- Author: Ma, Qianli, Shen, Lifeng, and Cottrell, Garrison W.
- Subjects: Recurrent neural networks; Time series analysis; Digital video; Deep learning
- Highlights:
  • We develop a novel hierarchical reservoir computing (RC) framework, the Deep Projection-encoding Echo State Network (DeePr-ESN), based on projection-encodings between reservoirs. It combines the merits of reservoir computing and deep learning and bridges the gap between them.
  • By unsupervised encoding of echo states layer by layer, the proposed DeePr-ESN not only provides more robust generalization performance than existing methods, but also captures richer multiscale dynamics than other hierarchical RC models.
  • Compared with existing hierarchical reservoir computing models, the DeePr-ESN achieves better performance on well-known chaotic time series modeling tasks and on several real-world time series prediction tasks.
- Abstract: As a recurrent neural network that requires no training, the reservoir computing (RC) model has attracted widespread attention in the last decade, especially in the context of time series prediction. However, most time series have a multiscale structure, which a single-hidden-layer RC model may have difficulty capturing. In this paper, we propose a novel multiple projection-encoding hierarchical reservoir computing framework called the Deep Projection-encoding Echo State Network (DeePr-ESN). The most distinctive feature of our model is its ability to learn multiscale dynamics through stacked ESNs connected via subspace projections. Specifically, when an input time series is projected into the high-dimensional echo-state space of a reservoir, a subsequent encoding layer (e.g., an autoencoder or PCA) projects the echo-state representations into a lower-dimensional feature space. These representations are the principal components of the echo states, which removes their high-frequency components; they can then be processed by another ESN through random connections. By using projection layers and encoding layers alternately, the DeePr-ESN provides much more robust generalization performance than previous methods, and fully exploits the temporal kernel property of ESNs to encode the multiscale dynamics of time series. In our experiments, DeePr-ESNs outperform both standard ESNs and existing hierarchical reservoir computing models on artificial and real-world time series prediction tasks.
- Published: 2020
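The abstract above describes the DeePr-ESN as alternating reservoir "projection" layers with unsupervised "encoding" layers (e.g., PCA), followed by a trained readout. The sketch below is a minimal, illustrative reconstruction of that data flow in plain NumPy; the two-layer depth, layer sizes, ridge readout, and toy sine-wave task are assumptions for the example, not the authors' published configuration.

```python
# A minimal sketch (assumed configuration, not the published one) of the DeePr-ESN
# data flow: reservoir projection -> PCA encoding, repeated, then a ridge readout.
import numpy as np

rng = np.random.default_rng(0)

def run_reservoir(U, n_res=100, spectral_radius=0.9):
    """Project an input series U of shape (T, d) into a high-dimensional echo-state space."""
    T, d = U.shape
    W_in = rng.uniform(-0.5, 0.5, (n_res, d))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # scale toward the echo-state property
    x, states = np.zeros(n_res), np.empty((T, n_res))
    for t in range(T):
        x = np.tanh(W_in @ U[t] + W @ x)   # untrained recurrent dynamics
        states[t] = x
    return states

def pca_encode(states, k=10):
    """Unsupervised encoding layer: keep the top-k principal components of the echo states."""
    centered = states - states.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:k].T

# Toy task: one-step-ahead prediction of a sine wave.
series = np.sin(0.1 * np.arange(501))
u, y = series[:-1].reshape(-1, 1), series[1:]

feats = u
for _ in range(2):  # alternate projection (reservoir) and encoding (PCA) layers
    feats = pca_encode(run_reservoir(feats), k=10)

# Supervised ridge-regression readout on the last layer's features.
lam = 1e-6
W_out = np.linalg.solve(feats.T @ feats + lam * np.eye(feats.shape[1]), feats.T @ y)
print("train MSE:", float(np.mean((feats @ W_out - y) ** 2)))
```

Only the final readout is fit here; the reservoirs stay random and the PCA encodings are unsupervised, which mirrors the layer-by-layer encoding idea the abstract emphasizes.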
5. Why is the fusiform face area recruited for novel categories of expertise? A neurocomputational investigation
- Author: Tong, Matthew H., Joyce, Carrie A., and Cottrell, Garrison W.
- Subjects: Brain research; Neurons; Face; Visual perception
- Abstract: What is the role of the Fusiform Face Area (FFA)? Is it specific to face processing, or is it a visual expertise area? The expertise hypothesis is appealing due to a number of studies showing that the FFA is activated by pictures of objects within the subject's domain of expertise (e.g., cars for car experts, birds for birders, etc.), and that activation of the FFA increases as new expertise is acquired in the lab. However, it is incumbent upon the proponents of the expertise hypothesis to explain how it is that an area that is initially specialized for faces becomes recruited for new classes of stimuli. We dub this the “visual expertise mystery.” One suggested answer to this mystery is that the FFA is used simply because it is a fine discrimination area, but this account has historically lacked a mechanism describing exactly how the FFA would be recruited for novel domains of expertise. In this study, we show that a neurocomputational model trained to perform subordinate-level discrimination within a visually homogeneous class develops transformations that magnify differences between similar objects, in marked contrast to networks trained to simply categorize the objects. This magnification generalizes to novel classes, leading to faster learning of new discriminations. We suggest this is why the FFA is recruited for new expertise. The model predicts that individual FFA neurons will have highly variable responses to stimuli within expertise domains.
- Published: 2008
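The abstract above argues that training on subordinate-level (fine) discrimination within a homogeneous class yields hidden representations that magnify small differences and that this transfers to novel classes. The toy NumPy sketch below illustrates that kind of comparison, not the paper's model: one hidden layer is pretrained either on item-level identification ("fine") or on basic-level categorization ("coarse"), and the spread of a novel homogeneous class is then measured in each hidden space. The synthetic data, network size, and training details are invented for the illustration.

```python
# Toy comparison: does fine (item-level) pretraining spread out novel, similar items
# more than coarse (category-level) pretraining? All details are illustrative guesses.
import numpy as np

rng = np.random.default_rng(3)

def make_class(n_items=10, n_dim=20, spread=0.1):
    """A visually 'homogeneous' class: items are small perturbations of one prototype."""
    proto = rng.normal(size=n_dim)
    return proto + spread * rng.normal(size=(n_items, n_dim))

def train_hidden(X, Y, n_hidden=30, lr=0.1, epochs=2000):
    """One-hidden-layer softmax classifier trained by plain gradient descent; returns W1."""
    W1 = 0.1 * rng.normal(size=(X.shape[1], n_hidden))
    W2 = 0.1 * rng.normal(size=(n_hidden, Y.shape[1]))
    for _ in range(epochs):
        H = np.tanh(X @ W1)
        logits = H @ W2
        P = np.exp(logits - logits.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        d_logits = (P - Y) / len(X)                          # softmax cross-entropy gradient
        dW2 = H.T @ d_logits
        dW1 = X.T @ ((d_logits @ W2.T) * (1 - H ** 2))
        W1 -= lr * dW1
        W2 -= lr * dW2
    return W1

class_a, class_b = make_class(), make_class()
X = np.vstack([class_a, class_b])
fine_labels = np.eye(len(X))                                 # subordinate task: identify each item
coarse_labels = np.repeat(np.eye(2), len(class_a), axis=0)   # basic task: which class

W_fine = train_hidden(X, fine_labels)
W_coarse = train_hidden(X, coarse_labels)

novel = make_class()                                         # a novel homogeneous class, never trained on
for name, W1 in [("fine", W_fine), ("coarse", W_coarse)]:
    H = np.tanh(novel @ W1)
    diffs = H[:, None, :] - H[None, :, :]
    print(f"{name} pretraining: mean pairwise hidden distance on novel items =",
          round(float(np.sqrt((diffs ** 2).sum(-1)).mean()), 3))
```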
6. From PDP to NDP through LFG: The naive dog physics manifesto
- Author: Cottrell, Garrison W.
- Published: 1989
7. Time series classification with Echo Memory Networks.
- Author: Ma, Qianli, Zhuang, Wanqing, Shen, Lifeng, and Cottrell, Garrison W.
- Subjects: Human activity recognition; Recurrent neural networks; Time series analysis; Human behavior; Echo; Human body; Classification
- Abstract: Echo state networks (ESNs) are randomly connected recurrent neural networks (RNNs) that can be used as a temporal kernel for modeling time series data, and they have been successfully applied to time series prediction tasks. Recently, ESNs have also been applied to time series classification (TSC) tasks. However, previous ESN-based classifiers involve either training the model by predicting the next item of a sequence, or predicting the class label at each time step. The former is essentially a predictive model adapted from time series prediction work, rather than a model designed specifically for the classification task. The latter approach considers only local patterns at each time step and then averages over the classifications. Hence, rather than selecting the most discriminating sections of the time series, this approach incorporates non-discriminative information into the classification, reducing accuracy. In this paper, we propose a novel end-to-end framework called the Echo Memory Network (EMN) in which the time series dynamics and multi-scale discriminative features are efficiently learned from an unrolled echo memory using multi-scale convolution and max-over-time pooling. First, the time series data are projected into the high-dimensional nonlinear space of the reservoir and the echo states are collected into the echo memory matrix, followed by a single multi-scale convolutional layer to extract multi-scale features from the echo memory matrix. Max-over-time pooling is used to maintain temporal invariance and select the most important local patterns. Finally, a fully connected hidden layer feeds into a softmax layer for classification. This architecture is applied to both time series classification and human action recognition datasets. For the human action recognition datasets, we divide the action data into five different components of the human body, and propose two spatial information fusion strategies to integrate the spatial information over them. With one training-free recurrent layer and only one layer of convolution, the EMN is a very efficient end-to-end model, and it ranks first in overall classification ability on 55 TSC benchmark datasets and four 3D skeleton-based human action recognition tasks.
- Published: 2019
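The EMN abstract lays out a concrete pipeline: a training-free reservoir collects echo states into an echo memory matrix, a single multi-scale convolutional layer extracts features from it, max-over-time pooling selects the strongest local patterns, and a fully connected softmax layer classifies. The NumPy sketch below traces that data flow forward for one toy series; the weights are random placeholders (in the paper the convolutional and output layers are trained end to end), and the sizes and filter widths are assumptions for the example.

```python
# Forward-pass sketch (random placeholder weights) of the EMN pipeline:
# echo memory matrix -> multi-scale convolution -> max-over-time pooling -> softmax.
import numpy as np

rng = np.random.default_rng(1)

def echo_memory(u, n_res=64, spectral_radius=0.9):
    """Training-free recurrent layer: collect reservoir states into an echo memory matrix."""
    W_in = rng.uniform(-0.5, 0.5, n_res)
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    x, M = np.zeros(n_res), np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W_in * u_t + W @ x)
        M[t] = x
    return M  # shape (T, n_res)

def conv_max_over_time(M, widths=(3, 5, 7), n_filters=8):
    """One multi-scale convolutional layer over time, then max-over-time pooling."""
    feats = []
    for w in widths:
        K = rng.normal(0, 0.1, (n_filters, w, M.shape[1]))               # filters spanning w steps
        windows = np.stack([M[t:t + w] for t in range(len(M) - w + 1)])  # (T-w+1, w, n_res)
        acts = np.maximum(0, np.einsum('twd,fwd->tf', windows, K))       # ReLU feature maps
        feats.append(acts.max(axis=0))                                   # keep the strongest response
    return np.concatenate(feats)  # one fixed-length feature vector per series

def classify(feature_vec, n_classes=3):
    """Fully connected softmax readout (randomly initialized here; trained in the paper)."""
    logits = rng.normal(0, 0.1, (n_classes, feature_vec.size)) @ feature_vec
    e = np.exp(logits - logits.max())
    return e / e.sum()

series = np.sin(0.2 * np.arange(100)) + 0.1 * rng.normal(size=100)
print("class probabilities:", classify(conv_max_over_time(echo_memory(series))))
```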
8. Humans have idiosyncratic and task-specific scanpaths for judging faces.
- Author: Kanan, Christopher, Bseiso, Dina N.F., Ray, Nicholas A., Hsiao, Janet H., and Cottrell, Garrison W.
- Subjects: Eye movements; Face perception; Visual perception; Machine learning; Algorithms; Vision research
- Abstract: Since Yarbus’s seminal work, vision scientists have argued that our eye movement patterns differ depending upon our task. This has recently motivated the creation of multi-fixation pattern analysis algorithms that try to infer a person’s task (or mental state) from their eye movements alone. Here, we introduce new algorithms for multi-fixation pattern analysis, and we use them to argue that people have scanpath routines for judging faces. We tested our methods on the eye movements of subjects as they made six distinct judgments about faces. We found that our algorithms could detect whether a participant is trying to distinguish angriness, happiness, trustworthiness, tiredness, attractiveness, or age. However, our algorithms were more accurate at inferring a subject’s task when only trained on data from that subject than when trained on data gathered from other subjects, and we were able to infer the identity of our subjects using the same algorithms. These results suggest that (1) individuals have scanpath routines for judging faces, and that (2) these are diagnostic of that subject, but that (3) at least for the tasks we used, subjects do not converge on the same “ideal” scanpath pattern. Whether universal scanpath patterns exist for a task, we suggest, depends on the task’s constraints and the level of expertise of the subject.
- Published: 2015
9. The roles of visual expertise and visual input in the face inversion effect: Behavioral and neurocomputational evidence
- Author: McCleery, Joseph P., Zhang, Lingyun, Ge, Liezhong, Wang, Zhe, Christiansen, Eric M., Lee, Kang, and Cottrell, Garrison W.
- Subjects: Visual discrimination; Visual perception; Visual acuity; Vision research
- Abstract: Research has shown that inverting faces significantly disrupts the processing of configural information, leading to a face inversion effect. We recently used a contextual priming technique to show that the presence or absence of the face inversion effect can be determined via the top–down activation of face versus non-face processing systems [Ge, L., Wang, Z., McCleery, J., & Lee, K. (2006). Activation of face expertise and the inversion effect. Psychological Science, 17(1), 12–16]. In the current study, we replicate these findings using the same technique but under different conditions. We then extend these findings through the application of a neural network model of face and Chinese character expertise systems. Results provide support for the hypothesis that a specialized face expertise system develops through extensive training of the visual system with upright faces, and that top–down mechanisms are capable of influencing when this face expertise system is engaged.
- Published: 2008
10. Learning grammatical structure with Echo State Networks
- Author: Tong, Matthew H., Bickett, Adam D., Christiansen, Eric M., and Cottrell, Garrison W.
- Subjects: Grammar; Neural computers; Computer storage devices; Computers
- Abstract: Echo State Networks (ESNs) have been shown to be effective for a number of tasks, including motor control, dynamic time series prediction, and memorizing musical sequences. However, their performance on natural language tasks has been largely unexplored until now. Simple Recurrent Networks (SRNs) have a long history in language modeling and show a striking similarity in architecture to ESNs. A comparison of SRNs and ESNs on a natural language task is therefore a natural choice for experimentation. Elman applies SRNs to a standard task in statistical NLP: predicting the next word in a corpus, given the previous words. Using a simple context-free grammar and an SRN with backpropagation through time (BPTT), Elman showed that the network was able to learn internal representations that were sensitive to linguistic processes that were useful for the prediction task. Here, using ESNs, we show that training such internal representations is unnecessary to achieve levels of performance comparable to SRNs. We also compare the processing capabilities of ESNs to bigrams and trigrams. Due to some unexpected regularities of Elman’s grammar, these statistical techniques are capable of maintaining dependencies over greater distances than might be initially expected. However, we show that the memory of ESNs in this word-prediction task, although noisy, extends significantly beyond that of bigrams and trigrams, enabling ESNs to make good predictions of verb agreement at distances over which these methods operate at chance. Overall, our results indicate a surprising ability of ESNs to learn a grammar, suggesting that they form useful internal representations without learning them.
- Published: 2007
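The abstract above compares ESNs with bigram and trigram statistics on Elman-style next-word prediction. The sketch below illustrates the two approaches side by side on a tiny invented corpus: a bigram count model and an ESN whose recurrent weights stay fixed while only a linear (ridge-regression) readout is trained. The corpus, vocabulary, and all sizes are stand-ins, not Elman's grammar or the paper's setup.

```python
# Side-by-side sketch: bigram counts versus an ESN with a fixed random reservoir
# and a trained linear readout, both predicting the next word of a toy corpus.
import numpy as np
from collections import Counter, defaultdict

rng = np.random.default_rng(2)

corpus = ("boy sees girl . boys see girl . girl sees boy . girls see boy . " * 20).split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
X = np.eye(len(vocab))[[idx[w] for w in corpus]]  # one-hot word vectors, shape (T, |V|)

# Bigram baseline: P(next word | current word) from raw counts.
bigram = defaultdict(Counter)
for w, nxt in zip(corpus[:-1], corpus[1:]):
    bigram[w][nxt] += 1

# ESN: fixed random recurrent weights, ridge-regression readout to the next word.
n_res = 100
W_in = rng.uniform(-0.5, 0.5, (n_res, len(vocab)))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
x, states = np.zeros(n_res), []
for t in range(len(corpus) - 1):
    x = np.tanh(W_in @ X[t] + W @ x)
    states.append(x.copy())
S, Y = np.array(states), X[1:]
W_out = np.linalg.solve(S.T @ S + 1e-4 * np.eye(n_res), S.T @ Y)  # only the readout is trained

# Compare the two predictors after the first occurrence of "boy".
print("bigram counts after 'boy':", bigram["boy"].most_common(2))
esn_scores = S[corpus.index("boy")] @ W_out
print("ESN top prediction after the first 'boy':", vocab[int(np.argmax(esn_scores))])
```

Because the reservoir state carries context from earlier words while the bigram table sees only the current word, this is the kind of setup in which longer-distance dependencies such as verb agreement can, in principle, separate the two predictors.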
11. Early selection of diagnostic facial information in the human visual cortex
- Author: Joyce, Carrie A., Schyns, Philippe G., Gosselin, Frédéric, Cottrell, Garrison W., and Rossion, Bruno
- Subjects: Evoked potentials (electrophysiology); Visual cortex; Face perception; Diagnostic imaging
- Abstract: There is behavioral evidence that different visual categorization tasks on various types of stimuli (e.g., faces) are sensitive to distinct visual characteristics of the same image, for example, spatial frequencies. However, it has been more difficult to address the question of how early in the processing stream this sensitivity to the information relevant to the categorization task emerges. The current study uses scalp event-related potentials recorded in humans to examine how and when information diagnostic to a particular task is processed during that task versus during a task for which it is not diagnostic. Subjects were shown diagnostic and anti-diagnostic face images for both expression and gender decisions (created using Gosselin and Schyns’ Bubbles technique), and asked to perform both tasks on all stimuli. Behaviorally, the advantage of diagnostic over anti-diagnostic face images was larger when the images were diagnostic for the task being performed than when they were diagnostic for the other task. Most importantly, this interaction was seen in the amplitude of the occipito-temporal N170, a visual component reflecting a perceptual stage of processing associated with the categorization of faces. When participants performed the gender categorization task, the N170 amplitude was larger when they were presented with gender-diagnostic images than with expression-diagnostic images, relative to their respective non-diagnostic stimuli. However, categorizing faces according to their facial expression was not significantly associated with a larger N170 when subjects categorized expression-diagnostic cues relative to gender-diagnostic cues. These results show that the influence of higher-level task-oriented processing may take place at the level of visual categorization stages for faces, at least for processes relying on diagnostic features shared with facial identity judgments, such as gender cues.
- Published: 2006