22 results for "Elliott Moreton"
Search Results
2. Isn't that Fantabulous: Security, Linguistic and Usability Challenges of Pronounceable Tokens.
- Author
- Andrew M. White, Katherine Shaw, Fabian Monrose, and Elliott Moreton
- Published
- 2014
- Full Text
- View/download PDF
3. Phonological Abstractness in English Diphthong Raising
- Author
- Elliott Moreton
- Subjects
- Applied Mathematics, General Mathematics
- Published
- 2021
4. Structure and Substance in Artificial-Phonology Learning, Part II: Substance.
- Author
- Elliott Moreton and Joe Pater
- Published
- 2012
- Full Text
- View/download PDF
5. Phonotactics in the perception of Japanese vowel length: evidence for long-distance dependencies.
- Author
- Elliott Moreton and Shigeaki Amano
- Published
- 1999
- Full Text
- View/download PDF
6. Realization of the English postvocalic [voice] contrast in F1 and F2.
- Author
- Elliott Moreton
- Published
- 2004
- Full Text
- View/download PDF
7. Learning Repetition, but not Syllable Reversal
- Author
- Josh Fennell, Brandon Prickett, Lisa D. Sanders, Katya Pertsova, Elliott Moreton, and Joe Pater
- Subjects
- Reduplication, Trace (semiology), Repetition (rhetorical device), Computer science, Working memory, Chomsky hierarchy, Conscious awareness, Speech recognition, General Earth and Planetary Sciences, Syllable, Implicit learning, General Environmental Science
- Abstract
Reduplication is common, but analogous reversal processes are rare, even though reversal, which involves nested rather than crossed dependencies, is less complex on the Chomsky hierarchy. We hypothesize that the explanation is that repetitions can be recognized when they match and reactivate a stored trace in short-term memory, but recognizing a reversal requires rearranging the input in working memory before attempting to match it to the stored trace. Repetitions can thus be recognized, and repetition patterns learned, implicitly, whereas reversals require explicit, conscious awareness. To test these hypotheses, participants were trained to recognize either a reduplication or a syllable-reversal pattern, and then asked to state the rule. In two experiments, above-chance classification performance on the Reversal pattern was confined to Correct Staters, whereas above-chance performance on the Reduplication pattern was found with or without correct rule-stating. Final proportion correct was positively correlated with final response time for the Reversal Correct Staters but no other group. These results support the hypothesis that reversal, unlike reduplication, requires conscious, time-consuming computation.
- Published
- 2021
8. Emergent positional privilege in novel English blends
- Author
- Rachel Broad, Katya Pertsova, Jennifer L. Smith, Brandon Prickett, and Elliott Moreton
- Subjects
- Linguistics and Language, Grammar, Salience (language), Phonology, Privilege (computing), Part of speech, Language and Linguistics, Linguistics, Noun, Proper noun, Psychology
- Abstract
We present evidence from experiments on novel blend formation showing that adult English speakers have access to constraints that give phonological privilege to heads, nouns, and proper nouns, even though the nonblend phonology provides no evidence that such constraints are generally active in the grammar of English. Our results (i) demonstrate that these positional constraints are universally available; (ii) confirm that the lexical category ‘proper noun’ has the status of a strong position, which has broader implications for the role of lexical categories in positional privilege effects; and (iii) confirm that strong positions based on salience from nonphonetic sources (such as morphosyntactic, semantic, or psycholinguistic salience) participate in position-specific phonological phenomena.
- Published
- 2017
9. Phonological Concept Learning
- Author
- Katya Pertsova, Elliott Moreton, and Joe Pater
- Subjects
- Visual perception, Computer science, Concept Formation, Cognitive Neuroscience, Experimental and Cognitive Psychology, Artificial Intelligence, Schema (psychology), Concept learning, Humans, Learning, Phonotactics, Inductive bias, Linguistics, Phonology, Models, Theoretical, Implicit learning, Cues, Visual learning, Natural language processing, Cognitive psychology
- Abstract
Linguistic and non-linguistic pattern learning have been studied separately, but we argue for a comparative approach. Analogous inductive problems arise in phonological and visual pattern learning. Evidence from three experiments shows that human learners can solve them in analogous ways, and that human performance in both cases can be captured by the same models. We test GMECCS (Gradual Maximum Entropy with a Conjunctive Constraint Schema), an implementation of the Configural Cue Model (Gluck & Bower, 1988a) in a Maximum Entropy phonotactic-learning framework (Goldwater & Johnson, 2003; Hayes & Wilson, 2008) with a single free parameter, against the alternative hypothesis that learners seek featurally simple algebraic rules (“rule-seeking”). We study the full typology of patterns introduced by Shepard, Hovland, and Jenkins (1961) (“SHJ”), instantiated as both phonotactic patterns and visual analogs, using unsupervised training. Unlike SHJ, Experiments 1 and 2 found that both phonotactic and visual patterns that depended on fewer features could be more difficult than those that depended on more features, as predicted by GMECCS but not by rule-seeking. GMECCS also correctly predicted performance differences between stimulus subclasses within each pattern. A third experiment tried supervised training (which can facilitate rule-seeking in visual learning) to elicit simple rule-seeking phonotactic learning, but cue-based behavior persisted. We conclude that similar cue-based cognitive processes are available for phonological and visual concept learning, and hence that studying either kind of learning can lead to significant insights about the other.
- Published
- 2015
10. Inter- and intra-dimensional dependencies in implicit phonotactic learning
- Author
- Elliott Moreton
- Subjects
- Linguistics and Language, Algorithmic learning theory, Stability (learning theory), Multi-task learning, Experimental and Cognitive Psychology, Machine learning, Language and Linguistics, Implicit learning, Neuropsychology and Physiological Psychology, Inductive transfer, Artificial Intelligence, Unsupervised learning, Instance-based learning, Sequence learning, Psychology, Natural language processing
- Abstract
Is phonological learning subject to the same inductive biases as learning in other domains? Previous studies of non-linguistic learning found that intra-dimensional dependencies (between two instances of the same feature) were learned more easily than inter-dimensional ones. This study compares implicit learning of intra- and inter-dimensional phonotactic dependencies. A series of six unsupervised implicit-learning experiments shows that a pattern based on agreement between two instances of the same feature is easier to learn than one based on correlation between instances of two different features. The results are interpreted as evidence for domain-general restrictions on the form of domain-specific learning primitives.
- Published
- 2012
11. Analytic bias and phonological typology
- Author
- Elliott Moreton
- Subjects
- Typology, Systematic error, Consonant, Linguistics and Language, Vowel, Voice, Cognition, Psychology, Language and Linguistics, Cognitive psychology
- Abstract
Two factors have been proposed as the main determinants of phonological typology: channel bias, phonetically systematic errors in transmission, and analytic bias, cognitive predispositions making learners more receptive to some patterns than others. Much of typology can be explained equally well by either factor, making them hard to distinguish empirically. This study presents evidence that analytic bias is strong enough to create typological asymmetries in a case where channel bias is controlled. I show that (i) phonological dependencies between the height of two vowels are typologically more common than dependencies between vowel height and consonant voicing, (ii) the phonetic precursors of the height-height and height-voice patterns are equally robust and (iii) in two experiments, English speakers learned a height-height pattern and a voice-voice pattern better than a height-voice pattern. I conclude that both factors contribute to typology, and discuss hypotheses about their interaction.
- Published
- 2008
12. Isn't that Fantabulous
- Author
- Elliott Moreton, Andrew M. White, Fabian Monrose, and Katherine E. Shaw
- Subjects
- Password, Password policy, Cognitive password, Biometrics, Computer science, Internet privacy, Usability, Pronunciation, Computer security, Authentication (law)
- Abstract
Over the past few decades, passwords as a means of user authentication have been consistently criticized by users and security analysts alike. However, password-based systems are ubiquitous and entrenched in modern society: users understand how to use them, system administrators are intimately familiar with their operation, and many robust frameworks exist to make deploying passwords simple. Unfortunately, much of the formal research on user authentication has focused on attempting to provide alternatives (e.g., biometrics) to password-based mechanisms (or belated analyses of users' password choices), forcing administrators to use ad hoc methods in attempts to improve security. This practice has led to user frustration and inflated estimates of system security. We challenge common wisdom and re-examine whether pronounceable authentication strings might indeed offer a more reasonable alternative to traditional passwords. We argue that pronounceable authentication strings can lead to both improved system security and a decreased burden on users. To re-examine this potential, we explore questions related to how one might develop techniques for rating the pronounceability of word-like strings, and in doing so, enable one to quantify pronunciation difficulty. Armed with such an understanding, we posit new directions for generating usable passwords which are pronounceable and, we hope, memorable, hint-able and resistant to attack.
- Published
- 2014
13. Emergent faithfulness to morphological and semantic heads in lexical blends
- Author
- Elliott Moreton, Fabian Monrose, Katherine E. Shaw, and Andrew M. White
- Subjects
- Communication, Computer science, Head (linguistics), Stress (linguistics), General Earth and Planetary Sciences, Phonology, Word (group theory), Fork (software development), General Environmental Science
- Abstract
In many languages, sounds in certain "privileged" positions preserve marked structure which is eliminated elsewhere (Positional Faithfulness, Beckman 1998). This paper presents new corpus and experimental evidence that faithfulness to main-stress location and segmental content of morpho-semantic heads emerges in English blends. The study compared right-headed (subordinating) blends, like motor + hotel -> motel (a kind of hotel), with coordinating blends like spoon + fork -> spork (equally spoon and fork).

Stress: Analysis of 1095 blends from Thurner (1993) found that right-headed blends were more faithful to the stress location of the second source word than were coordinating blends. Given source words with conflicting stress (e.g., FLOUNder + sarDINE), participants preferentially matched the blend that preserved second-word stress (flounDINE) to a right-headed definition.

Segmental content: When source-word length was controlled, segments from right-headed blends were more likely to survive than those from coordinating blends. Given source words that could be spliced at two points (e.g., flaMiNGo + MoNGoose), participants preferentially matched the one that preserved more of the second word (flamongoose) to a right-headed definition.

These results support the hypotheses that Positional Faithfulness constraints are universally available, that heads are a privileged position, and that blend phonology is sensitive to headedness.
- Published
- 2014
14. Sonority variation in Stochastic Optimality Theory: Implications for markedness hierarchies
- Author
- Jennifer L. Smith and Elliott Moreton
- Published
- 2012
15. Structural constraints in the perception of English stop-sonorant clusters
- Author
- Elliott Moreton
- Subjects
- Linguistics and Language, Speech perception, Cognitive Neuroscience, Experimental and Cognitive Psychology, Models, Psychological, Language and Linguistics, Rendaku, Phonetics, Perception, Developmental and Educational Psychology, Psychophysics, Odds Ratio, Humans, Language, Phonotactics, Psycholinguistics, Sonorant, Syllabification, Phonology, United States, Logistic Models, Speech Perception, Psychology, Cognitive psychology
- Abstract
Native-language phonemes combined in a non-native way can be misperceived so as to conform to native phonotactics; e.g., English listeners are biased to hear syllable-initial [tr] rather than the illegal [tl] (Perception and Psychophysics 34 (1983) 338; Perception and Psychophysics 60 (1998) 941). What sort of linguistic knowledge causes phonotactic perceptual bias? Two classes of models were compared: unit models, which attribute bias to the listener's differing experience of each cluster (such as their different frequencies), and structure models, which use abstract phonological generalizations (such as a ban on [coronal][coronal] sequences). Listeners (N = 16 in each experiment) judged synthetic 6 × 6 arrays of stop-sonorant clusters in which both consonants were ambiguous. The effect of the stop judgment on the log odds ratio of the sonorant judgment was assessed separately for each stimulus token to provide a stimulus-independent measure of bias. Experiment 1 compared perceptual bias against the onsets [bw] and [dl], which violate different structural constraints but are both of zero frequency. Experiment 2 compared bias against [dl] in CCV and VCCV contexts, to investigate the interaction of syllabification with segmentism and to rule out a compensation-for-coarticulation account of Experiment 1. Results of both experiments favor the structure models. [Work supported by NSF.]
- Published
- 2002
16. Non-computable functions in Optimality Theory
- Author
- Elliott Moreton
- Subjects
- Phonology, Optimality theory, Zero (linguistics), Range (mathematics), Computable function, Phonological rule, Chain (algebraic topology), Markedness, Artificial intelligence, Mathematical economics, Natural language processing, Mathematics
- Abstract
Is Optimality Theory a constraining theory? A formal analysis shows that it is, if two auxiliary assumptions are made: (1) that only markedness and faithfulness constraints are allowed, and (2) that input and output representations are made from the same elements. Such OT grammars turn out to be incapable of computing circular or infinite chain shifts. These theoretical predictions are borne out by a wide range of natural phonological processes including augmentation, alternations with zero, metathesis, and exchange rules. The results confirm, extend, and account for the observations of Anderson & Browne (1973) on exchange rules in phonology and morphology.
- Published
- 1999
- Full Text
- View/download PDF
17. Modeling global and focal hyperarticulation during human-computer error resolution
- Author
- Sharon Oviatt, Margaret MacEachern, Elliott Moreton, and Gina-Anne Levow
- Subjects
- Speech Acoustics, Time Factors, Acoustics and Ultrasonics, Computer science, Computers, Speech recognition, Phonetics, Linguistics, Resolution (logic), Style (sociolinguistics), Arts and Humanities (miscellaneous), Humans, Speech, Adaptation (computer science), Utterance, Speech error, Spoken language
- Abstract
When resolving errors with interactive systems, people sometimes hyperarticulate—or adopt a clarified style of speech that has been associated with increased recognition errors. The primary goals of the present study were: (1) to provide a comprehensive analysis of acoustic, prosodic, and phonological adaptations to speech during human–computer error resolution after different types of recognition error; and (2) to examine changes in speech during both global and focal utterance repairs. A semi-automatic simulation method with a novel error-generation capability was used to compare speech immediately before and after system recognition errors. Matched original-repeat utterance pairs then were analyzed for type and magnitude of linguistic adaptation during global and focal repairs. Results indicated that the primary hyperarticulate changes in speech following all error types were durational, with increases in number and length of pauses most noteworthy. Speech also was adapted toward a more deliberate and hyperclear articulatory style. During focal error repairs, large durational effects functioned together with pitch and amplitude to provide selective prominence marking of the repair region. These results corroborate and generalize the computer-elicited hyperarticulate adaptation model (CHAM). Implications are discussed for improved error handling in next-generation spoken language and multimodal systems.
- Published
- 1998
18. Influence of phonotactic rules on perception of ambiguous segments
- Author
- Elliott Moreton
- Subjects
- Phonotactics, Property (philosophy), Acoustics and Ultrasonics, Continuum (measurement), Lexicon, Linguistics, Arts and Humanities (miscellaneous), Perception, Vowel, Syllable, Word (group theory), Mathematics
- Abstract
In every language, some sequences of sounds are illegal. English, for instance, bans stressed lax vowels word-finally: [bI] cannot be an English word. Linguists traditionally attribute this to language-particular phonotactic rules. More recently, some psychologists have suggested instead that phonotactics is an emergent statistical property of the lexicon, caused by the frequency of some sequences and the rarity of others. The issue is tested here by comparing the effects of absolute phonotactic rules versus nonphonotactic lexical frequency differences on phonetic category boundaries. Stimuli are disyllabic English pseudowords ending in a stressed syllable whose vowel is ambiguous. One continuum is [gri] (very common in that position) to [grI] (illegal); another is [kri] (legal but very rare) to [krI] (illegal). Controls, with both end points legal, are [grich]-[grIch] and [krich]-[krIch]. The [I] end point's illegality should move the i-I boundary towards [I] compared with the controls. The rule theory predicts equal shifts for the [gr] and [kr] continua; the statistical theory says it will be much larger for the frequent [gr]. [Work supported by NIH.]
- Published
- 1997
19. Phonotactic constraints, frequency, and legality in English onset‐cluster perception
- Author
- Elliott Moreton
- Subjects
- Phonotactics, Acoustics and Ultrasonics, Sonorant, Speech recognition, Context (language use), Term (time), Constraint (information theory), Arts and Humanities (miscellaneous), Perception, Percept, Coarticulation, Mathematics
- Abstract
Phonological context can affect phoneme identification, favoring one response over another [Massaro and Cohen, Percept. Psychophys. 34, 338–348 (1983)]. It is unclear what is disfavored: infrequency, legality, or phonotactic constraint violation. This study compared bias on an [l]–[w] continuum in two situations. In one, the three factors were deliberately confounded; in the other, frequency and legality were controlled. Stimuli were synthetic syllables ambiguous between [blae bwae dlae dwae] (‘‘bd array’’) or [mlae mwae nlae nwae] (‘‘mn array’’). In the bd array, [bl dw] are legal, frequent English onsets, while [dl bw] have zero frequency, are illegal, and violate a constraint against same‐place onsets; hence, ‘‘d’’ responses should facilitate ‘‘w’’ responses. The whole mn array is unattested and illegal, but [nl mw] violate the constraint. Simultaneous stop‐sonorant judgments were obtained from English listeners for each 6‐by‐6 array. A mixed‐effects logistic‐regression model (including a term to model out compensation for coarticulation) was used to measure dependency between stop and sonorant responses. The bd array drew the expected bias against ‘‘dl.’’ Bias against ‘‘nl’’ in the mn array was considerably smaller, but still present. These results suggest that perceptual bias is jointly determined by constraint violation, and by frequency and/or legality.
- Published
- 2004
20. Realization of the English [voice] contrast in F1 and F2
- Author
- Elliott Moreton
- Subjects
- Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), American English, Diphthong, Obstruent, Psychology, Linguistics, Coda
- Abstract
Before a [−voice] coda, F1 is higher for monophthongs but lower for /aI/ than before [+voice]. We test the hypothesis that this is due to local hyperarticulation before voiceless obstruents. Experiment 1, with 16 American English speakers, found the /aI/ pattern of more peripheral F1 and F2 in the offglides /oI eI aU/ as well, showing that it is part of the realization of [voice] rather than a historical property of /aI/. Some of the F2 increase in /aI oI eI/ cannot be accounted for by articulatory raising alone, but must be ascribed to fronting. The diphthong nuclei tended to change in the same direction as the offglides. Experiments 2 and 3, each with a different 16 American English speakers, collected "tight"-"tide" (Exp. 2) or "ate"-"aid" (Exp. 3) judgments of a synthetic stimulus in which offglide F1, offglide F2, and nuclear duration were varied independently. "Tight" and "ate" responses were facilitated by lower F1, by higher F2, and by shorter nuclei. Log-linear analysis showed that the...
- Published
- 2002
21. The discovery of natural classes by non‐native listeners
- Author
- John Kingston and Elliott Moreton
- Subjects
- Feature (linguistics), German, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), Speech recognition, Contrast (statistics), Natural (music), Exemplar theory, Variety (linguistics), Mathematics
- Abstract
Using natural disyllabic stimuli, English speakers were trained to sort one, two, or three pairs of German nonlow rounded vowels into two categories; then a new pair was added and their ability to sort it into the same categories was tested. Each pair contrasted in [high], [tense], and [back]. Category membership was determined by only one of these features. Three-pair training provided enough information to tell which one; two-pair training narrowed it down to two possibilities; one-pair training left it open. Listeners in three experiments using different pairs of test vowels sorted the test vowels significantly better as the number of training pairs increased. This suggests they were inducing feature-based natural classes. The rival exemplar model of Nosofsky [J. Exp. Psych.: Gen. 115, 39–57 (1986)] would say that while more pairs bring more phonetic variety, they also exemplify more robustly the dimensions along which the categories differ phonetically, and thereby shift attentional weight to those dimensions. Similar experiments will pit these two hypotheses against one another with training pairs which contrast only in the feature that defines the sort categories. If listeners are using feature-based natural classes, their performance should not improve with more training pairs; if they are using exemplars, it should. [Work supported by NIH.]
- Published
- 1998
22. Native language determines the parsing of nonlinguistic rhythmic stimuli
- Author
- Elliott Moreton and Kiyomi Kusumoto
- Subjects
- Rhythm, Native English, Parsing, Phrase, Acoustics and Ultrasonics, Arts and Humanities (miscellaneous), First language, Iamb, Stimulus (physiology), Psychology, Linguistics
- Abstract
Given a rhythmic sound stimulus, the hearer automatically parses it into recurrent units. Researchers since Woodrow [Arch. Psych. 14, 1–66 (1909)] have consistently found that stimulus elements differing in duration are parsed iambically, while those differing in intensity are parsed trochaically. In an extensive typological survey, Hayes [Metrical Stress Theory (1995)] observes that no language has quantity‐insensitive iambic feet, and suggests that the nonlinguistic parsing tendencies may be at the root of this language universal (the ‘‘iambic‐trochaic law’’). However, the relevant nonlinguistic parsing experiments were done with native English speakers. We compared parsing behavior in native speakers of English and in two dialects of Japanese. Our findings suggest that native language (specifically, the structure of the minimal intonational phrase) determines parsing preference, not the other way around. The lack of quantity‐insensitive iambs must have another source.
- Published
- 1997
Discovery Service for Jio Institute Digital Library