1,925 results for "place of articulation"
Search Results
2. Effects of voice onset time and place of articulation on perception of dichotic Turkish syllables
- Author
-
Eskicioglu, Emre, Taslica, Serhat, Guducu, Cagdas, Oniz, Adile, and Ozgoren, Murat
- Published
- 2025
- Full Text
- View/download PDF
3. Estimation of place of articulation of fricatives from spectral features.
- Author
-
Nataraj, K. S., Pandey, Prem C., and Dasgupta, Hirak
- Abstract
An investigation is carried out for speaker-independent acoustic-to-articulatory mapping for fricative utterances using simultaneously acquired speech signals and articulatory data. The relation of the place of articulation with the spectral characteristics is examined using several earlier reported spectral features and six proposed spectral features (maximum-sum segment centroid, normalized sum of absolute spectral slopes, and four spectral energy features). A method is presented for estimating the place of articulation using a feedforward neural network. It is evaluated using a dataset comprising utterances with a mix of phonetic contexts and from multiple speakers, five-fold cross-validation, and networks with different hidden layers and neurons. The six proposed spectral features used as the input feature set resulted in the lowest estimation error and low sensitivity to the training data size. Estimation using this feature set with an optimal network provided a correlation coefficient of 0.978 and an RMS error of 2.54 mm. The errors were smaller than the differences between the adjacent places, indicating that the method may be helpful in providing visual feedback of articulatory efforts in speech training aids. [ABSTRACT FROM AUTHOR] more...
- Published
- 2023
- Full Text
- View/download PDF
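The estimation setup described in the abstract above (spectral features as input, place of articulation in millimetres as output, a feedforward network evaluated with five-fold cross-validation) can be sketched roughly as follows. The feature values, network size, and data below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the setup described in the abstract: six spectral features
# per fricative token mapped to place of articulation (in mm) by a feedforward
# network, scored with five-fold cross-validation. Data and network size are
# placeholders, not the authors' corpus or configuration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_tokens = 500
X = rng.normal(size=(n_tokens, 6))                 # stand-in for the six proposed spectral features
y = 10 + X @ rng.normal(size=6) + rng.normal(scale=1.0, size=n_tokens)  # synthetic place values in mm

rmses, corrs = [], []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    net.fit(X[train_idx], y[train_idx])
    pred = net.predict(X[test_idx])
    rmses.append(np.sqrt(np.mean((pred - y[test_idx]) ** 2)))
    corrs.append(np.corrcoef(pred, y[test_idx])[0, 1])

print(f"RMS error: {np.mean(rmses):.2f} mm, correlation: {np.mean(corrs):.3f}")
```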
4. Emergent learning bias and the underattestation of simple patterns.
- Author
-
O'Hara, Charlie
- Subjects
MACHINE learning ,REINFORCEMENT learning - Abstract
This paper investigates the typology of word-initial and word-final place of articulation contrasts in stops, revealing two typological skews. First, languages tend to make restrictions based on syllable position alone, rather than banning particular places of articulation in word-final position. Second, restrictions based on place of articulation alone are underrepresented compared to restrictions that are based on position. This paper argues that this typological skew is the result of an emergent bias found in agent-based models of generational learning using the Perceptron learning algorithm and MaxEnt grammar using independently motivated constraints. Previous work on agent-based learning with MaxEnt has found a simplicity bias (Pater and Moreton 2012) which predicts the first typological skew, but fails to predict the second skew. This paper analyzes the way that the set of constraints in the grammar affects the relative learnability of different patterns, creating learning biases more elaborate than a simplicity bias, and capturing the observed typology. [ABSTRACT FROM AUTHOR] more...
- Published
- 2023
- Full Text
- View/download PDF
5. Factors affecting voice onset time in English stops: English proficiency, gender, place of articulation, and vowel context.
- Author
-
Chun-yin Doris Chen, Wei-yun Winona Liu, and Tzu-Fen Nellie Yeh
- Subjects
LANGUAGE ability ,ENGLISH language ,VOWELS ,NATIVE language ,LANGUAGE transfer (Language learning) ,HUMAN voice ,SECOND language acquisition - Abstract
This study investigates the voice onset time (VOT) of L2 English stops produced by native speakers of Chinese at different L2 proficiency levels. Four factors were examined: English proficiency, gender, place of articulation, and vowel context. High and low achievers of English were recruited as the experimental groups and native speakers of English as the control group. Each group consisted of 16 participants, 8 males and 8 females. Each participant took part in a read-aloud task, in which the target words were presented in an embedded sentence. The results showed that the effects of L2 proficiency were significant in that high achievers outperformed low achievers, who were more strongly affected by L1 negative transfer when producing native-like English stops. Additionally, when producing English stops, velar stops had significantly greater VOT values than either bilabial or alveolar ones. However, no significant gender differences were found: male and female participants produced similar VOT values in English stops. Lastly, the vowel context was also a significant factor. VOT length differed according to the following vowel; more specifically, the VOT of a stop is significantly longer when the stop is followed by a tense vowel. [ABSTRACT FROM AUTHOR] more...
- Published
- 2023
- Full Text
- View/download PDF
6. OPIS GLASOVA U GRAMATIČKOJ LITERATURI SRPSKOG I BOSANSKOG JEZIKA. [The description of sounds in the grammatical literature of Serbian and Bosnian.]
- Author
-
Šubarić, Sanja
- Abstract
Copyright of Journal of Language & Literary Studies / Folia Linguistica & Litteraria is the property of Journal of Language & Literary Studies / Folia Linguistica & Litteraria and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.) more...
- Published
- 2023
- Full Text
- View/download PDF
7. Speech Therapy Protocol After Palate Repair
- Author
-
Siddiqui, Amina Asif and Fayyaz, Ghulam Qadir, editor
- Published
- 2022
- Full Text
- View/download PDF
8. Voice Onset Time of Mankiyali Language: An Acoustic Analysis.
- Author
-
Ullah, Shakir, Anjum, Uzma, and Saleem, Tahir
- Subjects
LANGUAGE revival ,ENDANGERED languages ,NATIVE language ,HUMAN voice ,DOCUMENTATION ,LANGUAGE & languages - Abstract
The endangered Indo-Aryan language Mankiyali, spoken in northern Pakistan, lacks linguistic documentation and necessitates research. This study explores the Voice Onset Time (VOT) values of Mankiyali's stop consonants to determine the duration of sound release, characterized as negative, positive, and zero VOTs. The investigation aims to identify the laryngeal categories present in the language. Using a mixed methods approach, data were collected from five native male speakers via the Zoom H6 platform. The study employed the theoretical framework of Fant's (1970) source filter model and analyzed each phoneme using PRAAT software. Twenty-five tokens of a single phoneme were recorded across the five speakers. The results reveal that Mankiyali encompasses three laryngeal categories: voiceless unaspirated (VLUA) stops, voiceless aspirated (VLA) stops, and voiced unaspirated (VDUA) stops. The study highlights significant differences in VOTs based on place of articulation and phonation. In terms of phonation, the VLUA bilabial stop /p/, alveolar stop /t/, and velar stop /k/ exhibit shorter voicing lag compared to their VLA counterparts /pʰ, tʰ, kʰ/. All VLUA and VLA stops display +VOT values, while all VDUA stops exhibit -VOT values. Regarding place of articulation, the bilabial /p/ demonstrates a longer voicing lag than the alveolar /t/ but a shorter lag than the velar /k/. Additionally, the results indicate similarities in voicing lag among the VDUA stops /b, d, g/. This study offers valuable insights into the phonetic and phonological aspects of Mankiyali and holds potential significance for the language's preservation. [ABSTRACT FROM AUTHOR] more...
- Published
- 2023
- Full Text
- View/download PDF
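The VOT categories referred to in the abstract above follow from a simple definition: VOT is the time of voicing onset minus the time of the stop release, with negative values for prevoicing and positive values for voicing lag. A toy illustration, using hypothetical annotation times rather than data from the study:

```python
# Toy illustration of how the VOT categories in the abstract relate to measured times.
# VOT = voicing onset time - burst release time (seconds); negative lag means voicing
# starts before the release (prevoiced VDUA stops), positive lag means it starts after
# (VLUA/VLA stops). The timestamps are hypothetical hand annotations, not study data.
def vot_category(burst_time, voicing_onset_time, threshold=0.005):
    vot = voicing_onset_time - burst_time
    if vot < -threshold:
        return vot, "negative VOT (prevoiced, VDUA)"
    if vot > threshold:
        return vot, "positive VOT (voicing lag, VLUA/VLA)"
    return vot, "zero VOT"

print(vot_category(burst_time=0.512, voicing_onset_time=0.498))   # a voiced /b/-like token
print(vot_category(burst_time=0.512, voicing_onset_time=0.585))   # an aspirated /pʰ/-like token
```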
9. Emergent learning bias and the underattestation of simple patterns
- Author
-
O’Hara, Charlie
- Published
- 2023
- Full Text
- View/download PDF
10. What's in a Japanese kawaii 'cute' name? A linguistic perspective.
- Author
-
Gakuji Kumagai
- Subjects
BEHAVIORAL sciences ,SOUND symbolism ,ANIME ,FEMININITY ,PHONETICS ,GESTURE - Abstract
While the concept termed as kawaii is often translated into English as 'cute' or 'pretty', it has multiple connotations. It is one of the most significant topics of investigation in behavioural science and Kansei/affective engineering. This study aims to explore linguistic (phonetic and phonological) features/units associated with kawaii. Specifically, it examines, through experimental methods, what kinds of phonetic and phonological features are associated with kawaii, in terms of the following three consonantal features: place of articulation, voicing/frequency, and manner of articulation. The results showed that the features associated with kawaii are: [labial], [high frequency], and [sonorant]. The factors associated with kawaii may include the pouting gesture, babyishness, smallness, femininity, and roundness. The study findings have practical implications due to their applicability regarding the naming of anime characters and products characterised by kawaii. [ABSTRACT FROM AUTHOR] more...
- Published
- 2022
- Full Text
- View/download PDF
11. Velarization in English and Arabic.
- Author
-
Al-Rahman Eltaif, Sua'ad Abd
- Subjects
ENGLISH language ,SOFT palate ,SPEECH - Abstract
Copyright of Journal of Tikrit University for Humanities is the property of Republic of Iraq Ministry of Higher Education & Scientific Research (MOHESR) and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.) more...
- Published
- 2022
- Full Text
- View/download PDF
12. Characterization of Consonant Sounds Using Features Related to Place of Articulation
- Author
-
Ramteke, Pravin Bhaskar, Hegde, Srishti, Koolagudi, Shashidhar G., Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Elçi, Atilla, editor, Sa, Pankaj Kumar, editor, Modi, Chirag N., editor, Olague, Gustavo, editor, Sahoo, Manmath N., editor, and Bakshi, Sambit, editor more...
- Published
- 2020
- Full Text
- View/download PDF
13. AUDIOVISUAL SPEECH PERCEPTION IN DIOTIC AND DICHOTIC LISTENING CONDITIONS.
- Author
-
Vinay, Sandhya
- Subjects
*SPEECH perception , *VOWELS , *EXPERIMENTAL design , *ANALYSIS of variance , *AUDITORY perception , *DICHOTIC listening tests , *ARTICULATION disorders , *CONSONANTS , *PHONETICS - Abstract
Background: Speech perception is multisensory, relying on auditory as well as visual information from the articulators. Watching articulatory gestures which are either congruent or incongruent with the speech audio can change the auditory percept, indicating that there is a complex integration of auditory and visual stimuli. A speech segment comprises distinctive features, notably voice onset time (VOT) and place of articulation (POA). Understanding the importance of each of these features for audiovisual (AV) speech perception is critical. The present study investigated the perception of AV consonant-vowel (CV) syllables with various VOTs and POAs under two conditions: diotic incongruent and dichotic congruent. Material and methods: AV stimuli comprised diotic and dichotic CV syllables with stop consonants (bilabial /pa/ and /ba/; alveolar /ta/ and /da/; and velar /ka/ and /ga/) presented with congruent and incongruent video CV syllables with stop consonants. Forty right-handed, normal-hearing young adults participated in the experiment: 20 females (mean age 23 years, SD = 2.4 years) and 20 males (mean age 24 years, SD = 2.1 years). Results: In the diotic incongruent AV condition, visual segments with short VOT (voiced CV syllables) were identified when the auditory segments carried a CV syllable with long VOT (unvoiced CV syllables). In the dichotic congruent AV condition, identification of the audio segment increased when the subject was presented with a video segment congruent with either ear, thereby overriding the ear advantage otherwise observed in dichotic listening. The distinct visual salience of bilabial stop syllables gave them greater visual influence (observed as higher identification scores) than velar stop syllables, thus overriding the acoustic dominance of velar syllables. Conclusions: The findings of the present study have important implications for understanding the perception of diotic incongruent and dichotic congruent audiovisual CV syllables in which the stop consonants have different VOT and POA combinations. Earlier findings on the effect of VOT on dichotic listening can be extended to AV speech having dichotic auditory segments. [ABSTRACT FROM AUTHOR] more...
- Published
- 2022
- Full Text
- View/download PDF
14. Toward a Model of Auditory-Visual Speech Intelligibility
- Author
-
Grant, Ken W., Bernstein, Joshua G. W., Fay, Richard R., Series Editor, Avraham, Karen, Editorial Board Member, Popper, Arthur N., Series Editor, Bass, Andrew, Editorial Board Member, Cunningham, Lisa, Editorial Board Member, Fritzsch, Bernd, Editorial Board Member, Groves, Andrew, Editorial Board Member, Hertzano, Ronna, Editorial Board Member, Le Prell, Colleen, Editorial Board Member, Litovsky, Ruth, Editorial Board Member, Manis, Paul, Editorial Board Member, Manley, Geoffrey, Editorial Board Member, Moore, Brian, Editorial Board Member, Simmons, Andrea, Editorial Board Member, Yost, William, Editorial Board Member, Lee, Adrian K. C., editor, Wallace, Mark T., editor, and Coffin, Allison B., editor more...
- Published
- 2019
- Full Text
- View/download PDF
15. Classification of Arabic fricative consonants according to their places of articulation.
- Author
-
Elfahm, Youssef, Abajaddi, Nesrine, Mounir, Badia, Elmaazouzi, Laila, Mounir, Ilham, and Farchi, Abdelmajid
- Subjects
AUTOMATIC speech recognition ,FRICATIVES (Phonetics) ,SUPPORT vector machines ,CLASSIFICATION ,HUMAN voice - Abstract
Many technology systems have used voice recognition applications to transcribe a speaker's speech into text that can be used by these systems. One of the most complex tasks in speech identification is to know which acoustic cues will be used to classify sounds. This study presents an approach for characterizing Arabic fricative consonants in two groups (sibilant and non-sibilant). From an acoustic point of view, our approach is based on the analysis of the energy distribution, in frequency bands, in a syllable of the consonant-vowel type. From a practical point of view, our technique has been implemented in MATLAB and tested on a corpus built in our laboratory. The results obtained show that the percentage energy distribution in a speech signal is a very powerful parameter in the classification of Arabic fricatives. We obtained an accuracy of 92% for non-sibilant consonants /f, χ, ɣ, ʕ, ħ, and h/, 84% for sibilants /s, sˤ, z, ʒ, and ʃ/, and 89% for the overall classification rate. In comparison to other algorithms based on neural networks and support vector machines (SVM), our classification system was able to provide a higher classification rate. [ABSTRACT FROM AUTHOR] more...
- Published
- 2022
- Full Text
- View/download PDF
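The feature described in the abstract above, the percentage of signal energy falling in each frequency band of a consonant-vowel syllable, can be sketched as follows. The band edges, the SVM classifier, and the toy training data are assumptions for illustration, not the authors' MATLAB implementation.

```python
# Illustrative sketch (not the authors' code): percentage energy distribution across
# frequency bands for a syllable, used as features for a sibilant / non-sibilant
# classifier. Band edges and the toy training data are assumptions.
import numpy as np
from sklearn.svm import SVC

def band_energy_percentages(signal, sr, edges=(0, 1000, 2500, 4000, 6000, 8000)):
    """Percentage of spectral energy in each band [edges[i], edges[i+1])."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    energies = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                         for lo, hi in zip(edges[:-1], edges[1:])])
    return 100.0 * energies / energies.sum()

rng = np.random.default_rng(1)
print(band_energy_percentages(rng.normal(size=8000), sr=16000))  # wider bands capture more white-noise energy

# Toy training set: one row of band percentages per syllable, label 1 = sibilant, 0 = non-sibilant.
X_sib = rng.dirichlet([1, 1, 2, 5, 8], size=50) * 100   # energy concentrated in high bands
X_non = rng.dirichlet([8, 5, 2, 1, 1], size=50) * 100   # energy concentrated in low bands
X = np.vstack([X_sib, X_non])
y = np.array([1] * 50 + [0] * 50)
clf = SVC().fit(X, y)
print("training accuracy:", clf.score(X, y))
```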
16. A Comparative Study of Dari Persian and English Language Consonant Sounds.
- Author
-
Usmanyar, Mahmood
- Subjects
PERSIAN language ,LISTENING comprehension ,ENGLISH language ,CONSONANTS ,LANGUAGE policy ,NATIVE language - Abstract
This research article compares the consonant sounds of English and Dari Persian in terms of state of the larynx, place, and manner of articulation. It aims to determine similarities and differences between the consonant systems of the two languages, which can be useful for teachers and learners of both, especially in listening and speaking skills. A qualitative method was used to identify similar and different consonant sounds. It was found that eighteen consonants are shared between the two languages, two consonant sounds are slightly similar, four English consonants are not present in Dari Persian, and three Dari Persian consonants are not present in English. It is believed that one's mother tongue clearly influences a second or foreign language: one's own pronunciation habits are so strong that they are extremely difficult to break. At the same time, mispronouncing sounds in spoken language can cause miscommunication or misunderstanding. This article can therefore help teachers and learners of English whose first language is Dari Persian, and vice versa, to maintain effective and meaningful communication while listening and speaking, with more focus on the sounds that differ between the first and the second or foreign language. [ABSTRACT FROM AUTHOR] more...
- Published
- 2022
- Full Text
- View/download PDF
17. EXPLORING A RECENTLY DEVELOPED ROMANIAN SPEECH CORPUS IN TERMS OF COARTICULATION PHENOMENA ACROSS WORD BOUNDARIES.
- Author
-
NICULESCU, OANA
- Subjects
ARTICULATION (Speech) ,ROMANIAN language ,DELETION (Linguistics) ,SPEECH ,FRICATIVES (Phonetics) ,PLACE of articulation ,OBSTRUENTS (Phonetics) - Abstract
The purpose of this article is twofold. On the one hand, we aim to investigate coarticulation phenomena in word-final position pertaining to standard Romanian spontaneous speech. The analysis focuses on deletion processes, most notably the deletion of the definite article -l, external hiatus repair mechanisms, word-final obstruent devoicing and voicing phenomena, as well as fricativization of the voiced postalveolar affricate. On the other hand, we aim to showcase the benefits of working on a recently developed Romanian speech corpus by correlating the transcripts with the audio recordings and automatically extracting the relevant acoustic data pertaining to each of the aforementioned connected speech processes. [ABSTRACT FROM AUTHOR] more...
- Published
- 2022
18. A contrastive analysis of southern welsh and cockney accents.
- Author
-
Alharbi, Amjad, Alqreeni, Gaida, Alothman, Hissah, Alanazi, Shatha, and Omar, Abdulfattah
- Subjects
PHONOLOGICAL awareness ,COMPARATIVE method ,SOUND systems ,SOCIAL networks ,CONSONANTS ,SOCIAL media - Abstract
This study is concerned with comparing the pronunciation in Southern Welsh, a Celtic language, and Cockney, an English dialect, regarding the place of articulation. The study uses a comparative method to shed light on the similarities and differences between the two accents. The data were collected from YouTube videos of speakers of Southern Welsh and Cockney and the consonant sound systems were analysed and compared. This study answers two main research questions: Do Southern Welsh and Cockney accents have the same consonants? What are the phonological differences between Southern Welsh and Cockney regarding place of articulation? The findings show that there are some phonological differences between Southern Welsh and Cockney in terms of bilabial, labiodental, dental, alveolar, lateral, palatal, velar, and uvular sounds. However, they are similar in terms of post-alveolar and glottal sounds. Awareness of these phonological differences is important for EFL learners to develop strong competencies in dealing with these accents, which are gaining increasing popularity due to the unprecedented spread of social media networks and applications. [ABSTRACT FROM AUTHOR] more...
- Published
- 2021
- Full Text
- View/download PDF
19. Development of Minimal Pair Test in Tamil (MPT-T)
- Author
-
Kavitha Vijayakumar, Saranyaa Gunalan, and Ranjith Rajeshwaran
- Subjects
paediatric cochlear implantees ,place of articulation ,vowel change ,vowel length ,Medicine - Abstract
Introduction: Speech perception testing provides an accurate measurement of the child’s ability to perceive and distinguish the various phonetic segments and patterns of the sounds. From among the many types of speech stimuli used, minimal pairs can also be used to assess the phoneme recognition skills. Thus, the study focused on developing Minimal Pair Test in Tamil (MPT-T). Aim: To develop and validate the MPT in Tamil on Normal Hearing (NH) children and paediatric Cochlear Implantees. Materials and Methods: It was an experimental study which included school going children in the age range of six to eight years and the duration of the study was 12 months. The test was developed in two phases. The first phase focussed on the construction of the word list, recording of the word pairs and the preparation of the test. The second phase was administration of the test on NH children and paediatric cochlear implantees. The test scores were analysed using Mann Whitney U test, Kruskal Wallis and Wilcoxon signed-rank test. The results showed a statistical significance between the NH group and the paediatric cochlear implantees. Results: The present study included 40 NH children and 15 paediatric cochlear implantees through purposive sampling method. The specific speech feature analysis of the paediatric cochlear implantees revealed that there was difficulty identifying the word pairs differing in Vowel Length (VL) and the best performed feature was Place of Articulation (POA). The results showed statistical significance between the NH group and the paediatric cochlear implantees. Conclusion: The developed test can be effectively used in clinic for assessing speech perception abilities of pediatric Cochlear Implantees and also in planning the rehabilitative goals. more...
- Published
- 2021
- Full Text
- View/download PDF
20. Development of Minimal Pair Test in Tamil (MPT-T).
- Author
-
VIJAYAKUMAR, KAVITHA, GUNALAN, SARANYAA, and RAJESHWARAN, RANJITH
- Subjects
MANN Whitney U Test ,PERCEPTION testing ,SCHOOL children ,SPEECH perception - Abstract
Introduction: Speech perception testing provides an accurate measurement of the child's ability to perceive and distinguish the various phonetic segments and patterns of the sounds. From among the many types of speech stimuli used, minimal pairs can also be used to assess the phoneme recognition skills. Thus, the study focussed on developing the Minimal Pair Test in Tamil (MPT-T). Aim: The aim of the present study was to develop and validate the MPT-T on Normal Hearing (NH) children and paediatric cochlear implantees (CI). Materials and Methods: It was an experimental study which included school going children in the age range of six to eight years and the duration of the study was 12 months. The test was developed in two phases. The first phase focussed on the construction of the word list, recording of the word pairs and the preparation of the test. The second phase was administration of the test on NH children and paediatric cochlear implantees. The test scores were analysed using the Mann-Whitney U test, the Kruskal-Wallis test, and the Wilcoxon signed-rank test. The results showed a statistical significance between the NH group and the paediatric cochlear implantees. Results: The present study included 40 NH children and 15 paediatric cochlear implantees through purposive sampling method. The specific speech feature analysis of the paediatric cochlear implantees revealed that there was difficulty identifying the word pairs differing in Vowel Length (VL) and the best performed feature was Place of Articulation (POA). The results showed statistical significance between the NH group and the paediatric cochlear implantees. Conclusion: The developed test can be effectively used in the clinic for assessing speech perception abilities of paediatric cochlear implantees and also in planning the rehabilitative goals. [ABSTRACT FROM AUTHOR] more...
- Published
- 2021
- Full Text
- View/download PDF
21. Phonetics of Consonants
- Author
-
Fuchs, Susanne and Birkholz, Peter
- Published
- 2019
- Full Text
- View/download PDF
22. Stop Voicing and F0 Perturbation in Pahari
- Author
-
Nazia Rashid, Abdul Qadir Khan, Ayesha Sohail, and Bilal Ahmed Abbasi
- Subjects
Pahari ,perturbation ,fundamental frequency ,voicing ,place of articulation ,Philology. Linguistics ,P1-1091 - Abstract
The present study has been carried out to investigate the perturbation effect of the voicing of initial stops on the fundamental frequency (F0) of the following vowels in Pahari. Results show that F0 values are significantly higher following voiceless unaspirated stops than voiced stops. F0 contours indicate an initially falling pattern for vowel [a:] after voiced and voiceless unaspirated stops. A rising pattern after voiced stops and a falling pattern after voiceless unaspirated stops is observed after [i:] and [u:]. These results match Umeda (1981) who found that F0 of a vowel following voiceless stops starts high and drops sharply, but when the vowel follows a voiced stop, F0 starts at a relatively low frequency followed by a gradual rise. The present data show no statistically significant difference between the F0 values of vowels with different places of articulation. Place of articulation is thus the least influencing factor. more...
- Published
- 2021
- Full Text
- View/download PDF
23. Electrophysiological Dynamics of Visual Speech Processing and the Role of Orofacial Effectors for Cross-Modal Predictions
- Author
-
Maëva Michon, Gonzalo Boncompte, and Vladimir López
- Subjects
orofacial movements ,place of articulation ,ERPs ,viseme ,articuleme ,speech motor system ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
The human brain generates predictions about future events. During face-to-face conversations, visemic information is used to predict upcoming auditory input. Recent studies suggest that the speech motor system plays a role in these cross-modal predictions, however, usually only audio-visual paradigms are employed. Here we tested whether speech sounds can be predicted on the basis of visemic information only, and to what extent interfering with orofacial articulatory effectors can affect these predictions. We registered EEG and employed N400 as an index of such predictions. Our results show that N400's amplitude was strongly modulated by visemic salience, coherent with cross-modal speech predictions. Additionally, N400 ceased to be evoked when syllables' visemes were presented backwards, suggesting that predictions occur only when the observed viseme matched an existing articuleme in the observer's speech motor system (i.e., the articulatory neural sequence required to produce a particular phoneme/viseme). Importantly, we found that interfering with the motor articulatory system strongly disrupted cross-modal predictions. We also observed a late P1000 that was evoked only for syllable-related visual stimuli, but whose amplitude was not modulated by interfering with the motor system. The present study provides further evidence of the importance of the speech production system for speech sounds predictions based on visemic information at the pre-lexical level. The implications of these results are discussed in the context of a hypothesized trimodal repertoire for speech, in which speech perception is conceived as a highly interactive process that involves not only your ears but also your eyes, lips and tongue. more...
- Published
- 2020
- Full Text
- View/download PDF
24. STOP VOICING AND F0 PERTURBATION IN PAHARI.
- Author
-
RASHID, Nazia, KHAN, Abdul Qadir, SOHAIL, Ayesha, and ABBASI, Bilal Ahmed
- Subjects
VOWELS - Abstract
Copyright of Acta Linguistica Asiatica is the property of Acta Linguistica Asiatica and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.) more...
- Published
- 2021
- Full Text
- View/download PDF
25. Electrophysiological Dynamics of Visual Speech Processing and the Role of Orofacial Effectors for Cross-Modal Predictions.
- Author
-
Michon, Maëva, Boncompte, Gonzalo, and López, Vladimir
- Subjects
FORECASTING ,PHONEME (Linguistics) ,ELECTROPHYSIOLOGY ,SPEECH perception ,VISUAL perception ,SPEECH - Abstract
The human brain generates predictions about future events. During face-to-face conversations, visemic information is used to predict upcoming auditory input. Recent studies suggest that the speech motor system plays a role in these cross-modal predictions, however, usually only audio-visual paradigms are employed. Here we tested whether speech sounds can be predicted on the basis of visemic information only, and to what extent interfering with orofacial articulatory effectors can affect these predictions. We registered EEG and employed N400 as an index of such predictions. Our results show that N400's amplitude was strongly modulated by visemic salience, coherent with cross-modal speech predictions. Additionally, N400 ceased to be evoked when syllables' visemes were presented backwards, suggesting that predictions occur only when the observed viseme matched an existing articuleme in the observer's speech motor system (i.e., the articulatory neural sequence required to produce a particular phoneme/viseme). Importantly, we found that interfering with the motor articulatory system strongly disrupted cross-modal predictions. We also observed a late P1000 that was evoked only for syllable-related visual stimuli, but whose amplitude was not modulated by interfering with the motor system. The present study provides further evidence of the importance of the speech production system for speech sounds predictions based on visemic information at the pre-lexical level. The implications of these results are discussed in the context of a hypothesized trimodal repertoire for speech, in which speech perception is conceived as a highly interactive process that involves not only your ears but also your eyes, lips and tongue. [ABSTRACT FROM AUTHOR] more...
- Published
- 2020
- Full Text
- View/download PDF
26. Effects of phonological features on reading-aloud latencies: A cross-linguistic comparison
- Author
-
Petroula Mousikou, Anastasia Ulicheva, Zoya Cherkasova, and Kevin Roon
- Subjects
Linguistics and Language ,Speech production ,Psycholinguistics ,Place of articulation ,Experimental and Cognitive Psychology ,Phonology ,Language and Linguistics ,Article ,Feature (linguistics) ,Prime (symbol) ,Reading ,Dynamics (music) ,Phonetics ,Reaction Time ,Voice ,Humans ,Speech ,Psychology ,Orthography ,Cognitive psychology ,Language - Abstract
Most psycholinguistic models of reading aloud and of speech production do not include linguistic representations more fine-grained than the phoneme, despite the fact that the available empirical evidence suggests that feature-level representations are activated during reading aloud and speech production. In a series of masked-priming experiments that employed the reading aloud task, we investigated effects of phonological features, such as voicing, place of articulation, and constriction location, on response latencies in English and Russian. We propose a hypothesis that predicts greater likelihood of obtaining feature-priming effects when the onsets of the prime and the target share more feature values than when they share fewer. We found that prime-target pairs whose onsets differed only in voicing (e.g., /p/-/b/) primed each other consistently in Russian, as has already been found in English. Response latencies for prime-target pairs whose onsets differed in place of articulation (e.g., /b/-/d/) patterned differently in English and Russian. Prime-target pairs whose onsets differed in constriction location only (e.g., /s/ and /ʂ/) did not yield a priming effect in Russian. We conclude that feature-priming effects are modulated not only by the phonological similarity between the onsets of primes and targets but also by the dynamics of feature activation and by the language-specific relationship between orthography and phonology. Our findings suggest that feature-level representations need to be included in models of reading aloud and of speech production if we are to move forward with theorizing in these research domains. (PsycInfo Database Record (c) 2021 APA, all rights reserved). more...
- Published
- 2023
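The hypothesis stated in the abstract above, that priming is more likely when prime and target onsets share more feature values, can be made concrete with a toy feature-overlap count. The feature table below is a simplified, hypothetical specification, not the one used in the study.

```python
# Toy illustration of feature overlap between onset consonants. The feature table is a
# simplified, hypothetical specification, not the authors' feature system.
FEATURES = {
    "p": {"voice": "-", "place": "labial",  "manner": "stop"},
    "b": {"voice": "+", "place": "labial",  "manner": "stop"},
    "d": {"voice": "+", "place": "coronal", "manner": "stop"},
    "s": {"voice": "-", "place": "coronal", "manner": "fricative"},
}

def shared_feature_values(c1, c2):
    """Number of feature values the two onsets have in common."""
    return sum(FEATURES[c1][k] == FEATURES[c2][k] for k in FEATURES[c1])

# /p/-/b/ differ only in voicing (2 shared values), /b/-/d/ only in place (2 shared),
# while /p/-/s/ share only voicelessness (1 shared), so less priming would be expected there.
for pair in [("p", "b"), ("b", "d"), ("p", "s")]:
    print(pair, shared_feature_values(*pair))
```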
27. Lexical Category-Governed Neutralization to Coronal and Non-Coronal Place of Articulation in Latent Consonants: The Case of Shipibo-Konibo and Capanahua (Pano)
- Author
-
Jose Elias-Ulloa
- Subjects
Panoan languages ,place of articulation ,dorsal neutralization ,labial neutralization ,latent segments ,harmonic alignment ,Language and Literature - Abstract
This study documents and accounts for the behavior of the place of articulation of latent segments in the Panoan languages Shipibo-Konibo and Capanahua. In these languages, the lexical category of the word governs the place of articulation (PoA) of latent consonants. Latent segments only surface when they are syllabified as syllable onsets. They surface as coronal consonants when they are part of verbs; but they occur as non-coronal consonants when they belong to nouns or adjectives. In non-verb forms, by default, they are neutralized to dorsal in Shipibo-Konibo, and to labial in Capanahua. The analysis proposed consists in using the well-known markedness hierarchy on PoA, |Labial, Dorsal > Coronal > Pharyngeal|, and harmonically aligning it with a morphological markedness hierarchy in which non-verb forms are more marked than verb forms: |NonVerb > Verb|. This creates two fixed rankings of markedness constraints: one on verb forms in which, as expected, coronal/laryngeal is deemed the least marked PoA, and another one on non-verb forms in which the familiar markedness on PoA is reversed so that labial and dorsal become the least marked places of articulation. The study shows that although both Panoan languages follow the general cross-linguistic tendency to have coronal as a default PoA, this default can be overridden by morphology. more...
- Published
- 2021
- Full Text
- View/download PDF
28. Fricative vowels as an intermediate stage of vowel apicalization
- Author
-
Feng Ling and Fang Hu
- Subjects
Sound change ,Linguistics and Language ,Place of articulation ,Tongue dorsum ,Phonology ,01 natural sciences ,Language and Linguistics ,Linguistics ,Intermediate stage ,030507 speech-language pathology & audiology ,03 medical and health sciences ,Phonological rule ,Vowel ,0103 physical sciences ,0305 other medical science ,Articulation (phonetics) ,010301 acoustics ,Mathematics - Abstract
Diphthongization and apicalization are two commonly detected phonetic and/or phonological processes for the development of high vowels, with the process of apicalization being of particular importance to the phonology of Chinese dialects. This paper describes the acoustics and articulation of fricative vowels in the Suzhou dialect of Wu Chinese. Acquiring frication initiates the sound change. The production of fricative vowels in Suzhou is characterized by visible turbulent frication in the spectrograms and a significantly lower Harmonics-to-Noise Ratio vis-à-vis the plain counterparts. The acoustic study suggests that spectral characteristics of fricative vowels play a more important role in defining the vowel contrasts. The fricative high front vowels have comparatively greater F1 and smaller F2 and F3 values than their plain counterparts, and in the acoustic F1/F2 plane, the fricative vowels are located in an intermediate position between their plain and apical counterparts. The articulatory study revealed that not only the tongue dorsum but also the tongue blade is involved in the production of fricative high front vowels in Suzhou. Phonologically, plain high front vowels, fricative high front vowels, and apical vowels are distinguished by active place of articulation, being anterodorsal, laminal, and apical respectively; and frication becomes a concomitant and redundant feature in the production of fricative or apical vowels. It is concluded that the fine-grained phonetic details suggest that the fricative high front vowels in Suzhou are at an intermediate stage of vowel apicalization in terms of both acoustics and articulation.
- Published
- 2022
- Full Text
- View/download PDF
29. Sounds of Assamese Language
- Author
-
Sarma, Mousmita, Sarma, Kandarpa Kumar, Kacprzyk, Janusz, Series editor, Sarma, Mousmita, and Sarma, Kandarpa Kumar
- Published
- 2014
- Full Text
- View/download PDF
30. Phonemic Representations and Categories
- Author
-
Steinschneider, Mitchell, Cohen, Yale E., editor, Popper, Arthur N., editor, and Fay, Richard R., editor
- Published
- 2013
- Full Text
- View/download PDF
31. Classification of Fricatives Using Novel Modulation Spectrogram Based Features
- Author
-
Malde, Kewal D., Chittora, Anshu, Patil, Hemant A., Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Maji, Pradipta, editor, Ghosh, Ashish, editor, Murty, M. Narasimha, editor, Ghosh, Kuntal, editor, and Pal, Sankar K., editor more...
- Published
- 2013
- Full Text
- View/download PDF
32. Place of articulation shifts in sound change: A gradual road to the unmarked
- Author
-
Eirini Apostolopoulou
- Subjects
Cultural Studies ,Sound change ,Linguistics and Language ,Literature and Literary Theory ,Computer science ,Place of articulation ,Language and Linguistics ,Linguistics - Abstract
This paper investigates place of articulation shifts involving heterosyllabic C[non-coronal]C[coronal] clusters. Such phenomena are found, among other languages, in the diachrony of Italiot Greek, where three typologically different historical stages are observed: (a) no shifts; (b) dorsal > labial shift; (c) dorsal, labial > coronal shift. Drawing on Rice's (1994) model of the Place node and the markedness hierarchy dorsal ≺ labial ≺ coronal (with “≺” denoting ‘more marked than’) (de Lacy 2002), I maintain that these shifts reduce the markedness of codas. The gradual typological changes are accounted for in terms of Property Theory (Alber & Prince 2015). more...
- Published
- 2022
- Full Text
- View/download PDF
33. Lietuvių ir latvių kalbų trankieji priebalsiai: lokuso lygčių rezultatai [Obstruents in Lithuanian and Latvian: locus equation results]
- Author
-
Jolita Urbanavičienė and Inese Indričāne
- Subjects
Standard Lithuanian ,Standard Latvian ,acoustic phonetics ,locus equations ,obstruents ,sonority ,palatalization ,place of articulation ,Philology. Linguistics ,P1-1091 - Abstract
In this article, the obstruent consonants of Lithuanian are investigated for the first time using the locus equation method, which is widely applied in acoustic phonetics both for determining the degree of coarticulation between vowels and consonants and for studying the place of articulation of consonants. It is also the first comparative study of Lithuanian and Latvian obstruents carried out with the same methodology. The article presents an acoustic classification of the consonants of both Baltic languages, based on the universal principles developed by the International Phonetic Association (IPA). The locus equation results for the two languages are compared according to three criteria: voicing, palatalization, and place of articulation. more...
- Published
- 2016
- Full Text
- View/download PDF
34. Electrophysiological and behavioral measures of some speech contrasts in varied attention and noise.
- Author
-
Morris, David Jackson, Tøndering, John, and Lindgren, Magnus
- Subjects
*VOWELS , *MASKING (Psychology) , *SPEECH , *DIFFERENCE sets - Abstract
This paper investigates the salience of speech contrasts in noise, in relation to how listening attention affects scalp-recorded cortical responses. The contrasts examined, using consonant-vowel syllables, were place of articulation, vowel length, and voice-onset time (VOT); our analysis focuses on the correspondence between the effect of attention on the electrophysiology and the decrement in behavioral results when noise was added to the stimuli. Normal-hearing subjects (n = 20) performed closed-set syllable identification in no noise, 0, 4 and 8 dB signal-noise ratio (SNR). Identification in noise decreased markedly for place of articulation, moderately for vowel length and marginally for VOT. The same syllables were used in two electrophysiology conditions, where subjects attended to the stimuli, and also while their attention was diverted to a visual discrimination task. Differences in global field power between the attention conditions for each contrast showed that the effect of attention was negligible for place of articulation. They implied offset encoding of vowel length and were early (starting at 117 ms) and of high amplitude (>3 μV) for VOT. There were significant correlations between the difference in syllable identification in no noise and 0 dB SNR and the electrophysiology results between attention conditions for the VOT contrast. Comparison of the two attention conditions with microstate analysis showed a significant difference in the duration of microstate class D. These results show differential integration of attention and syllable processing according to speech contrast and they suggest that there is correspondence between the salience of a contrast in noise and the effect of attention on the evoked electrical response. Highlights: • Electrophysiological attention differences for VOT and perception are correlated. • VOT perception is robust to noise and affected by the attention of the listener. • EEG measures of place of articulation are little affected by listening attention. • Clustered microstate analysis reveals differences in listening attention. [ABSTRACT FROM AUTHOR] more...
- Published
- 2019
- Full Text
- View/download PDF
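Global field power, the measure compared across attention conditions in the abstract above, is commonly computed as the standard deviation of the evoked potential across electrodes at each time sample. A minimal sketch on synthetic data (the ERP arrays and epoch size are stand-ins, not the study's recordings):

```python
# Minimal sketch of a global field power (GFP) comparison between two attention
# conditions. GFP at each time point is the standard deviation of the evoked
# potential across electrodes; the arrays below are synthetic stand-ins.
import numpy as np

def global_field_power(erp):
    """erp: (n_channels, n_samples) evoked potential -> GFP per time sample."""
    return erp.std(axis=0)

rng = np.random.default_rng(0)
n_channels, n_samples = 64, 600                        # e.g. a 600 ms epoch sampled at 1 kHz
topography = rng.normal(size=n_channels)               # spatial pattern of an evoked component
component = np.sin(np.linspace(0, np.pi, n_samples))   # its time course
attended = rng.normal(size=(n_channels, n_samples)) + 2.0 * np.outer(topography, component)
ignored = rng.normal(size=(n_channels, n_samples))

gfp_diff = global_field_power(attended) - global_field_power(ignored)
print("largest attention-related GFP difference at sample", int(np.abs(gfp_diff).argmax()))
```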
35. Sonority Is Different.
- Author
-
Scheer, Tobias
- Subjects
MORPHEMICS ,SKELETON - Abstract
The paper argues that sonority on the one hand and other segmental properties such as place of articulation (labiality etc.) and laryngeal properties (voicing etc.) on the other hand are different in kind and must therefore not be represented alike: implementations on a par e.g. as features ([±voc], [±son], [±lab], [±voice] etc.) are misled. Arguments come from a number of broad, cross-linguistically stable facts concerning visibility of items below and above the skeleton in phonological and morphological processing: sonority, but no other segmental property, is taken into account when syllable structure is built (upward visibility); processes located above the skeleton (infixation, phonologically conditioned allomorphy, stress, tone, positional strength) do make reference to sonority, but never to labiality, voicing etc. (downward visibility). Approaches are discussed where sonority is encoded as structure, rather than as primes (features or Elements). In some cases not only sonority but also other segmental properties are structuralized, a solution that does not do justice to the insight that sonority and melody are different in kind. Also, the approaches that structuralize sonority are not concerned with the question how the representations they entertain come into being: representations are not contained in the phonetic signal that is the input to the linguistic system, nor do they fall from heaven - they are built by some computation. It is therefore concluded that what really segregates sonority and melody is their belonging to two distinct computational systems (modules in the Fodorian sense) which operate over distinct vocabularies and produce distinct structure: sonority primes are used to build syllable structure, while other computations take other types of primes as an input. The computation carrying out a palatalization for example works with melodic primes. The segment, then, is a lexical recording that has different compartments containing domain-specific primes [...]SEGMENT. This is also the case of the morpheme, which hosts three compartments [...]MORPHEME. [ABSTRACT FROM AUTHOR] more...
- Published
- 2019
- Full Text
- View/download PDF
36. Articulatory Skills in Malayalam-Speaking Children with Down Syndrome.
- Author
-
ABRAHAM, ANITHA NAITTEE and SREED, N.
- Subjects
*DOWN syndrome , *VOICE analysis , *STANDARDIZED tests , *ERROR analysis in mathematics , *DUTCH language - Abstract
The speech of children with Down syndrome (DS) is most often unintelligible and their speech sound errors are well documented. However, the error patterns may differ depending on the phonological characteristics of the language spoken. Most reports on the speech errors of DS concern English; a few other languages, such as Dutch, have also been studied. Hence studies in other languages with distinct phonological characteristics are warranted. Malayalam is one such Dravidian language with unique phonological features. The aim of the present study was therefore to investigate the speech sound errors exhibited by children with DS in Malayalam and compare them with those of mental-age-matched typically developing children. Participants included ten children with DS and ten typically developing (TD) children. The articulatory errors of the children were assessed using a standardized articulation test. The responses were subjected to place, manner, and voicing error analyses, and Percentage of Consonants Correct (PCC) was computed. Language-specific error patterns were observed in the results of the present study when compared to available reports. The most error-prone place of articulation was retroflex, and the most error-prone manners of articulation were approximants, trills, and flaps, which are the unique phonemes in Malayalam. Alveolars and glottals were also erroneous when compared to bilabials, labiodentals and dentals. This difficulty in speech production of children with DS is discussed with respect to the anatomical, physiological, and cognitive deficits. The need for targeting articulatory skills in children with DS is also highlighted. [ABSTRACT FROM AUTHOR] more...
- Published
- 2019
37. Consonant–vowel interaction in Sichuan Chinese: An element-based analysis.
- Author
-
Chen, Shunting and van de Weijer, Jeroen
- Subjects
*ARTICULATION (Education) , *CONSONANTS , *LABIAL frenulum , *PHONOLOGY , *GENERALIZATION - Abstract
This paper focuses on the representation of place of articulation features in consonants and vowels, on the basis of interaction between consonant and vowel place. New data from Sichuan Chinese are examined, which show at least four types of consonant–vowel interaction: labial attraction, two types of coronal attraction and velar attraction. Since consonants and vowels show patterns of interaction at all places of articulation, we argue that consonant and vowel place should be described using the same representational elements. We propose that the relevant generalizations reflect the historical development (not synchronic alternation), and show that an account using standard Dependency Phonology unary features can capture these facts. [ABSTRACT FROM AUTHOR] more...
- Published
- 2018
- Full Text
- View/download PDF
38. More on the articulation of devoiced [u] in Tokyo Japanese: effects of surrounding consonants
- Author
-
Shigeto Kawahara and Jason A. Shaw
- Subjects
Consonant ,Linguistics and Language ,Acoustics and Ultrasonics ,Place of articulation ,Tongue dorsum ,Phonetics ,Tongue body ,Language and Linguistics ,Linguistics ,Japan ,Tongue ,Voice ,Humans ,Affect (linguistics) ,Tokyo ,Articulation (phonetics) ,Psychology ,Gesture - Abstract
Past work investigating the lingual articulation of devoiced vowels in Tokyo Japanese has revealed optional but categorical deletion. Some devoiced vowels retained a full lingual target, just like their voiced counterparts, whereas others showed trajectories that are best modelled as targetless, i.e., linear interpolation between the surrounding vowels. The current study explored the hypothesis that this probabilistic deletion is modulated by the identity of the surrounding consonants. A new EMA experiment with an extended stimulus set replicates the core finding of Shaw & Kawahara (2018b, "The lingual gesture of devoiced [u] in Japanese", Journal of Phonetics 66, 100–119, https://doi.org/10.1016/j.wocn.2017.09.007) that Japanese devoiced [u] sometimes lacks a tongue body raising gesture. The current results moreover show that surrounding consonants do indeed affect the probability of tongue dorsum targetlessness. We found that deletion of devoiced vowels is affected by the place of articulation of the preceding consonant; deletion is more likely following a coronal fricative than a labial fricative. Additionally, we found that the manner combination of the flanking consonants, fricative–fricative versus fricative–stop, also has an effect, at least for some speakers; however, unlike the effect of C1 place, the direction of the manner combination effect varies across speakers with some deleting more often in fricative–stop environments and others more often in fricative–fricative environments.
- Published
- 2021
- Full Text
- View/download PDF
39. A contrastive analysis of southern welsh and cockney accents
- Author
-
Abdulfattah Omar, Hissah Nasser Alothman, Gaida Saad Alqreeni, Amjad Khaled Alharbi, and Shada Alanazi
- Subjects
Consonant ,Cockney ,Linguistics and Language ,Celtic languages ,Place of articulation ,Pronunciation ,Language and Linguistics ,Linguistics ,language.human_language ,Education ,Welsh ,Accent (music) ,language ,Psychology ,Contrastive analysis - Abstract
This study is concerned with comparing the pronunciation in Southern Welsh, a Celtic language, and Cockney, an English dialect, regarding the place of articulation. The study uses a comparative method to shed light on the similarities and differences between the two accents. The data were collected from YouTube videos of speakers of Southern Welsh and Cockney and the consonant sound systems were analysed and compared. This study answers two main research questions: Do Southern Welsh and Cockney accents have the same consonants? What are the phonological differences between Southern Welsh and Cockney regarding place of articulation? The findings show that there are some phonological differences between Southern Welsh and Cockney in terms of bilabial, labiodental, dental, alveolar, lateral, palatal, velar, and uvular sounds. However, they are similar in terms of post-alveolar and glottal sounds. Awareness of these phonological differences is important for EFL learners to develop strong competencies in dealing with these accents, which are gaining increasing popularity due to the unprecedented spread of social media networks and applications. more...
- Published
- 2021
- Full Text
- View/download PDF
40. Phonetic reduction and paradigm uniformity effects in spontaneous speech
- Author
-
U. Marie Engemann and Ingo Plag
- Subjects
Linguistics and Language ,Cognitive Neuroscience ,Place of articulation ,Speech recognition ,String (computer science) ,Variety (linguistics) ,Language and Linguistics ,language.human_language ,Reduction (complexity) ,New Zealand English ,Duration (music) ,language ,Word (group theory) ,Plural ,Mathematics - Abstract
Recent work on the acoustic properties of complex words has found that morphological information may influence the phonetic properties of words, e.g. acoustic duration. Paradigm uniformity has been proposed as one mechanism that may cause such effects. In a recent experimental study Seyfarth et al. (2017) found that the stems of English inflected words (e.g. frees) have a longer duration than the same string of segments in a homophonous mono-morphemic word (e.g. freeze), due to the co-activation of the longer articulatory gesture of the bare stem (e.g. free). However, not all effects predicted by paradigm uniformity were found in that study, and the role of frequency-related phonetic reduction remained inconclusive. The present paper tries to replicate the effect using conversational speech data from a different variety of English (i.e. New Zealand English), using the QuakeBox Corpus (Walsh et al. 2013). In the presence of word-form frequency as a predictor, stems of plurals were not found to be significantly longer than the corresponding strings of comparable non-complex words. The analysis revealed, however, a frequency-induced gradient paradigm uniformity effect: plural stems become shorter with increasing frequency of the bare stem. more...
- Published
- 2021
- Full Text
- View/download PDF
41. Phonological and phonetic properties of nasal substitution in Sasak and Javanese
- Author
-
Albert Lee, Diana Archangeli, Jonathan Yip, and Lang Qin
- Subjects
Sasak ,Javanese ,nasal substitution ,ultrasound language research ,abstract phonological relations ,place of articulation ,Language. Linguistic theory. Comparative grammar ,P101-410 - Abstract
Austronesian languages such as Sasak and Javanese have a pattern of morphological nasal substitution, where nasals alternate with homorganic oral obstruents—except that [s] is described as alternating with [ɲ], not with [n]. This appears to be an abstract morphophonological relation between [s] and [ɲ] where other parts of the paradigm have a concrete homorganic relation. Articulatory ultrasound data were collected of productions of [t, n, ʨ, ɲ], along with [s] and its nasal counterpart from two languages, from 10 Sasak and 8 Javanese speakers. Comparisons of lingual contours using a root mean square analysis were evaluated with linear mixed-effects regression models, a method that proves reliable for testing questions of phonological neutralization. In both languages, [t, n, s] exhibit a high degree of articulatory similarity, whereas postalveolar [ʨ] and its nasal counterpart [ɲ] exhibited less similarity. The nasal counterpart of [s] was identical in articulation to [ɲ]. This indicates an abstract, rather than concrete, relationship between [s] and its morphophonological nasal counterpart, with the two sounds not sharing articulatory place in either Sasak or Javanese. more...
- Published
- 2017
- Full Text
- View/download PDF
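The contour comparison mentioned in the abstract above rests on a root-mean-square distance between tongue surfaces sampled at corresponding points. A minimal sketch on synthetic contours follows; real ultrasound contours would first need alignment and resampling, and the mixed-effects evaluation is not shown.

```python
# Minimal sketch of a root-mean-square (RMS) distance between two tongue-surface
# contours, the kind of measure the abstract feeds into mixed-effects models.
# The contours are synthetic stand-ins, not ultrasound data from the study.
import numpy as np

def rms_distance(contour_a, contour_b):
    """Contours: arrays of shape (n_points, 2) sampled at corresponding positions."""
    return np.sqrt(np.mean(np.sum((contour_a - contour_b) ** 2, axis=1)))

x = np.linspace(0, 1, 100)
contour_t = np.column_stack([x, np.sin(np.pi * x)])          # stand-in for [t]
contour_s = np.column_stack([x, np.sin(np.pi * x) + 0.02])   # nearly identical, like [s] vs [t]
contour_c = np.column_stack([x, np.sin(np.pi * x * 0.8)])    # more distinct, like [ʨ] vs [t]

print("RMS(t, s) =", rms_distance(contour_t, contour_s))
print("RMS(t, ʨ) =", rms_distance(contour_t, contour_c))
```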
42. Locus equations and the place of articulation for the Latvian sonorants
- Author
-
Jana Taperte
- Subjects
Latvian ,acoustic phonetics ,sonorants ,place of articulation ,coarticulation ,vowel F2 transitions ,locus theory ,Philology. Linguistics ,P1-1091 - Abstract
In the article, the sonorant consonants of Standard Latvian are investigated using locus equations. The aim of the study is to examine whether locus equations can be considered efficient descriptors of consonantal place of articulation both within the group of sonorants and across different manner classes in Standard Latvian. Two types of sequences were analyzed: (1) the CV part of isolated nonsense CVC syllables, where C is one of the sonorants [m; n; ɲ; l; ʎ; r] and V is one of the vowels [i(ː); e(ː); æ(ː); ɑ(ː); ɔ(ː); u(ː)]; (2) the V(ː)C part of isolated nonsense V(ː)CV (VCV for [ŋ]) structure utterances, where C is one of the nasals [m; n; ɲ; ŋ] and V is one of the vowels [i; e; æ; ɑ; ɔ; u]. Each utterance was recorded in three repetitions by each of 10 native Latvian speakers (five males and five females), thus 3420 items were analyzed in total. Statistical analysis of locus equation slopes and y-intercepts both for the sonorants and for the whole consonant inventory of Standard Latvian was performed in order to test the relevance of these indices for discriminating places of articulation across different manner classes. By plotting the data for the whole consonant inventory in slope-by-intercept space it is possible to distinguish between the groups of palatals/dentals/alveolars and labials/velars, while the results of statistical analysis show a significant difference among all place categories. According to the results, there are certain coarticulatory mechanisms associated with particular places of constriction for the Latvian consonants that allow linking locus equation data to different place categories, although they are also affected by manner and voicing. Nevertheless, place of articulation as a determinant of coarticulatory patterns overrules these factors when other possible influences are excluded. more...
- Published
- 2014
- Full Text
- View/download PDF
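A locus equation of the kind used in the entry above is a straight-line regression of second-formant (F2) frequency at vowel onset on F2 at the vowel midpoint, fitted across vowel contexts for a single consonant; its slope and y-intercept are then used to index place of articulation. The following minimal sketch shows such a fit; the F2 values are invented for illustration and do not come from the Latvian data.

    import numpy as np

    # Hypothetical F2 measurements (Hz) for one consonant across six vowel
    # contexts: F2 at vowel onset (just after the consonant) and F2 at the
    # vowel midpoint. These numbers are illustrative only.
    f2_midpoint = np.array([2300.0, 2000.0, 1700.0, 1200.0, 900.0, 700.0])
    f2_onset = np.array([2100.0, 1900.0, 1650.0, 1300.0, 1100.0, 1000.0])

    # Locus equation: F2_onset = slope * F2_midpoint + y_intercept.
    slope, y_intercept = np.polyfit(f2_midpoint, f2_onset, deg=1)
    print(f"slope = {slope:.2f}, y-intercept = {y_intercept:.0f} Hz")

Repeating the fit for every consonant and plotting the resulting (slope, y-intercept) pairs gives the slope-by-intercept space in which the study separates place categories.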
43. Calibration of Consonant Perception to Room Reverberation
- Author
-
Norbert Kopčo, Barbara G. Shinn-Cunningham, Eleni L. Vlahou, and Kanako Ueno
- Subjects
Consonant, Auditory perception, Linguistics and Language, Reverberation, Anechoic chamber, Acoustics, Speech recognition, Place of articulation, Context (language use), Language and Linguistics, Speech and Hearing, Phonetics, Perception, Calibration, Humans, Speech, Mathematics, Manner of articulation, Speech Perception, Voice - Abstract
Purpose: We examined how consonant perception is affected by a preceding speech carrier simulated in the same or a different room, for different classes of consonants. Carrier room, carrier length, and carrier length/target room uncertainty were manipulated. A phonetic feature analysis tested which phonetic categories are influenced by the manipulations in the acoustic context of the carrier. Method: Two experiments were performed, each with nine participants. Targets consisted of 10 or 16 vowel–consonant (VC) syllables presented in one of two strongly reverberant rooms, preceded by a multiple-VC carrier presented in either the same room, a different reverberant room, or an anechoic room. In Experiment 1, the carrier length and the target room varied randomly from trial to trial, whereas in Experiment 2, they were fixed within a block of trials. Results: Overall, a consistent carrier provided an advantage for consonant perception compared to inconsistent carriers, whether anechoic or differently reverberant. Phonetic analysis showed that carrier inconsistency significantly degraded identification of manner of articulation, especially for stop consonants, and, in one of the rooms, also of voicing. Carrier length and carrier/target uncertainty did not affect adaptation to reverberation for individual phonetic features. The detrimental effects of anechoic and different reverberant carriers on target perception were similar. Conclusions: The strength of calibration varies across different phonetic features, as well as across rooms with different levels of reverberation. Even though place of articulation is the feature most affected by reverberation, it is manner of articulation and, partially, voicing for which room adaptation is observed. (A schematic sketch of per-feature scoring follows this entry.) more...
- Published
- 2021
- Full Text
- View/download PDF
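The phonetic feature analysis described in the entry above scores listeners' consonant responses not only as right or wrong but by whether individual features (voicing, place, manner) were transmitted correctly. The sketch below shows one way such per-feature scoring could be implemented; the small feature table and the trial data are hypothetical and do not reflect the actual stimulus set or scoring procedure of the study.

    # Illustrative feature table for a few consonants: (voicing, place, manner).
    FEATURES = {
        "p": ("voiceless", "bilabial", "stop"),
        "b": ("voiced", "bilabial", "stop"),
        "t": ("voiceless", "alveolar", "stop"),
        "d": ("voiced", "alveolar", "stop"),
        "s": ("voiceless", "alveolar", "fricative"),
        "z": ("voiced", "alveolar", "fricative"),
    }
    FEATURE_NAMES = ("voicing", "place", "manner")

    def feature_accuracy(trials):
        # trials: list of (stimulus, response) consonant pairs.
        correct = {name: 0 for name in FEATURE_NAMES}
        for stimulus, response in trials:
            for name, stim_value, resp_value in zip(
                FEATURE_NAMES, FEATURES[stimulus], FEATURES[response]
            ):
                if stim_value == resp_value:
                    correct[name] += 1
        return {name: correct[name] / len(trials) for name in FEATURE_NAMES}

    # Example: /t/ heard as /d/ preserves place and manner but not voicing.
    print(feature_accuracy([("t", "d"), ("p", "p"), ("s", "t")]))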
44. Adult perception of stop consonant voicing in American-English-learning toddlers: Voice onset time and secondary cues
- Author
-
Laura L. Koenig and Elaine R. Hitchcock
- Subjects
Adult, Speech Acoustics, Speech perception, Acoustics and Ultrasonics, Place of articulation, American English, Voice-onset time, Phonetics, Audiology, United States, Arts and Humanities (miscellaneous), Child, Preschool, Stop consonant, Speech Perception, Voice, otorhinolaryngologic diseases, Humans, Speech, Cues, Psychology - Abstract
Most studies of speech perception employ highly controlled stimuli. It is not always clear how such results extend to the processing of natural speech. In a series of experiments, we progressively explored the role of voice onset time (VOT) and potential secondary cues in adult labeling of stressed syllable-initial /b d p t/ produced by typically developing two-year-old learners of American English. Taken together, the results show the following: (a) Adult listeners show phoneme boundaries in labeling functions comparable to those established for adult speech. (b) Adult listeners can be sensitive to distributional properties of the stimulus set, even in a study that employs highly varied naturalistic productions from multiple speakers. (c) Secondary cues are available in the speech of two-year-olds, and these may influence listener judgments; cues may differ across places of articulation and the VOT continuum. These results can lend insight into how clinicians judge child speech during assessment and also have implications for our understanding of the role of primary and secondary acoustic cues in adult perception of child speech. (A schematic labeling-function fit follows this entry.) more...
- Published
- 2021
- Full Text
- View/download PDF
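A labeling function of the kind referred to in point (a) above is typically summarized by fitting a psychometric curve to the proportion of voiceless responses at each VOT value and reading off the 50% crossover as the phoneme boundary. The sketch below shows one way to do this with a logistic fit; the response proportions and starting values are invented for illustration, not drawn from the study.

    import numpy as np
    from scipy.optimize import curve_fit

    def labeling_function(vot_ms, boundary, slope):
        # Predicted proportion of "voiceless" (/p/ or /t/) responses at a given VOT.
        return 1.0 / (1.0 + np.exp(-slope * (vot_ms - boundary)))

    # Invented labeling data: VOT values (ms) and proportion of voiceless responses.
    vot_ms = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
    p_voiceless = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.96, 0.99])

    # The fitted 'boundary' parameter is the VOT at which responses split 50/50,
    # i.e. the phoneme boundary; 'slope' reflects how sharp the boundary is.
    (boundary, slope), _ = curve_fit(labeling_function, vot_ms, p_voiceless, p0=[30.0, 0.2])
    print(f"estimated phoneme boundary ≈ {boundary:.1f} ms VOT")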
45. Place
- Author
-
van der Hulst, Harry
- Published
- 2020
- Full Text
- View/download PDF
46. Aeroacoustic differences between the Japanese fricatives [ɕ] and [ç]
- Author
-
Tsukasa Yoshinaga, Kikuo Maekawa, and Akiyoshi Iida
- Subjects
Physics, Acoustics and Ultrasonics, Sibilant, Broadband noise, Place of articulation, Speech recognition, Speech sounds, Acoustics, Speech Acoustics, Amplitude, Japan, Arts and Humanities (miscellaneous), Phonetics, Coronal plane, Voice, Humans, Speech, Vocal tract
- Published
- 2021
47. Sibilant production in Hebrew-speaking adults: Apical versus laminal.
- Author
-
Icht, Michal and Ben-David, Boaz M.
- Subjects
*COLLEGE students, *ANALYSIS of variance, *PROBABILITY theory, *SPEECH therapy, *REPEATED measures design, *DATA analysis software, *DESCRIPTIVE statistics, PHYSIOLOGICAL aspects of speech - Abstract
The Hebrew IPA charts describe the sibilants /s, z/ as 'alveolar fricatives', where the place of articulation on the palate is the alveolar ridge. The point of constriction on the tongue is not defined: apical (tip) or laminal (blade). Usually, speech and language pathologists (SLPs) use the apical placement in Hebrew articulation therapy. Some researchers and SLPs have suggested that acceptable /s, z/ can also be produced with the laminal placement (i.e. the tip of the tongue approximating the lower incisors). The present study focused on the clinical level, attempting to determine the prevalence of these alternative points of constriction on the tongue for /s/ and /z/ in three different samples of Hebrew-speaking young adults with typical articulation (total n = 242). Around 60% of the participants reported using the laminal position, regardless of several speaker-related variables (e.g. tongue-thrust swallowing, gender). Laminal production was more common for /s/ (than /z/), in coda (than onset) position of the sibilant, in mono- (than di-) syllabic words, and with non-alveolar (than alveolar) adjacent consonants. Experiment 3 revealed no acoustic differences between apical and laminal productions of /s/ or of /z/. From a clinical perspective, we wish to raise SLPs' awareness of the prevalence of the two placements when treating Hebrew speakers, noting that tongue placements were highly correlated across sibilants. Finally, we recommend adopting a client-centred practice, in which tongue placement is matched to the client. We further recommend selecting targets for intervention based on our findings and distinguishing between different prosodic positions in treatment. [ABSTRACT FROM PUBLISHER] more...
- Published
- 2018
- Full Text
- View/download PDF
48. Deep Neural Network based Place and Manner of Articulation Detection and Classification for Bengali Continuous Speech.
- Author
-
Bhowmik, Tanmay, Chowdhury, Amitava, and Das Mandal, Shyamal Kumar
- Subjects
SPEECH processing systems, ARTICULATION (Speech), ARTIFICIAL neural networks, DEEP learning, INFORMATION storage & retrieval systems - Abstract
Phonological features are the most basic units of the speech knowledge hierarchy. This paper reports on the detection and classification of phonological features in Bengali continuous speech. The phonological features are based on place and manner of articulation. All experiments are performed with a deep neural network based framework. Two different models are designed for the detection and classification tasks. The deep-structured models are pre-trained with stacked autoencoders. The C-DAC speech corpus is used as the source of continuous spoken Bengali data. A frame-wise cepstral representation is provided to the input layer of the deep-structured model. Speech data from multiple speakers were used to confirm speaker independence. In the detection task, the system achieved 86.19% average overall accuracy. In the classification task, accuracy for place of articulation remained low at 50.2%, while manner-based classification delivered an improved performance of 98.9% accuracy. (A schematic frame-wise classifier is sketched after this entry.) [ABSTRACT FROM AUTHOR] more...
- Published
- 2018
- Full Text
- View/download PDF
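The frame-wise DNN classifier described in the entry above maps a cepstral feature vector for each frame to a place (or manner) class. The sketch below shows a minimal PyTorch version of that general architecture; the layer sizes, the number of cepstral coefficients and place classes, and the omission of stacked-autoencoder pre-training are all assumptions for illustration, not details of the published model.

    import torch
    import torch.nn as nn

    N_CEPSTRAL = 39      # e.g. 13 cepstral coefficients plus deltas and delta-deltas (assumed)
    N_PLACE_CLASSES = 5  # assumed number of place-of-articulation categories

    # Feed-forward classifier from one cepstral frame to a place class.
    model = nn.Sequential(
        nn.Linear(N_CEPSTRAL, 256),
        nn.ReLU(),
        nn.Linear(256, 256),
        nn.ReLU(),
        nn.Linear(256, N_PLACE_CLASSES),
    )

    # One supervised training step on a dummy mini-batch of 32 frames.
    frames = torch.randn(32, N_CEPSTRAL)
    labels = torch.randint(0, N_PLACE_CLASSES, (32,))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss = nn.CrossEntropyLoss()(model(frames), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"cross-entropy loss on dummy batch: {loss.item():.3f}")

A detection model of the same general shape would presumably end in a small set of binary feature outputs rather than a multi-class place layer.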
49. Interaction of attention and acoustic factors in dichotic listening for fused words.
- Author
-
McCulloch, Katie, Lachner Bass, Natascha, Dial, Heather, Hiscock, Merrill, and Jansen, Ben
- Subjects
*INTERACTION (Philosophy), *DICHOTIC listening tests, *AVERSIVE stimuli, *ATTENTION, *PLACE of articulation - Abstract
Two dichotic listening experiments examined the degree to which the right-ear advantage (REA) for linguistic stimuli is altered by a “top-down” variable (i.e., directed attention) in conjunction with selected “bottom-up” (acoustic) variables. Halwes fused dichotic words were administered to 99 right-handed adults with instructions to attend to the left or right ear, or to divide attention equally. Stimuli in Experiment 1 were presented without noise or mixed with noise that was high-pass or low-pass filtered, or unfiltered. The stimuli themselves in Experiment 2 were high-pass or low-pass filtered, or unfiltered. The initial consonants of each dichotic pair were categorized according to voice onset time (VOT) and place of articulation (PoA). White noise extinguished both the REA and selective attention, and filtered noise nullified selective attention without extinguishing the REA. Frequency filtering of the words themselves did not alter performance. VOT effects were inconsistent across experiments, but PoA analyses indicated that paired velar consonants (/k/ and /g/) yielded a left-ear advantage and paradoxical selective-attention results. The findings show that ear asymmetry and the effectiveness of directed attention can be altered by bottom-up variables. [ABSTRACT FROM PUBLISHER] more...
- Published
- 2017
- Full Text
- View/download PDF
50. Türk Runik Alfabesindeki "Çift Ünsüz" İşaretleri Üzerine [On the "Double Consonant" Signs in the Turkish Runic Alphabet].
- Author
-
Sultanzade, Vügar
- Published
- 2017