284 results for "place of articulation"
Search Results
2. Effects of voice onset time and place of articulation on perception of dichotic Turkish syllables
- Author
- Eskicioglu, Emre, Taslica, Serhat, Guducu, Cagdas, Oniz, Adile, and Ozgoren, Murat
- Published
- 2025
3. Estimation of place of articulation of fricatives from spectral features.
- Author
- Nataraj, K. S., Pandey, Prem C., and Dasgupta, Hirak
- Abstract
An investigation is carried out for speaker-independent acoustic-to-articulatory mapping for fricative utterances using simultaneously acquired speech signals and articulatory data. The relation of the place of articulation with the spectral characteristics is examined using several earlier reported spectral features and six proposed spectral features (maximum-sum segment centroid, normalized sum of absolute spectral slopes, and four spectral energy features). A method is presented for estimating the place of articulation using a feedforward neural network. It is evaluated using a dataset comprising utterances with a mix of phonetic contexts and from multiple speakers, five-fold cross-validation, and networks with different hidden layers and neurons. The six proposed spectral features used as the input feature set resulted in the lowest estimation error and low sensitivity to the training data size. Estimation using this feature set with an optimal network provided a correlation coefficient of 0.978 and an RMS error of 2.54 mm. The errors were smaller than the differences between the adjacent places, indicating that the method may be helpful in providing visual feedback of articulatory efforts in speech training aids. [ABSTRACT FROM AUTHOR]
- Published
- 2023
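The method in the entry above maps spectral features of fricatives to an articulatory place coordinate with a feedforward network. A minimal sketch of that general idea follows, in Python; the feature definitions, network sizes, and stand-in data are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: spectral features -> place of articulation (mm) via a small MLP,
# evaluated with five-fold cross-validation as in the paper. Data are fake.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

def spectral_features(frame, sr):
    """Spectral centroid and a normalized sum of absolute spectral slopes
    (two of the feature types named in the abstract; definitions assumed)."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)
    slope_sum = np.sum(np.abs(np.diff(spec))) / (np.sum(spec) + 1e-12)
    return np.array([centroid, slope_sum])

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))   # stand-in feature matrix (one row per token)
y = rng.normal(size=200)        # stand-in place coordinate in mm (e.g. from EMA)

net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
print(cross_val_score(net, X, y, cv=5, scoring="r2"))
```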
4. Emergent learning bias and the underattestation of simple patterns.
- Author
- O'Hara, Charlie
- Subjects
- MACHINE learning, REINFORCEMENT learning
- Abstract
This paper investigates the typology of word-initial and word-final place of articulation contrasts in stops, revealing two typological skews. First, languages tend to make restrictions based on syllable position alone, rather than banning particular places of articulation in word-final position. Second, restrictions based on place of articulation alone are underrepresented compared to restrictions that are based on position. This paper argues that this typological skew is the result of an emergent bias found in agent-based models of generational learning using the Perceptron learning algorithm and MaxEnt grammar using independently motivated constraints. Previous work on agent-based learning with MaxEnt has found a simplicity bias (Pater and Moreton 2012) which predicts the first typological skew, but fails to predict the second skew. This paper analyzes the way that the set of constraints in the grammar affects the relative learnability of different patterns, creating learning biases more elaborate than a simplicity bias, and capturing the observed typology. [ABSTRACT FROM AUTHOR]
- Published
- 2023
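The bias mechanism described in the entry above rests on a MaxEnt grammar whose constraint weights are updated with a perceptron-style rule during iterated learning. A toy sketch of one such update loop is given below; the constraints, candidates, and violation profiles are invented for illustration and are not O'Hara's actual constraint set.

```python
# Sketch: single-agent MaxEnt learning with a perceptron/SGD update.
# Constraints violated by the learner's (wrong) sampled output gain weight;
# constraints violated by the observed (teacher) form lose weight.
import numpy as np

rng = np.random.default_rng(0)
viol = {                          # toy violations of [*Coda-k, *k, Faith]
    "pak": np.array([1, 1, 0]),   # faithful form with a final velar
    "pat": np.array([0, 0, 1]),   # unfaithful repair
}
w = np.zeros(3)
rate = 0.1

def sample(w):
    harmony = {c: -(viol[c] @ w) for c in viol}   # higher harmony = better
    p = np.exp(list(harmony.values()))
    p /= p.sum()
    return rng.choice(list(viol), p=p)

for _ in range(500):                              # teacher always says "pak"
    out = sample(w)
    w += rate * (viol[out] - viol["pak"])
    w = np.maximum(w, 0)                          # weights stay non-negative
print(dict(zip(["*Coda-k", "*k", "Faith"], w.round(2))))
```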
5. Factors affecting voice onset time in English stops: English proficiency, gender, place of articulation, and vowel context.
- Author
- Chun-yin Doris Chen, Wei-yun Winona Liu, and Tzu-Fen Nellie Yeh
- Subjects
- LANGUAGE ability, ENGLISH language, VOWELS, NATIVE language, LANGUAGE transfer (Language learning), HUMAN voice, SECOND language acquisition
- Abstract
This study investigates the voice onset time (VOT) of L2 English stops produced by native speakers of Chinese at different L2 proficiency levels. Four factors were examined: English proficiency, gender, place of articulation, and vowel context. High and low achievers of English were recruited as the experimental groups and native speakers of English as the control group. Each group consisted of 16 participants, 8 males and 8 females. Each participant took part in a read-aloud task, in which the target words were presented in an embedded sentence. The results showed that the effect of L2 proficiency was significant: high achievers outperformed low achievers, who were more strongly affected by L1 negative transfer when producing native-like English stops. Additionally, velar stops had significantly greater VOT values than either bilabial or alveolar ones. However, no significant gender differences were found: male and female participants produced similar VOT values in English stops. Lastly, vowel context was also a significant factor, with VOT differing according to the following vowel. More specifically, the VOT of a stop is significantly longer when it is followed by a tense vowel. [ABSTRACT FROM AUTHOR]
- Published
- 2023
6. Voice Onset Time of Mankiyali Language: An Acoustic Analysis.
- Author
- Ullah, Shakir, Anjum, Uzma, and Saleem, Tahir
- Subjects
- LANGUAGE revival, ENDANGERED languages, NATIVE language, HUMAN voice, DOCUMENTATION, LANGUAGE & languages
- Abstract
The endangered Indo-Aryan language Mankiyali, spoken in northern Pakistan, lacks linguistic documentation and therefore requires research. This study explores the Voice Onset Time (VOT) values of Mankiyali's stop consonants to determine the duration of sound release, characterized as negative, positive, and zero VOT. The investigation aims to identify the laryngeal categories present in the language. Using a mixed-methods approach, data were collected from five native male speakers with a Zoom H6 recorder. The study employed the theoretical framework of Fant's (1970) source-filter model and analyzed each phoneme using the PRAAT software. Twenty-five tokens of a single phoneme were recorded across the five speakers. The results reveal that Mankiyali encompasses three laryngeal categories: voiceless unaspirated (VLUA) stops, voiceless aspirated (VLA) stops, and voiced unaspirated (VDUA) stops. The study highlights significant differences in VOT based on place of articulation and phonation. In terms of phonation, the VLUA bilabial stop /p/, alveolar stop /t/, and velar stop /k/ exhibit shorter voicing lag than their VLA counterparts /pʰ, tʰ, kʰ/. All VLUA and VLA stops display +VOT values, while all VDUA stops exhibit -VOT values. Regarding place of articulation, the bilabial /p/ demonstrates a longer voicing lag than the alveolar /t/ but a shorter lag than the velar /k/. Additionally, the results indicate similarities in voicing lag among the VDUA stops /b, d, g/. This study offers valuable insights into the phonetic and phonological aspects of Mankiyali and holds potential significance for the language's preservation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
7. Emergent learning bias and the underattestation of simple patterns
- Author
- O’Hara, Charlie
- Published
- 2023
8. What's in a Japanese kawaii 'cute' name? A linguistic perspective.
- Author
- Gakuji Kumagai
- Subjects
- BEHAVIORAL sciences, SOUND symbolism, ANIME, FEMININITY, PHONETICS, GESTURE
- Abstract
While the concept termed kawaii is often translated into English as 'cute' or 'pretty', it has multiple connotations. It is one of the most significant topics of investigation in behavioural science and Kansei/affective engineering. This study aims to explore the linguistic (phonetic and phonological) features/units associated with kawaii. Specifically, it examines, through experimental methods, what kinds of phonetic and phonological features are associated with kawaii, in terms of three consonantal features: place of articulation, voicing/frequency, and manner of articulation. The results showed that the features associated with kawaii are [labial], [high frequency], and [sonorant]. The factors associated with kawaii may include the pouting gesture, babyishness, smallness, femininity, and roundness. The study findings have practical implications due to their applicability to the naming of anime characters and products characterised by kawaii. [ABSTRACT FROM AUTHOR]
- Published
- 2022
9. Velarization in English and Arabic.
- Author
- Al-Rahman Eltaif, Sua'ad Abd
- Subjects
- ENGLISH language, SOFT palate, SPEECH
- Published
- 2022
10. AUDIOVISUAL SPEECH PERCEPTION IN DIOTIC AND DICHOTIC LISTENING CONDITIONS.
- Author
- Vinay, Sandhya
- Subjects
- SPEECH perception, VOWELS, EXPERIMENTAL design, ANALYSIS of variance, AUDITORY perception, DICHOTIC listening tests, ARTICULATION disorders, CONSONANTS, PHONETICS
- Abstract
Background: Speech perception is multisensory, relying on auditory as well as visual information from the articulators. Watching articulatory gestures that are either congruent or incongruent with the speech audio can change the auditory percept, indicating a complex integration of auditory and visual stimuli. A speech segment comprises distinctive features, notably voice onset time (VOT) and place of articulation (POA). Understanding the importance of each of these features for audiovisual (AV) speech perception is critical. The present study investigated the perception of AV consonant-vowel (CV) syllables with various VOTs and POAs under two conditions: diotic incongruent and dichotic congruent. Material and methods: AV stimuli comprised diotic and dichotic CV syllables with stop consonants (bilabial /pa/ and /ba/; alveolar /ta/ and /da/; and velar /ka/ and /ga/) presented with congruent and incongruent video CV syllables with stop consonants. Forty right-handed, normal-hearing young adults participated in the experiment: 20 females (mean age 23 years, SD = 2.4 years) and 20 males (mean age 24 years, SD = 2.1 years). Results: In the diotic incongruent AV condition, visual segments with short VOT (voiced CV syllables) were identified when the auditory segments carried a CV syllable with long VOT (unvoiced CV syllables). In the dichotic congruent AV condition, identification of the audio segment increased when the subject was presented with a video segment congruent with either ear, thereby overriding the ear advantage otherwise observed in dichotic listening. The distinct visual salience of bilabial stop syllables had greater visual influence (observed as greater identification scores) than velar stop syllables and thus overrode the acoustic dominance of velar syllables. Conclusions: The findings of the present study have important implications for understanding the perception of diotic incongruent and dichotic congruent audiovisual CV syllables in which the stop consonants have different VOT and POA combinations. Earlier findings on the effect of VOT on dichotic listening can be extended to AV speech having dichotic auditory segments. [ABSTRACT FROM AUTHOR]
- Published
- 2022
11. Classification of Arabic fricative consonants according to their places of articulation.
- Author
- Elfahm, Youssef, Abajaddi, Nesrine, Mounir, Badia, Elmaazouzi, Laila, Mounir, Ilham, and Farchi, Abdelmajid
- Subjects
- AUTOMATIC speech recognition, FRICATIVES (Phonetics), SUPPORT vector machines, CLASSIFICATION, HUMAN voice
- Abstract
Many technology systems have used voice recognition applications to transcribe a speaker's speech into text that can be used by these systems. One of the most complex tasks in speech identification is knowing which acoustic cues should be used to classify sounds. This study presents an approach for characterizing Arabic fricative consonants in two groups (sibilant and non-sibilant). From an acoustic point of view, our approach is based on the analysis of the energy distribution, in frequency bands, in a syllable of the consonant-vowel type. From a practical point of view, our technique was implemented in MATLAB and tested on a corpus built in our laboratory. The results obtained show that the percentage energy distribution in a speech signal is a very powerful parameter in the classification of Arabic fricatives. We obtained an accuracy of 92% for the non-sibilant consonants /f, χ, ɣ, ʕ, ħ, h/, 84% for the sibilants /s, sˤ, z, ʒ, ʃ/, and an overall classification rate of 89%. In comparison to other algorithms based on neural networks and support vector machines (SVM), our classification system provided a higher classification rate. [ABSTRACT FROM AUTHOR]
- Published
- 2022
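The classification cue in the entry above is the percentage of spectral energy falling in each frequency band of a consonant-vowel syllable. A short sketch of that computation follows; the band edges are assumptions, not the authors' exact choices.

```python
# Sketch: percentage energy distribution across frequency bands.
# Sibilants concentrate energy at high frequencies, so a large high-band
# share is a plausible sibilance cue.
import numpy as np

def band_energy_percentages(signal, sr, edges=(0, 2000, 4000, 8000)):
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    total = spec.sum() + 1e-12
    return [100 * spec[(freqs >= lo) & (freqs < hi)].sum() / total
            for lo, hi in zip(edges[:-1], edges[1:])]

sr = 16000
t = np.arange(sr // 10) / sr
tone_5k = np.sin(2 * np.pi * 5000 * t)       # energy concentrated near 5 kHz
print(band_energy_percentages(tone_5k, sr))  # high 4-8 kHz share
```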
12. EXPLORING A RECENTLY DEVELOPED ROMANIAN SPEECH CORPUS IN TERMS OF COARTICULATION PHENOMENA ACROSS WORD BOUNDARIES.
- Author
- NICULESCU, OANA
- Subjects
- ARTICULATION (Speech), ROMANIAN language, DELETION (Linguistics), SPEECH, FRICATIVES (Phonetics), PLACE of articulation, OBSTRUENTS (Phonetics)
- Abstract
The purpose of this article is twofold. On the one hand, we aim to investigate coarticulation phenomena in word-final position pertaining to standard Romanian spontaneous speech. The analysis focuses on deletion processes, most notably the deletion of the definite article -l, external hiatus repair mechanisms, word-final obstruent devoicing and voicing phenomena, as well as fricativization of the voiced postalveolar affricate. On the other hand, we aim to showcase the benefits of working with a recently developed Romanian speech corpus by correlating the transcripts with the audio recordings and automatically extracting the relevant acoustic data pertaining to each of the aforementioned connected speech processes. [ABSTRACT FROM AUTHOR]
- Published
- 2022
13. A Comparative Study of Dari Persian and English Language Consonant Sounds.
- Author
- Usmanyar, Mahmood
- Subjects
- PERSIAN language, LISTENING comprehension, ENGLISH language, CONSONANTS, LANGUAGE policy, NATIVE language
- Abstract
This research article compares the consonant sounds of English and Dari Persian in terms of the state of the larynx and the place and manner of articulation. It aims to determine similarities and differences between the consonant systems of the two languages, which can be useful for teachers and learners of both, especially for listening and speaking skills. A qualitative method was used to identify similar and different consonant sounds. It was found that eighteen consonants are shared between the two languages, two consonant sounds are slightly similar, four English consonants are not present in Dari Persian, and three Dari Persian consonants are not present in English. It is widely held that one's mother tongue influences a second or foreign language: one's own pronunciation habits are so strong that they are extremely difficult to break, and mispronouncing sounds in spoken language can cause miscommunication or misunderstanding. This article can therefore help teachers and learners of English with Dari Persian as the first language, and vice versa, to maintain effective and meaningful communication by focusing on the sounds that differ between the first and the second or foreign language. [ABSTRACT FROM AUTHOR]
- Published
- 2022
14. A contrastive analysis of Southern Welsh and Cockney accents.
- Author
- Alharbi, Amjad, Alqreeni, Gaida, Alothman, Hissah, Alanazi, Shatha, and Omar, Abdulfattah
- Subjects
- PHONOLOGICAL awareness, COMPARATIVE method, SOUND systems, SOCIAL networks, CONSONANTS, SOCIAL media
- Abstract
This study compares the pronunciation of Southern Welsh, a Celtic language, and Cockney, an English dialect, with regard to place of articulation. The study uses a comparative method to shed light on the similarities and differences between the two accents. The data were collected from YouTube videos of speakers of Southern Welsh and Cockney, and the consonant sound systems were analysed and compared. The study answers two main research questions: Do Southern Welsh and Cockney accents have the same consonants? What are the phonological differences between Southern Welsh and Cockney regarding place of articulation? The findings show that there are phonological differences between Southern Welsh and Cockney in terms of bilabial, labiodental, dental, alveolar, lateral, palatal, velar, and uvular sounds. However, they are similar in terms of post-alveolar and glottal sounds. Awareness of these phonological differences is important for EFL learners to develop strong competencies in dealing with these accents, which are gaining increasing popularity due to the unprecedented spread of social media networks and applications. [ABSTRACT FROM AUTHOR]
- Published
- 2021
15. Development of Minimal Pair Test in Tamil (MPT-T)
- Author
- Kavitha Vijayakumar, Saranyaa Gunalan, and Ranjith Rajeshwaran
- Subjects
- paediatric cochlear implantees, place of articulation, vowel change, vowel length, Medicine
- Abstract
Introduction: Speech perception testing provides an accurate measurement of a child's ability to perceive and distinguish the various phonetic segments and patterns of sounds. Among the many types of speech stimuli used, minimal pairs can be used to assess phoneme recognition skills. The study therefore focused on developing the Minimal Pair Test in Tamil (MPT-T). Aim: To develop and validate the MPT-T on Normal Hearing (NH) children and paediatric cochlear implantees. Materials and Methods: This was an experimental study that included school-going children in the age range of six to eight years; the duration of the study was 12 months. The test was developed in two phases. The first phase focused on the construction of the word list, the recording of the word pairs, and the preparation of the test. The second phase was the administration of the test to NH children and paediatric cochlear implantees. The test scores were analysed using the Mann-Whitney U test, the Kruskal-Wallis test, and the Wilcoxon signed-rank test. Results: The study included 40 NH children and 15 paediatric cochlear implantees recruited through purposive sampling. The specific speech feature analysis of the paediatric cochlear implantees revealed difficulty identifying word pairs differing in Vowel Length (VL), while the best-performed feature was Place of Articulation (POA). The results showed statistical significance between the NH group and the paediatric cochlear implantees. Conclusion: The developed test can be used effectively in the clinic for assessing the speech perception abilities of paediatric cochlear implantees and in planning rehabilitative goals.
- Published
- 2021
16. Development of Minimal Pair Test in Tamil (MPT-T).
- Author
- VIJAYAKUMAR, KAVITHA, GUNALAN, SARANYAA, and RAJESHWARAN, RANJITH
- Subjects
- MANN Whitney U Test, PERCEPTION testing, SCHOOL children, SPEECH perception
- Abstract
Introduction: Speech perception testing provides an accurate measurement of a child's ability to perceive and distinguish the various phonetic segments and patterns of sounds. Among the many types of speech stimuli used, minimal pairs can be used to assess phoneme recognition skills. The study therefore focused on developing the Minimal Pair Test in Tamil (MPT-T). Aim: The aim of the present study was to develop and validate the MPT-T on Normal Hearing (NH) children and paediatric cochlear implantees (CI). Materials and Methods: This was an experimental study that included school-going children in the age range of six to eight years; the duration of the study was 12 months. The test was developed in two phases. The first phase focused on the construction of the word list, the recording of the word pairs, and the preparation of the test. The second phase was the administration of the test to NH children and paediatric cochlear implantees. The test scores were analysed using the Mann-Whitney U test, the Kruskal-Wallis test, and the Wilcoxon signed-rank test. Results: The study included 40 NH children and 15 paediatric cochlear implantees recruited through purposive sampling. The specific speech feature analysis of the paediatric cochlear implantees revealed difficulty identifying word pairs differing in Vowel Length (VL), while the best-performed feature was Place of Articulation (POA). The results showed statistical significance between the NH group and the paediatric cochlear implantees. Conclusion: The developed test can be used effectively in the clinic for assessing the speech perception abilities of paediatric cochlear implantees (CI) and in planning rehabilitative goals. [ABSTRACT FROM AUTHOR]
- Published
- 2021
17. Phonetics of Consonants
- Author
- Fuchs, Susanne and Birkholz, Peter
- Published
- 2019
18. Stop Voicing and F0 Perturbation in Pahari
- Author
- Nazia Rashid, Abdul Qadir Khan, Ayesha Sohail, and Bilal Ahmed Abbasi
- Subjects
- Pahari, perturbation, fundamental frequency, voicing, place of articulation, Philology. Linguistics
- Abstract
The present study investigates the perturbation effect of the voicing of initial stops on the fundamental frequency (F0) of the following vowels in Pahari. Results show that F0 values are significantly higher following voiceless unaspirated stops than voiced stops. F0 contours indicate an initially falling pattern for the vowel [a:] after voiced and voiceless unaspirated stops. A rising pattern after voiced stops and a falling pattern after voiceless unaspirated stops is observed for [i:] and [u:]. These results match those of Umeda (1981), who found that the F0 of a vowel following voiceless stops starts high and drops sharply, whereas when the vowel follows a voiced stop, F0 starts at a relatively low frequency followed by a gradual rise. The present data show no statistically significant difference between the F0 values of vowels following stops with different places of articulation. Place of articulation is thus the least influential factor.
- Published
- 2021
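The core measurement in the entry above is an F0 contour over the vowel immediately following the stop release. A minimal sketch using the praat-parselmouth wrapper is shown below; the file name and vowel-onset time are placeholders, not data from the study.

```python
# Sketch: F0 over the first 100 ms of the post-release vowel.
import numpy as np
import parselmouth  # pip install praat-parselmouth

snd = parselmouth.Sound("pa_token.wav")   # placeholder recording
pitch = snd.to_pitch(time_step=0.005)
times = pitch.xs()
f0 = pitch.selected_array["frequency"]    # 0.0 where unvoiced

vowel_onset = 0.085                       # placeholder, from manual labeling
win = (times >= vowel_onset) & (times <= vowel_onset + 0.10) & (f0 > 0)
contour = f0[win]
if contour.size > 1:
    trend = "falling" if contour[-1] < contour[0] else "rising"
    print(f"F0 onset {contour[0]:.1f} Hz, +100 ms {contour[-1]:.1f} Hz ({trend})")
```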
19. Electrophysiological Dynamics of Visual Speech Processing and the Role of Orofacial Effectors for Cross-Modal Predictions
- Author
- Maëva Michon, Gonzalo Boncompte, and Vladimir López
- Subjects
- orofacial movements, place of articulation, ERPs, viseme, articuleme, speech motor system, Neurosciences. Biological psychiatry. Neuropsychiatry
- Abstract
The human brain generates predictions about future events. During face-to-face conversations, visemic information is used to predict upcoming auditory input. Recent studies suggest that the speech motor system plays a role in these cross-modal predictions; however, usually only audio-visual paradigms are employed. Here we tested whether speech sounds can be predicted on the basis of visemic information only, and to what extent interfering with the orofacial articulatory effectors can affect these predictions. We registered EEG and employed the N400 as an index of such predictions. Our results show that the N400's amplitude was strongly modulated by visemic salience, consistent with cross-modal speech predictions. Additionally, the N400 ceased to be evoked when syllables' visemes were presented backwards, suggesting that predictions occur only when the observed viseme matches an existing articuleme in the observer's speech motor system (i.e., the articulatory neural sequence required to produce a particular phoneme/viseme). Importantly, we found that interfering with the motor articulatory system strongly disrupted cross-modal predictions. We also observed a late P1000 that was evoked only for syllable-related visual stimuli, but whose amplitude was not modulated by interfering with the motor system. The present study provides further evidence of the importance of the speech production system for predictions of speech sounds based on visemic information at the pre-lexical level. The implications of these results are discussed in the context of a hypothesized trimodal repertoire for speech, in which speech perception is conceived as a highly interactive process that involves not only your ears but also your eyes, lips and tongue.
- Published
- 2020
20. STOP VOICING AND F0 PERTURBATION IN PAHARI.
- Author
- RASHID, Nazia, KHAN, Abdul Qadir, SOHAIL, Ayesha, and ABBASI, Bilal Ahmed
- Subjects
- VOWELS
- Published
- 2021
21. Electrophysiological Dynamics of Visual Speech Processing and the Role of Orofacial Effectors for Cross-Modal Predictions.
- Author
- Michon, Maëva, Boncompte, Gonzalo, and López, Vladimir
- Subjects
- FORECASTING, PHONEME (Linguistics), ELECTROPHYSIOLOGY, SPEECH perception, VISUAL perception, SPEECH
- Abstract
The human brain generates predictions about future events. During face-to-face conversations, visemic information is used to predict upcoming auditory input. Recent studies suggest that the speech motor system plays a role in these cross-modal predictions; however, usually only audio-visual paradigms are employed. Here we tested whether speech sounds can be predicted on the basis of visemic information only, and to what extent interfering with the orofacial articulatory effectors can affect these predictions. We registered EEG and employed the N400 as an index of such predictions. Our results show that the N400's amplitude was strongly modulated by visemic salience, consistent with cross-modal speech predictions. Additionally, the N400 ceased to be evoked when syllables' visemes were presented backwards, suggesting that predictions occur only when the observed viseme matches an existing articuleme in the observer's speech motor system (i.e., the articulatory neural sequence required to produce a particular phoneme/viseme). Importantly, we found that interfering with the motor articulatory system strongly disrupted cross-modal predictions. We also observed a late P1000 that was evoked only for syllable-related visual stimuli, but whose amplitude was not modulated by interfering with the motor system. The present study provides further evidence of the importance of the speech production system for predictions of speech sounds based on visemic information at the pre-lexical level. The implications of these results are discussed in the context of a hypothesized trimodal repertoire for speech, in which speech perception is conceived as a highly interactive process that involves not only your ears but also your eyes, lips and tongue. [ABSTRACT FROM AUTHOR]
- Published
- 2020
22. Lexical Category-Governed Neutralization to Coronal and Non-Coronal Place of Articulation in Latent Consonants: The Case of Shipibo-Konibo and Capanahua (Pano)
- Author
- Jose Elias-Ulloa
- Subjects
- Panoan languages, place of articulation, dorsal neutralization, labial neutralization, latent segments, harmonic alignment, Language and Literature
- Abstract
This study documents and accounts for the behavior of the place of articulation of latent segments in the Panoan languages Shipibo-Konibo and Capanahua. In these languages, the lexical category of the word governs the place of articulation (PoA) of latent consonants. Latent segments only surface when they are syllabified as syllable onsets. They surface as coronal consonants when they are part of verbs, but they occur as non-coronal consonants when they belong to nouns or adjectives. In non-verb forms, by default, they are neutralized to dorsal in Shipibo-Konibo and to labial in Capanahua. The analysis proposed consists of using the well-known markedness hierarchy on PoA, |Labial, Dorsal > Coronal > Pharyngeal|, and harmonically aligning it with a morphological markedness hierarchy in which non-verb forms are more marked than verb forms: |NonVerb > Verb|. This creates two fixed rankings of markedness constraints: one on verb forms in which, as expected, coronal/laryngeal is deemed the least marked PoA, and another on non-verb forms in which the familiar markedness on PoA is reversed so that labial and dorsal become the least marked places of articulation. The study shows that although both Panoan languages follow the general cross-linguistic tendency to have coronal as a default PoA, this default can be overridden by morphology.
- Published
- 2021
23. Lietuvių ir latvių kalbų trankieji priebalsiai: lokuso lygčių rezultatai [Lithuanian and Latvian obstruents: results of locus equations]
- Author
- Jolita Urbanavičienė and Inese Indričāne
- Subjects
- Standard Lithuanian, Standard Latvian, acoustic phonetics, locus equations, obstruents, sonority, palatalization, place of articulation, Philology. Linguistics
- Abstract
This article is the first to investigate the obstruent consonants of Lithuanian using the locus equation method, which is widely applied in acoustic phonetics both to establish the degree of coarticulation between vowels and consonants and to study consonantal place of articulation. It is also the first comparative study of Lithuanian and Latvian obstruents carried out with the same methodology. The article presents an acoustic classification of the consonants of both Baltic languages, based on the universal principles developed by the International Phonetic Association (IPA). The locus equation results for the two languages are compared according to three criteria: voicing, palatalization, and place of articulation.
- Published
- 2016
24. Electrophysiological and behavioral measures of some speech contrasts in varied attention and noise.
- Author
- Morris, David Jackson, Tøndering, John, and Lindgren, Magnus
- Subjects
- VOWELS, MASKING (Psychology), SPEECH, DIFFERENCE sets
- Abstract
This paper investigates the salience of speech contrasts in noise, in relation to how listening attention affects scalp-recorded cortical responses. The contrasts examined, with consonant-vowel syllables, were place of articulation, vowel length, and voice-onset time (VOT), and our analysis focuses on the correspondence between the effect of attention on the electrophysiology and the decrement in behavioral results when noise was added to the stimuli. Normal-hearing subjects (n = 20) performed closed-set syllable identification in no noise and at 0, 4, and 8 dB signal-to-noise ratio (SNR). Identification in noise decreased markedly for place of articulation, moderately for vowel length, and marginally for VOT. The same syllables were used in two electrophysiology conditions, where subjects attended to the stimuli, and also while their attention was diverted to a visual discrimination task. Differences in global field power between the attention conditions for each contrast showed that the effect of attention was negligible for place of articulation. They implied offset encoding of vowel length, and for VOT they were early (starting at 117 ms) and of high amplitude (>3 μV). There were significant correlations between the difference in syllable identification in no noise and 0 dB SNR and the electrophysiology results between attention conditions for the VOT contrast. Comparison of the two attention conditions with microstate analysis showed a significant difference in the duration of microstate class D. These results show differential integration of attention and syllable processing according to speech contrast, and they suggest that there is correspondence between the salience of a contrast in noise and the effect of attention on the evoked electrical response. Highlights: • Electrophysiological attention differences for VOT and perception are correlated. • VOT perception is robust to noise and affected by the attention of the listener. • EEG measures of place of articulation are little affected by listening attention. • Clustered microstate analysis reveals differences in listening attention. [ABSTRACT FROM AUTHOR]
- Published
- 2019
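The attention effects in the entry above are quantified with global field power (GFP), i.e. the standard deviation across electrodes at each time sample of the average-referenced ERP. A compact sketch with simulated data:

```python
# Sketch: global field power of an ERP (electrodes x samples).
import numpy as np

rng = np.random.default_rng(0)
erp = rng.normal(size=(64, 500))         # simulated 64-channel ERP
erp -= erp.mean(axis=0, keepdims=True)   # average reference
gfp = erp.std(axis=0)                    # spatial SD at each time point
print(gfp[:5])
```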
25. Sonority Is Different.
- Author
- Scheer, Tobias
- Subjects
- MORPHEMICS, SKELETON
- Abstract
The paper argues that sonority on the one hand and other segmental properties such as place of articulation (labiality etc.) and laryngeal properties (voicing etc.) on the other hand are different in kind and must therefore not be represented alike: implementations on a par, e.g. as features ([±voc], [±son], [±lab], [±voice] etc.), are misguided. Arguments come from a number of broad, cross-linguistically stable facts concerning the visibility of items below and above the skeleton in phonological and morphological processing: sonority, but no other segmental property, is taken into account when syllable structure is built (upward visibility); processes located above the skeleton (infixation, phonologically conditioned allomorphy, stress, tone, positional strength) do make reference to sonority, but never to labiality, voicing etc. (downward visibility). Approaches are discussed where sonority is encoded as structure, rather than as primes (features or Elements). In some cases not only sonority but also other segmental properties are structuralized, a solution that does not do justice to the insight that sonority and melody are different in kind. Also, the approaches that structuralize sonority are not concerned with the question of how the representations they entertain come into being: representations are not contained in the phonetic signal that is the input to the linguistic system, nor do they fall from heaven - they are built by some computation. It is therefore concluded that what really segregates sonority and melody is their belonging to two distinct computational systems (modules in the Fodorian sense) which operate over distinct vocabularies and produce distinct structure: sonority primes are used to build syllable structure, while other computations take other types of primes as an input. The computation carrying out a palatalization, for example, works with melodic primes. The segment, then, is a lexical recording that has different compartments containing domain-specific primes [..., ...]SEGMENT. This is also the case of the morpheme, which hosts three compartments [..., ..., ...]MORPHEME. [ABSTRACT FROM AUTHOR]
- Published
- 2019
26. Articulatory Skills in Malayalam-Speaking Children with Down Syndrome.
- Author
- ABRAHAM, ANITHA NAITTEE and SREED, N.
- Subjects
- DOWN syndrome, VOICE analysis, STANDARDIZED tests, ERROR analysis in mathematics, DUTCH language
- Abstract
The speech of children with Down syndrome (DS) is most often unintelligible, and their speech sound errors are well documented. However, the error patterns may differ depending on the phonological characteristics of the language spoken. Most of the reports on speech errors in DS are from English; a few other languages, such as Dutch, have also been studied. Hence, studies in other languages with distinct phonological characteristics are warranted. Malayalam is one such Dravidian language with unique phonological features. The aim of the present study was therefore to investigate the speech sound errors exhibited by children with DS in Malayalam and compare them with those of mental-age-matched typically developing children. Participants included ten children with DS and ten typically developing (TD) children. The articulatory errors of the children were assessed using a standardized articulation test. The responses were subjected to place, manner, and voicing error analyses, and the Percentage of Consonants Correct (PCC) was computed. Language-specific error patterns were observed in the results of the present study when compared to available reports. The most erroneous place of articulation was retroflex, and the most erroneous manners of articulation were approximants, trills, and flaps, which are phonemes unique to Malayalam. Alveolars and glottals were also erroneous when compared to bilabials, labiodentals, and dentals. This difficulty in the speech production of children with DS is discussed with respect to anatomical, physiological, and cognitive deficits. The need for targeting articulatory skills in children with DS is also highlighted. [ABSTRACT FROM AUTHOR]
- Published
- 2019
27. Consonant–vowel interaction in Sichuan Chinese: An element-based analysis.
- Author
- Chen, Shunting and van de Weijer, Jeroen
- Subjects
- ARTICULATION (Education), CONSONANTS, LABIAL frenulum, PHONOLOGY, GENERALIZATION
- Abstract
This paper focuses on the representation of place of articulation features in consonants and vowels, on the basis of interaction between consonant and vowel place. New data from Sichuan Chinese are examined, which show at least four types of consonant–vowel interaction: labial attraction, two types of coronal attraction and velar attraction. Since consonants and vowels show patterns of interaction at all places of articulation, we argue that consonant and vowel place should be described using the same representational elements. We propose that the relevant generalizations reflect the historical development (not synchronic alternation), and show that an account using standard Dependency Phonology unary features can capture these facts. [ABSTRACT FROM AUTHOR]
- Published
- 2018
28. Phonological and phonetic properties of nasal substitution in Sasak and Javanese
- Author
- Albert Lee, Diana Archangeli, Jonathan Yip, and Lang Qin
- Subjects
- Sasak, Javanese, nasal substitution, ultrasound language research, abstract phonological relations, place of articulation, Language. Linguistic theory. Comparative grammar
- Abstract
Austronesian languages such as Sasak and Javanese have a pattern of morphological nasal substitution, where nasals alternate with homorganic oral obstruents—except that [s] is described as alternating with [ɲ], not with [n]. This appears to be an abstract morphophonological relation between [s] and [ɲ] where other parts of the paradigm have a concrete homorganic relation. Articulatory ultrasound data on productions of [t, n, ʨ, ɲ], along with [s] and its nasal counterpart, were collected from 10 Sasak and 8 Javanese speakers. Comparisons of lingual contours using a root mean square analysis were evaluated with linear mixed-effects regression models, a method that proves reliable for testing questions of phonological neutralization. In both languages, [t, n, s] exhibit a high degree of articulatory similarity, whereas postalveolar [ʨ] and its nasal counterpart [ɲ] exhibited less similarity. The nasal counterpart of [s] was identical in articulation to [ɲ]. This indicates an abstract, rather than concrete, relationship between [s] and its morphophonological nasal counterpart, with the two sounds not sharing articulatory place in either Sasak or Javanese.
- Published
- 2017
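The contour comparison in the entry above reduces two tongue shapes to a single root-mean-square distance. A minimal sketch follows; real analyses first align the contours (e.g. along ultrasound fan lines), which is assumed already done here.

```python
# Sketch: RMS Euclidean distance between two point-matched tongue contours.
import numpy as np

def contour_rms(a, b):
    d = np.linalg.norm(np.asarray(a) - np.asarray(b), axis=1)
    return np.sqrt(np.mean(d ** 2))

x = np.linspace(0, 1, 50)
contour_t = np.column_stack([x, np.sin(np.pi * x)])         # stand-in [t]
contour_n = np.column_stack([x, np.sin(np.pi * x) + 0.02])  # near-identical [n]
print(contour_rms(contour_t, contour_n))  # small value -> similar articulation
```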
29. Locus equations and the place of articulation for the Latvian sonorants
- Author
- Jana Taperte
- Subjects
- Latvian, acoustic phonetics, sonorants, place of articulation, coarticulation, vowel F2 transitions, locus theory, Philology. Linguistics
- Abstract
In this article, the sonorant consonants of Standard Latvian are investigated using locus equations. The aim of the study is to examine whether locus equations can be considered efficient descriptors of consonantal place of articulation, both within the group of sonorants and across different manner classes in Standard Latvian. Two types of sequences were analyzed: (1) the CV part of isolated nonsense CVC syllables, where C is one of the sonorants [m; n; ɲ; l; ʎ; r] and V is one of the vowels [i(ː); e(ː); æ(ː); ɑ(ː); ɔ(ː); u(ː)]; (2) the V(ː)C part of isolated nonsense V(ː)CV (VCV for [ŋ]) utterances, where C is one of the nasals [m; n; ɲ; ŋ] and V is one of the vowels [i; e; æ; ɑ; ɔ; u]. Each utterance was recorded in three repetitions by each of 10 native Latvian speakers (five males and five females); thus 3420 items were analyzed in total. Statistical analysis of locus equation slopes and y-intercepts, both for the sonorants and for the whole consonant inventory of Standard Latvian, was performed in order to test the relevance of these indices for discriminating places of articulation across different manner classes. By plotting the data for the whole consonant inventory in slope-by-intercept space, it is possible to distinguish between the groups of palatals/dentals/alveolars and labials/velars, while the results of the statistical analysis show significant differences among all place categories. According to the results, there are certain coarticulatory mechanisms associated with particular places of constriction for the Latvian consonants that allow linking locus equation data to different place categories, although these are also affected by manner and voicing. Nevertheless, place of articulation as a determinant of coarticulatory patterns overrules these factors when other possible influences are excluded.
- Published
- 2014
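A locus equation, as used in the entry above, is a linear regression of F2 at vowel onset on F2 at vowel midpoint across vowel contexts; the slope indexes the degree of coarticulation and the y-intercept relates to the consonant's F2 locus. A sketch with invented values:

```python
# Sketch: fitting one locus equation (slope, intercept) for one consonant.
import numpy as np

f2_mid = np.array([2300, 1900, 1600, 1250, 1000, 800])     # Hz, six vowels
f2_onset = np.array([2100, 1850, 1650, 1400, 1250, 1100])  # Hz, at CV boundary

slope, intercept = np.polyfit(f2_mid, f2_onset, deg=1)
print(f"slope={slope:.2f}, intercept={intercept:.0f} Hz")
# Flatter slopes imply more coarticulatory resistance; plotting (slope,
# intercept) pairs separates place categories, as reported in the study.
```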
30. Place
- Author
- van der Hulst, Harry
- Published
- 2020
31. Aeroacoustic differences between the Japanese fricatives [ɕ] and [ç]
- Author
- Tsukasa Yoshinaga, Kikuo Maekawa, and Akiyoshi Iida
- Subjects
- Physics, Acoustics and Ultrasonics, Sibilant, Broadband noise, Place of articulation, Speech recognition, Speech sounds, Acoustics, Speech Acoustics, Amplitude, Japan, Arts and Humanities (miscellaneous), Phonetics, Coronal plane, Voice, Humans, Speech, Vocal tract
- Published
- 2021
32. Sibilant production in Hebrew-speaking adults: Apical versus laminal.
- Author
- Icht, Michal and Ben-David, Boaz M.
- Subjects
- COLLEGE students, ANALYSIS of variance, PROBABILITY theory, SPEECH therapy, REPEATED measures design, DATA analysis software, DESCRIPTIVE statistics, PHYSIOLOGICAL aspects of speech
- Abstract
The Hebrew IPA charts describe the sibilants /s, z/ as 'alveolar fricatives', where the place of articulation on the palate is the alveolar ridge. The point of constriction on the tongue is not defined – apical (tip) or laminal (blade). Usually, speech and language pathologists (SLPs) use the apical placement in Hebrew articulation therapy. Some researchers and SLPs have suggested that acceptable /s, z/ can also be produced with the laminal placement (i.e. the tip of the tongue approximating the lower incisors). The present study focused on the clinical level, attempting to determine the prevalence of these alternative points of constriction on the tongue for /s/ and /z/ in three different samples of Hebrew-speaking young adults (total n = 242) with typical articulation. Around 60% of the participants reported using the laminal position, regardless of several speaker-related variables (e.g. tongue-thrust swallowing, gender). Laminal production was more common for /s/ (than /z/), in coda (than onset) position of the sibilant, in mono- (than di-) syllabic words, and with non-alveolar (than alveolar) adjacent consonants. Experiment 3 revealed no acoustical differences between apical and laminal productions of /s/ and of /z/. From a clinical perspective, we wish to raise the awareness of SLPs to the prevalence of the two placements when treating Hebrew speakers, noting that tongue placements were highly correlated across sibilants. Finally, we recommend adopting a client-centred practice, where tongue placement is matched to the client. We further recommend selecting targets for intervention based on our findings, and separating different prosodic positions in treatment. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
33. Deep Neural Network based Place and Manner of Articulation Detection and Classification for Bengali Continuous Speech.
- Author
- Bhowmik, Tanmay, Chowdhury, Amitava, and Das Mandal, Shyamal Kumar
- Subjects
- SPEECH processing systems, ARTICULATION (Speech), ARTIFICIAL neural networks, DEEP learning, INFORMATION storage & retrieval systems
- Abstract
Phonological features are the most basic units of a speech knowledge hierarchy. This paper reports on the detection and classification of phonological features from Bengali continuous speech. The phonological features are based on place and manner of articulation. All experiments are performed within a deep neural network based framework. Two different models are designed for the detection and classification tasks. The deep-structured models are pre-trained with stacked autoencoders. The C-DAC speech corpus is used for continuous spoken Bengali speech data. A frame-wise cepstral representation is provided to the input layer of the deep-structured model. Speech data from multiple speakers were used to ensure speaker independence. In the detection task, the system achieved 86.19% average overall accuracy. In the classification task, accuracy for place of articulation remained low at 50.2%, while manner-based classification delivered an improved performance of 98.9% accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2018
34. Interaction of attention and acoustic factors in dichotic listening for fused words.
- Author
- McCulloch, Katie, Lachner Bass, Natascha, Dial, Heather, Hiscock, Merrill, and Jansen, Ben
- Subjects
- INTERACTION (Philosophy), DICHOTIC listening tests, AVERSIVE stimuli, ATTENTION, PLACE of articulation
- Abstract
Two dichotic listening experiments examined the degree to which the right-ear advantage (REA) for linguistic stimuli is altered by a “top-down” variable (i.e., directed attention) in conjunction with selected “bottom-up” (acoustic) variables. Halwes fused dichotic words were administered to 99 right-handed adults with instructions to attend to the left or right ear, or to divide attention equally. Stimuli in Experiment 1 were presented without noise or mixed with noise that was high-pass or low-pass filtered, or unfiltered. The stimuli themselves in Experiment 2 were high-pass or low-pass filtered, or unfiltered. The initial consonants of each dichotic pair were categorized according to voice onset time (VOT) and place of articulation (PoA). White noise extinguished both the REA and selective attention, and filtered noise nullified selective attention without extinguishing the REA. Frequency filtering of the words themselves did not alter performance. VOT effects were inconsistent across experiments but PoA analyses indicated that paired velar consonants (/k/ and /g/) yield a left-ear advantage and paradoxical selective-attention results. The findings show that ear asymmetry and the effectiveness of directed attention can be altered by bottom-up variables. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
35. Analysis of Spanish consonant recognition in 8-talker babble.
- Author
- Moreno-Torres, Ignacio, Otero, Pablo, Luna-Ramírez, Salvador, and Garayzábal Heinze, Elena
- Subjects
- SPANISH language, CONSONANTS, VOWELS, LANGUAGE & languages, PLACE of articulation, ORAL communication
- Abstract
This paper presents the results of a closed-set recognition task for 80 Spanish consonant-vowel sounds (16 C × 5 V, spoken by 2 talkers) in 8-talker babble (–6, –2, +2 dB). A ranking of resistance to noise was obtained using the signal detection d′ measure, and confusion patterns were analyzed using a graphical method (confusion graphs). The resulting ranking indicated the existence of three resistance groups: (1) high resistance: /tʃ, s, ʝ/; (2) mid resistance: /r, l, m, n/; and (3) low resistance: /t, θ, x, g, b, d, k, f, p/. Confusions involved mostly place of articulation and voicing errors, and occurred especially among consonants in the same resistance group. Three perceptual confusion groups were identified: the three low-energy fricatives (i.e., /f, θ, x/), the six stops (i.e., /p, t, k, b, d, g/), and three consonants with clear formant structure (i.e., /m, n, l/). The factors underlying consonant resistance and confusion patterns are discussed. The results are compared with data from other languages. [ABSTRACT FROM AUTHOR]
- Published
- 2017
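The noise-resistance ranking in the entry above uses the standard signal-detection measure d′ = Z(hit rate) − Z(false-alarm rate). A sketch with invented counts:

```python
# Sketch: d-prime from hit and false-alarm counts.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Clamp rates away from 0 and 1 so the z-transform stays finite.
    hr = min(max(hits / (hits + misses), 1e-3), 1 - 1e-3)
    far = min(max(false_alarms / (false_alarms + correct_rejections), 1e-3), 1 - 1e-3)
    return norm.ppf(hr) - norm.ppf(far)

print(d_prime(hits=90, misses=10, false_alarms=5, correct_rejections=95))  # ~2.93
```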
36. Knowledge-Based Features for Place Classification of Unvoiced Stops
- Author
- Karjigi, Veena and Rao, Preeti
- Subjects
- knowledge-based features, place of articulation, Science, Electronic computers. Computer science
- Abstract
The classification of unvoiced stops in consonant–vowel (CV) syllables, segmented from continuous speech, is investigated using features related to speech production. As both the burst and the vocalic transition contribute to the identification of stops in the CV context, features are computed from both regions. Although formants are the truly discriminating articulatory features, their estimation from the speech signal is a challenge, especially in unvoiced regions like the release burst of stops. This may be partially compensated by sub-band energy-based features. In this work, formant features from the vocalic region are combined with features from the burst region comprising sub-band energies, as well as features from a formant tracking method developed for unvoiced regions. The overall combination of features at the classifier level obtains an accuracy of 84.4%, which is significantly better than that obtained with sub-band features alone on unvoiced stops in CV syllables of TIMIT.
- Published
- 2013
37. Palatalization in Central Bùlì
- Author
- George Akanlig-Pare
- Subjects
- Consonant, feature geometry, Bùlì, Computer science, Place of articulation, synchronic, palatalization, History of scholarship and learning. The humanities, Linguistics, Front vowel, Vowel, Palatalization (phonetics), diachronic, Generative grammar
- Abstract
Palatalization is a process through which non-palatal consonants acquire palatality, either through a shift in place of articulation from a non-palatal region to the hard palate or through the superimposition of palatal qualities on a non-palatal consonant. In both cases, there is a front, non-low vowel or a palatal glide that triggers the process. In this paper, I examine the palatalization phenomena in Bùlì using Feature Geometry within the non-linear generative phonological framework. I argue that both full and secondary palatalization occur in Bùlì. The paper further explains that the long high front vowel /i:/ triggers the formation of a palato-alveolar affricate, which is realized in the Central dialect of Bùlì, whereas the Northern dialect retains the derived palatal stop.
- Published
- 2021
38. A common neural mechanism for speech perception and movement initiation specialized for place of articulation
- Author
- A.M. Keane
- Subjects
- cognitive functioning, speech perception, hand movement, place of articulation, general motor programmer, hand preference, Psychology, Neurophysiology and neuropsychology
- Abstract
It has been proposed that there is a single shared mechanism in the left hemisphere of the brain, called the general motor programmer, that is necessary not only for the perception of speech but also for the initiation of hand movement. The present study attempts to find out what the general motor programmer is specifically specialized for, and to decipher whether timing or place of articulation is the defining feature of this proposed mechanism. It was found that the shared aspect between speech perception and movement initiation, and thus the basis of the general motor programmer, appears to be place of articulation. The findings suggest a motor programming basis for the perception of speech, which has implications for brain damage as well as for general principles of brain functioning.
- Published
- 2016
39. Phonological and phonetic properties of nasal substitution in Sasak and Javanese.
- Author
- Archangeli, Diana, Yip, Jonathan, Lang Qin, and Lee, Albert
- Subjects
- SASAK language, JAVANESE language, PHONOLOGY, PLACE of articulation, LANGUAGE research
- Published
- 2017
40. Mental representations of vowel features asymmetrically modulate activity in superior temporal sulcus.
- Author
- Scharinger, Mathias, Domahs, Ulrike, Klein, Elise, and Domahs, Frank
- Subjects
- MENTAL representation, TEMPORAL lobe, NEUROSCIENCES, PHONETICS, UNDERSPECIFICATION (Linguistics), AUDITORY cortex physiology, BRAIN mapping, SPEECH perception, PHYSIOLOGY
- Abstract
Research in auditory neuroscience has illustrated the importance of the superior temporal sulcus (STS) for speech sound processing. However, evidence for abstract processing beyond the level of phonetics in STS has remained elusive. In this study, we follow an underspecification approach according to which the phonological representation of vowels is based on the presence vs. absence of abstract features. We hypothesized that phonological mismatch in a same/different task is governed by underspecification: a less specified vowel in second position of same/different minimal pairs (e.g. [e]), compared to its more specified counterpart in first position (e.g. [o]), should result in stronger activation in STS than the reverse presentation. Whole-brain analyses confirmed this hypothesis in a bilateral cluster in STS. However, this effect interacted with the feature distance between the first and second vowel and was most pronounced for a minimal, one-feature distance, evidencing the benefit of phonological information for processing acoustically minimal sound differences. [ABSTRACT FROM AUTHOR]
- Published
- 2016
41. Co-articulatory Cues for Communication: An Investigation of Five Environments.
- Author
- Pycha, Anne
- Subjects
- ANALYSIS of variance, COLLEGE students, STATISTICAL correlation, EXPERIMENTAL design, PHONETICS, SPEECH perception, PROMPTS (Psychology), REPEATED measures design, DESCRIPTIVE statistics, INFERENTIAL statistics, PHYSIOLOGICAL aspects of speech
- Abstract
We hypothesized that speakers adjust co-articulation in vowel–consonant (VC) sequences in order to provide listeners with enhanced perceptual cues to C, and that they do so specifically in those situations where primary cues to C place of articulation tend to be diminished. We tested this hypothesis in a speech production study of American English, measuring the duration and extent of VC formant transitions in five conditioning environments – consonant voicing, phrasal position, sentence accent, vowel quality, and consonant place – that modulate primary cues to C place in different ways. Results partially support our hypothesis. Although speakers did not exhibit greater temporal co-articulation in contexts that tend to diminish place cues, they did exhibit greater spatial co-articulation. This finding suggests that co-articulation serves specific communicative goals. [ABSTRACT FROM AUTHOR]
- Published
- 2016
42. Aerodynamic and Laryngeal Characteristics of Place-Dependent Voice Onset Time Differences.
- Author
- Eshghi, Marziye, Alemi, Mehdi, and Zajac, David J.
- Subjects
- AERODYNAMICS, LARYNX, SOUND, SPEECH, PHYSIOLOGICAL aspects of speech
- Abstract
Objective: To investigate aerodynamic and laryngeal factors associated with place-dependent voice onset time (VOT) differences. Methods: The speech materials were /pʌ, tʌ, kʌ/, each produced 15 times by 10 adult English speakers in the carrier phrase "say __ again". The sound pressure level was targeted within a ±3 dB range. Intraoral air pressure (Po) was obtained using a buccal-sulcus approach. VOT, Po, maximum Po declination rate (MPDR), duration of the laryngeal devoicing gesture (LDG), occlusion duration, and the duration from the drop of Po to baseline (atmosphere) to the onset of voicing (PDOV) were determined for each stop. Results: VOT was longer for the alveolar and velar stops than for the bilabial stop. A constant LDG was observed for all stops regardless of place of articulation. Occlusion duration, however, was significantly shorter for the alveolar and velar stops than for the bilabial stop. Aerodynamically, Po was greatest for the velar stop, intermediate for the alveolar stop, and smallest for the bilabial stop. The MPDR index showed a slower rate of Po drop for the velar and alveolar stops compared with the bilabial stop. PDOV was found to be longer for /p/ than for /t/ and /k/. Conclusion: The findings provide empirical evidence for the inter-related roles of Po, the rate of Po change, and laryngeal factors in place-dependent variations of VOT. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
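Two of entry 42's measures reduce to simple signal arithmetic. The sketch below computes VOT from assumed burst and voicing-onset landmarks and estimates the maximum Po declination rate (MPDR) as the steepest negative slope of a synthetic intraoral-pressure trace; the study's actual instrumentation and smoothing are not reproduced.

```python
# Sketch: VOT and maximum Po declination rate (MPDR) from assumed inputs.
import numpy as np

# Hypothetical annotated landmarks (seconds):
burst_time = 0.250        # stop release burst
voicing_onset = 0.310     # onset of periodicity

vot_ms = (voicing_onset - burst_time) * 1000.0   # VOT, here 60 ms

# Hypothetical intraoral pressure (Po) trace sampled at 1 kHz:
fs = 1000
t = np.arange(0, 0.5, 1.0 / fs)
po = 8.0 / (1.0 + np.exp((t - 0.26) * 120.0))    # fake pressure decay, cm H2O

slope = np.gradient(po, 1.0 / fs)                # cm H2O per second
mpdr = slope.min()                               # steepest (most negative) drop

print(f"VOT: {vot_ms:.1f} ms")
print(f"MPDR: {mpdr:.1f} cm H2O/s")
```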
43. No evidence of somatotopic place of articulation feature mapping in motor cortex during passive speech perception.
- Author
-
Arsenault, Jessica and Buchsbaum, Bradley
- Subjects
- *
ARTICULATION (Speech) , *MOTOR cortex , *SPEECH perception , *MULTIVARIATE analysis , *FUNCTIONAL magnetic resonance imaging - Abstract
The motor theory of speech perception has experienced a recent revival due to a number of studies implicating the motor system during speech perception. In a key study, Pulvermüller et al. (2006) showed that premotor/motor cortex differentially responds to the passive auditory perception of lip and tongue speech sounds. However, no study has yet attempted to replicate this important finding from nearly a decade ago. The objective of the current study was to replicate the principal finding of Pulvermüller et al. (2006) and generalize it to a larger set of speech tokens while applying a more powerful statistical approach using multivariate pattern analysis (MVPA). Participants performed an articulatory localizer as well as a speech perception task where they passively listened to a set of eight syllables while undergoing fMRI. Both univariate and multivariate analyses failed to find evidence for somatotopic coding in motor or premotor cortex during speech perception. Positive evidence for the null hypothesis was further confirmed by Bayesian analyses. Results consistently show that while the lip and tongue areas of the motor cortex are sensitive to movements of the articulators, they do not appear to preferentially respond to labial and alveolar speech sounds during passive speech perception. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
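The MVPA approach in entry 43 follows a standard recipe: train a linear classifier on voxel patterns and test whether cross-validated accuracy exceeds chance. The sketch below uses scikit-learn on synthetic data, so accuracy should sit near the 0.50 chance level, mirroring the null result the authors report.

```python
# Sketch: MVPA-style classification of voxel patterns (synthetic data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200

X = rng.standard_normal((n_trials, n_voxels))   # fake beta patterns per trial
y = np.repeat([0, 1], n_trials // 2)            # 0 = labial, 1 = alveolar

clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, cv=5)       # 5-fold cross-validation
print(f"Mean CV accuracy: {scores.mean():.2f} (chance = 0.50)")
```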
44. A comparison of cepstral coefficients and spectral moments in the classification of Romanian fricatives.
- Author
-
Spinu, Laura and Lilley, Jason
- Subjects
- *
CEPSTRUM analysis (Mechanics) , *ROMANIAN language , *FRICATIVES (Phonetics) , *CODING theory , *HIDDEN Markov models , *VARIANCES - Abstract
In this paper we explore two methods for the classification of fricatives. First, for the coding of the speech, we compared two sets of acoustic measures obtained from a corpus of Romanian fricatives: (a) spectral moments and (b) cepstral coefficients. Second, we compared two methods of determining the regions of the segments from which the measures would be extracted. In the first method, the phonetic segments were divided into three regions of approximately equal duration. In the second method, Hidden Markov Models (HMMs) were used to divide each segment into three regions such that the variances of the measures within each region were minimized. The corpus we analyzed consists of 3674 plain and palatalized word-final fricatives from four places of articulation, produced by 31 native speakers of Romanian (20 females). We used logistic regression to classify fricatives by place, voicing, palatalization status, and gender. We found that cepstral coefficients reliably outperformed spectral moments in all classification tasks, and that using regions determined by HMM yielded slightly higher correct classification rates than using regions of equal duration. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
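Both feature sets compared in entry 44 are easy to compute. The sketch below derives the four spectral moments from a magnitude spectrum with numpy and uses librosa's MFCCs as one common cepstral parameterization; the file name is a placeholder, and the study's exact cepstral analysis and HMM-based segmentation are not reproduced here.

```python
# Sketch: spectral moments and cepstral coefficients for a fricative segment.
# Requires: pip install librosa
import numpy as np
import librosa

y, sr = librosa.load("fricative.wav", sr=None)   # placeholder file name

# --- Spectral moments from the magnitude spectrum ---
spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
freqs = np.fft.rfftfreq(len(y), d=1.0 / sr)
p = spectrum / spectrum.sum()                    # normalize to a distribution

centroid = np.sum(freqs * p)                              # 1st moment
variance = np.sum(((freqs - centroid) ** 2) * p)          # 2nd moment
skewness = np.sum(((freqs - centroid) ** 3) * p) / variance ** 1.5
kurtosis = np.sum(((freqs - centroid) ** 4) * p) / variance ** 2

# --- Cepstral coefficients (MFCCs as one common choice) ---
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
mfcc_means = mfcc.mean(axis=1)                   # average over frames

print(f"centroid={centroid:.0f} Hz, variance={variance:.0f}, "
      f"skew={skewness:.2f}, kurtosis={kurtosis:.2f}")
print("MFCC means:", np.round(mfcc_means, 2))
```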
45. Using ultrasound tongue imaging to support the phonetic transcription of childhood speech sound disorders
- Author
-
Eleanor Sugden and Joanne Cleland
- Subjects
Linguistics and Language, Place of articulation, Phonetic transcription, Reproducibility of Results, Audiology, Speech Sound Disorder, Language and Linguistics, Speech and Hearing, Inter-rater reliability, Transcription (linguistics), Speech Production Measurement, Tongue, Phonetics, Motor speech, Humans, Child, Psychology, Reliability (statistics)
Purpose: This study aims to determine whether adding an additional modality (ultrasound tongue imaging) improves the inter-rater reliability of phonetic transcription in childhood speech sound disorders (SSDs) and whether it enables the identification of different or additional errors in children's speech. Method: Twenty-three English-speaking children aged 5–13 years with SSDs of unknown origin were recorded producing repetitions of /aCa/ for all places of articulation, with simultaneous audio recording and probe-stabilized ultrasound. Two types of transcription were undertaken offline: (1) ultrasound-aided transcription by two ultrasound-trained speech-language pathologists (SLPs) and (2) traditional phonetic transcription from audio recordings, completed by the same two SLPs and additionally by two different SSD-specialist SLPs. We classified transcriptions and errors into ten different subcategories and compared the number of consonants identified as in error by each transcriber, the inter-rater reliability, and the relative frequencies of error types identified by the different types of transcriber. Results: Error-detection rates differed significantly across the transcription types, with the ultrasound-aided transcribers identifying more errors (χ2(2) = 9.388, p = 0.009). Analysis revealed that these additional errors were identified on the dynamic ultrasound image despite being phonetically transcribed as correct, suggestive of subtle motor errors. Inter-rater reliability for classifying the type of error was substantial (κ = 0.72) for the ultrasound-aided transcribers and ranged from fair to moderate for the audio-only transcribers (κ = 0.38 to 0.52). Ultrasound-aided transcribers were more likely than the audio-only transcribers to identify covert errors such as increased variability and abnormal timing. Conclusions: This study provides preliminary evidence that using ultrasound tongue imaging in the assessment phase of the clinical management of childhood SSDs is useful for improving transcription reliability and detecting subtle covert errors.
- Published
- 2021
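The agreement statistics in entry 45 are Cohen's kappa values, which scikit-learn computes directly. The rater labels below are invented stand-ins for per-consonant error categories.

```python
# Sketch: Cohen's kappa for two transcribers' error classifications.
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-consonant error categories from two raters:
rater_a = ["correct", "substitution", "distortion", "correct", "omission",
           "correct", "distortion", "substitution", "correct", "correct"]
rater_b = ["correct", "substitution", "correct", "correct", "omission",
           "correct", "distortion", "substitution", "distortion", "correct"]

kappa = cohen_kappa_score(rater_a, rater_b)
# By the usual convention, 0.61-0.80 counts as 'substantial' agreement.
print(f"Cohen's kappa: {kappa:.2f}")
```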
46. Electrocorticographic Representations of Segmental Features in Continuous Speech
- Author
-
Fabien Lotte, Jonathan S. Brumberg, Peter Brunner, Aysegul Gunduz, Anthony L. Ritaccio, Cuntai Guan, and Gerwin Schalk
- Subjects
electrocorticography (ECoG), Speech Processing, Voicing, place of articulation, manner of articulation, Neurosciences. Biological psychiatry. Neuropsychiatry
Acoustic speech output results from the coordinated articulation of dozens of muscles, bones, and cartilages of the vocal mechanism. While we commonly take the fluency and speed of our speech productions for granted, the neural mechanisms facilitating the requisite muscular control are not completely understood. Previous neuroimaging and electrophysiology studies of speech sensorimotor control have typically concentrated on speech sounds (i.e., phonemes, syllables, and words) in isolation; sentence-length investigations have largely been used to inform coincident linguistic processing. In this study, we examined the neural representations of segmental features (place and manner of articulation, and voicing status) in the context of fluent, continuous speech production. We used recordings from the cortical surface (electrocorticography, ECoG) to simultaneously evaluate the spatial topography and temporal dynamics of the neural correlates of speech articulation that may mediate the generation of hypothesized gestural or articulatory scores. We found that the representation of place of articulation involved broad networks of brain regions during all phases of speech production: preparation, execution, and monitoring. In contrast, manner of articulation and voicing status were dominated by auditory cortical responses after speech had been initiated. These results provide new insight into the articulatory and auditory processes underlying speech production in terms of their motor requirements and acoustic correlates.
- Published
- 2015
- Full Text
- View/download PDF
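Although entry 46 does not spell out its signal pipeline, ECoG studies of this kind commonly track articulatory features through the high-gamma band. Purely as an assumed illustration, the sketch below band-passes a synthetic channel at 70–170 Hz and extracts its analytic amplitude with a Hilbert transform.

```python
# Sketch: high-gamma amplitude extraction from one (synthetic) ECoG channel.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                   # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
channel = np.random.default_rng(1).standard_normal(t.size)   # fake ECoG

# Band-pass 70-170 Hz (a common high-gamma range; an assumption here).
b, a = butter(4, [70 / (fs / 2), 170 / (fs / 2)], btype="band")
high_gamma = filtfilt(b, a, channel)

envelope = np.abs(hilbert(high_gamma))      # analytic amplitude
print(f"Mean high-gamma amplitude: {envelope.mean():.3f}")
```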
47. More on the place of articulation of sibilants in primary phonological disorder: data from Portuguese children
- Author
-
Ana Margarida Ramalho and Maria João Freitas
- Subjects
European Portuguese, Phonology, Speech production, Sonorant, Place of articulation, Distinctive features, Continuant, Audiology, Specific language impairment, Atypical phonological development, Voice, Syllable, Sibilants, Psychology, Phonological Disorder
Research on the phonological profile of Portuguese children with phonological disorders is still scarce. This article contributes to work in this area by studying the acquisition of the coronal [±anterior] place-of-articulation contrast in the class of sibilants, based on data from seven children with primary phonological disorders (associated with speech sound disorders or developmental language disorder). The data were collected with the CLCP-PE test, then transcribed and analyzed in the phonological analysis software PHON. The results showed preferential patterns and acquisition orders in the emergence and stabilization of the contrasts involved in the place of articulation of sibilants ([-sonorant; -continuant; coronal [±anterior]] >> [-sonorant; +continuant; coronal [±anterior]] >> [+sonorant; +continuant; coronal [±anterior]]) and in voicing ([-voiced] >> [+voiced]), as well as an effect of syllable constituency (Onset >> Coda), with an unexpected preference for [s] in fricative Codas.
- Published
- 2019
48. Arabic 'Raa' between Plosiveness and Friction
- Author
-
Tareq Ibrahim Al-Ziyadat
- Subjects
Linguistics and Language, Literature and Literary Theory, Arabic, Place of articulation, Language and Linguistics, Linguistics, raa phoneme, plosiveness, friction, release, beats, Alphabet, Articulation (phonetics), Psychology
The study aims to elucidate plosiveness and friction in the "Raa" (the tenth letter of the Arabic alphabet), drawing on what ancient and modern scholars have said on the issue. The core issue of the study is Sibawayh's classification of the "Raa" as a tense phoneme in whose articulation the sound repeatedly flows, leaning toward the articulation of "Lam" (the 23rd letter of the Arabic alphabet) while avoiding laxity; had the sound not repeated, there would be no "Raa". Tensity (plosiveness) and frication are two contradictory features which can never occur at the same place of articulation. The sound is articulated in stages, each of which has its own features. The analysis found that the articulation of "Raa" passes through three stages. In the second stage, in the space between the vocal cords and the top of the tongue, the "Raa" is fricative, while in the third, the closure stage between the top of the tongue and the hard palate, the "Raa" is plosive, though with less intensity than true plosive phonemes. The "Raa" can therefore be classified as neither plosive nor fricative, but as intermediate ("medial").
- Published
- 2019
49. Functional asymmetry and effective connectivity of the auditory system during speech perception is modulated by the place of articulation of the consonant – a 7T fMRI study
- Author
-
Karsten Specht, Florian Johannes Baumgartner, Jörg Stadler, Kenneth Hugdahl, and Stefan Pollmann
- Subjects
Auditory Cortex, functional magnetic resonance imaging, dynamic causal modelling, ultra-high field, voice onset time, place of articulation, Psychology
To differentiate between stop consonants, the auditory system has to detect subtle place of articulation (PoA) and voice onset time (VOT) differences between them. How this differential processing is represented at the cortical level remains unclear. The present functional magnetic resonance imaging (fMRI) study takes advantage of the superior spatial resolution and high sensitivity of ultra-high-field 7T MRI. Subjects listened attentively to consonant-vowel syllables with an alveolar or bilabial stop consonant and either a short or long voice onset time. The results showed an overall bilateral activation pattern in the posterior temporal lobe during the processing of the consonant-vowel syllables. This was, however, modulated most strongly by place of articulation, such that syllables with an alveolar stop consonant showed more strongly left-lateralized activation. In addition, analysis of the underlying functional and effective connectivity revealed an inhibitory effect of the left planum temporale on the right auditory cortex during the processing of alveolar consonant-vowel syllables. The connectivity results also indicated a directed information flow from the right to the left auditory cortex, and further to the left planum temporale, for all syllables. These results indicate that auditory speech perception relies on an interplay between the left and right auditory cortex, with the left planum temporale as modulator. Furthermore, the degree of functional asymmetry is determined by the acoustic properties of the consonant-vowel syllables.
- Published
- 2014
- Full Text
- View/download PDF
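Entry 49 speaks of a "degree of functional asymmetry". One conventional way to quantify this, though not necessarily the paper's own measure, is a laterality index LI = (L - R) / (L + R) over left- and right-hemisphere activation; the values below are invented.

```python
# Sketch: laterality index from left/right auditory activation (assumed values).
def laterality_index(left: float, right: float) -> float:
    """LI in [-1, 1]; positive = left-lateralized, negative = right."""
    return (left - right) / (left + right)

# Hypothetical mean activations (arbitrary units) per syllable type:
conditions = {"alveolar CV": (4.2, 2.6), "bilabial CV": (3.1, 2.9)}
for name, (left, right) in conditions.items():
    print(f"{name}: LI = {laterality_index(left, right):+.2f}")
```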
50. Fundamental Frequency and Phonation Differences in the Production of Stop Laryngeal Contrasts of Endangered Shina
- Author
-
Qandeel Hussain
- Subjects
Linguistics and Language, phonation, Place of articulation, Language and Literature, Fundamental frequency, Audiology, spectral tilt, Language and Linguistics, Shina, Aspirated consonant, F0, typology, acoustics, Dardic
Shina is an endangered Indo-Aryan (Dardic) language spoken in Gilgit, Northern Pakistan. The present study investigates the acoustic correlates of Shina's three-way stop laryngeal contrast across five places of articulation. A wide range of acoustic correlates was measured, including fundamental frequency (F0), spectral tilt (H1*-H2*, H1*-A1*, H1*-A2*, and H1*-A3*), and cepstral peak prominence (CPP). Voiceless aspirated stops were characterized by higher fundamental frequency, spectral tilt, and cepstral peak prominence than voiceless unaspirated and voiced unaspirated stops. These results suggest that Shina is among those languages in which aspiration raises the pitch and spectral tilt at the onset of the following vowel. Positive correlations among fundamental frequency, spectral tilt, and cepstral peak prominence were observed. The findings of this study will contribute to the phonetic documentation of endangered Dardic languages.
- Published
- 2021
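The spectral-tilt measures in entry 50 boil down to harmonic amplitude differences. As a simplified sketch (uncorrected H1-H2 rather than the formant-corrected H1*-H2*, with a placeholder file name and an assumed F0), the code below locates the first two harmonics in an FFT spectrum and reports their level difference in dB.

```python
# Sketch: uncorrected H1-H2 spectral tilt at vowel onset (assumed F0, file).
import numpy as np
import librosa

y, sr = librosa.load("vowel_onset.wav", sr=None)   # placeholder file name
f0 = 180.0                                         # assumed F0 in Hz

n = min(len(y), int(0.04 * sr))                    # 40 ms analysis frame
window = y[:n] * np.hanning(n)
spectrum = np.abs(np.fft.rfft(window))
freqs = np.fft.rfftfreq(n, d=1.0 / sr)

def harmonic_db(target_hz: float, tol_hz: float = 30.0) -> float:
    """Peak level (dB) within +/- tol_hz of a target harmonic frequency."""
    band = (freqs > target_hz - tol_hz) & (freqs < target_hz + tol_hz)
    return 20.0 * np.log10(spectrum[band].max() + 1e-12)

h1 = harmonic_db(f0)        # first harmonic
h2 = harmonic_db(2 * f0)    # second harmonic
print(f"H1-H2: {h1 - h2:.1f} dB")
```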