1,254 results for '"multimodal communication"'
Search Results
2. Intra‐individual modulations and inter‐individual variations of female signals in the domestic canary (Serinus canaria).
- Author
-
Le Gal, Camille, Derégnaucourt, Sébastien, and Amy, Mathieu
- Subjects
- *
SEXUAL cycle , *CANARIES , *SEXUAL selection , *BETROTHAL , *SONGBIRDS , *BIRDSONGS - Abstract
During courtship, animals perform conspicuous and elaborate signals. In birds, courtship often involves mutual engagement by both partners, but most research on courtship behaviours has focused on male signals despite growing interest in female signals in recent years. Here, we show that female domestic canaries (Serinus canaria) can modulate their sexual response to male songs. To do so, we exposed females to two types of song (very attractive and moderately attractive songs) during two consecutive reproductive cycles. We measured both visual (copulation solicitation displays, CSD) and vocal signals (copulation solicitation trills, CST; contact calls, CC; and simple trills, ST) emitted by the females during song broadcast. We observed that females could modify the characteristics of their signals (duration and number of elements of CSD; duration, frequency and number of notes of calls) depending on song attractiveness and the number of times they were exposed to a male's song. We also found that some females always emitted more signals than others (i.e. stable inter‐individual differences) regardless of song attractiveness and across reproductive cycles. Further studies are necessary to check whether female signals constitute sexual ornaments and whether they could stimulate male canaries during courtship. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Expanding the scope: multimodal dimensions in aphasia discourse analysis--preliminary findings.
- Author
-
Dutta, Manaswita and Mohapatra, Bijoyaa
- Subjects
DISCOURSE analysis ,LANGUAGE ability ,APHASIA ,BRAIN injuries ,SCORING rubrics ,MODAL logic - Abstract
Background: Aphasia, resulting from acquired brain injury, disrupts language processing and usage, significantly impacting individuals' social communication and life participation. Given the limitations of traditional assessments in capturing the nuanced challenges faced by individuals with aphasia, this study seeks to explore the potential benefits of integrating multimodal communication elements into discourse analysis to better capture narrative proficiency in this population. Objective: This study examined how incorporating multimodal communication elements (e.g., physical gestures, writing, drawing) into discourse analysis may affect the narrative outcomes of persons with aphasia compared to those observed using methods that exclude multimodal considerations. Methods: Participants included individuals with chronic aphasia and age- and education-matched healthy controls who completed a storytelling task--the Bear and the Fly story. Macrolinguistic scores were obtained using verbal-only and multimodal scoring approaches. Additionally, the frequency and type of multimodal communication use during storytelling were examined in relation to aphasia characteristics. Statistical analyses included both within-group and between-group comparisons as well as correlational analyses. Results: Individuals with aphasia scored significantly higher in terms of their macrolinguistic abilities when multimodal scoring was considered compared to verbal-only scoring. Within the aphasia group, there were prominent differences noted in macrolinguistic scores for both fluent and nonfluent aphasia. Specifically, both groups scored higher on Main Concepts when multimodal scoring was considered, with the nonfluent group demonstrating significantly higher Main Concept and total macrolinguistic rubric scores in multimodal scoring compared to verbal scoring on the storytelling task.
Additionally, aphasia severity showed moderate positive correlations with total macrolinguistic scores, indicating that individuals with less severe aphasia tended to produce higher quality narratives. Lastly, although persons with aphasia used different types of nonverbal modalities (i.e., drawing, writing), the use of meaning-laden gestures was most predominant during storytelling, emphasizing the importance of multimodal elements in communication for individuals with aphasia. Conclusion: Our preliminary study findings underscore the importance of considering multimodal communication in assessing discourse performance among individuals with aphasia. Tailoring assessment approaches based on aphasia subtypes can provide valuable insights into linguistic abilities and inform targeted intervention strategies for improving communication outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Vocal-visual combinations in wild chimpanzees.
- Author
-
Mine, Joseph G., Wilke, Claudia, Zulberti, Chiara, Behjati, Melika, Bosshard, Alexandra B., Stoll, Sabine, Machanda, Zarin P., Manser, Andri, Slocombe, Katie E., and Townsend, Simon W.
- Subjects
ANIMAL communication ,HUMAN-animal communication ,SOUNDS ,CHIMPANZEES ,RELATIVES ,SIGNALS & signaling - Abstract
Living organisms throughout the animal kingdom habitually communicate with multi-modal signals that use multiple sensory channels. Such composite signals vary in their communicative function, as well as the extent to which they are recombined freely. Humans typically display complex forms of multi-modal communication, yet the evolution of this capacity remains unknown. One of our two closest living relatives, chimpanzees, also produce multi-modal combinations and therefore may offer a valuable window into the evolutionary roots of human communication. However, a currently neglected step in describing multi-modal systems is to disentangle non-random combinations from those that occur simply by chance. Here we aimed to provide a systematic quantification of communicative behaviour in our closest living relatives, describing non-random combinations produced across auditory and visual modalities. Through recording the behaviour of wild chimpanzees from the Kibale forest, Uganda, we generated the first repertoire of non-random combined vocal and visual components. Using collocation analysis, we identified more than 100 vocal-visual combinations which occurred more frequently than expected by chance. We also probed how multi-modal production varied in the population, finding no differences in the number of visual components produced with vocalisations as a function of age, sex or rank. As expected, chimpanzees produced more visual components alongside vocalizations during longer vocalization bouts; however, this was only the case for some vocalization types, not others. We demonstrate that chimpanzees produce a vast array of combined vocal and visual components, exhibiting a hitherto underappreciated level of multi-modal complexity. Significance: In humans and non-humans, acoustic communicative signals are typically accompanied by visual information. Such "multi-modal communication" has been argued to function for increasing redundancy as well as for creating new meaning.
However, a currently neglected step when describing multi-modal systems and their functions is to disentangle non-random combinations from those that occur simply by chance. These data are essential to providing a faithful illustration of a species' multi-modal communicative behaviour. Through recording the behaviour of wild chimpanzees from the Kibale forest, Uganda, we aimed to bridge this gap in understanding and generated the first repertoire of non-random combined vocal and visual components in animals. Our data suggest chimpanzees combine many components flexibly, and these results have important implications for our understanding of the complexity of multi-modal communication already existing in the last common ancestor of humans and chimpanzees. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Infant‐directed communication: Examining the many dimensions of everyday caregiver‐infant interactions.
- Author
-
Kosie, Jessica E. and Lew‐Williams, Casey
- Subjects
- *
SPEECH & gesture , *CAREGIVERS , *FAMILY policy , *COMMUNICATIVE competence , *INFANTS - Abstract
Everyday caregiver‐infant interactions are dynamic and multidimensional. However, existing research underestimates the dimensionality of infants' experiences, often focusing on one or two communicative signals (e.g., speech alone, or speech and gesture together). Here, we introduce "infant‐directed communication" (IDC): the suite of communicative signals from caregivers to infants including speech, action, gesture, emotion, and touch. We recorded 10 min of at‐home play between 44 caregivers and their 18‐ to 24‐month‐old infants from predominantly white, middle‐class, English‐speaking families in the United States. Interactions were coded for five dimensions of IDC as well as infants' gestures and vocalizations. Most caregivers used all five dimensions of IDC throughout the interaction, and these dimensions frequently overlapped. For example, over 60% of the speech that infants heard was accompanied by one or more non‐verbal communicative cues. However, we saw marked variation across caregivers in their use of IDC, likely reflecting tailored communication to the behaviors and abilities of their infant. Moreover, caregivers systematically increased the dimensionality of IDC, using more overlapping cues in response to infant gestures and vocalizations, and more IDC with infants who had smaller vocabularies. Understanding how and when caregivers use all five signals—together and separately—in interactions with infants has the potential to redefine how developmental scientists conceive of infants' communicative environments, and enhance our understanding of the relations between caregiver input and early learning. 
Research Highlights: Infants' everyday interactions with caregivers are dynamic and multimodal, but existing research has underestimated the multidimensionality (i.e., the diversity of simultaneously occurring communicative cues) inherent in infant‐directed communication. Over 60% of the speech that infants encounter during at‐home, free play interactions overlaps with one or more of a variety of non‐speech communicative cues. The multidimensionality of caregivers' communicative cues increases in response to infants' gestures and vocalizations, providing new information about how infants' own behaviors shape their input. These findings emphasize the importance of understanding how caregivers use a diverse set of communicative behaviors—both separately and together—during everyday interactions with infants. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Personalization of industrial human–robot communication through domain adaptation based on user feedback.
- Author
-
Mukherjee, Debasmita, Hong, Jayden, Vats, Haripriya, Bae, Sooyeon, and Najjaran, Homayoun
- Subjects
HUMAN behavior ,INDUSTRIAL robots ,MACHINE-to-machine communications ,FACIAL expression ,BUSINESS communication - Abstract
Achieving safe collaboration between humans and robots in an industrial work-cell requires effective communication. This can be achieved through a robot perception system developed using data-driven machine learning. A key challenge for human–robot communication is the need for extensive, labelled datasets for training. Due to the variations in human behaviour and the impact of environmental conditions on the performance of perception models, models trained on standard, publicly available datasets fail to generalize well to domain- and application-specific scenarios. Thus, model personalization, involving the adaptation of such models to the individual humans involved in the task in the given environment, would lead to better model performance. A novel framework is presented that leverages robust modes of communication and gathers feedback from the human partner to auto-label the sparse dataset. The strength of the contribution lies in using incommensurable multimodal inputs to personalize models with user-specific data. The personalization through feedback-enabled human–robot communication (PF-HRCom) framework is implemented using facial expression recognition as a safety feature to ensure that the human partner is engaged in the collaborative task with the robot. Additionally, PF-HRCom has been applied to a real-time human–robot handover task with a robotic manipulator. The perception module of the manipulator adapts to the user's facial expressions and personalizes the model using feedback. That said, the framework is applicable to other combinations of multimodal inputs in human–robot collaboration applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Embodying scenes of moral disorder: Bodily gestures as a site of signification in feminist TikTok activism
- Author
-
Sigurdardottir Heba and Rautajoki Hanna
- Subjects
multimodal communication ,bodily rhetoric ,tiktok ,feminist activism ,cultural politics ,Communication. Mass media ,P87-96 - Abstract
Platformisation has facilitated the emergence of new rhetorical spaces where activists can creatively structure and present their political statements according to medium-specific affordances. In this article, we examine a group of activists on TikTok who topicalise gender-based violence and gender inequality through multimodal means of communication in short videos. We approached these videos as rhetorical arenas for communicating compelling messages with the aim of appealing to the audience. We used digital ethnography, multimodal discourse analysis, and approaches to the mediality of the body to conduct our investigation and analyses. Our results explicate how affective cues and bodily signals are put to rhetorical and political use in TikTok stories to pursue social change. That is, bodily performances stage interrelational positionings in a visual format, conveying affective evaluations and judgements about the state of the world and forming rhizomatic threads of messages to mobilise support and to affirm identification among the members of the movement.
- Published
- 2024
- Full Text
- View/download PDF
8. 'Mediatized Diaspora': Modelling the transnational influences of media in diaspora.
- Author
-
Shehata, Mostafa
- Subjects
COUNTRY of origin (Immigrants) ,MASS media influence ,SOCIAL perception ,POLITICAL communication ,SOCIAL reality - Abstract
This article seeks to model the socio-political influences of media use in diaspora. Participant observation and semi-structured interviews were used to collect data from Tunisian diaspora members residing in France, Denmark and Sweden. The article proposes 'mediatized diaspora' as a new model explaining the transnational intertwining between media and politics in the diaspora. Four sets of influences enabled by multimodal communication were defined and discussed as components of the model: (1) aspirations for the country of origin; (2) political meaning-making; (3) engaging in political actions; and (4) perceptions of social reality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Grammatical structures of emoji in Japanese-language text conversations
- Author
-
Kazuki Sekine and Manaka Ikuta
- Subjects
Emoji ,Grammar ,Multimodal communication ,Japanese ,Consciousness. Cognition ,BF309-499 - Abstract
Emojis have become a ubiquitous part of everyday text communication worldwide. Cohn et al. (Cognit Res Princ Implic 4(1):1–18, 2019) studied the grammatical structure of emoji usage among English speakers and found a correlation between the sequence of emojis used and English word order, tending towards a subject–verb–object (SVO) sequence. However, it remains unclear whether emoji usage follows a universal grammar or whether it is influenced by native language grammar. Therefore, this study explored the potential influence of Japanese grammar on emoji usage by Japanese speakers. Twenty adults, all native Japanese speakers, participated in pairs. In Experiment 1, participants engaged in conversations through Google Hangouts on iPads. The experiment consisted of four conversation rounds of approximately 8 min each. The first two rounds involved one participant using only written Japanese and the other using only emojis and punctuation, with roles reversed in the second round. The third round required both participants to use only emojis and punctuation. The results indicated that participants preferred subject–object–verb (SOV) or object–verb (OV) sequences, with OV patterns being more common. This pattern reflects a distinctive attribute of Japanese grammatical structure, marked by the frequent omission of the subject. Experiment 2 substituted emojis for words, showing nouns were more commonly replaced than verbs due to the difficulty in conveying complex meanings. Reduced subject replacements again emphasised Japanese grammatical structure. In essence, emoji usage reflects native language structures, but complexities are challenging to convey, resulting in simplified sequences. This study offers insights for enhancing emoji-based communication and interface design, with implications for translation and broader communication.
- Published
- 2024
- Full Text
- View/download PDF
10. Multimodal Discourse Analysis on Black Panther: Wakanda Forever Movie Poster
- Author
-
Lalu Angger Harjantoko, Patrisius Istiarto Djiwandono, and Daniel Ginting
- Subjects
black panther ,multimodal communication ,movie poster ,visual grammar ,representational meaning ,English language ,PE1-3729 - Abstract
Communication strategies in the modern period have become progressively multimodal with technological advancement. Multimodal communication involves forming meaning through several modes rather than language alone. This includes verbal and non-verbal communication, such as movie posters. The movie poster is one of the most effective promotional techniques for introducing a movie to potential audiences. Movie posters feature verbal and visual language as communication modes that can be interpreted to convey particular meanings, including the movie’s theme, plot, and message. The visual grammar theory of Kress and van Leeuwen (2006) proposes metafunctions for analyzing images, including representational meaning. In representational meaning, communication modes must be able to represent elements and their relations in a world outside their representational system as perceived by people. This paper demonstrates how various modes in the “Black Panther: Wakanda Forever” movie poster are integrated to represent meaning and symbolism in the movie through representational meaning.
- Published
- 2024
- Full Text
- View/download PDF
11. INVESTIGATING MULTIMODAL SCIENTIFIC COMMUNICATION: AN ANALYSIS OF COMMUNICATIVE MODES IN BIOLOGY AND ENGINEERING RESEARCH ARTICLES
- Author
-
Manel BRAHMI and Asma NESBA
- Subjects
biology ,engineering ,modes ,multimodal communication ,scientific discourse ,Social sciences (General) ,H1-99 ,Education ,Psychology ,BF1-990 - Abstract
The dissemination of knowledge in science and technology relies on efficient communication, with a noticeable shift towards multimodal communication. This research explores the subtle patterns of multimodal scientific communication to comprehend how several modes work together to enhance discourse quality. Twenty research articles, ten from the field of Engineering and ten from Biology, written over the previous three years by researchers at the University of El Oued, Algeria, were collected from credible journals and then examined using a qualitative content analysis. The findings demonstrated that the authors used a multimodal approach in their research articles, presenting their research outcomes through a combination of textual explanations and visual components including graphs, tables, images, and diagrams. The study’s findings highlighted the interdependent nature of text and images and their importance in communicating scientific discourse. Furthermore, the data reveal differences in the modes selected by these researchers, reflecting the specificities and particular needs of each discipline in choosing the modes. The study suggests that researchers’ active participation in workshops and training sessions can improve their proficiency in multimodal scientific communication. Moreover, it recommends additional research to broaden the area of enquiry and improve knowledge of multimodal communication in scientific discourse.
- Published
- 2024
- Full Text
- View/download PDF
12. Toward a semiotics of midwifery: Multimodal communication's effects on accessibility, equity, and power dynamics.
- Author
-
Celeste, Jane
- Subjects
- *
CUSTOMER experience , *PERCEPTUAL motor learning , *HEALTH literacy , *POWER (Social sciences) , *PATIENT education - Abstract
According to semiotics, we live in a world of signs, where almost anything can act as a signifier and convey meaning. But what of the semiotic landscape of midwifery? What signs are present within a client's multi‐sensory experience of their midwifery care? How are these signs functioning to increase equity and accessibility? Or worse, how might certain aspects of the client's experience communicate unjust power dynamics? Semiotics allows us to examine a wide communicative and educational environment. By paying particular attention to the multivalent meanings of different signs—be they written, visual, oral, or even physical—we can start to see how multimodal communication plays a vital role in a client's perception of equity and power. One way to improve client experience is by approaching education and semiotic experience from the same place as trauma‐informed care. A more health-literacy-sensitive approach, viewed through the lens of semiotics, assumes all clients have little previous knowledge of, or comfort within, a care setting. This hyperawareness and criticality of the semiotic environment would allow midwives to acknowledge various sensory and communicative biases and intentionally redesign the entire client experience. The semiotic landscape is then curated to meet the needs of the most important audience—those marginalized and discriminated against, whether because of education, finances, race, gender, or any other intersectional identity. We must acknowledge the fact that all sign systems can either reinforce abusive power relations or work to improve them. For what is at stake here is not just a client's overall comfort, but their full understanding of the care they are receiving, the options they have, and their autonomy within their entire perinatal experience. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Human aeroecology.
- Author
-
Derrick, Donald, Gick, Bryan, Jermy, Mark, and Pantelic, Jovan
- Subjects
INDOOR air quality ,BODY odor ,SPEECH perception ,BENTHIC zone ,LAND surface temperature ,ODORS ,COUGH - Abstract
This article provides an overview on human aeroecology, covering a range of subjects including the impact of air distribution systems on airborne disease transmission, the role of soundscapes in promoting restoration, the effects of face masks on speech perception, and the use of IoT sensors for controlling cooking emissions. The document also covers topics such as phytoremediation, green roofs, and the relationship between indoor air quality and human performance. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
14. Grammatical structures of emoji in Japanese-language text conversations.
- Author
-
Sekine, Kazuki and Ikuta, Manaka
- Subjects
COMPARATIVE grammar ,NATIVE language ,JAPANESE language ,WORD order (Grammar) ,EMOTICONS & emojis ,VERBS - Abstract
Emojis have become a ubiquitous part of everyday text communication worldwide. Cohn et al. (Cognit Res Princ Implic 4(1):1–18, 2019) studied the grammatical structure of emoji usage among English speakers and found a correlation between the sequence of emojis used and English word order, tending towards a subject–verb–object (SVO) sequence. However, it remains unclear whether emoji usage follows a universal grammar or whether it is influenced by native language grammar. Therefore, this study explored the potential influence of Japanese grammar on emoji usage by Japanese speakers. Twenty adults, all native Japanese speakers, participated in pairs. In Experiment 1, participants engaged in conversations through Google Hangouts on iPads. The experiment consisted of four conversation rounds of approximately 8 min each. The first two rounds involved one participant using only written Japanese and the other using only emojis and punctuation, with roles reversed in the second round. The third round required both participants to use only emojis and punctuation. The results indicated that participants preferred subject–object–verb (SOV) or object–verb (OV) sequences, with OV patterns being more common. This pattern reflects a distinctive attribute of Japanese grammatical structure, marked by the frequent omission of the subject. Experiment 2 substituted emojis for words, showing nouns were more commonly replaced than verbs due to the difficulty in conveying complex meanings. Reduced subject replacements again emphasised Japanese grammatical structure. In essence, emoji usage reflects native language structures, but complexities are challenging to convey, resulting in simplified sequences. This study offers insights for enhancing emoji-based communication and interface design, with implications for translation and broader communication. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Common ground in AAC: how children who use AAC and teaching staff shape interaction in the multimodal classroom.
- Author
-
Ibrahim, Seray, Clarke, Michael, Vasalou, Asimina, and Bezemer, Jeff
- Subjects
- *
SCHOOL environment , *FACILITATED communication , *PERSUASION (Rhetoric) , *CONVERSATION , *TASK performance , *ELEMENTARY schools , *QUALITATIVE research , *SCIENTIFIC observation , *EMPIRICAL research , *DESCRIPTIVE statistics , *COMMUNICATION devices for people with disabilities , *BODY language , *SPECIAL education schools , *TEACHER-student relationships , *BODY movement , *VIDEO recording , *CHILDREN - Abstract
Children who use augmentative and alternative communication (AAC) are multimodal communicators. However, in classroom interactions involving children and staff, achieving mutual understanding and accomplishing task-oriented goals by attending to the child's unaided AAC can be challenging. This study draws on excerpts of video recordings of interactions in a classroom for 6–9-year-old children who used AAC to explore how three child participants used the range of multimodal resources available to them – vocal, movement-based, gestural, technological, and temporal – to shape (and to some degree, co-control) classroom interactions. Our research was concerned with examining achievements and problems in establishing a sense of common ground and the realization of child agency. Through detailed multimodal analysis, this paper renders visible different types of practices – rejecting a request for clarification, drawing new parties into a conversation, disrupting whole-class teacher talk – through which the children in the study voiced themselves in persuasive ways. It concludes by suggesting that multimodal accounts paint a more nuanced picture of children's resourcefulness and conversational asymmetry that highlights children's agency amidst material, semiotic, and institutional constraints. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Combinatoriality and Compositionality in Communication, Skills, Tool Use, and Language.
- Author
-
Gontier, Nathalie, Hartmann, Stefan, Pleyer, Michael, and Rodrigues, Evelina Daniela
- Subjects
- *
BIRD communication , *PRIMATES , *PRIMATOLOGY , *HOMINIDS - Abstract
Combinatorial behavior involves combining different elements into larger aggregates with meaning. It is generally contrasted with compositionality, which involves the combining of meaningful elements into larger constituents whose meaning is derived from its component parts. Combinatoriality is commonly considered a capacity found in primates and other animals, whereas compositionality often is considered uniquely human. Questioning the validity of this claim, this multidisciplinary special issue of the International Journal of Primatology unites papers that each study aspects of combinatoriality and compositionality found in primate and bird communication systems, tool use, skills, and human language. The majority of authors conclude that compositionality is evolutionarily preceded by combinatoriality and that neither are uniquely human. This introduction briefly introduces readers to the major findings and issues raised by the contributors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Expanding the scope: multimodal dimensions in aphasia discourse analysis—preliminary findings
- Author
-
Manaswita Dutta and Bijoyaa Mohapatra
- Subjects
aphasia ,discourse ,multimodal communication ,gestures ,connected speech ,macrolinguistic quality ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Background: Aphasia, resulting from acquired brain injury, disrupts language processing and usage, significantly impacting individuals’ social communication and life participation. Given the limitations of traditional assessments in capturing the nuanced challenges faced by individuals with aphasia, this study seeks to explore the potential benefits of integrating multimodal communication elements into discourse analysis to better capture narrative proficiency in this population. Objective: This study examined how incorporating multimodal communication elements (e.g., physical gestures, writing, drawing) into discourse analysis may affect the narrative outcomes of persons with aphasia compared to those observed using methods that exclude multimodal considerations. Methods: Participants included individuals with chronic aphasia and age- and education-matched healthy controls who completed a storytelling task—the Bear and the Fly story. Macrolinguistic scores were obtained using verbal-only and multimodal scoring approaches. Additionally, the frequency and type of multimodal communication use during storytelling were examined in relation to aphasia characteristics. Statistical analyses included both within-group and between-group comparisons as well as correlational analyses. Results: Individuals with aphasia scored significantly higher in terms of their macrolinguistic abilities when multimodal scoring was considered compared to verbal-only scoring. Within the aphasia group, there were prominent differences noted in macrolinguistic scores for both fluent and nonfluent aphasia. Specifically, both groups scored higher on Main Concepts when multimodal scoring was considered, with the nonfluent group demonstrating significantly higher Main Concept and total macrolinguistic rubric scores in multimodal scoring compared to verbal scoring on the storytelling task.
Additionally, aphasia severity showed moderate positive correlations with total macrolinguistic scores, indicating that individuals with less severe aphasia tended to produce higher quality narratives. Lastly, although persons with aphasia used different types of nonverbal modalities (i.e., drawing, writing), the use of meaning-laden gestures was most predominant during storytelling, emphasizing the importance of multimodal elements in communication for individuals with aphasia. Conclusion: Our preliminary study findings underscore the importance of considering multimodal communication in assessing discourse performance among individuals with aphasia. Tailoring assessment approaches based on aphasia subtypes can provide valuable insights into linguistic abilities and inform targeted intervention strategies for improving communication outcomes.
- Published
- 2024
- Full Text
- View/download PDF
18. An Outlook for AI Innovation in Multimodal Communication Research
- Author
-
Henlein, Alexander, Bauer, Anastasia, Bhattacharjee, Reetu, Ćwiek, Aleksandra, Gregori, Alina, Kügler, Frank, Lemanski, Jens, Lücking, Andy, Mehler, Alexander, Prieto, Pilar, Sánchez-Ramón, Paula G., Schepens, Job, Schulte-Rüther, Martin, Schweinberger, Stefan R., von Eiff, Celina I., Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, van Leeuwen, Jan, Series Editor, Hutchison, David, Editorial Board Member, Kanade, Takeo, Editorial Board Member, Kittler, Josef, Editorial Board Member, Kleinberg, Jon M., Editorial Board Member, Kobsa, Alfred, Series Editor, Mattern, Friedemann, Editorial Board Member, Mitchell, John C., Editorial Board Member, Naor, Moni, Editorial Board Member, Nierstrasz, Oscar, Series Editor, Pandu Rangan, C., Editorial Board Member, Sudan, Madhu, Series Editor, Terzopoulos, Demetri, Editorial Board Member, Tygar, Doug, Editorial Board Member, Weikum, Gerhard, Series Editor, Vardi, Moshe Y, Series Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, and Duffy, Vincent G., editor
- Published
- 2024
- Full Text
- View/download PDF
19. 3D Multimodal Socially Interactive Robot with ChatGPT Active Listening
- Author
-
Pasternak, Katarzyna, Duarte, Christopher, Ojalvo, Julio, Lisetti, Christine, Visser, Ubbo, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Buche, Cédric, editor, Rossi, Alessandra, editor, Simões, Marco, editor, and Visser, Ubbo, editor
- Published
- 2024
- Full Text
- View/download PDF
20. Crowdsourcing-Based Approbation of Communicative Behaviour Elements on the F-2 Robot: Perception Peculiarities According to Respondents
- Author
-
Volkova, Liliya, Kotov, Artemy, Ignatev, Andrey, Kacprzyk, Janusz, Series Editor, Samsonovich, Alexei V., editor, and Liu, Tingting, editor
- Published
- 2024
- Full Text
- View/download PDF
21. Brave Broken Heart? The Story of the Late Radio Journalist Suna Venter
- Author
-
Bergh, Luna, Nkoala, Sisanda, editor, and Motsaathebe, Gilbert, editor
- Published
- 2024
- Full Text
- View/download PDF
22. The "Multimodal Spiral": Rethinking the Communication Curriculum at an English as a Medium of Instruction Institution.
- Author
-
Overstreet, Matthew, Carbonell, Curtis, and Akhmedjanova, Diana
- Subjects
- *
CURRICULUM , *COMMUNICATION education , *INTERNATIONAL communication , *CURRICULUM planning - Abstract
The rise of English as a Medium of Instruction (EMI) threatens to upend traditional teaching and learning practices. Writing, speaking, and communication instruction will all need to evolve. This article presents a case study of one institution's efforts to design and implement a communication curriculum responsive to the unique demands of the EMI environment. The proposed curriculum enacts an interdisciplinary, multimodal approach to the teaching of communication. We discuss the specifics of the curriculum, the process of its creation, the principles underlying it, and how these principles play out in practice. In doing so, we hope to provide a model for both global communication instruction and future curricular design efforts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. COVID-19 crosslinguistic and multimodal public health communication strategies: Social justice or emergency political strategy?
- Author
-
NDLANGAMANDLA, Sibusiso C., CHAKA, Chaka, SHANGE, Thembeka, and SHANDU-PHETLA, Thulile
- Subjects
MEDICAL communication ,PUBLIC communication ,SOCIAL justice ,COVID-19 pandemic ,COMMUNICATION strategies ,COUNTRIES ,ANTHROPOLOGICAL linguistics - Abstract
The current paper explores crosslinguistic and multimodal health communication strategies employed by the South African government during the COVID-19 pandemic in 2020-2022. Some governments used multiple languages, yet in most cases English monolingualism was the predominant form of communication. This paper used a multimodal critical discourse analysis to explore public health communication by government officials in South Africa and by members of the National Coronavirus Command Council mandated to combat the spread of COVID-19. The paper interrogates how this language and messaging limited or enabled linguistic equity and social justice. It concludes that in a country such as South Africa, any government initiative intended to promote linguistic and social justice ought to be 'languaged' and messaged through the linguistic repertoires that the majority of citizens understand; if not, it is doomed to fail, as was the case with the South African government's COVID-19 communication strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. The analysis of gestural and verbal pragmatic markers produced by Mild Cognitive Impaired participants during longitudinal and autobiographical interviews.
- Author
-
Duboisdindien, Guillaume
- Subjects
- *
COMMUNICATIVE competence , *EMPATHY , *MILD cognitive impairment , *RESEARCH funding , *INTERVIEWING , *QUESTIONNAIRES , *PSYCHOLINGUISTICS , *LONGITUDINAL method , *AUTOBIOGRAPHY , *COMMUNICATIVE disorders , *AGING , *NEUROPSYCHOLOGICAL tests - Abstract
This corpus-based study presents a multimodal analysis of verbal and non-verbal pragmatic markers in elderly people with Mild Cognitive Impairment (MCI) aged over 75 years. The corpus collection and analysis methodology is described in the Belgian CorpAGEst transversal study and the French VintAGE longitudinal and transversal pilot studies; the protocols are available online in both English and French. Our general findings indicate that with ageing, verbal pragmatic markers acquire an interactive function that allows people with MCI to maintain intersubjective relationships with their interlocutor. Furthermore, at the non-verbal level, gestural manifestations are increasingly used over time, with a preference for non-verbal pragmatic markers serving referential and adaptive functions. We aim to show clinicians and family caregivers the benefits of linguistic and interactional investigation methods for cognitively impaired ageing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Attention Drives Visual Processing and Audiovisual Integration During Multimodal Communication.
- Author
-
Seijdel, Noor, Schoffelen, Jan-Mathijs, Hagoort, Peter, and Drijvers, Linda
- Abstract
During communication in real-life settings, our brain often needs to integrate auditory and visual information and at the same time actively focus on the relevant sources of information, while ignoring interference from irrelevant events. The interaction between integration and attention processes remains poorly understood. Here, we use rapid invisible frequency tagging and magnetoencephalography to investigate how attention affects auditory and visual information processing and integration, during multimodal communication. We presented human participants (male and female) with videos of an actress uttering action verbs (auditory; tagged at 58 Hz) accompanied by two movie clips of hand gestures on both sides of fixation (attended stimulus tagged at 65 Hz; unattended stimulus tagged at 63 Hz). Integration difficulty was manipulated by a lower-order auditory factor (clear/degraded speech) and a higher-order visual semantic factor (matching/mismatching gesture). We observed an enhanced neural response to the attended visual information during degraded speech compared to clear speech. For the unattended information, the neural response to mismatching gestures was enhanced compared to matching gestures. Furthermore, signal power at the intermodulation frequencies of the frequency tags, indexing nonlinear signal interactions, was enhanced in the left frontotemporal and frontal regions. Focusing on the left inferior frontal gyrus, this enhancement was specific for the attended information, for those trials that benefitted from integration with a matching gesture. Together, our results suggest that attention modulates audiovisual processing and interaction, depending on the congruence and quality of the sensory input. [ABSTRACT FROM AUTHOR]
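The intermodulation logic underlying the abstract above can be illustrated numerically: when two frequency-tagged signals (here the 58 Hz auditory tag and the 65 Hz attended visual tag) interact nonlinearly, for example multiplicatively, power appears at sum and difference frequencies (123 Hz and 7 Hz) that neither input contains. A minimal, hypothetical sketch with synthetic sinusoids, not the authors' MEG analysis pipeline:

```python
import math

def bin_power(x, fs, f):
    """Normalized DFT magnitude of x at frequency f (single-bin correlation)."""
    n = len(x)
    re = sum(x[i] * math.cos(2 * math.pi * f * i / fs) for i in range(n))
    im = sum(x[i] * math.sin(2 * math.pi * f * i / fs) for i in range(n))
    return math.hypot(re, im) / n

fs, dur = 1000, 1.0                      # 1 s of signal sampled at 1 kHz
t = [i / fs for i in range(int(fs * dur))]
f_aud, f_vis = 58, 65                    # tagging frequencies from the study

# A purely linear mixture would contain only 58 and 65 Hz; a nonlinear
# (multiplicative) interaction creates intermodulation components.
nonlinear = [math.sin(2 * math.pi * f_aud * ti) * math.sin(2 * math.pi * f_vis * ti)
             for ti in t]

for f in (7, 58, 65, 123):               # 65-58 = 7 Hz, 65+58 = 123 Hz
    print(f, round(bin_power(nonlinear, fs, f), 3))
```

The product signal carries power only at 7 and 123 Hz (about 0.25 each) and essentially none at the original tag frequencies, which is why power at intermodulation frequencies can index signal interaction.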
- Published
- 2024
- Full Text
- View/download PDF
26. A survey of technologies supporting design of a multimodal interactive robot for military communication
- Author
-
Paul, Sheuli
- Published
- 2023
- Full Text
- View/download PDF
27. How teachers emphasize their speech: gestures and self-repetitions during group interaction with toddlers.
- Author
-
Casla, Marta, Moreno-Núñez, Ana, Alam, Florencia, and Rosemberg, Celia
- Abstract
When interacting with young children, adults often self-repeat their own utterances with slight variations, in sequences of adjacent utterances called variation sets (VS) (Küntay and Slobin 1996). These repetitions benefit children's linguistic development because they emphasize form and meaning. This paper analyzes the use of VS during group interaction from a multimodal point of view. Sixteen teachers were video-recorded during interaction with two-year-old children in Spanish nursery schools. Results show that the use of VS is particularly frequent in these settings and that VS are typically combined with gestures. Teachers directed their VS more often to a group of children than to a single child, and VS directed to the group were more often accompanied by gestures than VS directed to individuals. The results also show the influence of group size on children's responses. The study thus sheds new light on our understanding of child-directed speech (CDS), as well as on the need to adapt speech to individual children during interaction in large groups. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Species- and sex-specific chemical composition from an internal gland-like tissue of an African frog family.
- Author
-
Schäfer, Marvin, Sydow, David, Schauer, Maria, Doumbia, Joseph, Schmitt, Thomas, and Rödel, Mark-Oliver
- Abstract
Intraspecific chemical communication in frogs is understudied and the few published cases are limited to externally visible and male-specific breeding glands. Frogs of the family Odontobatrachidae, a West African endemic complex of five morphologically cryptic species, have large, fatty gland-like strands along their lower mandible. We investigated the general anatomy of this gland-like strand and analysed its chemical composition. We found the strand to be present in males and females of all species. The strand varies in markedness, with well-developed strands usually found in reproductively active individuals. The strands are situated under particularly thin skin sections, the vocal sac in male frogs and a respective area in females. Gas-chromatography/mass spectrometry and multivariate analysis revealed that the strands contain sex- and species-specific chemical profiles, which are consistent across geographically distant populations. The profiles varied between reproductive and non-reproductive individuals. These results indicate that the mandibular strands in the Odontobatrachidae comprise a so far overlooked structure (potentially a gland) that most likely plays a role in the mating and/or breeding behaviour of the five Odontobatrachus species. Our results highlight the relevance of multimodal signalling in anurans, and indicate that chemical communication in frogs may not be restricted to sexually dimorphic, apparent skin glands. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Hand Gestures Have Predictive Potential During Conversation: An Investigation of the Timing of Gestures in Relation to Speech.
- Author
-
ter Bekke, Marlijn, Drijvers, Linda, and Holler, Judith
- Subjects
- *
SPEECH & gesture , *GESTURE , *HAND signals , *CONVERSATION , *SPEECH - Abstract
During face‐to‐face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face‐to‐face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co‐speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole, as well as the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face‐to‐face conversation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Using networks to visualize, analyse and interpret multimodal communication.
- Author
-
Hex, Severine B.S.W. and Rubenstein, Daniel I.
- Subjects
- *
ANIMAL behavior , *NETWORK analysis (Communication) , *ZEBRAS , *EQUUS , *ANIMAL communication - Abstract
Multimodality is a virtually ubiquitous feature of communication. With the increasing interest in how animals, including humans, use multimodal and multicomponent signals in social interactions, there is an acute need for standardized and rigorous tools that will allow us to visualize and analyse these signals as they occur in naturalistic interactions as a complex, integrated system. Network theory is a powerful methodology for intuitively visualizing and investigating the relationships between entities. Here, we propose a new framework for analysing multimodal communication. Using a case study of natural multimodal interactions in wild plains zebras, Equus quagga, we introduce the descriptive power of network metrics by providing an objective set of metrics to (1) describe the relationships between simultaneously produced signals within and between modalities and (2) infer signal meaning and function. We embed these tools in a theoretical framework that can be used to interpret and describe both the global structure of the repertoire and the role individual signals play in shaping and modulating meaning. Next, we review an array of common questions in animal behaviour that could benefit from the use of multimodal networks to facilitate meaningful comparisons of communication across social and environmental contexts, timescales and species. Finally, we discuss extending the use of network analyses to multimodal communication through the use of directed networks and the challenges to be overcome from this application. • Networks can describe relations between simultaneously produced multimodal signals. • We propose a framework for using network theory to analyse multimodal communication. • We use multimodal interactions in plains zebras to illustrate these techniques. • Network theory can be applied to a range of interdisciplinary, comparative questions. [ABSTRACT FROM AUTHOR]
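The core idea of the abstract above, treating co-produced signals as a weighted co-occurrence network and reading off node-level metrics, can be sketched in a few lines. The signal names and observations below are invented for illustration; they are not the zebra repertoire coded by the authors:

```python
from itertools import combinations
from collections import Counter

# Hypothetical observations: each inner list records the signals (across
# modalities) produced together in one interaction.
interactions = [
    ["ears_back", "head_low", "squeal"],
    ["ears_back", "squeal"],
    ["ears_forward", "nicker"],
    ["ears_back", "head_low"],
    ["ears_forward", "nicker", "head_low"],
]

# Weighted co-occurrence network: nodes are signals, edge weight = number of
# interactions in which the two signals were produced together.
edges = Counter()
for signals in interactions:
    for a, b in combinations(sorted(set(signals)), 2):
        edges[(a, b)] += 1

# Node strength (weighted degree): how often a signal co-occurs with any other.
# High-strength signals are candidates for context-general modulators.
strength = Counter()
for (a, b), w in edges.items():
    strength[a] += w
    strength[b] += w

print(edges.most_common(3))
print(strength.most_common(3))
```

In this toy repertoire, "head_low" has the highest strength because it co-occurs with signals from several contexts, exactly the kind of pattern the framework uses to distinguish modulating signals from context-specific ones.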
- Published
- 2024
- Full Text
- View/download PDF
31. African penguins utilize their ventral dot patterns for individual recognition.
- Author
-
Baciadonna, Luigi, Solvi, Cwyn, Terranova, Francesca, Godi, Camilla, Pilenga, Cristina, and Favaro, Livio
- Subjects
- *
PENGUINS , *ANIMAL communication - Abstract
Birds are known to be highly social and visual animals. Yet no specific visual feature has been identified to be responsible for individual recognition in birds. Here, using a differential looking paradigm across five experiments, we demonstrated that African penguins, Spheniscus demersus, spontaneously discriminated between life-size photographs of their monogamous, lifelong partner and a nonpartner colonymate using their ventral dot patterns. Our findings challenge the assumption of limited visual involvement in penguin communication and suggest a rather complex and flexible recognition process in these birds. The combination of our current results and previous findings, which showed cross-modal (visual/auditory) recognition in these animals, suggests that African penguins use their ventral dot patterns to individually recognize their colonymates. Our results provide the first evidence of a specific visual cue responsible for spontaneous individual recognition by a bird, and highlight the importance of considering all sensory modalities in the study of animal communication. • African penguins spontaneously recognize their partner visually. • Penguins rely strongly on their ventral dot patterns for individual recognition. • Penguins may have holistic representations of other penguins. • We challenge the idea of a limited visual involvement in penguins' communications. • Results suggest a complex and flexible recognition system in African penguins. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Multimodal exploration of the thank God expressive construction and its implications for translation.
- Author
-
Martínez, Fernando Casanova
- Subjects
COMPUTATIONAL linguistics ,LINGUISTIC analysis ,RAPID tooling ,COMMUNICATION patterns ,TRANSLATING & interpreting - Abstract
Multimodal research in communication and translation studies is increasingly recognized, yet it remains incompletely explored. Leveraging computational linguistics with both Praat for acoustic analysis and the OpenPose and Rapid Annotator tools for visual analysis, this study delves into the intricate dynamics of the expressive construction thank God, providing a comprehensive examination of both visual and acoustic dimensions. Our objective is to uncover nuanced patterns of multimodal communication embedded within this expression and their implications for Translation and Interpreting. Through an analysis of linguistic features and co-speech gestures present in thank God, we aim to deepen our comprehension of how meaning crisscrosses modalities. Our findings underscore the necessity of a multimodal approach in language studies, emphasizing the requisite to preserve emotional and contextual nuances. The analysis unveils the phonological relevance of the duration of the construction's second vowel, a key factor for translation. Additionally, data reveals a correlation between the emotion of relief and gestures executed with both hands closer to the chest. Overall, these findings contribute to advancing both multimodal communication research and translation studies, shedding light on the role of multimodal analysis in understanding language and translation dynamics, particularly in the context of constructions like thank God. [ABSTRACT FROM AUTHOR]
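The phonological claim above hinges on measuring segment duration acoustically. A toy, stdlib-only sketch of the kind of measurement Praat automates, frame-wise energy thresholding on a synthetic signal (real vowel segmentation relies on richer acoustic cues and manual annotation):

```python
import math

fs = 8000                                     # sample rate (Hz)
frame = 80                                    # 10 ms analysis frames

# Synthetic "utterance": silence, then a 0.3 s 220 Hz tone standing in for
# the measured vowel, then silence. Purely illustrative data.
signal = [0.0] * fs
for i in range(int(0.2 * fs), int(0.5 * fs)):
    signal[i] = 0.5 * math.sin(2 * math.pi * 220 * i / fs)

def voiced_duration(x, fs, frame, threshold=0.05):
    """Seconds of signal whose frame RMS exceeds the threshold."""
    active = 0
    for start in range(0, len(x) - frame + 1, frame):
        rms = math.sqrt(sum(s * s for s in x[start:start + frame]) / frame)
        if rms > threshold:
            active += 1
    return active * frame / fs

print(voiced_duration(signal, fs, frame))     # close to 0.3
```

Duration values obtained this way per token are what would feed a comparison of the second vowel's length across pragmatic contexts of the construction.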
- Published
- 2024
- Full Text
- View/download PDF
33. A survey of technologies supporting design of a multimodal interactive robot for military communication
- Author
-
Sheuli Paul
- Subjects
Sensor fusion ,Human–machine teaming (HMT) ,Multimodal communication ,Multimodal interactive robotic system (MIRS) ,Spoken dialogue system (SDS) ,Visual story telling (VST) ,Military Science - Abstract
Purpose – This paper presents a survey of research into interactive robotic systems for the purpose of identifying state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal. Multimodality draws on many modes of expression, chosen for their communicative potential. The author seeks to define the available automation capabilities in communication using multimodalities that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to advance the speed and quality of military operational and tactical decision making. Design/methodology/approach – This review begins by presenting key developments in the robotic interaction field, with the objective of identifying the essential technological developments that set the conditions for robotic platforms to function autonomously. After surveying key aspects of Human-Robot Interaction (HRI), Unmanned Autonomous Systems (UAS), visualization, Virtual Environments (VE) and prediction, the paper describes the gaps in the application areas that will require extension and integration to enable prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set the conditions for future success. Findings – Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS (MIRS) for military communication is introduced. A multimodal IRS in military communication has yet to be deployed. Research limitations/implications – A multimodal robotic interface for the MIRS is an interdisciplinary endeavour. It is not realistic for any one person to command all the expert and related knowledge and skills needed to design and develop such a multimodal interactive robotic interface. 
In this brief preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research. Practical implications – A multimodal autonomous robot for military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human-machine teaming (HMT) exposure. Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. Possession of flexible communications that readily adapt to virtual training will enhance planning and mission rehearsals tremendously. Social implications – An interaction-, perception-, cognition- and visualization-based multimodal communication system is still missing. Options to communicate, express and convey information in an HMT setting, with multiple suggestions and recommendations, will certainly enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission. Originality/value – The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done, and possible extensions that would support the military in maintaining effective communication using multimodalities. There is some separate ongoing progress, for example in machine-enabled speech, image recognition, tracking, visualization for situational awareness, and virtual environments. 
At this time, there is no integrated approach to multimodal human-robot interaction that provides flexible and agile communication. The report briefly introduces the research proposal for a multimodal interactive robot in military communication.
- Published
- 2023
- Full Text
- View/download PDF
34. Communication in Owl Monkeys
- Author
-
Spence-Aizenberg, Andrea, García de la Chica, Alba, Evans, Sian, Fernandez-Duque, Eduardo, Barrett, Louise, Series Editor, and Fernandez-Duque, Eduardo, editor
- Published
- 2023
- Full Text
- View/download PDF
35. A Roadmap for Technological Innovation in Multimodal Communication Research
- Author
-
Gregori, Alina, Amici, Federica, Brilmayer, Ingmar, Ćwiek, Aleksandra, Fritzsche, Lennart, Fuchs, Susanne, Henlein, Alexander, Herbort, Oliver, Kügler, Frank, Lemanski, Jens, Liebal, Katja, Lücking, Andy, Mehler, Alexander, Nguyen, Kim Tien, Pouw, Wim, Prieto, Pilar, Rohrer, Patrick Louis, Sánchez-Ramón, Paula G., Schulte-Rüther, Martin, Schumacher, Petra B., Schweinberger, Stefan R., Struckmeier, Volker, Trettenbrein, Patrick C., von Eiff, Celina I., Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, and Duffy, Vincent G., editor
- Published
- 2023
- Full Text
- View/download PDF
36. Reconstructed multisensoriality. Reading The Catcher in the Rye
- Author
-
Tenchini Maria Paola and Sozzi Andrea
- Subjects
multimodal communication ,multisensoriality ,nonverbal behavior in literature (novels) ,character psychology ,Philosophy. Psychology. Religion ,Psychology ,BF1-990 - Abstract
In natural face-to-face interactions, verbal communication always occurs in association with expressions of nonverbal behavior. The functional contribution of these multimodal aspects to the meaning of the message and to its effects fulfils multiple communicative functions that differ primarily according to the speaker's intentions, the interpersonal relations between speaker and addressee, the nature of the message, and the context.
- Published
- 2023
- Full Text
- View/download PDF
37. Digital twin‐based framework for wireless multimodal interactions over long distance.
- Author
-
Kang, Mancong, Li, Xi, Ji, Hong, and Zhang, Heli
- Subjects
- *
HAPTIC devices , *REINFORCEMENT learning , *DEEP reinforcement learning , *MACHINE learning , *DIGITAL twins , *STATISTICAL decision making - Abstract
Summary: Wireless multimodal interactions over long distance (WMILD) would give rise to numerous thrilling applications, such as remote touching and immersive teleoperation. However, long distances induce large propagation delays, which makes it difficult to meet the ultra-low latency requirements of haptic-visual interactions. Since existing works have mainly focused on the wireless access part, this paper designs an end-to-end framework for general WMILD applications based on digital twin (DT) technology and proposes an intelligent resource allocation and parameter compression scheme to guarantee WMILD performance under constrained network resources. In the framework, a user device can experience real-time remote interactions by interacting locally with a nearby base station (BS), where a DT of the remote side is deployed to predict the remote haptic-visual feedback. A reliable DT updating process is carefully designed to guarantee that the DT accurately models its dynamic physical counterpart. To optimize updating reliability, we formulate resource allocation and parameter compression as a constrained Markov decision problem, under constraints on energy consumption and on multimodal interaction and updating latencies. A safe deep reinforcement learning algorithm is then proposed to adapt resources and compression to the dynamic DT updating workload, multimodal data streams and remote transmission capacities. Simulations show the framework achieves high updating reliability compared with baselines. [ABSTRACT FROM AUTHOR]
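The abstract above casts resource allocation as a constrained Markov decision problem solved with safe RL. The Lagrangian relaxation behind many safe-RL methods can be shown on a toy bandit; this illustrates the general technique only, not the paper's algorithm, and all numbers are invented:

```python
# Two "allocation" actions: (expected reward, expected cost).
# Action 0 is attractive but exceeds the cost budget d; action 1 is safe.
actions = [(1.0, 0.8), (0.6, 0.2)]
d = 0.3                  # cost budget (e.g. an energy or latency constraint)
lam, lr = 0.0, 0.1       # Lagrange multiplier and its dual-ascent step size

history = []
for _ in range(300):
    # Primal step: pick the action maximizing the Lagrangian r - lam * c.
    a = max(range(len(actions)), key=lambda i: actions[i][0] - lam * actions[i][1])
    r, c = actions[a]
    history.append((a, c))
    # Dual step: raise lam while the constraint is violated, relax it otherwise.
    lam = max(0.0, lam + lr * (c - d))

late = history[-100:]
avg_cost = sum(c for _, c in late) / len(late)
safe_frac = sum(1 for a, _ in late if a == 1) / len(late)
print(round(lam, 2), round(avg_cost, 2), safe_frac)
```

As the multiplier grows, the constraint-violating action loses its advantage, so the long-run policy mostly selects the safe action and the average cost settles near the budget; the same dual-ascent pattern is what keeps a learned allocation policy within latency and energy limits.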
- Published
- 2023
- Full Text
- View/download PDF
38. A road map of jumping spider behavior.
- Author
-
Nelson, Ximena J.
- Subjects
- *
SPIDER behavior , *ROAD maps , *JUMPING spiders , *ANTIPREDATOR behavior , *HABITAT selection , *NUMBERS of species - Abstract
The largest family of spiders, jumping spiders (Salticidae), is known for performing complex visually mediated predatory and courtship behavior. As cursorial predators, they rely on their sensory systems to identify objects at a distance. Based on these assessments, salticids perform flexible and target-specific behavioral sequences which demonstrate a high level of cognitive processing. Recent studies have highlighted the role of other sensory modalities in these processes, such as chemoreception and mechanoreception, and elucidated the visual cues used for object identification, including motion, color, contrast, and shape-based cues. Until recently, sensory modalities other than vision were largely overlooked, but current advances in technology now allow us to probe their sensory and cognitive capabilities, as well as how these are shaped by experience. In this review, I provide an overview of current knowledge of salticid behavior and the sensory systems underpinning this behavior, and highlight areas in need of further research. This review focusses on our understanding of salticid communication, parental behavior, personality, antipredator behavior, and diet, as well as habitat selection. I argue that a historical vision-based focus on a small number of species due to their coloration or their unusual behavior provides a springboard for a deeper understanding of the general cognitive and sensory attributes that have evolved in this lineage, of which we yet have much to learn. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
39. Impact of Multimodal Digital Media Communication on Generation Z's Language Use and Literacy Practices.
- Author
-
Shamim, Fauzia and Riaz, Muhammad Naeem
- Abstract
The impact of the increased use of digital communication mediated through social media was observed during COVID-19, in particular on students' language, its use in the classroom, and their literacy practices. This led to an investigation of the use of multimodal digital communication in the language of Generation Z users at the focal university. The social-semiotic theory of multimodality in digital communication provided the theoretical framework for the study. A quantitative survey of 394 respondents covered the frequency of use of different apps for different purposes, as well as students' perceptions of the impact of social media on their literacy practices. Subsequently, qualitative interviews were conducted to gain a more in-depth understanding of the survey results. The results of this mixed-methods study indicate that Generation Z users are well aware of the affordances and constraints of different social media platforms and apps and use this knowledge judiciously for varied purposes and audiences in their digital communication. This has also shaped how they craft and interpret digital multimodal messages. The study findings have implications for teaching English (and other languages); other disciplines likewise need to take into account students' changing literacy practices to enhance learning outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
40. Toddler-directed and adult-directed gesture frequency in monolingual and bilingual caregivers.
- Author
-
Molnar, Monika, Leung, Kai Ian, Santos Herrera, Jodee, and Giezen, Marcel
- Subjects
- *
TODDLERS , *CAREGIVERS , *LINGUISTIC context , *COMMUNICATION strategies - Abstract
Aims and objectives: This study was designed to assess whether bilingual caregivers, compared with monolingual caregivers, modify their nonverbal gestures to match the increased communicative and/or cognitive-linguistic demands of bilingual language contexts, as would be predicted by the 'Facilitative Strategy Hypothesis'. Methodology: We examined the rate of gestures (i.e., representational and beat gestures) in monolingual and bilingual caregivers when retelling a cartoon story to their child or to an adult, in a monolingual and a bilingual context ('synonym' context for monolingual caregivers). Data and analysis: We calculated the frequency of all gestures, representational gestures, and beat gestures for each addressee (adult-directed vs. toddler-directed) and language context (monolingual vs. bilingual/synonym), separately for the monolingual and the bilingual caregivers. Using linear mixed models, we contrasted monolingual versus bilingual caregivers' gesture frequency. Findings/conclusions: Bilingual caregivers gesture more than monolingual caregivers, irrespective of addressee and language context. Furthermore, we found evidence in support of the Facilitative Strategy Hypothesis across both monolingual and bilingual caregivers, as both groups increased the rate of their representational gestures in the child-directed retelling. In addition, both bilingual and monolingual caregivers used more gestures in the context of increased communicative demands (language mixing, or using synonyms for monolingual caregivers). Originality: To our knowledge, this is the first study of gesture use in child-directed communication in monolingual and bilingual caregivers. Significance/implications: Independent of their monolingual or bilingual status, caregivers adjust their multimodal communication strategies (specifically gestures) when interacting with their children.
Moreover, under increased communicative demands, both groups of caregivers further increase their gesture rate. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
41. More than words - how second language learners initiate and respond during shared picture book reading interactions.
- Author
-
Kappenberg, Aleksandra and Licandro, Ulla
- Subjects
- *
SECOND language acquisition , *EARLY childhood education , *COMMUNICATION strategies , *GESTURE , *PICTURE books for children - Abstract
Initiating conversations and responding to initiations of others via gestural and verbal means are prerequisites for participating in Early Childhood Education and Care (ECEC) interactions. However, research to date has not addressed the multimodal initiations and responses of young second language learners (SLLs) in natural ECEC settings. This study investigated their initiations and responses during shared picture book reading interactions with ECEC practitioners, with a focus on modalities and strategies as well as the multimodal meanings and types of utterances. Participants were 30 SLLs in German ECEC institutions. Results showed that SLLs responded to the initiations of the practitioners predominantly verbally and initiated interactions mostly multimodally. Furthermore, the children produced more combinations of multi-word utterances and gestures in their initiations than in their responses. These findings complement the body of work demonstrating the varied communicative skills young SLLs use in their interactions and underline the importance of gestural cues in everyday ECEC interactions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
42. How The Danish Girl was adapted and recontextualized through multimedia.
- Author
-
Zhang, Zhen
- Abstract
The Danish Girl has three different versions – novel, screenplay and film. Through comparison, this study explored how Lucinda Coxon (2015) and Tom Hooper (2015) adapted David Ebershoff's novel (2000), The Danish Girl, to the screenplay and the film of the same name, and how multimodal semiotic resources contributed to recontextualizing the screenplay and the film. This research selected a scene, which represented the 'remade' type, from The Danish Girl to instantiate recontextualization in depth. The study found that the adaptations of The Danish Girl mixed different methods. The study also found that the way of recontextualizing the film was more complex than that of the screenplay. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. The Role of Representational Gestures and Speech Synchronicity in Auditory Input by L2 and L1 Speakers.
- Author
-
Cavicchio, Federica and Busà, Maria Grazia
- Subjects
SPEECH & gesture ,COINCIDENCE ,SPEECH ,NATIVE language ,SECOND language acquisition - Abstract
Speech and gesture are two integrated and temporally coordinated systems. Manual gestures can help second language (L2) speakers with vocabulary learning and word retrieval. However, it is still under-investigated whether the synchronisation of speech and gesture has a role in helping listeners compensate for the difficulties in processing L2 aural information. In this paper, we tested, in two behavioural experiments, how L2 speakers process speech and gesture asynchronies in comparison to native speakers (L1). L2 speakers responded significantly faster when gestures and the semantically relevant speech were synchronous than asynchronous. They responded significantly slower than L1 speakers regardless of speech/gesture synchronisation. On the other hand, L1 speakers did not show a significant difference between asynchronous and synchronous integration of gestures and speech. We conclude that gesture-speech asynchrony affects L2 speakers more than L1 speakers. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
44. Attentional Relevance Modulates Nonverbal Attractiveness Perception in Multimodal Display.
- Author
-
Hu, Yanbing, Mou, Zhen, and Jiang, Xiaoming
- Subjects
- *
PERSONAL beauty , *NONVERBAL communication , *CONFIDENCE intervals , *REGRESSION analysis , *ATTENTION , *COMMUNICATION , *DESCRIPTIVE statistics , *DATA analysis software , *BODY image - Abstract
Physical attractiveness is the degree to which a person's physical characteristics are considered pleasing and appealing, and it plays an important role in interpersonal communication. The majority of existing studies have focused on the perception of attractiveness from one modality (e.g., face or voice). This study aims to explore how individuals perceive attractiveness from multimodal information of different attentional relevance. In Experiment 1, twenty-four participants rated the attractiveness of recorded voices of vowels and sentences on a 9-point scale. In Experiment 2, a new group of 64 participants rated the attractiveness of audiovisual stimuli in an adapted Garner paradigm which directed their attention to different modalities of information. In the face-attending task, participants rated facial attractiveness while ignoring the voice; in the voice-attending task, participants rated vocal attractiveness while ignoring the face. The linear-regression results showed that F0 and the harmonics-to-noise ratio (HNR) predicted the attractiveness of vowels; semantic valence modulated the perceived attractiveness of sentences. The linear mixed-effects model showed that, while attentional irrelevance generally attenuated the perceived attractiveness of either face or voice in multimodal stimuli, only the effect of facial attractiveness persisted under the voice-attending task. These findings demonstrated that the allocation of attentional relevance to a certain communicative modality alters human nonverbal attractiveness perception. More importantly, the modulation of top-down processes on multimodal attractiveness integration aligns with the late integration framework and provides evidence for the cognitive underpinnings supporting multimodal communication. [ABSTRACT FROM AUTHOR]
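The Experiment 1 analysis is a linear regression of attractiveness ratings on acoustic predictors. The sketch below fits such a model (attractiveness ~ F0 + HNR) by ordinary least squares; all predictor values and coefficients are invented for illustration and are not the study's data.

```python
# Sketch of a vowel-attractiveness regression: attractiveness ~ F0 + HNR.
# The design matrix, coefficients and data below are hypothetical.

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y."""
    n, p = len(X), len(X[0])
    # build X'X and X'y
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    # solve by Gauss-Jordan elimination
    for col in range(p):
        pivot = xtx[col][col]
        for b in range(p):
            xtx[col][b] /= pivot
        xty[col] /= pivot
        for row in range(p):
            if row != col:
                f = xtx[row][col]
                for b in range(p):
                    xtx[row][b] -= f * xtx[col][b]
                xty[row] -= f * xty[col]
    return xty  # estimated coefficients

# hypothetical stimuli: columns are [intercept, F0 in Hz, HNR in dB]
X = [[1.0, 120.0, 12.0], [1.0, 180.0, 15.0], [1.0, 210.0, 18.0],
     [1.0, 150.0, 10.0], [1.0, 240.0, 20.0], [1.0, 200.0, 14.0]]
# ratings generated as 1.0 + 0.02*F0 + 0.1*HNR (noise-free, so OLS recovers it)
y = [1.0 + 0.02 * f0 + 0.1 * hnr for _, f0, hnr in X]

b0, b_f0, b_hnr = ols(X, y)
print(round(b_f0, 4), round(b_hnr, 4))
```

With noise-free responses the fitted slopes recover the generating coefficients; with real rating data the same machinery yields the best least-squares fit, and its significance tests are what the abstract reports.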
- Published
- 2023
- Full Text
- View/download PDF
45. Revisiting how we operationalize joint attention
- Author
-
Gabouer, Allison and Bortfeld, Heather
- Subjects
Cognitive and Computational Psychology ,Psychology ,Basic Behavioral and Social Science ,Pediatric ,Behavioral and Social Science ,Mental Health ,Clinical Research ,Attention ,Child Development ,Child ,Preschool ,Humans ,Infant ,Parent-Child Relations ,Joint attention ,Coding scheme ,Parent-child interactions ,Multimodal communication ,Infant social cognition ,Clinical Sciences ,Cognitive Sciences ,Developmental & Child Psychology ,Biomedical and clinical sciences - Abstract
Parent-child interactions support the development of a wide range of socio-cognitive abilities in young children. As infants become increasingly mobile, the nature of these interactions changes from person-oriented to object-oriented, with the latter relying on children's emerging ability to engage in joint attention. Joint attention is acknowledged to be a foundational ability in early child development, broadly speaking, yet its operationalization has varied substantially over the course of several decades of developmental research devoted to its characterization. Here, we outline two broad research perspectives (social and associative accounts) on what constitutes joint attention. Differences center on the criteria for what qualifies as joint attention and on the hypothesized developmental mechanisms that underlie the ability. After providing a theoretical overview, we introduce a joint attention coding scheme that we have developed iteratively based on careful reading of the literature and our own data coding experiences. This coding scheme provides objective guidelines for characterizing multimodal parent-child interactions. The need for such guidelines is acute given the widespread use of this and other developmental measures to assess atypically developing populations. We conclude with a call for open discussion about the need for researchers to include a clear description of what qualifies as joint attention in publications pertaining to joint attention, as well as details about their coding. We provide instructions for using our coding scheme in the service of starting such a discussion.
- Published
- 2021
46. Talking Images
- Author
-
Ferrara, Silvia, Cartolano, Mattia, and Ottaviano, Ludovica
- Subjects
Multimodal communication ,Images ,Emblems ,Symbols ,Visual codes ,Written codes ,Symbol-making ,Symbol-perception ,Semiotics ,Archaeology ,Paleoanthropology ,Linguistic anthropology ,Cultural anthropology ,Anthropology of images ,Silvia Ferrara ,Mattia Cartolano ,Ludovica Ottaviano ,thema EDItEUR::G Reference, Information and Interdisciplinary subjects::GT Interdisciplinary studies::GTD Semiotics / semiology ,thema EDItEUR::J Society and Social Sciences::JH Sociology and anthropology::JHM Anthropology::JHMC Social and cultural anthropology ,thema EDItEUR::C Language and Linguistics::CF Linguistics ,thema EDItEUR::N History and Archaeology::NK Archaeology::NKA Archaeological theory - Abstract
This innovative collection offers a holistic portrait of the multimodal communication potential of images from the Upper Paleolithic through to today, showcasing image-based creativity throughout the centuries. The volume seeks to extend the boundaries of our understanding of what language and writing can do to show how language can be understood as part of broader codes, as well as how images and figural objects can contribute to meaning-making in communication. The book is divided into four parts, each exploring a different dimension of the interplay between representation, symbolic meaning, and perception in the study of images, drawing on case studies from around the world. The first part looks at cognitive approaches to the earliest symbol-making while the second considers the interaction between images and writing in early scripts. The third part addresses images outside their boxes, showcasing how ancient communication devices can be reinterpreted. The final part features chapters reflecting on embodied semiotic approaches to the representation of images. This book will be of interest to scholars in semiotics, archaeology, cognitive psychology, and linguistic and cultural anthropology.
- Published
- 2025
- Full Text
- View/download PDF
47. RoboFinch: A versatile audio‐visual synchronised robotic bird model for laboratory and field research on songbirds
- Author
-
Ralph Simon, Judith Varkevisser, Ezequiel Mendoza, Klaus Hochradel, Rogier Elsinga, Peter G. Wiersma, Esmee Middelburg, Eva Zoeter, Constance Scharff, Katharina Riebel, and Wouter Halfwerk
- Subjects
language evolution ,multimodal communication ,robotic bird model ,robotics ,social behaviour ,song learning ,Ecology ,QH540-549.5 ,Evolution ,QH359-425 - Abstract
Abstract Singing in birds is accompanied by beak, head and throat movements. The role of these visual cues has long been hypothesised to be an important facilitator in vocal communication, including social interactions and song acquisition, but has seen little experimental study. To address whether audio‐visual cues are relevant for birdsong, we used high‐speed video recording, 3D scanning, 3D printing technology and colour‐realistic painting to create RoboFinch, an open source adult‐mimicking robot which matches the temporal and chromatic properties of songbird vision. We exposed several groups of juvenile zebra finches during their song developmental phase to one of six singing robots that moved their beaks synchronised to their song and compared them with birds in a non‐synchronised treatment and two control treatments. Juveniles in the synchronised treatment approached the robot setup from the start of the experiment and progressively increased the time they spent singing, in contrast to the other treatment groups. Interestingly, birds in the synchronised group seemed to actively listen during tutor song playback, as they sang less during the actual song playback compared to the birds in the asynchronous and audio‐only control treatments. Our open source RoboFinch setup thus provides an unprecedented tool for systematic study of the functionality and integration of audio‐visual cues associated with song behaviour. Realistic head and beak movements aligned to specific song elements may allow future studies to assess the importance of multisensory cues during song development, sexual signalling and social behaviour. All software and assembly instructions are open source, and the robot can be easily adapted to other species. Experimental manipulations of stimulus combinations and synchronisation can further elucidate how audio‐visual cues are integrated by receivers and how they may enhance signal detection, recognition, learning and memory.
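The core synchronisation idea, moving the beak in time with the song, can be sketched in miniature: derive a per-video-frame amplitude envelope from the audio and map it to a beak-opening angle. This is an illustrative sketch under assumed parameters (50 fps, a 30-degree maximum opening), not the actual open-source RoboFinch code.

```python
# Illustrative sketch: derive a beak-servo angle trajectory from a song's
# amplitude envelope so that beak opening tracks the sound. The frame rate,
# angle range and toy signal are assumptions, not RoboFinch's real values.
import math

def envelope(samples, rate, frame_hz=50):
    """Peak absolute amplitude of the audio per video frame."""
    hop = rate // frame_hz  # audio samples per video frame
    return [max(abs(s) for s in samples[i:i + hop])
            for i in range(0, len(samples) - hop + 1, hop)]

def beak_angles(env, max_angle=30.0):
    """Map the normalised envelope to servo angles in degrees (0 = closed)."""
    peak = max(env) or 1.0  # avoid division by zero on silence
    return [max_angle * e / peak for e in env]

# toy "song": a 0.1 s, 1 kHz burst followed by 0.1 s of silence at 8 kHz
rate = 8000
samples = ([math.sin(2 * math.pi * 1000 * t / rate) for t in range(800)]
           + [0.0] * 800)

env = envelope(samples, rate)   # one value per 20 ms video frame
angles = beak_angles(env)       # wide open during the burst, closed after
```

In a real setup such a trajectory would be frame-locked to song playback and sent to the beak servos; here it only illustrates the alignment principle the study manipulated (synchronised vs. non-synchronised beak movement).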
- Published
- 2023
- Full Text
- View/download PDF
48. Developing music improvisation workshops for preschool children through Action Research
- Author
-
MacGlone, Una M., MacDonald, Raymond, and Wilson, Graeme
- Subjects
372.87 ,improvisation ,preschool children ,multimodal communication ,music education ,mixed methods - Abstract
Improvisation in music is an important skill, which is increasingly valued, and an essential part of curricula at all educational levels. However, understandings of improvisation are conflicting, and contradictory approaches exist within improvisation pedagogy. Creative and learning processes from free improvisation are used in Higher Education, and with Secondary and Primary children, but there is scarce research with young children. This is despite potential alignment with preschool curricula, which emphasise creativity and social skills. The aims of this PhD were to investigate and improve a novel method of delivering music education to preschool children through improvisation, emphasising personal creativity and socio-musical responsiveness. The research questions were as follows: How can children's creativity and engagement in group improvisation be appreciated and evaluated? This question had two further sub-questions: What are parents' and teachers' attitudes and beliefs about the children, creativity and music? and What are the children's conceptualisations of the workshops? The second research question was: Do the workshop programme, teaching approaches and methods change through two cycles of Action Research? A Pragmatic theoretical stance supported Mixed Methods within an Action Research design, providing a suitable model for enquiry through action, analysis, and planned change. Workshop materials were designed for two 6-week cycles of Action Research for different groups of preschool children (seven in Cycle I, six in Cycle II; aged 4-5) in 2016. Prior to the workshops, two original theoretical constructs were proposed and then refined through the process of analysis: Creative musical agency (CMA) and socio-musical aptitude (S-MA). CMA is instantiated when a child creates and executes novel musical material independently in a group improvisation.
S-MA is instantiated when a child creates a musical response in relation to, and with reference to, another child's musical idea in a group improvisation. Video data of the children's improvisations were sampled and analysed using multimodal video analysis, to gain a rich, nuanced picture of social and musical interactions and expressions of creativity during the children's improvisations. This involved coding for instances of CMA and S-MA in different musical parameters. In-depth interviews with the children's parents and teachers and children's talk from the workshops were subjected to Thematic Analysis. Two experts rated 39 clips of the children's improvisations as showing CMA, S-MA or neither and were interviewed to explore their views further. In parents' and teachers' interviews, the types of strategies they employed were shaped by whether or not they perceived a child as confident and able to share. Their conceptions of children's creativity were expressed through descriptions of their art activities as well as making up stories and role play. In contrast, music was not readily conceptualised as a creative activity, and being musical was understood as possessing technical skill on an instrument. All of the adults identified as non-musical, even though they participated in musical activities with the children. In children's talk, their understandings of improvising were mediated in distinct ways: previous musical experiences, expressive descriptions of their improvisations, and combinations of these with musical terms. Video analysis indicated that for 10/13 children, the number of CMA and S-MA events increased over the workshop programme. The range of musical parameters for improvising increased through the workshop programme. Between the experts' video clip ratings there was slight agreement for CMA (Kappa 0.21) and moderate agreement for S-MA (Kappa 0.5). They accounted for this by proposing that the teacher mediated some children's CMA events.
Video analysis showed children looking at the teacher before 57% of CMA events. The workshop model changed from a linear succession of tasks with a talk section at the end to an iterative cycle of playing and talking, as the original model was not effective in facilitating the children's discourse. This study is the first to use improvisation with a group of this size and age. The two novel constructs of CMA and S-MA offer a promising means to apprehend and evaluate young children's creativity and engagement in group improvisation. Children's perspectives in creative tasks are underreported; the distinct understandings of improvisation that emerged here are important in appreciating conceptual as well as musical development at this age. Parents and teachers value music and creativity, but their own musical identities may affect how they create music with children. The refined workshop model offers a flexible and responsive template; by capturing children's understanding of their playing, informed pedagogical choices can be made. Recommendations for future research include creating more CMA- and S-MA-based activities, and investigating effective teacher training for future delivery. More qualitative studies could investigate children's cognitive processes in group creativity. Music is a collection of skills; therefore, developing conceptualisations of music education as improving creativity, social skills and critical thinking presents a powerful argument for teaching and appreciating music in these ways from the start of young children's education.
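The inter-rater agreement figures quoted above are Cohen's kappa values, which correct raw agreement for agreement expected by chance. A minimal sketch of the computation for two raters labelling clips as CMA, S-MA or neither; the ratings below are invented for illustration, not the study's 39 expert-rated clips.

```python
# Minimal Cohen's kappa sketch for two raters. Ratings are invented.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from each rater's marginal label frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

rater_a = ['CMA', 'CMA', 'S-MA', 'neither', 'S-MA', 'CMA', 'neither', 'S-MA']
rater_b = ['CMA', 'S-MA', 'S-MA', 'neither', 'CMA', 'CMA', 'neither', 'neither']
print(cohens_kappa(rater_a, rater_b))
```

On the widely used Landis and Koch bands, kappa values of 0.41 to 0.60 count as moderate agreement, which is why the abstract's 0.5 for S-MA is described that way.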
- Published
- 2020
- Full Text
- View/download PDF
49. Response to comment on 'Parasite defensive limb movements enhance acoustic signal attraction in male little torrent frogs'
- Author
-
Longhui Zhao, Wouter Halfwerk, and Jianguo Cui
- Subjects
multimodal communication ,mating displays ,physical movements ,Amolops torrentis ,parasite ,Medicine ,Science ,Biology (General) ,QH301-705.5 - Abstract
Recently we showed that limb movements associated with anti-parasite defenses can enhance acoustic signal attraction in male little torrent frogs (Amolops torrentis), which suggests a potential pathway for physical movements to become co-opted into mating displays (Zhao et al., 2022). Anderson et al. argue for alternative explanations of our results and provide a reanalysis of part of our data (Anderson et al., 2023). We acknowledge some of the points raised and provide an additional analysis in support of our hypothesis.
- Published
- 2023
- Full Text
- View/download PDF
50. Gesture use in L1-Turkish and L2-English: Evidence from emotional narrative retellings.
- Author
-
Emir Özder, Levent, Özer, Demet, and Göksun, Tilbe
- Subjects
- *
EMOTION recognition , *NARRATIVES , *SPEECH - Abstract
Bilinguals tend to produce more co-speech hand gestures to compensate for reduced communicative proficiency when speaking in their L2. We here investigated L1-Turkish and L2-English speakers' gesture use in an emotional context. We specifically asked whether and how (1) speakers gestured differently while retelling L1 versus L2 and positive versus negative narratives and (2) gesture production during retellings was associated with speakers' later subjective emotional intensity ratings of those narratives. We asked 22 participants to read and then retell eight emotion-laden narratives (half positive, half negative; half Turkish, half English). We analysed gesture frequency during the entire retelling and during emotional speech only (i.e., gestures that co-occur with emotional phrases such as "happy"). Our results showed that participants produced more representational gestures in L2 than in L1; however, they used more representational gestures during emotional content in L1 than in L2. Participants also produced more co-emotional speech gestures when retelling negative than positive narratives, regardless of language, and more beat gestures co-occurring with emotional speech in negative narratives in L1. Furthermore, using more gestures when retelling a narrative was associated with increased emotional intensity ratings for narratives. Overall, these findings suggest that (1) bilinguals might use representational gestures to compensate for reduced linguistic proficiency in their L2, (2) speakers use more gestures to express negative emotional information, particularly during emotional speech, and (3) gesture production may enhance the encoding of emotional information, which subsequently leads to the intensification of emotion perception. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF