The effect of unimodal and multimodal information in a low-level detection task in younger and older adults (face-to-face - lab experiment)
- Publication Year : 2024
- Publisher : Open Science Framework, 2024.
Abstract
- Previous research has shown an interaction between age and modality in low-level detection tasks, such that older adults benefit more from multisensory information than young adults (Laurienti et al., 2006; Hugenschmidt et al., 2009). In contrast, our lab (OSF Registries | The effect of unimodal and multimodal information on lexical decision and free recall in younger and older adults) found a multimodal benefit over unimodal information for both age groups in lexical decision accuracy and memory recall. The discrepancy may have arisen because our multimodal condition was not sufficiently redundant: both the auditory and visual targets were the same language-based word stimuli, likely processed by language regions of the brain in both modalities. This differs from the work of Laurienti’s lab, which paired the actual colour red or blue with the auditory word in the multimodal condition (Laurienti et al., 2006; Hugenschmidt et al., 2009).
Our lab attempted to replicate Laurienti’s paradigm, which requires a response to the colour of a disk that is red or blue (visual colour), the words red or blue (auditory speech), or both of the above (multimodal: colour-speech) (OSF Registries | The effect of unimodal and multimodal information in a low-level detection task in younger and older adults). We included additional conditions in which the red and blue disks were replaced with the words ‘red’ and ‘blue’, either alone (visual word) or in combination with the spoken words (multimodal: word-speech). We hypothesised that young and older adults would benefit equally from the multimodal stimuli when both modalities contain language information, but that older adults would benefit more than young adults when the multimodal stimuli contribute language information that is not present in the visual stimulus. The results showed no interaction between age group (young, old) and modality (auditory speech, visual colour, visual words, audiovisual colour-speech, audiovisual word-speech) for accuracy or response time. However, a multimodal benefit over comparable unimodal information was found for both age groups in response time. Therefore, we did not replicate the multisensory boost for older adults relative to young adults (Laurienti et al., 2006).
The lack of congruence between our findings and those of Laurienti et al. (2006) prompted our lab to contact the corresponding author. After helpful discussions with the author, our lab aimed to directly replicate Laurienti et al. (2006) by including task-irrelevant green stimuli (OSF Registries | The effect of unimodal and multimodal information in a low-level detection task in younger and older adults (Experiment 3)). Task-irrelevant green stimuli were to be ignored by participants when presented unimodally (visual colour or auditory speech). In contrast, during a multimodal (colour-speech) presentation, the green stimulus was only ever presented alongside a blue or red stimulus (i.e., there were no congruent green multisensory trials), and in this instance participants were instructed to respond to the red or blue stimulus and ignore the green. Therefore, Laurienti’s paradigm requires participants to respond whenever a disk is red or blue (visual colour), the spoken word is red or blue (auditory speech), or red or blue appears in at least one modality in the multimodal (visual colour - auditory speech) presentation.
We hypothesised that older adults would disproportionately benefit from the integration of multisensory information in comparison to young adults. However, similar to our lab’s previous findings, the results did not mirror those of Laurienti et al. (2006); rather, a multimodal benefit over the visual condition was found for both age groups in response time. The key difference between our lab’s replication and Laurienti et al. (2006) is that our study was conducted online, whereas Laurienti et al.’s study was laboratory based, so the discrepancy between the findings could be due to the two testing formats. The current study will replicate Laurienti et al.’s (2006) two-alternative forced-choice discrimination task with in-lab testing. In line with the findings of Laurienti et al. (2006), we hypothesise that older adults will disproportionately benefit from the integration of multisensory information in comparison to young adults.
- Subjects : Social and Behavioral Sciences
Details
- Database : OpenAIRE
- Accession number : edsair.doi...........7317e536e1cf19c61137df9c76bfa67e
- Full Text : https://doi.org/10.17605/osf.io/fd9w8