2017
DOI: 10.5709/acp-0209-2

Is Attentional Resource Allocation Across Sensory Modalities Task-Dependent?

Abstract: Human information processing is limited by attentional resources. That is, via attentional mechanisms, humans select a limited amount of sensory input to process while other sensory input is neglected. In multisensory research, a matter of ongoing debate is whether there are distinct pools of attentional resources for each sensory modality or whether attentional resources are shared across sensory modalities. Recent studies have suggested that attentional resource allocation across sensory modalities is in par…

Cited by 127 publications (102 citation statements). References 128 publications (276 reference statements).
“…As described previously, performing two tasks in a common sensory modality (e.g., visual) leads to substantial interference with task performance (Arrighi, Lunardi et al., 2011; Wahn and König, 2017). The results observed in Experiment 2 seem to support that this type of combination impacts training effectiveness as well, compared to when the two tasks are from different sensory modalities (Experiment 1).…”
Section: Discussion (supporting)
confidence: 69%
“…Previously, Arrighi and colleagues (2011) found that a MOT task selectively interfered with a visual discrimination task sharing common attentional resources, compared to an auditory discrimination task (Arrighi, Lunardi et al., 2011). In their review, Wahn and König (2017) also argued that simultaneously performing a visual spatial attention task and an object-based attention task recruits partially shared attentional resources, which can explain the greater interference compared to when the tasks are from different sensory modalities (Wahn and König, 2017). Based on this evidence, it is suggested that the perceptual nature of the combined task in Experiment 2 was responsible for the greater interference compared to Experiment 1 (motor decision-making task).…”
Section: Discussion (mentioning)
confidence: 99%
“…Participants performed the visual search task faster when receiving the viewing information via the tactile or auditory modality as compared to the visual modality. This multisensory benefit has been explained by the greater availability of attentional resources across sensory modalities compared to within one sensory modality. These results suggest that an already existing collective benefit in joint visuospatial tasks can be enhanced by exchanging information between co-actors via sensory modalities other than vision.…”
Section: Joint Visuospatial Tasks (mentioning)
confidence: 81%
“…This raises the critical question of whether allocation of attention and encoding of signal probability are performed in a modality-specific fashion or interactively across sensory modalities. Previous research has suggested that spatial attention relies on cognitive resources that are partially shared across sensory modalities (Eimer & Driver, 2001; Wahn & König, 2015, 2017). For instance, Spence & Driver (1996) manipulated spatial attention by presenting signals with a higher probability in the attended relative to the unattended hemifield in one modality only (i.e., the primary modality).…”
Section: Introduction (mentioning)
confidence: 99%