Objective: This study examined the effectiveness of using informative peripheral visual and tactile cues to support task switching and interruption management. Background: Effective support for the allocation of limited attentional resources is needed for operators who must cope with numerous competing task demands and frequent interruptions in data-rich, event-driven domains. One prerequisite for meeting this need is to provide information that allows them to make informed decisions about, and before, (re)orienting their attentional focus. Method: Thirty participants performed a continuous visual task. Occasionally, they were presented with a peripheral visual or tactile cue that indicated the need to attend to a separate visual task. The location, frequency, and duration parameters of these cues represented the domain, importance, and expected completion time, respectively, of the interrupting task. Results: The findings show that the informative cues were detected and interpreted reliably. Information about the importance (rather than duration) of the task was used by participants to decide whether to switch attention to the interruption, indicating adherence to experimenter instructions. Erroneous task-switching behavior (nonadherence to experimenter instructions) was mostly caused by misinterpretation of cues. Conclusion: The effectiveness of informative peripheral visual and tactile cues for supporting interruption management was validated in this study. However, the specific implementation of these cues requires further work and needs to be tailored to specific domain requirements. Application: The findings from this research can inform the design of more effective notification systems for a variety of complex event-driven domains, such as aviation, medicine, or process control.
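The abstract above describes the cue encoding only at a high level and publishes no implementation. As a rough illustration, the sketch below (Python, with all names and parameter values assumed rather than taken from the study) shows one way an informative cue could map an interrupting task's domain, importance, and expected completion time onto cue location, pulse frequency, and duration, mirroring the mapping the Method section describes. It is a hypothetical sketch, not the authors' design.

```python
# Illustrative sketch only: the study does not publish code. This hypothetical
# encoding maps an interrupting task's attributes onto the cue parameters the
# abstract describes (location -> domain, frequency -> importance,
# duration -> expected completion time). All names and values are assumptions.
from dataclasses import dataclass
from enum import Enum


class Modality(Enum):
    PERIPHERAL_VISUAL = "peripheral_visual"
    TACTILE = "tactile"


@dataclass
class InterruptionCue:
    modality: Modality
    location: str        # encodes the interrupting task's domain
    pulse_hz: float      # pulse frequency encodes task importance
    duration_s: float    # cue duration encodes expected completion time


def encode_cue(modality: Modality, domain: str, importance: str,
               completion_time_s: float) -> InterruptionCue:
    """Map task attributes to cue parameters (hypothetical mapping)."""
    location_by_domain = {"navigation": "left", "communications": "right"}
    pulse_by_importance = {"low": 1.0, "medium": 2.0, "high": 4.0}
    return InterruptionCue(
        modality=modality,
        location=location_by_domain.get(domain, "center"),
        pulse_hz=pulse_by_importance.get(importance, 1.0),
        # Assumption: longer cues signal longer expected completion times.
        duration_s=min(2.0, completion_time_s / 10.0),
    )


if __name__ == "__main__":
    cue = encode_cue(Modality.TACTILE, "navigation", "high", completion_time_s=15.0)
    print(cue)
```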
The design of multimodal interfaces rarely takes into consideration recent data suggesting the existence of considerable crossmodal spatial and temporal links in attention. This can be partly explained by the fact that crossmodal links have been studied almost exclusively in spartan laboratory settings with simple cues and tasks. As a result, it is not clear whether they scale to more complex settings. To examine this question, participants in this experiment drove a simulated military vehicle and were periodically presented with lateralized visual indications marking locations of roadside mines and safe areas of travel. Valid and invalid auditory and tactile cues preceded these indications at varying stimulus-onset asynchronies (SOAs). The findings confirm that the location and timing of crossmodal cue combinations affect response time and accuracy in complex domains as well. In particular, presentation of crossmodal cues at SOAs below 500 ms and tactile cueing resulted in lower accuracy and longer response times.
Multimodal information presentation has been proposed as a means to support timesharing in complex data-rich environments. To ensure the effectiveness of this approach, it is necessary to consider performance effects of recently discovered crossmodal spatial and temporal links in attention, as well as their interaction with other performance-shaping factors. The main goals of this research were to confirm that performance effects of crossmodal links in spatial attention scale to complex environments and to examine how these effects vary as a function of cue modality, signal timing, and workload. In the present study, set in a driving simulation, spatially valid and invalid auditory and tactile cues preceded the presentation of visual targets at various stimulus-onset asynchronies (SOAs) and under different levels of workload induced by simulated wind gusts of varied intensity. The findings from this experiment confirm that visual target identification is, overall, more accurate and faster when targets are validly cued. Significant interactions were found between cue validity, SOA, and cue modality, such that valid tactile cueing was most beneficial at shorter (100-200 ms) SOAs, whereas valid auditory cueing resulted in faster responses than invalid cueing at 500 ms SOAs but slower responses at 1000 ms SOAs. Tactile error rates were significantly higher than auditory error rates at several SOAs, reflecting the interaction of modality and SOA. These findings were robust across all workload conditions. They highlight the need for context-sensitive information presentation and can inform the design of multimodal interfaces for a wide range of application domains.
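Neither of the two crossmodal-cueing abstracts includes experiment code. As a rough illustration of the cue-target paradigm they describe, the following sketch (Python, with assumed trial structure, SOA values, and validity proportion) generates trials in which a lateralized auditory or tactile cue precedes a visual target by a chosen SOA and is either spatially valid or invalid. It is an assumption-based sketch, not the authors' protocol.

```python
# Illustrative sketch only; not the authors' experimental code. A cue of one
# modality appears on one side, a visual target follows after the chosen SOA,
# and on "invalid" trials the cue appears on the side opposite the target.
# The SOA levels and validity proportion are assumptions for illustration.
import random
from dataclasses import dataclass

SOAS_MS = [100, 200, 500, 1000]      # SOA levels comparable to those reported
CUE_MODALITIES = ["auditory", "tactile"]
SIDES = ["left", "right"]


@dataclass
class Trial:
    cue_modality: str
    soa_ms: int
    target_side: str
    cue_side: str
    valid: bool


def make_trial(valid_probability: float = 0.8) -> Trial:
    """Build one cue-target trial; the cue side matches the target when valid."""
    target_side = random.choice(SIDES)
    valid = random.random() < valid_probability
    cue_side = target_side if valid else ("left" if target_side == "right" else "right")
    return Trial(
        cue_modality=random.choice(CUE_MODALITIES),
        soa_ms=random.choice(SOAS_MS),
        target_side=target_side,
        cue_side=cue_side,
        valid=valid,
    )


if __name__ == "__main__":
    for trial in (make_trial() for _ in range(5)):
        print(trial)
```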