Why are dual-task costs reduced with ideomotor (IM) compatible tasks (Greenwald & Shulman, 1973; Lien, Proctor, & Allen, 2002)? In the present experiments, we first examine three measures of single-task performance (pure single-task blocks, mixed blocks, and long stimulus onset asynchrony [SOA] trials in dual-task blocks) and two measures of dual-task performance (simultaneous stimulus presentation blocks and simultaneous stimulus presentation trials in blocks with mixed SOAs), and show that these measures produce different estimates of the dual-task cost. Next, we examine whether the near elimination of costs can be explained by assuming that one or both of the tasks bypasses capacity-limited central operations. The results indicate that both tasks must be IM-compatible to nearly eliminate dual-task costs, suggesting that the relationship between the tasks plays a critical role in the efficiency of overlapping performance.
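As a rough illustration of why the choice of baseline matters, the following minimal Python sketch computes a cost estimate for each pairing of the two dual-task measures with the three single-task baselines. All reaction times are invented placeholders, not data from the experiments.

    # Hypothetical mean reaction times (ms) for one task, by measurement context.
    single_task_rt = {
        "pure_block": 450,    # single-task trials in pure single-task blocks
        "mixed_block": 480,   # single-task trials mixed among dual-task trials
        "long_soa": 500,      # dual-task trials with a long SOA
    }
    dual_task_rt = {
        "simultaneous_block": 530,  # blocks with only simultaneous presentation
        "mixed_soa_trials": 550,    # simultaneous trials in mixed-SOA blocks
    }

    # Each baseline/dual-task pairing yields a different cost estimate.
    for dual_name, dual in dual_task_rt.items():
        for single_name, single in single_task_rt.items():
            print(f"{dual_name} vs. {single_name}: cost = {dual - single} ms")

Because the single-task baselines themselves differ, the same dual-task performance yields systematically different cost estimates, which is the measurement point the abstract makes.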
Dual-task costs can be greatly reduced or even eliminated when both tasks use highly compatible S-R associations. According to Greenwald (Journal of Experimental Psychology: Human Perception and Performance, 30, 632-636, 2003), this occurs because the appropriate response can be accessed without engaging performance-limiting response-selection processes, a proposal consistent with the embodied cognition framework in that it suggests that stimuli can automatically activate motor codes (e.g., Pezzulo et al., New Ideas in Psychology, 31(3), 270-290, 2013). To test this account, we reversed the stimulus-response mappings for one or both tasks, so that some participants had to "do the opposite" of what they perceived. In these reversed conditions, stimuli resembled the environmental outcome of the alternative (incorrect) response. Nonetheless, reversed tasks were performed without costs, even when paired with an unreversed task. This finding suggests that the separation of the central codes across tasks (e.g., Wickens, 1984) is more critical than the specific S-R relationships; dual-task costs can be avoided when the tasks engage distinct modality-based systems.
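As a concrete, purely hypothetical illustration of the mapping-reversal manipulation described above, the sketch below encodes a compatible and a reversed S-R mapping as lookup tables; the stimulus and response labels are invented.

    # IM-compatible mapping: produce the response that matches the stimulus.
    compatible = {"hear_A": "say_A", "hear_B": "say_B"}

    # Reversed mapping: "do the opposite" of what is perceived; each stimulus
    # now resembles the environmental outcome of the incorrect response.
    reversed_mapping = {"hear_A": "say_B", "hear_B": "say_A"}

    def respond(stimulus, mapping):
        """Return the response required for a stimulus under a given mapping."""
        return mapping[stimulus]

    assert respond("hear_A", compatible) == "say_A"
    assert respond("hear_A", reversed_mapping) == "say_B"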
Studies investigating the effect of emotional expression on spatial orienting to a gazed-at location have produced mixed results. The present study investigated the role of affective context in the integration of emotion processing and gaze-triggered orienting. In three experiments, a face gazed nonpredictively to the left or right, and then its expression became fearful or happy. Participants identified (Experiments 1 and 2) or detected (Experiment 3) a peripheral target presented 225 or 525 ms after the gaze cue onset. In Experiments 1 and 3 the targets were either threatening (a snarling dog) or nonthreatening (a smiling baby); in Experiment 2 the targets were neutral. With emotionally valenced targets, the gaze-cuing effect was larger when the face was fearful compared to happy, but only with the longer cue-target interval. With neutral targets, there was no interaction between gaze and expression. Our results indicate that a meaningful context optimizes attentional integration of gaze and expression information.

Keywords: gaze direction; emotional expression; visual attention; affective context

Dynamic information from faces provides us with a rich source of data about our environment. For example, shifts in other people's direction of gaze tell us where they are attending and can serve to direct our attention to potentially important objects or events that might be outside our current line of sight. Moreover, other people's facial expressions can indicate how they feel about the object or event to which they are attending. Being sensitive to these social visual cues should help us respond more efficiently to events when it is advantageous to react quickly.

There is ample evidence that both gaze direction and emotional expression are processed quickly and automatically. Numerous attentional cuing studies have demonstrated that gaze direction cues can trigger an automatic shift of spatial attention to a gazed-at location (e.g., Driver et al., 1999; Friesen & Kingstone, 1998). This orienting effect occurs even when the gaze direction cues are not predictive of target location, and when the interval between the onset of the gaze cue and the onset of the target (stimulus onset asynchrony, SOA) is very short. Similarly, many studies have shown that facial emotional expression is processed quickly and automatically (e.g., Batty & Taylor, 2003; Eimer & Holmes, 2007; for a review, see Vuilleumier & Pourtois, 2007).

An important outstanding question is when and how gaze direction information and facial expression information are integrated. It seems reasonable to expect that humans would have the ability to combine these two sources of information for optimal processing of social facial signals. In particular, seeing another person looking off to the side with a frightened expression should enhance one's natural tendency t...
Implicit learning in the serial reaction time (SRT)
Dual-task costs are often significantly reduced or eliminated when both tasks use compatible stimulus-response (S-R) pairs. Either by design or unintentionally, the S-R pairs used in dual-task experiments that produce small dual-task costs typically have two properties that may reduce dual-task interference. One property is that they are easy to keep separate; specifically, one task is often visual-spatial and contains little verbal information, while the other task is primarily auditory-verbal and has no significant spatial component. The other property is that the two sets of S-R pairs are often compatible at the set level; specifically, the collection of stimuli for each task is strongly related to the collection of responses for that task, even if there is no direct correspondence between the individual items in the sets. In this paper, we directly test which of these two properties drives the absence of large dual-task costs. We used stimuli (images of hands and auditory words) that, when previously paired with responses (button presses and vocal utterances), produced minimal dual-task costs, but we altered the hand shapes in the images and the specific auditory words. If set-level compatibility drives efficient performance, then these changes should not affect dual-task costs. However, we found large changes in the dual-task costs depending on the specific stimuli and responses. We conclude that set-level compatibility is not sufficient to minimize dual-task costs. We connect these findings to divisions within the working memory system and discuss implications for understanding dual-task performance more broadly.
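To make the logic of this test concrete, the following hypothetical Python sketch compares dual-task costs across stimulus variants; the condition names and reaction times are invented, not data from the paper.

    # (single-task RT, dual-task RT) in ms for each stimulus/response variant.
    conditions = {
        "original_hands_and_words": (460, 475),
        "altered_hand_shapes":      (470, 560),
        "altered_auditory_words":   (465, 545),
    }

    for name, (single, dual) in conditions.items():
        print(f"{name}: dual-task cost = {dual - single} ms")

Under a pure set-level-compatibility account, the costs should be similar across variants; large differences instead implicate the specific stimuli and responses.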