The present study rigorously tests whether an arbitrary stimulus that signals threat affects attentional selection and perception. Thirty-four volunteers completed a spatial-emotional cueing paradigm to examine how perceptual sensitivity (d') and response times (RTs) were affected by a threatening stimulus. On each side of fixation, 2 colored circles were presented as cues, followed by 2 Gabor patches, 1 of which was tilted and served as the target. The color of 1 of the cues was paired with an electric shock, while the others remained neutral. The target could be presented at the location of the threat-associated cue (Valid), on the opposite side (Invalid), or following neutral cues. Stimulus onset asynchrony (SOA) between cue and target was either 100 ms or 1,000 ms. Results showed increased perceptual sensitivity (d') and faster RTs for targets appearing at the Valid location relative to the Invalidly cued location, suggesting that immediately after cue presentation, attention was captured by the threat-associated cue. Crucially, following this initial exogenous capture, perceptual sensitivity was also enhanced at the long SOA, suggesting that attention lingered volitionally at the location that had previously contained the threat-associated stimulus. The current results show an effect of threatening stimuli on perceptual sensitivity, providing unequivocal evidence that threatening stimuli modulate the efficacy of sensory processing.
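For readers unfamiliar with the measure, the perceptual sensitivity index d' reported in this and the following abstract is, in standard signal detection theory, derived from the hit rate H and the false-alarm rate F; this definition is background to the abstracts rather than taken from the studies themselves, and the exact computation in a discrimination task may differ:

d' = \Phi^{-1}(H) - \Phi^{-1}(F),

where \Phi^{-1} denotes the inverse of the standard normal cumulative distribution function, so higher d' reflects better separation of target-present from target-absent (or correct from incorrect) responses independent of response bias.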
Attentional selection depends on the interaction between exogenous (stimulus-driven), endogenous (goal-driven), and selection-history (experience-driven) factors. While endogenous and exogenous biases have been widely investigated, less is known about their interplay with value-driven attention. The present study investigated the interaction between reward-history and goal-driven biases on perceptual sensitivity (d') and response time (RT) in a modified cueing paradigm presenting two coloured cues, followed by sinusoidal gratings. Participants responded to the orientation of one of these gratings. In Experiment 1, one cue signalled reward availability but was otherwise task irrelevant. In Experiment 2, the same cue signalled reward and indicated that the target would most likely appear on the opposite side of the display. This design introduced a conflict between reward-driven biases attracting attention and goal-driven biases directing it away. Attentional effects were examined by comparing trials in which cue and target appeared at the same versus opposite locations. Two interstimulus interval (ISI) levels were used to probe the time course of attentional effects. Experiment 1 showed performance benefits at the location of the reward-signalling cue and costs at the opposite location for both ISIs, indicating value-driven capture. Experiment 2 showed performance benefits only at the long ISI, when the target appeared opposite to the reward-associated cue; at the short ISI, only performance costs were observed. These results reveal the time course of these biases, indicating that reward-driven effects influence attention early but can be overcome later by goal-driven control. This suggests that reward-driven biases are integrated as attentional priorities, just as exogenous and endogenous factors are.
The current eye-tracking study examined the influence of reward on oculomotor performance, and the extent to which learned stimulus-reward associations interacted with voluntary oculomotor control, using a modified paradigm based on the classical antisaccade task. Participants were shown two equally salient stimuli simultaneously, a gray and a colored circle, and were instructed to make a fast saccade to one of them. During the first phase of the experiment, participants made a fast saccade toward the colored stimulus, and their performance determined a cash bonus. During the second phase, participants made a saccade toward the gray stimulus, with no rewards available. On each trial, one of three colors was presented, each associated with high, low, or no reward during the first phase. Results from the first phase showed improved accuracy and shorter saccade latencies on high-reward trials, while those from the second phase replicated effects typical of the antisaccade task, namely decreased accuracy and increased latency, despite the absence of abrupt asymmetric onsets. Crucially, performance differences between phases revealed longer latencies and less accurate saccades during the second phase for high-reward trials compared with low- and no-reward trials. Further analyses indicated that oculomotor capture by reward signals is found mainly for saccades with short latencies, whereas this automatic capture can be overridden through voluntary control at longer latencies. These results highlight the flexibility and adaptability of the attentional system, and the role of reward in modulating this plasticity. NEW & NOTEWORTHY Typically, in the antisaccade task, participants need to suppress an automatic orienting reflex toward a suddenly appearing peripheral stimulus. Here, we introduce an alternative antisaccade task without such abrupt onsets. We replicate well-known antisaccade effects (more errors and longer latencies), demonstrating the role of reward in developing selective oculomotor biases. Results highlight how reward and selection history facilitate the development of automatic biases from goal-driven behavior, and they suggest that this process is sensitive to individual differences in impulsivity.
Recent work in Human-Robot Interaction (HRI) investigates the role of human users as teachers from whom robots can flexibly learn new personalised skills through interaction. However, existing human-robot teaching methods remain largely unintuitive for the end user and require significant effort to adapt to the way the robot learns. This paper envisions the use of dog training methods as a starting point for HRI researchers to develop more intuitive interactions between human teachers and robot learners. We provide a design framework (called FETCH-R) aimed at guiding the conception of interactions between human teachers and robot learners inspired by dog training. This work paves the way towards the use of animal training as an inspiration for human-robot teaching protocols that promote engagement and ease of use and foster human-robot relationships.