Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures.
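The joint estimation described above — fixed experimental effects together with subject-level variance/covariance components — can be sketched as a linear mixed model with random intercepts and random slopes per subject. A minimal Python sketch using statsmodels on simulated data follows; the single `spat` predictor and all numeric values are illustrative stand-ins, not the paper's actual spatial/object/attraction design:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate cueing-style data: each subject has a baseline mean RT and a
# subject-specific "spatial" cue-validity effect (hypothetical numbers).
rng = np.random.default_rng(0)
rows = []
for subj in range(20):
    intercept = 400 + rng.normal(0, 30)   # subject mean RT (ms)
    spatial = 30 + rng.normal(0, 10)      # subject spatial effect (ms)
    for _ in range(80):
        spat = int(rng.integers(0, 2))    # 1 = invalid-within-object trial
        rt = intercept + spatial * spat + rng.normal(0, 40)
        rows.append({"subj": subj, "spat": spat, "rt": rt})
df = pd.DataFrame(rows)

# Random intercept and random spatial slope per subject; their
# variance/covariance matrix is estimated jointly with the fixed effects,
# which is what allows effect-by-mean-RT correlations to be read off.
m = smf.mixedlm("rt ~ spat", df, groups="subj", re_formula="~spat").fit()
print(m.fe_params)   # fixed effects: overall intercept and spatial effect
print(m.cov_re)      # 2x2 subject-level variance/covariance matrix
```

The off-diagonal entry of `cov_re` is the (co)variance between subjects' mean RTs and their spatial effects — the quantity behind correlations such as "slower subjects show a larger spatial effect."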
Both social and material rewards play a crucial role in daily life and function as strong incentives for various goal-directed behaviors. However, it remains unclear whether the incentive effects of social and material reward are supported by common or distinct neural circuits. Here, we have addressed this issue by quantitatively synthesizing and comparing neural signatures underlying social (21 contrasts, 207 foci, 696 subjects) and monetary (94 contrasts, 1083 foci, 2060 subjects) reward anticipation. We demonstrated that social and monetary reward anticipation engaged a common neural circuit consisting of the ventral tegmental area, ventral striatum, anterior insula, and supplementary motor area, which are intensively connected during both task and resting states. Functional decoding findings indicate that this generic neural pathway mediates positive value, motivational relevance, and action preparation during reward anticipation, which together motivate individuals to prepare well for the response to the upcoming target. Our findings support the *
We investigated the effect of reward expectation on the processing of emotional words in two event-related potential (ERP) experiments. A cue indicating the reward condition of each trial (incentive vs. non-incentive) was followed by the presentation of a negative or neutral target word. Participants discriminated the emotional content of the target word in Experiment 1 and its color in Experiment 2, rendering the emotionality of the target task-relevant in Experiment 1 but task-irrelevant in Experiment 2. The negative bias effect, measured as the amplitude difference between ERPs to negative and neutral targets, was modulated by the task set. In Experiment 1, the P31 and the early posterior negativity showed a larger negative bias effect in the incentive condition than in the non-incentive condition. In Experiment 2, however, the P31 showed a diminished negative bias effect in the incentive condition compared with the non-incentive condition. These results indicate that reward expectation enhances top-down attention to task-relevant information: sensitivity to the emotional content of target words increases when emotionality is task-relevant, but differential brain responses to emotional words are reduced when their content is task-irrelevant.
Recognizing events and objects in a video sequence are two challenging tasks due to complex temporal structures and large appearance variations. In this paper, we propose a 4D human-object interaction model in which the two tasks jointly boost each other. The human-object interaction is defined in 4D space: i) the co-occurrence and geometric constraints between human pose and object in 3D space; ii) sub-event transitions and object coherence along the 1D temporal dimension. We represent the structure of events, sub-events, and objects in a hierarchical graph. For an input RGB-depth video, we design a dynamic programming beam search algorithm that simultaneously i) segments the video, ii) recognizes the events, and iii) detects the objects. For evaluation, we built a large-scale multi-view 3D event dataset containing 3,815 video sequences and 383,036 RGB-D frames captured by Kinect cameras. Experimental results on this dataset demonstrate the effectiveness of our method.
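The core search idea — keeping only the best few partial interpretations of the video at each step — can be illustrated with a generic beam search over per-frame sub-event labels. This is a minimal sketch, not the paper's algorithm: the frame scores stand in for the model's pose/object compatibility terms, and the label names are invented:

```python
def beam_search_segment(frame_scores, beam_width=3):
    """Assign each frame a sub-event label maximizing the total score,
    pruning to the top `beam_width` partial labelings at every frame.
    frame_scores[t][label] is the (hypothetical) score of `label` at
    frame t; a real model would add transition/coherence terms."""
    beams = [((), 0.0)]                     # (labels so far, running score)
    for scores in frame_scores:
        candidates = [
            (labels + (lab,), total + sc)   # extend each beam by each label
            for labels, total in beams
            for lab, sc in scores.items()
        ]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]     # prune to the best hypotheses
    return beams[0]                         # best labeling and its score

frames = [
    {"drink": 2.0, "pour": 0.5},
    {"drink": 1.5, "pour": 0.7},
    {"drink": 0.2, "pour": 1.9},
]
best_labels, best_score = beam_search_segment(frames)
print(best_labels)   # ('drink', 'drink', 'pour')
```

Consecutive identical labels in the returned tuple correspond to one temporal segment, so the best labeling doubles as a segmentation of the sequence into sub-events.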