2022
DOI: 10.1111/psyp.14059
Attention to affect 2.0: Multiple effects of emotion and attention on event‐related potentials of visual word processing in a valence‐detection task

Abstract: Here we continue recent work on the specific mental processes engaged in a valence-detection task. Fifty-seven participants responded to one predefined target level of valence (negative, neutral, or positive), and ignored the remaining two levels. This enables more precise fine-tuning of neuronal pathways, compared to valence categorization where attention is divided between different levels of valence. Our group recently used valence detection with emotional words. Posterior P1 and N170 effects in the event-r…

Cited by 13 publications (23 citation statements); references 102 publications.
“…Since the respective arousal ratings were not significantly different for the two kinds of emotional expressions (happy vs. fearful), the additional advantage for happy faces seems to be genuinely driven by valence. This bias toward naturalistic positive faces is well in line with the literature on emotional categorization (Happy Superiority Effect; Kauschke et al., 2019; Nummenmaa & Calvo, 2015) and has only recently been validated in a similar study (Weidner et al., 2022; for an analogous pattern in affective words, see Gibbons, Kirsten, & Seib-Pfeifer, 2022). However, when using affective nonface pictures in a detection task, participants responded fastest to negative pictures.…”
Section: Behavioral Effects (supporting)
confidence: 85%
“…LPP amplitude should both be larger for emotional than neutral faces and for target in comparison with nontarget faces. Besides these main effects, the processing boost of emotional expressions in target faces should still be visible in the LPP, consistent with earlier studies using other affective stimuli (words and images; Gibbons, Kirsten, & Seib-Pfeifer, 2022; Schindler & Kissler, 2016; Schupp et al., 2007). Regarding the behavioral aspect of the detection task, we expected to find the fastest hit responses to happy faces, which would be in line with a presumptive bias toward happy faces in tasks requiring the extraction of emotions (Kauschke et al., 2019).…”
Section: Introduction (supporting)
confidence: 89%
“…In this regard, late emotion effects for negative words were absent in several studies during structural (font consistency; Sommer 2009b), color (González-Villar et al. 2014), lexical (Scott et al. 2009), or semantic (Kissler et al. 2009) tasks. Further, for the LPP, a study reported increasing effects when negative words were target-relevant as compared to neutral words (Schindler and Kissler 2016; but see Gibbons et al. 2022). This pattern of findings would be in line with a recent meta-analysis across different visual stimuli (with smaller effect sizes for word stimuli) that reported no reliable late amplitude effects during non-emotional tasks (e.g., watching, reading, or classification according to non-emotional attributes), but reliable effects during explicit emotion decision tasks (Yuan et al. 2019).…”
Section: Introduction (mentioning)
confidence: 99%