2019
DOI: 10.1101/560011
Preprint

Negative Affect Induces Rapid Learning of Counterfactual Representations: A Model-based Facial Expression Analysis Approach

Abstract: Regret, an emotion comprising both a counterfactual, cognitive component and a negative, affective component, is one of the most commonly experienced emotions involved in decision making. For example, people often behave such that their decisions minimize potential regret and therefore maximize subjective pleasure. Importantly, functional accounts of emotion suggest that the experience and future expectation of regret should promote goal-oriented behavioral change. While many studies have confirmed the functional…

Cited by 9 publications (12 citation statements)
References 116 publications (246 reference statements)
“…In the present study, we found that when making decisions from experience, individuals are indeed risk seeking overall if the riskier and safer options have equal EVs and all outcomes are gains, conceptually replicating prior work (Ahn et al, 2012; Haines et al, 2021). Because the riskier option occasionally produced low outcomes that should have elicited the strongest feelings of regret in our task, this finding conflicts with the assumption of a convex Q (.)…”
Section: Discussion (supporting)
confidence: 88%
“…In summary, although there is a general tendency to choose options that minimize the probability of regret in experience-based decisions, structural features of the choice scenario, such as EV differences and the sign of outcomes, can diminish this tendency. More generally, our results add to a growing body of research suggesting that relative comparisons between obtained and forgone outcomes play a key role in guiding decision-making (Ahn et al, 2012; Coricelli et al, 2007; Haines et al, 2021; …, 1999). However, our findings diverge from standard regret theory applied to decisions from description, in which Q (.)…”
Section: Discussion (supporting)
confidence: 68%
“…Decades of research show clear links between facial expressions of emotion and cognitive processes in aggregate (see [56,57]), yet the dynamics between cognitive mechanisms and facial expressions are poorly understood in part due to difficulties accompanying manual coding. In fact, we are currently using computational modeling to explore cognition-expression relationships with the aid of CVML [58], which would be infeasible with manual coding of facial expressions. For example, in the current study it took less than three days to automatically extract AUs from 4,648 video recordings and train ML models to generate valence intensity ratings (using a standard desktop computer).…”
Section: Discussion (mentioning)
confidence: 99%