Harvard Data Science Review 2021
DOI: 10.1162/99608f92.d07b8d16

Individualized Decision Making Under Partial Identification: Three Perspectives, Two Optimality Results, and One Paradox

Abstract: Unmeasured confounding is a threat to causal inference and gives rise to biased estimates. In this paper, we consider the problem of individualized decision making under partial identification. Firstly, we argue that when faced with unmeasured confounding, one should pursue individualized decision making using partial identification in a comprehensive manner. We establish a formal link between individualized decision making under partial identification and classical decision theory by considering a lower bound…

Cited by 5 publications (4 citation statements)
References 40 publications
“…47 Importantly, bounds on the additive effect can be used in formal decision theoretic approaches, even if these bounds are wide or cover null effects.48,49 Furthermore, if the investigator is willing to invoke assumptions about the probability of exposure, the bounds will be narrower, as we describe in Section External Data and Sensitivity Analysis.…”
Section: Discussion
confidence: 99%
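The statement above describes feeding interval bounds on an additive effect into a formal decision rule. The sketch below illustrates that idea under simple assumptions of my own choosing: a binary treatment, an outcome bounded in [0, 1], Manski-style no-assumptions bounds, and a minimax-regret rule over the identified interval. The data, function names, and rule are illustrative and are not taken from the cited papers.

```python
# Minimal sketch: worst-case (no-assumptions) bounds on the additive effect
# E[Y(1)] - E[Y(0)] for a bounded outcome, used inside a minimax-regret rule.
import numpy as np

def no_assumption_bounds(y, a):
    """Bounds on E[Y(1)] - E[Y(0)] assuming only 0 <= Y <= 1."""
    p1 = a.mean()                 # P(A = 1)
    p0 = 1.0 - p1                 # P(A = 0)
    m1 = y[a == 1].mean()         # E[Y | A = 1]
    m0 = y[a == 0].mean()         # E[Y | A = 0]
    # E[Y(1)] lies in [m1*p1, m1*p1 + p0]; E[Y(0)] lies in [m0*p0, m0*p0 + p1].
    lo = m1 * p1 - (m0 * p0 + p1)
    hi = (m1 * p1 + p0) - m0 * p0
    return lo, hi                 # interval always has width 1

def minimax_regret_decision(lo, hi):
    """Treat iff the worst-case regret of treating is smaller."""
    regret_treat = max(0.0, -lo)      # worst case: effect is as harmful as allowed
    regret_no_treat = max(0.0, hi)    # worst case: effect is as beneficial as allowed
    return "treat" if regret_treat < regret_no_treat else "do not treat"

# Toy data; confounding is deliberately not modeled here.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=5_000)
y = rng.binomial(1, 0.3 + 0.2 * a)
lo, hi = no_assumption_bounds(y, a)
print(lo, hi, minimax_regret_decision(lo, hi))
```

Note that the rule returns a decision even when the interval covers zero, which is the point made in the quoted statement; extra assumptions (for example, about the probability of exposure) would only shrink the interval and sharpen the decision.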
“…In comparison to Equation (6), we consider a general alternative policy rather than only the best-possible one. As we show below, the choice of this alternative policy will lead to different objectives and optimal solutions (see Cui, 2021, for a recent general discussion).…”
Section: Partial Identification and Minimizing Worst-Case Regret
confidence: 99%
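The claim that the choice of alternative (comparison) policy changes both the objective and its optimizer can be seen with a small numerical illustration. The sketch below uses hypothetical per-subgroup effect bounds and two ad hoc comparison policies, the pointwise-best policy and a "never treat" baseline; none of the numbers or function names come from the citing paper.

```python
# Given bounds [lo, hi] on a conditional additive effect, compare the
# minimax-regret rules induced by two different comparison policies.

def regret_vs_oracle(lo, hi, treat):
    # Worst-case regret relative to the pointwise best (oracle) policy.
    return max(0.0, -lo) if treat else max(0.0, hi)

def regret_vs_never_treat(lo, hi, treat):
    # Worst-case regret relative to a fixed "never treat" comparison policy.
    return -lo if treat else 0.0

bounds = [(-0.10, 0.40), (0.05, 0.25), (-0.30, 0.20)]  # hypothetical subgroups
for lo, hi in bounds:
    oracle = "treat" if regret_vs_oracle(lo, hi, True) < regret_vs_oracle(lo, hi, False) else "no treat"
    baseline = "treat" if regret_vs_never_treat(lo, hi, True) < regret_vs_never_treat(lo, hi, False) else "no treat"
    print(f"bounds=({lo:+.2f}, {hi:+.2f})  vs oracle: {oracle:8s}  vs never-treat: {baseline}")
```

For the first subgroup the oracle comparison prescribes treatment (a midpoint-style rule) while the never-treat comparison does not (treat only when the lower bound is positive), which is exactly the divergence the quoted statement points to.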
“…The maximum regret objective in Theorem 1 is a weighted average of the expected potential outcomes under treatment and no treatment plus a proxy for the cost. The choice of alternative policy determines these weights c_0(·), c_1(·) and cost c(·), all of which potentially vary with the covariates X (see Cui, 2021, for additional discussion on how the choice of comparison policy determines the form of the objective in instrumental variable settings). The second line of Equation (11) shows how to write the worst-case regret R_sup(π, ·) in terms of observable data using the scoring functions Γ_w (either IPW or DR) discussed in Section 2.3.…”
Section: Worst-Case Regret Relative to Different Alternative Policies
confidence: 99%
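The statement refers to scoring functions Γ_w (IPW or DR) that express the regret objective in terms of observable data. Below is a hedged sketch of the IPW scores and of plugging them into a generic weighted-average-plus-cost objective; the weights c1, c0 and the cost term are placeholders chosen for illustration and do not reproduce the weights derived in the citing paper.

```python
# Sketch: IPW scoring functions and a generic weighted objective built from them.
import numpy as np

def ipw_scores(y, a, propensity):
    """Gamma_1 and Gamma_0: inverse-probability-weighted scores for Y(1), Y(0)."""
    gamma1 = a * y / propensity
    gamma0 = (1 - a) * y / (1 - propensity)
    return gamma1, gamma0

def weighted_objective(pi, gamma1, gamma0, c1, c0, cost):
    """Empirical weighted average of the two scores plus a treatment-cost proxy.

    pi: the candidate policy's treatment indicators/probabilities per unit.
    c1, c0, cost: covariate-dependent weights supplied by the caller; their
    exact form would come from the chosen comparison policy, not from here.
    """
    return np.mean(c1 * pi * gamma1 + c0 * (1.0 - pi) * gamma0 + cost * pi)

# Toy usage with made-up data and constant weights.
rng = np.random.default_rng(1)
n = 2_000
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-x))          # known or estimated propensity score
a = rng.binomial(1, e)
y = rng.binomial(1, 0.4 + 0.1 * a)
gamma1, gamma0 = ipw_scores(y, a, e)
pi = (x > 0).astype(float)            # an arbitrary candidate policy
print(weighted_objective(pi, gamma1, gamma0, c1=1.0, c0=1.0, cost=0.0))
```

A doubly robust (DR) variant would replace the raw IPW scores with augmented scores that also use outcome-regression estimates; the overall structure of the objective stays the same.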