2023
DOI: 10.1177/00491241231176850

Linear Probability Model Revisited: Why It Works and How It Should Be Specified

Abstract: A linear model is often used to find the effect of a binary treatment D on a noncontinuous outcome Y with covariates X. In particular, a binary Y gives the popular "linear probability model (LPM)," but the linear model is untenable if X contains a continuous regressor. This raises the question: what kind of treatment effect does the ordinary least squares (OLS) estimator of the LPM estimate? This article shows that the OLS est…
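To fix notation for what follows, here is a minimal sketch of the setup the abstract describes; the symbols α, β, γ, U are my own and need not match the paper's. The LPM is the linear regression of the outcome on the treatment and the covariates,

\[
  Y \;=\; \alpha + \beta D + X'\gamma + U, \qquad D \in \{0,1\},\quad Y \ \text{noncontinuous (e.g., binary)},
\]

and with potential outcomes Y^0, Y^1 and observed outcome Y = D·Y^1 + (1 − D)·Y^0, the abstract's question is which functional of the heterogeneous effect E(Y^1 − Y^0 | X) the OLS coefficient on D recovers when this linear model cannot hold exactly (for instance, when X contains a continuous regressor).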

Cited by 2 publications (1 citation statement); References 18 publications.
"…One might think that (1.4) and (1.5) are 'artifacts' due to the restrictive condition E(D_j|X) = L(D_j|X), but that is not the case. For binary D and any Y, Lee et al. (2023) showed that the estimand of the D-slope in the OLS of Y on (D, X) is a weighted average of E(Y^1 − Y^0|X) plus a bias, and the OLS D-slope is inconsistent because its estimand is not zero even when [E(Y^1 − Y^0|X) = 0. If E(D|X) = L(D|X) as in] Angrist (1998) and Angrist and Pischke (2009), however, then the bias is zero, and the OLS estimand becomes a weighted average of E(Y^1 − Y^0|X). Doing analogously, we imposed the condition…"
Section: Introduction (citation type: mentioning; confidence: 99%)
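Written out, the quoted result amounts to the decomposition below; this is my paraphrase in LaTeX of the statement above, not a formula copied from either paper, and the exact form of the weights in Lee et al. (2023) may differ.

\[
  \operatorname*{plim}\,\hat{\beta}_{D}
  \;=\; E\!\left[\,\omega(X)\,E\!\left(Y^{1}-Y^{0}\mid X\right)\right] \;+\; \text{bias},
  \qquad \omega(X)\ge 0,\quad E\!\left[\omega(X)\right]=1,
\]
\[
  \text{bias}=0 \quad\text{whenever}\quad E(D\mid X)=L(D\mid X),
  \ \text{the linear projection of } D \text{ on } (1,X).
\]

In the saturated-covariate setting stressed by Angrist (1998) and Angrist and Pischke (2009), E(D|X) = L(D|X) holds by construction, and the weights are commonly written as ω(X) = V(D|X)/E[V(D|X)], so the OLS D-slope targets a variance-weighted average of the conditional effects rather than the unweighted average treatment effect.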
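As a concrete illustration (not taken from either paper), the following minimal Python simulation sketch uses a single binary covariate and made-up parameter values, so that the regression of Y on (1, D, X) is saturated in X and E(D|X) = L(D|X) holds; under these assumptions the OLS D-slope should land near the variance-weighted average of the cell-level effects rather than the unweighted ATE.

# Minimal simulation sketch (illustrative only; all parameter values are invented).
# One binary covariate X, so regressing Y on (1, D, X) is saturated in X and
# E(D|X) = L(D|X) holds; the OLS D-slope should then approximate the
# variance-weighted average of the cell-level effects E(Y^1 - Y^0 | X = x).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

p_x = np.array([0.4, 0.6])     # P(X = 0), P(X = 1)
p_d = np.array([0.2, 0.7])     # propensity P(D = 1 | X = x)
p_y0 = np.array([0.3, 0.5])    # baseline P(Y^0 = 1 | X = x)
tau = np.array([0.10, 0.30])   # cell-level effect E(Y^1 - Y^0 | X = x)

x = rng.choice(2, size=n, p=p_x)
d = rng.binomial(1, p_d[x])                # D depends only on X: ignorability holds
y0 = rng.binomial(1, p_y0[x])              # binary potential outcomes
y1 = rng.binomial(1, p_y0[x] + tau[x])
y = np.where(d == 1, y1, y0).astype(float)

# OLS of Y on (1, D, X): the linear probability model with a saturated X.
Z = np.column_stack([np.ones(n), d, x])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)

# Angrist-style variance weights: w(x) proportional to P(X = x) * V(D | X = x).
w = p_x * p_d * (1.0 - p_d)
w = w / w.sum()

print("OLS D-slope:              %.4f" % coef[1])
print("Variance-weighted effect: %.4f" % float(w @ tau))
print("Unweighted ATE:           %.4f" % float(p_x @ tau))

With these invented parameters the variance-weighted effect is about 0.233 while the unweighted ATE is 0.220, so the simulated OLS D-slope should come out near the former, illustrating the weighting in the quoted result.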