2013
DOI: 10.1093/restud/rdt044
Inference on Treatment Effects after Selection among High-Dimensional Controls

Cited by 1,234 publications
(1,178 citation statements)
References 41 publications
“…Second, when applying causal inference models to big data, researchers can draw on high-dimensional econometric and machine learning techniques, such as the LASSO (least absolute shrinkage and selection operator), the post-double-selection method, random forests, and bagging (bootstrap aggregating), to handle large data sets. Interested readers can refer to Tibshirani (1996), Belloni et al. (2013, 2014), Varian (2014), Athey and Imbens (2017), and Wager and Athey (2017) for discussions of these methods. These methods have yet to see widespread application.…”
Section: Results
confidence: 99%
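The post-double-selection method mentioned above can be sketched in a few lines. This is a minimal illustrative example, not the cited authors' code: the simulated data, variable names, and use of scikit-learn's cross-validated lasso (rather than the plug-in penalty of Belloni et al.) are all assumptions for the sketch.

```python
# Post-double-selection sketch: select controls that predict the outcome,
# select controls that predict the treatment, then run OLS of the outcome
# on the treatment plus the union of the two selected sets.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p = 500, 100
X = rng.standard_normal((n, p))                            # high-dimensional controls
d = X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n)       # treatment depends on a few controls
y = 1.0 * d + X[:, 0] - X[:, 2] + rng.standard_normal(n)   # true treatment effect = 1

# Step 1: lasso of outcome on controls -> selected set S_y
sel_y = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)
# Step 2: lasso of treatment on controls -> selected set S_d
sel_d = np.flatnonzero(LassoCV(cv=5).fit(X, d).coef_)
# Step 3: OLS of y on d plus the union of the selected controls
union = sorted(set(sel_y) | set(sel_d))
Z = np.column_stack([d, X[:, union]])
ols = LinearRegression().fit(Z, y)
effect = ols.coef_[0]  # estimated treatment effect, close to 1 here
```

Taking the union of the two selected sets is the key design choice: a control dropped in one lasso step but needed in the other is still retained, which is what protects the final OLS step against omitted-variable bias from imperfect model selection.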
“…Here, we report differences for all variables that are potentially included. Recall that only the subgroup of variables chosen by the double selection procedure (Belloni et al., 2014) is used in the propensity score estimations. See Table A1. [A long list of the selected control-variable interaction terms follows; the subscript 0 indicates the time period.]…”
Section: Results
confidence: 99%
“…This procedure works if the "approximate sparsity assumption" (Belloni et al., 2014) holds, which, stated verbally, implies the following: the subset chosen by the procedure described above yields an approximation of the true relationship between outcome and controls whose approximation error is sufficiently small. Thus, if the CIA holds given the full set of controls and their interactions, it also approximately holds for the chosen subset of controls.…”
Section: Control Variables
confidence: 99%
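The approximate sparsity condition invoked above can be written out as follows; the notation here is a sketch in the spirit of Belloni et al. (2014), not a verbatim reproduction of their assumptions.

```latex
% Outcome depends on treatment d_i and an unknown function of controls x_i:
y_i = d_i \alpha_0 + g(x_i) + \zeta_i,
\qquad
g(x_i) = x_i'\beta_0 + r_i,
% Approximate sparsity: few nonzero coefficients, small approximation error
\|\beta_0\|_0 \le s \ll n,
\qquad
\Bigl(\tfrac{1}{n}\sum_{i=1}^{n} r_i^2\Bigr)^{1/2} \le C\sqrt{s/n}.
```

In words: a small subset of the many candidate controls captures the true relationship up to an error $r_i$ that vanishes fast enough to be negligible for inference on $\alpha_0$.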
“…These algorithms do detect structure in ŷ: when predictive quality is high, some structure must have been found. Some econometric results also show the converse: when there is structure, it will be recovered, at least asymptotically (for example, for prediction consistency of LASSO-type estimators in an approximately sparse linear framework, see Belloni, Chernozhukov, and Hansen 2011). On the other hand, we have seen the dangers of naively interpreting the estimated β̂ parameters as indicating the discovered structure.…”
Section: Recovering Structure: Estimation (β̂) vs. Prediction (ŷ)
confidence: 91%
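The gap between prediction quality and coefficient recovery described above is easy to reproduce. The following is an illustrative sketch with made-up data, not an example from the cited paper: with two nearly collinear regressors, a lasso fit can predict extremely well while loading the coefficient mass on either regressor arbitrarily, so β̂ need not resemble the true (1, 1).

```python
# Good prediction does not imply recovered structure: with near-duplicate
# regressors, the lasso fit is excellent but the coefficient split between
# the two columns is essentially arbitrary.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n = 1000
x1 = rng.standard_normal(n)
x2 = x1 + 0.01 * rng.standard_normal(n)      # x2 is almost identical to x1
y = x1 + x2 + 0.1 * rng.standard_normal(n)   # truth: both coefficients equal 1

X = np.column_stack([x1, x2])
fit = Lasso(alpha=0.05).fit(X, y)
r2 = fit.score(X, y)
# r2 is near 1, yet fit.coef_ may put nearly all weight on one column;
# only the sum of the two coefficients is pinned down by the data.
```

Prediction consistency only requires the fitted combination x₁β̂₁ + x₂β̂₂ to approximate the truth, which constrains β̂₁ + β̂₂ but not the individual coefficients, hence the warning against structural readings of β̂.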