2018
DOI: 10.1017/xps.2018.9
Avoiding Post-Treatment Bias in Audit Experiments

Abstract: Audit experiments are used to measure discrimination in a large number of domains (Employment: Bertrand et al. (2004); Legislator responsiveness: Butler et al. (2011); Housing: Fang et al. (2018)). Audit studies all have in common that they estimate the average difference in response rates depending on randomly varied characteristics (such as race or gender) of a requester. Scholars conducting audit experiments often seek to extend their analyses beyond the effect on response to the effects on the quality …

Cited by 82 publications (49 citation statements)
References 16 publications
“…Yet recent research has raised serious concerns with this latter practice. Specifically, using a post-treatment variable (such as a manipulation check or question timer) to re-estimate treatment effects, e.g., by removing those respondents deemed inattentive to the experiment or by interacting the treatment with the attentiveness measure, can introduce covariate imbalances between the randomized treatment and control groups, thereby biasing one's estimated treatment effect (Aronow, Baron, and Pinson 2019; Coppock 2019; Montgomery, Nyhan, and Torres 2018).…”
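The mechanism in the quotation above can be illustrated with a small simulation. This is a hypothetical sketch, not an analysis from any of the cited papers: the true treatment effect is zero, but because treatment itself nudges some inattentive respondents to pass a manipulation check, restricting the sample to those who passed creates imbalance in baseline attentiveness and a spurious negative estimate.

```python
import numpy as np

# Hypothetical simulation (illustrative only) of post-treatment bias from
# dropping respondents who fail a manipulation check.
rng = np.random.default_rng(0)
n = 100_000

attentive = rng.random(n) < 0.6   # baseline (pre-treatment) trait
treat = rng.random(n) < 0.5       # randomized assignment
# Outcome depends on attentiveness but NOT on treatment: true ATE = 0.
outcome = attentive.astype(float) + rng.normal(0.0, 1.0, n)
# Passing the check is post-treatment: treatment pushes some
# inattentive respondents over the line.
passed = attentive | (treat & (rng.random(n) < 0.3))

# Full-sample estimate recovers the true effect (~0).
ate_full = outcome[treat].mean() - outcome[~treat].mean()
# Conditioning on `passed` compares a diluted treated group (~83%
# attentive) to a fully attentive control group, biasing the estimate.
ate_passed = (outcome[treat & passed].mean()
              - outcome[~treat & passed].mean())
```

The full-sample contrast stays near zero, while the conditioned contrast is pulled toward roughly −0.17 here, despite a true effect of exactly zero for every unit.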
“…We look at the helpfulness of the response based on its content. Explorations of the quality of response in audit experiments can raise concerns over posttreatment bias, given that an email reply is a posttreatment outcome (Coppock; Montgomery, Nyhan, and Torres). Coppock offers three approaches, and we adopt the third, to “redefine the outcome” (2018, 3).…”
Section: Results
“…The overall response rate of 71% is comparable to other surveys of local-level officials in the United States (Dynes, Hassell, and Miles; Giulietti, Tonin, and Vlassopoulos; White, Nathan, and Faller) and Germany (Grohs, Adam, and Knill), and higher than those targeting elected officials (Butler and Broockman) and bureaucrats administering federal programs (Einstein and Glick). To avoid posttreatment bias, all response quality measures (including congratulations rates) redefine both nonresponse and nonquality as zeroes (Coppock). For example, a quality response to the cost question (Cost=1) tells the fictitious emailer about fees associated with obtaining a marriage license, whereas for nonquality outcomes (Cost=0), either the official responded without the cost information or no response was received.…”
Section: Results
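The "redefine the outcome" approach quoted above can be sketched with toy data. The arrays below are hypothetical, not from the cited study: `quality` is 1 only when a reply both arrived and answered the question, so coding nonresponse and nonquality alike as zero keeps every randomized unit in the comparison.

```python
import numpy as np

# Hypothetical toy audit data (illustrative only, not from any study).
# treat: 1 = treatment condition; replied: any reply received;
# quality: reply answered the question (requires replied == 1).
treat   = np.array([1, 1, 1, 1, 0, 0, 0, 0])
replied = np.array([1, 0, 1, 0, 1, 1, 1, 0])
quality = np.array([1, 0, 0, 0, 1, 1, 0, 0])

# Naive approach: compare quality only among repliers. This conditions
# on the post-treatment variable `replied` and can be biased.
naive = (quality[(treat == 1) & (replied == 1)].mean()
         - quality[(treat == 0) & (replied == 1)].mean())

# Redefined outcome: nonresponse and nonquality are both zeroes, so the
# contrast is taken over all randomized units.
redefined = quality[treat == 1].mean() - quality[treat == 0].mean()
```

In this toy data the naive repliers-only contrast is −1/6 while the redefined-outcome contrast is −0.25; the two estimands differ because the naive one silently drops units based on a post-treatment event.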