This article provides a comprehensive overview of how policy makers, practitioners, and scholars can fruitfully use behavioral science to tackle public administration, management, and policy issues. The article systematically reviews 109 articles in the public administration discipline that are inspired by the behavioral sciences to identify emerging research trajectories, significant gaps, and promising applied research directions. In an attempt to systematize and take stock of the nascent behavioral public administration scholarship, the authors trace it back to the seminal works of three Nobel Laureates—Herbert Simon, Daniel Kahneman, and Richard Thaler—and their work on bounded rationality, cognitive biases, and nudging, respectively. The cognitive biases investigated by the studies reviewed fall into the categories of accessibility, loss aversion, and overconfidence/optimism. Nudging and choice architecture are discussed as viable strategies for leveraging these cognitive traps in an attempt to alter behavior for the better, among both citizens and public servants.
What can we learn by applying a meta-analysis to the public administration literature on job satisfaction? More generally, how can public management scholars use this method to capitalize on the decades of research on other topics within our field? This study reports the findings of the first quantitative review of the public administration literature on job satisfaction. We retrieved quantitative data from primary studies published in 42 public administration journals since 1969 and performed a meta-analysis of the relationships between job satisfaction and 43 correlates. The findings include meta-analytically derived effect sizes, measures of the heterogeneity in the effect size underlying all primary studies, and several indicators of publication bias. In presenting the results of our meta-analysis, we address the merits and limitations of this methodology and discuss how public administration scholars could take full advantage of this information to advance knowledge in other areas within the field.
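The pooling step described in this abstract can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration of a random-effects meta-analysis of correlations (Fisher z transformation, DerSimonian-Laird between-study variance, and an I² heterogeneity index); the correlations and sample sizes are invented for illustration and are not drawn from the reviewed studies.

```python
import math

# Hypothetical per-study correlations between job satisfaction and one
# correlate, with their sample sizes (illustrative values only).
studies = [(0.32, 210), (0.18, 95), (0.41, 340), (0.27, 150)]

# Fisher z transformation stabilizes the variance of correlations.
z = [0.5 * math.log((1 + r) / (1 - r)) for r, n in studies]
v = [1.0 / (n - 3) for r, n in studies]   # within-study variances
w = [1.0 / vi for vi in v]                # fixed-effect weights

# Fixed-effect pooled estimate and Q statistic for heterogeneity.
z_fixed = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, z))
df = len(studies) - 1

# DerSimonian-Laird estimate of between-study variance (tau^2).
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights and pooled effect, back-transformed to r.
w_re = [1.0 / (vi + tau2) for vi in v]
z_re = sum(wi * zi for wi, zi in zip(w_re, z)) / sum(w_re)
r_pooled = math.tanh(z_re)

# I^2: share of total variability attributable to between-study heterogeneity.
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled r = {r_pooled:.3f}, tau^2 = {tau2:.4f}, I^2 = {i2:.1f}%")
```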
This article uses meta‐analysis to synthesize 137 experiments in 73 articles on the causes of unethical behavior. Results show that exposure to in‐group members who misbehave or to others who benefit from unethical actions, greed, egocentrism, self‐justification, exposure to incremental dishonesty, loss aversion, challenging performance goals, or time pressure increase unethical behavior. In contrast, monitoring of employees, moral reminders, and individuals’ willingness to maintain a positive self‐view decrease unethical conduct. Findings on the effect of self‐control depletion on unethical behavior are mixed. Results also present subgroup analyses and several measures of study heterogeneity and likelihood of publication bias. The implications are of interest to both scholars and practitioners. The article concludes by discussing which of the factors analyzed should gain prominence in public administration research and uncovering several unexplored causes of unethical behavior.
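One common way to gauge the likelihood of publication bias mentioned in this abstract is funnel-plot asymmetry, often assessed with Egger's regression test. The sketch below is a minimal, self-contained illustration using invented effect sizes and standard errors; it is not the procedure or data used in the article.

```python
import math

# Hypothetical effect sizes (standardized mean differences) and standard
# errors for a set of experiments (illustrative values only).
effects = [0.45, 0.30, 0.62, 0.15, 0.51, 0.22]
ses     = [0.10, 0.18, 0.09, 0.25, 0.12, 0.20]

# Egger's regression test: regress the standardized effect (d / se) on
# precision (1 / se). An intercept far from zero suggests funnel-plot
# asymmetry, one possible sign of publication bias.
y = [d / s for d, s in zip(effects, ses)]
x = [1 / s for s in ses]
n = len(y)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
intercept = my - slope * mx

# Residual variance and standard error of the intercept.
resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
s2 = sum(r ** 2 for r in resid) / (n - 2)
se_int = math.sqrt(s2 * (1 / n + mx ** 2 / sxx))

print(f"Egger intercept = {intercept:.3f} (SE = {se_int:.3f}); "
      f"slope (bias-adjusted effect) = {slope:.3f}")
```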
This article tests a broad range of cognitive biases branching out from prospect theory in the context of public policy and management. Results illuminate systematic deviations from rationality. In experiments 1 through 5, the framing of outcomes influenced decisions across policy and management domains. In experiment 6, public employees were prone to an anchoring bias when setting standards for responsiveness. Experiment 7 shows that public workers tend to put more effort into activities that affect higher percentages of beneficiaries, even if the absolute number of affected clients is constant. Experiments 8 and 9 suggest that public employees are more likely to stick to a suboptimal status quo as the number of superior alternatives increases. Experiment 10 provides evidence of an asymmetric dominance effect: decisions changed when a decoy was present. This article contributes to behavioral public administration by replicating and extending previous trials.
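For readers unfamiliar with the theory these experiments branch out from, the prospect theory value function can be stated compactly. The parameter values shown are the commonly cited estimates from Tversky and Kahneman (1992), not quantities estimated in this article.

```latex
\[
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0 \quad \text{(gains)}\\[4pt]
-\lambda\,(-x)^{\beta} & \text{if } x < 0 \quad \text{(losses)}
\end{cases}
\qquad \alpha \approx \beta \approx 0.88,\ \lambda \approx 2.25
\]
```

Loss aversion (λ > 1) and diminishing sensitivity (concavity for gains, convexity for losses) are the properties that help explain the framing and status quo effects summarized above.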
Evidence for Practice
• Being humans rather than robots, civil servants are prone to cognitive biases that may hinder public service provision.
• Because these cognitive biases are systematic rather than random, those who design public organizations and services should account for them when structuring jobs and tasks.
• Supposedly irrelevant factors that predictably affect decisions include the framing of outcomes, anchors, multiple alternatives, and decoy options.
A systematic literature review of performance appraisal in a selection of public administration journals revealed a lack of investigations on the cognitive biases that affect raters’ evaluation of ratees’ performance. To address this gap, we conducted two artefactual field experiments on a sample of 600 public sector managers and employees. Results show that anchoring and halo effects systematically biased performance ratings. For the former, average scores were higher when subjects were exposed to a high rather than a low anchor. For the latter, higher ability on one performance dimension led participants to give a higher average score on another performance dimension. The halo effect was moderated by the rater’s gender. We conclude by discussing the study’s limitations and providing suggestions for future work in this area.