Is Supreme Court legitimacy affected by the way justices explain their decisions to the public? Existing work shows a link between legitimacy and case outcomes but often overlooks the impact of opinion content. Using a novel experimental design, the author measures the effect of three different types of judicial arguments on public support for the Court. The results suggest that the rationales justices use in their opinions can affect institutional legitimacy, though to a lesser degree than conventional wisdom holds. Taken together with other recent legitimacy research, these findings have important implications and set the stage for follow-up research.
Criticism of Supreme Court confirmation hearings has intensified considerably over the past two decades. In particular, there is a growing sense that nominees are now less forthcoming and that the hearings have suffered as a result. In this article, we challenge that conventional wisdom. Based on a comprehensive content analysis of every question and answer in all of the modern confirmation hearings—nearly 11,000 in total—we find only a mild decline in the candor of recent nominees. Moreover, we find that senators ask more probing questions than in the past, and that nominees are now more explicit about their reasons when they choose not to respond—two factors that may be fueling the perception that evasiveness has increased in recent years. We close with a discussion of the normative implications of our findings as well as an outline for future research into this issue.
For well over a century, first-year law students have typically not received any individualized feedback in their core "doctrinal" classes other than their grades on final exams. Although critics have long assailed this pedagogical model, remarkably limited empirical evidence exists regarding the extent to which increased feedback improves law students' outcomes. This article helps fill this gap by focusing on a natural experiment at the University of Minnesota Law School. The natural experiment arises from the assignment of first-year law students to one of several "sections," each of which is taught by a common slate of professors. A random subset of these professors provides students with individualized feedback other than their final grades. Meanwhile, students in two different sections are occasionally grouped together in a "double-section" first-year class. We find that in these double-section classes, students in sections that have previously or concurrently had a professor who provides individualized feedback consistently outperform students in sections that have not received any such feedback. The effect is both statistically significant and far from trivial in magnitude, approaching about one-third of a grade increment after controlling for students' LSAT scores, undergraduate GPA, gender, race, and country of birth. In our model, this effect is comparable in magnitude to that of a 3.7-point increase in a student's LSAT score. Additionally, the positive impact of feedback is stronger among students whose combined LSAT score and undergraduate GPA fall below the median at the University of Minnesota Law School. These findings substantially advance the literature on law school pedagogy.