The advantages and disadvantages of jury simulation research have often been debated in the literature. Critics chiefly argue that jury simulations lack verisimilitude, particularly through their use of student mock jurors, and that this limits the generalizability of the findings. In the present article, the question of sample differences (student vs. nonstudent) in jury research was meta-analyzed for 6 dependent variables: 3 criminal (guilty verdicts, culpability, and sentencing) and 3 civil (liability verdicts, continuous liability, and damages). In total, 53 studies (N = 17,716) were included in the analysis (40 criminal and 13 civil). The results revealed that guilty verdicts, culpability ratings, and damage awards did not vary with sample. Furthermore, the variables that revealed significant or marginally significant differences, sentencing and liability judgments, had small or contradictory effect sizes (e.g., effects on dichotomous and continuous liability judgments were in opposite directions). In addition, with the exception of trial presentation medium, moderator effects were small and inconsistent. These results may help to alleviate concerns regarding the use of student samples in jury simulation research.
Summary
This project employs an experimental design to test theoretical predictions regarding how numeracy can assist jurors in determining damage awards to compensate a plaintiff for pain and suffering, and how the use of meaningful numerical anchors may produce similar benefits. Mock jurors (N = 345) reviewed a legal case and were asked to give a dollar award to compensate the plaintiff for pain and suffering. The presence and nature of a numerical anchor and the duration of pain and suffering were manipulated. Participants' numeracy was measured. Results supported the predictions. Jurors higher in numeracy gave awards that more appropriately reflected the duration of pain and suffering and showed less variability in awards. Similar benefits were obtained by exposing jurors to meaningful numerical anchors that helped them contextualize dollar amounts. Thus, introducing meaningful anchors may provide benefits similar to those of numeracy, without the drawbacks associated with selecting only numerate jurors.
Theory and practitioner “scaling” advice informed hypotheses that guidance to mock jurors should (a) increase validity (vertical equity), decrease variability (reliability), and improve coherence in awards; (b) improve subjective experience of jurors’ decision-making (rated helpfulness, confidence, and difficulty); and (c) have the greatest impact when it includes both verbal and numerical benchmarks. Three mock juror experiments (N = 197 students, N = 476 Amazon Mechanical Turk workers, and N = 391 students) tested novel scaling approaches and predictions from the Hans-Reyna model of damage award decision-making. Jurors reviewed a legal case and provided a dollar award to compensate plaintiffs for pain and suffering following concussions. Experiments varied injury severity (low vs. high) and the plaintiff attorney’s guidance (no guidance, verbal guidance, numerical guidance, and verbal-plus-numerical guidance) between subjects. Results support predictions that, even without guidance, mock jurors appropriately categorize the gist of injuries as low or high severity, and dollar awards reflect that gist. Participants gave a higher award for more severe injuries, indicating that they extracted the qualitative gist of damages. Also, as expected, guidance, particularly verbal-plus-numerical guidance, had beneficial effects on jurors’ subjective experience, with participants reporting that it was a helpful aid in decision-making. Numerical guidance, both with and without verbal guidance, reduced award variability in severe injury cases in all three experiments. Scaling guidance did not improve the already strong gist-verbatim correspondence or award validity. Both grasping the gist of damages and mapping that gist onto numbers are important, but jurors appear to benefit from assistance with numerical mapping.