According to the scales of justice, the judge, in an unbiased way and directed by law, attends to all of the available information in a case, weighs it according to its significance, and integrates it to make a decision. By contrast, research suggests that judicial decision-making departs from the cognitive balancing act depicted by the scales of justice. Nevertheless, the research is often dismissed as irrelevant, and the judiciary, legal policymakers, and the public remain largely unconvinced that the status quo needs improving. One potential rebuttal to the scientific findings is that they lack validity because researchers did not study judges making decisions on real cases. Another potential argument is that researchers have not pinpointed the psychological processes of any specific judge because they analyzed data aggregated over judges and/or used statistical models lacking in psychological plausibility. We review these 2 grounds for appeal against the scientific research on judicial decision-making and note that researchers' choices of data collection methods and analytic techniques may, indeed, be inappropriate for understanding the phenomena. We offer 2 remedies from the sphere of decision-making research: collecting data on judicial decision-making using representative design, and analyzing judicial decision data using more psychologically plausible models. We believe that, used together, these solutions can help researchers better understand and improve legal decision-making.
What is the significance of this article for the general public? We propose that researchers studying judicial decision-making ought to examine decisions made on real(istic) cases using a representative experimental design, and that they should analyze individual judge or bench decision data using psychologically plausible models. This will make the research more relevant to the judiciary and thus more difficult for judges and legal policymakers to ignore.