The low reproducibility rate in the social sciences has made researchers hesitant to accept published findings at face value. Despite the advent of initiatives to increase transparency in research reporting, the field still lacks tools for verifying the credibility of research reports. In the present paper, we describe methodologies that let researchers craft highly credible research and allow their peers to verify that credibility. We demonstrate the application of these methods in a multi-laboratory replication of Bem's Experiment 1 (Bem 2011 J. Pers. Soc. Psychol. 100, 407–425. (doi:10.1037/a0021524)) on extrasensory perception (ESP), which was co-designed by a consensus panel including both proponents and opponents of Bem's original hypothesis. In the study we applied direct data deposition, in combination with born-open data and real-time research reports, to extend transparency to protocol delivery and data collection. We also used piloting, checklists, laboratory logs and video-documented trial sessions to ascertain as-intended protocol delivery, and external research auditors to monitor research integrity. We found 49.89% successful guesses, whereas Bem reported a 53.07% success rate, with the chance level being 50%. Thus, Bem's findings were not replicated in our study. In the paper, we discuss the implementation, feasibility and perceived usefulness of the credibility-enhancing methodologies used throughout the project.
According to the Justified True Belief (JTB) account of knowledge, a person’s ability to know something is defined by having a belief that is both justified and true (i.e., knowledge is justified true belief). However, this account fails to consider the role of luck. In 1963, Gettier argued that JTB is insufficient because it does not account for certain situations, called Gettier cases, in which a person is justified in believing something that is true only because of luck. It is unclear whether lay people’s intuitions about knowledge lead them to agree with Gettier, such that lay people believe that individuals in these cases lack knowledge (referred to as Gettier intuitions). We attempt to provide a robust estimate of the Gettier intuition effect size by replicating Turri and colleagues’ (2015) Experiment 1. The Collaborative Replications and Education Project (CREP) selected this study for replication based on its undergraduate appeal, feasibility, and pedagogical value. However, in light of some inconsistent results, suboptimal designs, and inconsistent evidence for cultural variation (e.g., Machery et al., 2015; Nagel et al., 2013; Seyedsayamdost et al., 2015; Starmans & Friedman, 2012; Weinberg et al., 2001), the improved methodology of Turri et al. (2015) makes it an important study to replicate cross-culturally. Therefore, we propose a multisite collaborative preregistered replication of Turri and colleagues’ (2015) Experiment 1 (35 labs from 14 countries across 4 continents signed up at the time of submission; expected minimum N = 1,500). Results of this study are expected to provide a clearer picture of the Gettier intuition effect size, lay people’s theory and practice of knowledge, and potentially cross-cultural similarities and differences. Preprint: [X] Pre-registered protocols: [X]
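As a rough sense of what the expected minimum N = 1,500 buys in precision, the sketch below runs a simple power-sensitivity calculation. It assumes a two-condition, between-subjects contrast with an even split, alpha = .05, and 80% power; these are illustrative assumptions and not the project's registered design or analysis plan.

```python
# Minimal sensitivity sketch (assumptions, not the registered plan):
# with N = 1,500 split evenly into two hypothetical between-subjects
# conditions (750 each), what standardized effect size (Cohen's h) is
# detectable at 80% power with a two-sided alpha of .05?
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

analysis = NormalIndPower()
min_h = analysis.solve_power(effect_size=None, nobs1=750, alpha=0.05,
                             power=0.80, ratio=1.0, alternative="two-sided")
print(f"minimum detectable Cohen's h: {min_h:.3f}")

# For reference, the h implied by a hypothetical 10-point difference in
# knowledge attributions between conditions (60% vs. 50%):
print(f"h for 60% vs. 50%: {proportion_effectsize(0.60, 0.50):.3f}")
```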