Preregistration, which involves documentation of hypotheses, methods, and plans for data analysis prior to data collection or analysis, has been lauded as one potential solution to the replication crisis in psychological science. Yet many researchers have been slow to adopt preregistration, and the next generation of researchers is offered little formalized instruction in creating comprehensive preregistrations. In this article, we describe a collaborative workshop-based preregistration course designed and taught by Jennifer L. Tackett. We provide a brief overview of preregistration, including available resources, common concerns with preregistration, and responses to these concerns. We then describe the goals, structure, and evolution of our preregistration course and provide examples of enrolled students' final research products. We conclude with reflections on the strengths of, and opportunities for growth in, the first iteration of this course, along with suggestions for others who are interested in implementing similar open science-focused courses in their training programs.
Public Significance Statement: Preregistration, or the public posting of plans for a study prior to its completion, is a tool that shows great promise for increasing scientific rigor. However, this practice has not yet been adopted by the majority of psychology researchers. In this article, the authors detail their approach to creating a workshop-based class on preregistration that makes preregistration accessible to multiple areas of psychology.
The measurement of individual differences in cognitive ability has a long and important history in psychology, but it has been impeded by the proprietary nature of most assessment measures. With the development of validated open-source measures of ability (collected in the International Cognitive Ability Resource, or ICAR, available at ICAR-project.com), it is now possible for many researchers to assess ability in large surveys or small, lab-based studies without the expenses associated with proprietary measures. We review the history of ability measurement and discuss how the growing set of items included in ICAR allows ability assessments to be more generally available to all researchers.
Background and Objectives: The NIH Toolbox® for the Assessment of Neurologic and Behavioral Function is a compilation of computerized measures designed to assess sensory, motor, emotional, and cognitive functioning of individuals across the life span. The NIH Toolbox was initially developed for use with the general population and was not originally validated in clinical populations. The objective of this scoping review was to assess the extent to which the NIH Toolbox has been used with clinical populations.
Methods: Guided by the Joanna Briggs Methods Manual for Scoping Reviews, records were identified through searches of PubMed MEDLINE, PsycINFO, ClinicalTrials.gov, EMBASE, and ProQuest Dissertations and Theses Global (2008–2020). Database searches yielded 5,693 unique titles of original research that used at least one NIH Toolbox assessment in a sample characterized by any clinical diagnosis. Two reviewers screened titles, abstracts, and full texts for inclusion in duplicate. Conflicts at each stage of the review process were resolved by group discussion.
Results: Ultimately, 281 publication records were included in this scoping review (journal articles: n = 104; conference abstracts: n = 84; clinical trial registrations: n = 86; theses/dissertations: n = 7). The NIH Toolbox Cognition Battery was by far the most used of the 4 batteries in the measurement system (Cognition: n = 225; Emotion: n = 49; Motor: n = 29; Sensation: n = 16). The most represented clinical category was neurologic disorders (n = 111), followed by psychological disorders (n = 39) and cancer (n = 31). Most (96.8%) of the journal articles and conference abstracts reporting the use of NIH Toolbox measures with clinical samples were published in 2015 or later. As of May 2021, these records had been cited a total of nearly 1,000 times.
Discussion: The NIH Toolbox measures have been widely used among individuals with various clinical conditions across the life span.
Our results lay the groundwork to support the feasibility and utility of administering the NIH Toolbox measures in research conducted with clinical populations and further suggest that these measures may be of value for implementation in fast-paced clinical settings as part of routine practice.