Failures to replicate evidence of new discoveries have forced scientists to ask whether this unreliability is due to suboptimal implementation of optimal methods or whether presumptively optimal methods are not, in fact, optimal. This paper reports an investigation by four coordinated laboratories of the prospective replicability of 16 novel experimental findings using current optimal practices: high statistical power, preregistration, and complete methodological transparency. In contrast to past systematic replication efforts that reported replication rates averaging 50%, replication attempts here produced the expected effects with significance testing (p < .05) in 86% of attempts, slightly exceeding the maximum expected replicability based on observed effect size and sample size. When one lab attempted to replicate an effect discovered by another lab, the effect size in the replications was 97% of that in the original study. This high replication rate justifies confidence in rigor-enhancing methods and suggests that past failures to replicate may be attributable to departures from optimal procedures.
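The "maximum expected replicability" benchmark is, in essence, a statistical power calculation: given the observed effect size and the replication sample size, power gives the proportion of replications expected to reach p < .05. A minimal sketch in Python, where the effect size and sample size are illustrative placeholders rather than values from the paper:

```python
# Hypothetical sketch: "maximum expected replicability" as statistical
# power for a two-sample t-test. The effect size and per-group sample
# size below are illustrative placeholders, not values from the paper.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.5   # assumed standardized mean difference (Cohen's d)
nobs1 = 100         # assumed sample size per group in the replication
alpha = 0.05        # significance threshold for a "successful" replication

power = TTestIndPower().power(effect_size=effect_size, nobs1=nobs1, alpha=alpha)
print(f"Expected replication rate (power): {power:.2f}")  # roughly 0.94 here
```

Under these assumptions, the computed power is the ceiling against which an observed replication rate such as 86% would be compared.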
Survey institutes have recently changed their measurement of generalized trust from the standard dichotomous scale to an 11-point scale. Additionally, survey institutes differ in question wording: most rely on the standard, fully balanced question (asking whether "most people can be trusted or that you need to be very careful in dealing with people"), while some use minimally balanced questions, asking only whether it is "possible to trust people." Using two survey-embedded experiments, one with 12,009 self-selected respondents and the other with a probability sample of 2,947 respondents, this study evaluates the generalized trust question in terms of question wording and number of scale points. Results show that, contrary to the more commonly used standard question format (used, for example, by the American National Election Studies and the General Social Survey), generalized trust is best measured with a minimally balanced question wording accompanied by either a seven- or an 11-point scale.
Declining response rates remain a major worry for survey research in the 21st century. Over the past decades, the same decline in people's willingness to participate in surveys (i.e., response propensities) has been seen in virtually all Western nations. More worrisome still, declining response propensities may increase the risk of extensive nonresponse bias. A better understanding of which factors are associated with survey nonresponse, and of its impact on nonresponse bias, is therefore paramount for any survey researcher interested in accurate statistical inference. Knowing which factors relate to low response propensities enables appropriate nonresponse-weighting models and helps identify which groups to target with tailored efforts to turn nonrespondents into respondents. This manuscript draws on previous theories and research on survey nonresponse and investigates nonresponse bias, both cross-sectionally and over time, in two time-series cross-sectional studies administered in Sweden (the National SOM Surveys 1992-2022 and the Swedish National Election Study 2022). Capitalizing on registry data available for all sampled persons and their corresponding neighborhood-level contextual data, a meta-analysis of eight years of data collection finds that educational attainment, age, and migrant status are among the strongest predictors of response propensities. However, contextual factors, such as living in socially disadvantaged neighborhoods, also predict willingness to participate in surveys. Furthermore, drawing on the three decades of data, the declining response rates and growing nonresponse are shown to have been wholly attributable to a deteriorating survey climate rather than to birth cohort replacement or immigration patterns.
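To make the weighting step concrete, here is a hypothetical sketch of how response propensities modeled from registry-style covariates (mirroring the age, education, and migrant-status predictors named above) can be turned into nonresponse weights. All data and variable names are simulated for illustration; nothing below comes from the SOM or SNES studies.

```python
# Hypothetical sketch: building nonresponse weights from a
# response-propensity model. All data here are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000

# One row per sampled person, with registry-style covariates
frame = pd.DataFrame({
    "age": rng.integers(18, 85, n),
    "higher_education": rng.integers(0, 2, n),
    "migrant": rng.integers(0, 2, n),
})

# Simulated response indicator, loosely following the abstract's predictors
logit = (-0.5 + 0.02 * (frame["age"] - 50)
         + 0.6 * frame["higher_education"] - 0.4 * frame["migrant"])
frame["responded"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Model response propensity on the full sample (respondents + nonrespondents)
X = sm.add_constant(frame[["age", "higher_education", "migrant"]])
propensity = sm.Logit(frame["responded"], X).fit(disp=0).predict(X)

# Nonresponse weight for each respondent: inverse predicted propensity
weights = 1 / propensity[frame["responded"] == 1]
```

Weighting respondents by the inverse of their predicted propensity up-weights groups that are underrepresented among respondents, which is why identifying strong predictors of response propensity matters for bias correction.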
Verbalizing visual memories can interfere with later accurate recall. Although changes in the magnitude of this verbal overshadowing effect (VOE) as a function of delay have been reported, no study has systematically investigated multiple shorter non-immediate delays. Does the VOE occur when verbalization happens 5 minutes post-encoding? 10 minutes? 15 minutes? In a preregistered study involving 7,696 American adults randomly assigned to different timing paradigms, we show that the effect size of the VOE at 5 or 10 minutes is nearly zero, with a stable and significant inhibitory effect from 15 to 20 minutes. We further investigate this nonlinearity in a second study, which shows a distinct nonlinear 'effect amplification' sometime between 12 and 14 minutes. This apparent critical period after stimulus onset, during which susceptibility to verbal interference dramatically increases, may help explain difficulties in replicating the VOE. More importantly, it suggests that the 12- to 14-minute period may represent a critical window for other interference paradigms as well.