In this paper, we provide a domain-general scoping review of the nudge movement, covering 422 choice architecture interventions reported in 156 empirical studies. We report the distribution of the studies across countries, years, domains, subdomains of applicability, and intervention types, along with the moderators associated with each intervention category, to characterize the current state of the nudge movement. Furthermore, we highlight characteristics of the studies and of experimental and reporting practices that can hinder the accumulation of evidence in the field. Specifically, we found that 74% of the studies were mainly motivated to assess the effectiveness of interventions in one specific setting, while only 24% focused on exploring moderators or underlying processes. We also observed that only 7% of the studies applied power analysis, only 2% used guidelines aimed at improving the quality of reporting, no study in our database was preregistered, and the intervention nomenclatures in use were non-exhaustive and often had overlapping categories. Building on these observations and on solutions proposed in other fields, we provide directly applicable recommendations for future research to support the accumulation of evidence on why and when nudges work.
The flexibility afforded by mobile technology has dissolved the traditional work-life boundary for most professionals. Whether working from home is the key to, or an impediment to, academics' efficiency and work-life balance has become a pressing question for both scientists and their employers. The recent pandemic brought the merits and challenges of working from home into focus at the level of personal experience. Using convenience sampling, we surveyed 704 academics working from home and found that the pandemic lockdown decreased work efficiency for almost half of the researchers, while around a quarter were more efficient during this period than before. Based on their experience, 70% of the researchers think that in the future they would be similarly or more efficient than before if they could spend more of their work time at home. They indicated that in the office they are better at sharing thoughts with colleagues, keeping in touch with their team, and collecting data, whereas at home they are better at working on their manuscripts, reading the literature, and analyzing their data. Taking well-being into account as well, 66% would find it ideal to work from home more in the future than they did before the lockdown. These results draw attention to how working from home is becoming a major element of researchers' lives and to the need to learn more about its influencing factors and coping tactics in order to optimize its arrangements.
We present a consensus-based checklist to improve and document the transparency of research reports in social and behavioural research. An accompanying online application allows users to complete the form and generate a report that they can submit with their manuscript or post to a public repository.
"Never use the unfortunate expression 'accept the null hypothesis.'" (Wilkinson and the Task Force on Statistical Inference, 1999, p. 599)

The interpretation of statistically nonsignificant findings is a vexing point of traditional psychological research. Within the framework of null-hypothesis significance testing (NHST; Fisher, 1925; Neyman & Pearson, 1933), decisions about the null hypothesis are based on the p value. Under NHST logic, one is entitled to reject the null hypothesis whenever the p value is smaller than or equal to a predefined α threshold (typically set at .05; but see Benjamin et al., 2018). In contrast, the p value does not entitle one to claim support in favor of the null hypothesis. According to the common interpretation, any p value higher than α indicates that one has to withhold judgment about the null hypothesis (Cohen, 1994). This asymmetric characteristic of the NHST framework frustrates the interpretation and communication of nonsignificant results (Edwards, Lindman, & Savage, 1963; Nickerson, 2000). It is known that results with a p value greater than .05 are subject to misinterpretation among researchers (Goodman, 2008).