Interpreting a failure to replicate is complicated by the fact that the failure could be due to the original finding being a false positive, unrecognized moderating influences between the original and replication procedures, or faulty implementation of the procedures in the replication. One strategy to maximize replication quality is involving the original authors in study design. We (17 labs; N = 1,550 participants after exclusions) experimentally tested whether original author involvement improved the replicability of a classic finding from Terror Management Theory (Greenberg et al., 1994). Our results were non-diagnostic of whether original author involvement improves replicability because we were unable to replicate the finding under any conditions. This suggests that the original finding was either a false positive or that the conditions necessary to obtain it are not fully understood or no longer exist. Data, materials, analysis code, preregistration, and supplementary documents can be found on the OSF page: https://osf.io/8ccnw/
Genetic testing is increasingly available in medical settings and direct-to-consumer. However, the large and growing literature on genetic testing decisions is rife with conflicting findings, inconsistent methodology, and uneven attention across test types and across predictors of genetic testing decisions. Existing reviews of the literature draw broad conclusions but sacrifice the nuanced analysis that, on closer inspection, reveals far more inconsistency than homogeneity across studies. The goals of this paper are to provide a systematic review of the empirical work on predictors of genetic testing decisions, highlight areas of consistency and inconsistency, and suggest productive directions for future research. We included all studies that provided quantitative analysis of subjective (e.g., perceived risk, perceived benefits of testing) and/or objective (e.g., family history, sociodemographic variables) predictors of genetic testing interest, intentions, or uptake, yielding a sample of 115 studies. From this review, we conclude that self-reported and test-related (as opposed to disorder-related or objective) predictors are relatively consistent across studies, but that theoretically driven efforts to examine testing interest across test types are sorely needed.
Information often comes as a mix of good and bad news, prompting the question, "Do you want the good news or the bad news first?" In such cases, news-givers and news-recipients differ in their concerns and considerations, creating an obstacle to ideal communication. In three studies, we examined the order preferences of news-givers and news-recipients and the consequences of those preferences. Study 1 confirmed that news-givers and news-recipients differ in their news order preferences. Study 2 tested two solutions for closing this preference gap and found that both perspective-taking and priming emotion-protection goals shifted news-givers' delivery patterns toward the order that news-recipients prefer. Study 3 provided evidence that news order has consequences for recipients: opening with bad news (as recipients prefer) reduces worry, but this emotional benefit undermines motivation to change behavior.
RateMyProfessors.com (RMP) is becoming an increasingly popular tool among students, faculty, and school administrators. The validity of RMP is a point of debate; many would argue that self-selection bias obscures the usefulness of RMP evaluations. To test this possibility, we collected three types of evaluations: RMP evaluations that existed at the beginning of our study, traditional in-class evaluations, and RMP evaluations prompted after we collected the in-class evaluations. We found differences in the evaluations students provide for their professors on both perceptions of professor clarity and ratings of professor easiness. Based on these results, conclusions drawn from RMP are suspect and may indeed offer a biased view of professors.

Introduction

Student evaluations are an important part of the feedback process for university professors. Positive student evaluations can bring pay raises, advancement to tenure, and improved marketability when applying for new teaching positions. Student evaluations also offer professors vital feedback for improving their teaching. For these reasons, the importance of student evaluations is evident.

Universities and colleges have several options for administering student evaluations of professors. It is common for universities to use traditional in-class evaluations in which students use pencil and paper to provide feedback about their professors. However, as more universities move towards a paperless environment, institutions may instead use online evaluations. Most online evaluations let students provide feedback about professors at a location and time of their choice, reduce time constraints, and reduce potential influence from the professor (Anderson, Cain, and Bird 2005). Online evaluations also reduce the problem of students missing in-class evaluations because they are absent on evaluation day.

The use of online evaluations versus traditional in-class evaluations raises questions about the similarity of responses across the two methods of presentation. Student ratings of professors have been shown to be similar across the two formats when the professor requests the evaluations (Donovan, Mader, and Shinsky 2006). However, Donovan and colleagues found that the content of comments provided by students differed depending on whether the evaluation was online or in class.