In describing measures used in their research, authors frequently report having adapted a scale, indicating that they changed something about it. Although such changes can raise concerns about validity, there has been little discussion of this practice in our literature. To estimate the prevalence and identify key forms of scale adaptation, we conducted three studies of the literature. In Study 1, we reviewed the descriptions of all scales (N = 2,088) in four top journals over a 2-year period. We found that 46% of all scales were reported by authors as adapted and that evidence to support the validity of the adapted scales was presented in 23% of those cases. In Study 2, we chose six scales and examined their use across the literature, which allowed us to identify unreported adaptations. We found that 85% of the administrations of these scales had at least one form of adaptation, and many had multiple adaptations. In Study 3, we surveyed editorial board members and a select group of psychometricians to evaluate the extent to which particular adaptations raised concerns about validity and the kinds of evidence needed to support the validity of the adapted scales. To provide guidance for authors who adapt scales and for reviewers and editors who evaluate papers with adapted scales, we discuss several forms of adaptation with respect to potential threats to validity and recommend the kinds of evidence that might best support the validity of the adapted scale (including a reviewer checklist).
Intellectual contribution in the form of authorship is a fundamental component of the academic career. While research has addressed questionable and harmful authorship practices, there has been little discussion of how U.S. academic institutions interpret and potentially mitigate such practices through institution-level authorship policies. To gain a better understanding of the role of U.S. academic institutions in authorship practices, we conducted a systematic review of publicly available authorship policies for U.S. doctoral institutions (the 266 universities classified as R1 or R2 under the 2018 Carnegie classification), focusing on components such as specification of authorship criteria, recommendations for discussing authorship, dispute resolution processes, and guidance for faculty-student collaborations. We found that only 24% of the 266 Carnegie R1 and R2 universities had publicly available authorship policies. Within these policies, the majority (93%) specified criteria for authorship but provided less guidance about actual processes for applying such criteria (62%), handling authorship disputes (62%), and managing faculty-student author teams (49%). Further, we found that any discussion of dispute resolution practices typically lacked specificity. Grounded in these findings, we offer recommendations for institutions to leverage their ability to guide the authorship process by adopting an authorship policy that acknowledges disciplinary diversity while still offering substantive guidance.
Over the last couple of decades, studies using the experience sampling methodology (ESM) have been conducted with increasing frequency in the management-related sciences, as the method allows researchers to investigate questions involving ongoing, dynamic, intra-individual processes. Given the longitudinal nature of the methodology and the resulting multi-level data structure, there are sample- and measurement-related issues that make ESM studies different from other methods commonly used in management research. Consequently, ESM studies have demands for reporting sample- and measurement-related information that differ from those of more commonly used methods. In the current paper, we review the conceptual foundations of sample and measurement issues in ESM studies and report the findings of a survey of ESM studies to identify current reporting practices. We then offer clear, easy-to-implement recommendations for reporting sample- and measurement-related aspects of ESM studies. We hope that these recommendations will improve the reporting of ESM studies and allow readers to more fully and comprehensively evaluate the research presented.

Footnote 1: There are several variants of the ESM design, including daily diary studies and the event-contingent recording method. We have chosen to use the term ESM in this paper, though our comments and recommendations are relevant to the other designs as well.