The use of Amazon’s Mechanical Turk (MTurk) in management research has increased by more than 2,117% in recent years, from 6 papers in 2012 to 133 in 2019. Among scholars, though, excitement about the practical and logistical benefits of using MTurk is mixed with skepticism about the validity of the resulting data. Given that the practice is growing rapidly while scholarly opinions diverge, the Journal of Management commissioned this review and consideration of best practices. We hope the recommendations provided here will serve as a catalyst for more robust, reproducible, and trustworthy MTurk-based research in management and related fields.
We review the literature on evidence-based best practices for enhancing methodological transparency: the degree of detail and disclosure about the specific steps, decisions, and judgment calls made during a scientific study. We conceptualize lack of transparency as a "research performance problem" because it masks fraudulent acts, serious errors, and questionable research practices, and therefore precludes inferential and results reproducibility. Our recommendations for authors provide guidance on how to increase transparency at each stage of the research process: (1) theory, (2) design, (3) measurement, (4) analysis, and (5) reporting of results. We also offer recommendations for journal editors, reviewers, and publishers on how to motivate authors to be more transparent. We group these recommendations into the following categories: (1) manuscript submission forms requiring authors to certify that they have taken actions to enhance transparency, (2) manuscript evaluation forms including additional items to encourage reviewers to assess the degree of transparency, and (3) review process improvements to enhance transparency. Taken together, our recommendations provide a resource for doctoral education and training; researchers conducting empirical studies; journal editors and reviewers evaluating submissions; and journals, publishers, and professional organizations interested in enhancing the credibility and trustworthiness of research.
International business (IB) research is not immune to science's reproducibility and replicability crisis. We argue that this crisis is not entirely surprising given methodological practices that enable systematic capitalization on chance, which occurs when researchers search for a maximally predictive statistical model based on a particular dataset and engage in several trial-and-error steps that are rarely disclosed in published articles. We describe systematic capitalization on chance, distinguish it from unsystematic capitalization on chance, address five common practices that capitalize on chance, and offer actionable strategies to minimize capitalization on chance and improve the reproducibility and replicability of future IB research.
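To make the mechanism concrete, the following minimal Python sketch (not from the article; all variable names and parameters are our own illustrative assumptions) simulates systematic capitalization on chance: an analyst screens many candidate predictors of a pure-noise outcome and reports only the best-fitting one, so spuriously "significant" results arise far more often than the nominal error rate suggests.

```python
# Illustrative simulation of systematic capitalization on chance:
# every predictor and the outcome are pure noise, yet reporting only
# the smallest p-value found inflates the false-positive rate.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=42)

N_OBS = 100        # observations per simulated dataset
N_PREDICTORS = 20  # candidate predictors tried per dataset
N_TRIALS = 1_000   # simulated "studies"
ALPHA = 0.05       # nominal per-test significance threshold

false_positive_studies = 0
for _ in range(N_TRIALS):
    y = rng.standard_normal(N_OBS)                  # outcome: pure noise
    X = rng.standard_normal((N_OBS, N_PREDICTORS))  # predictors: pure noise
    # Undisclosed trial-and-error model search: keep the best p-value.
    best_p = min(pearsonr(X[:, j], y)[1] for j in range(N_PREDICTORS))
    if best_p < ALPHA:
        false_positive_studies += 1

print(f"Nominal per-test false-positive rate: {ALPHA:.2f}")
print(f"Share of studies reporting a 'significant' predictor: "
      f"{false_positive_studies / N_TRIALS:.2f}")
# With 20 independent noise predictors, roughly 1 - 0.95**20 ≈ 0.64 of
# studies find at least one p < .05, far above the nominal 5% rate.
```

The gap between the nominal 5% rate and the roughly 64% of simulated studies that find a "significant" predictor is precisely what undisclosed trial-and-error model search produces, and what the transparency practices advocated above are meant to expose.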
We categorized and content-analyzed 168 methodological literature reviews published in 42 management and applied psychology journals. First, our categorization revealed that the majority of published reviews (85.10%) fall into three categories (critical, narrative, and descriptive reviews), which points to opportunities and promising directions for additional types of methodological literature reviews in the future (e.g., meta-analytic and umbrella reviews). Second, our content analysis uncovered implicit features of published methodological literature reviews. Based on these results, we created a checklist of actionable recommendations regarding 10 components that enhance a methodological literature review's thoroughness, clarity, and, ultimately, usefulness. Third, we describe choices and judgment calls in published reviews and provide detailed explications of exemplars that illustrate how those choices and judgment calls can be made explicit. Overall, our article offers recommendations useful to three methodological literature review stakeholder groups: producers (i.e., potential authors), evaluators (i.e., journal editors and reviewers), and users (i.e., substantive researchers interested in learning about a particular methodological issue and individuals tasked with training the next generation of scholars).