This article was originally submitted for publication to the Editor of Advances in Methods and Practices in Psychological Science (AMPPS) in 2015. When the submitted manuscript was subsequently posted online (Silberzahn et al., 2015), it received some media attention, and two of the authors were invited to write a brief commentary in Nature advocating for greater crowdsourcing of data analysis by scientists. This commentary, arguing that crowdsourced research "can balance discussions, validate findings and better inform policy" (Silberzahn & Uhlmann, 2015, p. 189), included a new figure that displayed the analytic teams' effect-size estimates and cited the submitted manuscript as the source of the findings, with a link to the preprint. However, the authors neglected to cite the Nature commentary in the final published version of the AMPPS article or to note that the main findings had been previously publicized via the commentary, the online preprint, research presentations at conferences and universities, and media reports by other people. The authors regret the oversight.
Twenty-nine teams involving 61 analysts used the same dataset to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Analytic approaches varied widely across teams, and estimated effect sizes ranged from 0.89 to 2.93 in odds-ratio units, with a median of 1.31. Twenty teams (69%) found a statistically significant positive effect, and nine teams (31%) observed a nonsignificant relationship. Overall, the 29 different analyses used 21 unique combinations of covariates. We found that neither analysts' prior beliefs about the effect, nor their level of expertise, nor the peer-rated quality of their analyses readily explained the variation in outcomes. This suggests that significant variation in the results of analyses of complex data may be difficult to avoid, even by experts with honest intentions. Crowdsourcing data analysis, a strategy in which numerous research teams are recruited to simultaneously investigate the same research question, makes transparent how defensible, yet subjective, analytic choices influence research results.
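As an illustration only, the kind of summary reported above (the range and median of the teams' odds-ratio estimates and the share of teams finding a significant positive effect) can be computed from team-level results as in the minimal sketch below; the numbers in it are hypothetical placeholders, not the 29 teams' actual estimates.

```python
# Illustrative sketch only: summarizing crowdsourced team-level effect sizes.
# The odds ratios and significance flags below are hypothetical placeholders.
from statistics import median

# (odds_ratio, significant_positive) pairs for a handful of hypothetical teams
team_results = [
    (0.95, False),
    (1.20, True),
    (1.31, True),
    (1.45, True),
    (2.10, True),
]

odds_ratios = [oratio for oratio, _ in team_results]
print(f"Range: {min(odds_ratios):.2f} to {max(odds_ratios):.2f} (odds-ratio units)")
print(f"Median estimate: {median(odds_ratios):.2f}")

n_significant = sum(1 for _, sig in team_results if sig)
share = n_significant / len(team_results)
print(f"{n_significant} of {len(team_results)} teams ({share:.0%}) "
      "report a significant positive effect")
```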
Most studies on problemistic search do not pay sufficient attention to how below-aspiration organizations decide which types of strategic action to use to cope with performance shortfalls. In this study, we examine the preferences of multinational corporations (MNCs) for selecting foreign investment or foreign divestment as a near-term solution to performance shortfalls. We first argue that foreign divestment is generally the more preferred solution. Drawing on the literature on vicarious learning, we further argue that MNCs are more likely to engage in foreign investment or foreign divestment to combat large performance shortfalls if peers have recently and actively undertaken the same type of strategic action. Moreover, they are less likely to undertake the other type of strategic action simultaneously, because they adopt the satisficing principle and time constraints deter them from substantially implementing multiple types of strategic action. Our analysis of data on Japanese manufacturing MNCs reveals that vicarious learning influences MNCs' selection preferences under certain conditions, thereby extending the literature on problemistic search.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.