2016
DOI: 10.1038/sdata.2016.82
Data from a pre-publication independent replication initiative examining ten moral judgement effects

Abstract: We present the data from a crowdsourced project seeking to replicate findings in independent laboratories before (rather than after) they are published. In this Pre-Publication Independent Replication (PPIR) initiative, 25 research groups attempted to replicate 10 moral judgment effects from a single laboratory’s research pipeline of unpublished findings. The 10 effects were investigated using online/lab surveys containing psychological manipulations (vignettes) followed by questionnaires. Results revealed a m…

Cited by 7 publications (6 citation statements) | References 12 publications
“…Crowdsourcing is especially useful, we suggest, for fields that rely on local resources that can remain siloed. That said, the data corpus generated by crowdsourced projects often serves as a public resource after the publication of the article (e.g., Open Science Collaboration, 2015; Tierney et al, 2016).…”
Section: Crowdsourcing Science in Action
confidence: 99%
“…Preclinical scientific research has been increasingly criticized for a lack of reproducibility. Recent publications [3,7,14,22,25,32] have demonstrated an alarmingly low level of reproducibility in biomedical research fields. In a survey of 1576 scientists, [1] more than 70% could not reproduce published results and more than 50% could not reproduce findings from their own experiments, leading to discussion of a “reproducibility crisis.” Data from psychology [22] and cancer biology [2] research showed only 40% and 11% of study results were reproducible, respectively.…”
Section: Introduction
confidence: 99%
“…The outcome measure was excessive infant birthweight, defined as a severely LGA infant (birthweight greater than the 97th centile according to the sex‐specific Swedish reference curve for fetal growth) or macrosomia [birthweight of ≥4500 g at term (≥37 weeks)]. The core outcome set was not used because no relevant core outcome set has been developed.…”
Section: Methods
confidence: 99%