2015
DOI: 10.1080/19312458.2015.1096334

Questionable Research Practices in Experimental Communication Research: A Systematic Analysis From 1980 to 2013

Cited by 59 publications (46 citation statements)
References 35 publications
“…If they find significant results, they may stop data collection at that point (Simmons et al., 2011). Among 239 experimental papers in a selection of communication journals, Matthes et al. (2015) find that only four papers reported a priori power analyses. While this does not imply that peeking is common, it does suggest that only a few studies have a pre-specified stopping rule for when to halt data collection.…”
Section: Questionable Research Practices
mentioning
confidence: 99%
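For context, an a priori power analysis fixes the sample size before data collection, which is what removes the temptation to "peek." A minimal sketch in Python with statsmodels is given below; the effect size, alpha, and power targets are illustrative assumptions, not values from Matthes et al. (2015).

    # Minimal a priori power analysis sketch (illustrative values, not from the paper):
    # how many participants per group are needed to detect a medium effect (d = 0.5)
    # with a two-sided independent-samples t-test at alpha = .05 and 80% power?
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.5, alpha=0.05, power=0.80, ratio=1.0, alternative='two-sided'
    )
    print(f"Required n per group: {n_per_group:.1f}")  # roughly 64 per group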
“…The studies that did examine the prevalence of QRPs in communication have relied on content analyses of research articles, which, for instance, examine whether the distribution of p-values clusters around .05. These analyses demonstrate that QRPs are fairly widespread (Franco, Malhotra, & Simonovits, 2014; Matthes et al., 2015; Vermeulen et al., 2015). However, content analyses have their limitations: (1) they are based upon the reported analyses, while QRPs are generally not reported; (2) they cannot distinguish accidental omissions of details from a scholar's intent to do wrong; and (3) they cannot give information about the perceptions researchers have of the field's research practices.…”
mentioning
confidence: 99%
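The content-analytic check described here can be as simple as tallying how many reported p-values land just below versus just above .05. The sketch below uses made-up p-values purely to illustrate the comparison; it is not the procedure of any specific cited study.

    # Illustrative tally of reported p-values just below vs. just above .05
    # (hypothetical values; a surplus just below .05 is read as a possible sign of QRPs).
    reported_p = [0.012, 0.038, 0.043, 0.046, 0.047, 0.048, 0.049, 0.051, 0.055, 0.062]

    just_below = sum(0.040 <= p < 0.050 for p in reported_p)
    just_above = sum(0.050 < p <= 0.060 for p in reported_p)

    print(f"p in [.040, .050): {just_below}")  # 5
    print(f"p in (.050, .060]: {just_above}")  # 2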
“…The qualitative analysis indicated that 2.9% of cases involved falsification, 4.4% involved fabrication, and 4.4% involved both fabrication and falsification. Matthes et al. (2015, p. 193), reviewing various QRPs in communication journal articles, found indications of small and insufficiently justified sample sizes, a lack of reported effect sizes, an indiscriminate removal of cases and items, an increasing inflation of p-values directly below p < 0.05, and a rising share of verified (as opposed to falsified) hypotheses: the percentage of papers reporting p-values directly below .05 grew more sharply (2013-to-1990 ratio of the percentage of papers = 10.3) than the percentage reporting p-values between 0.051 and 0.059 (ratio = 3.6).…”
Section: Cases Of Misconduct
mentioning
confidence: 99%
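For readers unfamiliar with the ratio reported in that statement, it is simply the percentage of papers in a given p-value band in 2013 divided by the corresponding percentage in 1990. A tiny worked example with hypothetical percentages:

    # Hypothetical percentages, chosen only to show the arithmetic behind a
    # "2013-to-1990 ratio of the percentage of papers" of about 10.3.
    pct_1990 = 3.0    # hypothetical share of papers in the band in 1990 (%)
    pct_2013 = 30.9   # hypothetical share of papers in the band in 2013 (%)
    print(round(pct_2013 / pct_1990, 1))  # 10.3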
“…At present, there is limited information in the literature about the full life cycle of studies in the field of communication (Elson & Przybylski, 2017), as well as in other disciplines. What we typically see is the final product: journal articles in which the bulk of statistical significance tests fall below the magical p < .05 threshold (Vermeulen, Beukeboom, Batenburg, Avramiea, Stoyanov, van de Velde, & Oegema, 2015), with scant information about how exactly it is that we got there (Matthes, Marquart, Naderer, Arendt, Schmuck, & Adam, 2015; Simmons, Nelson, & Simonsohn, 2011). There are numerous reasons for this, which I will discuss later; for now, the important thing is that the gaps in insight about how communication science gets produced inhibit our ability to build cumulative knowledge internally within the field (Elson & Przybylski, 2017) and have vast implications for the public's perception of the field and its utility for the world (Przybylski & Weinstein, 2016).…”
mentioning
confidence: 99%