2020
DOI: 10.1111/1740-9713.01444

A Replication Crisis in Methodological Research?

Abstract: Statisticians have been keen to critique statistical aspects of the “replication crisis” in other scientific disciplines. But new statistical tools are often published and promoted without any thought to replicability. This needs to change, argue Anne-Laure Boulesteix, Sabine Hoffmann, Alethea Charlton and Heidi Seibold

Cited by 34 publications (34 citation statements)
References 13 publications
“…As such, the development of new computational methods is an active field of research, a situation which constantly introduces new methods into the literature, often in a comparison with existing methods. Yet despite the frequency of these types of papers, there is a surprising lack of guidance on the appropriate design and reporting of studies presenting and evaluating new computational methods [1, 2]. It is not clear how new methods and their performances should be described and how studies comparing performances of methods should be designed.…”
Section: Introduction
confidence: 99%
“…The characterisation of the different types of errors and their frequencies that we report might be of use to designing better simulation studies that test error detection routines. Such simulation studies should mimic the types of errors that occur in datasets, otherwise results may have little generalisability when applied to datasets that have different error structures (49). Merely simulating normally distributed errors without age (date) errors, keystroke errors (29), duplications, internally inconsistent values etc is unrealistic, and such studies are unlikely to be a useful test of the performance of the method.…”
Section: Discussion
confidence: 99%
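
To make the quoted point concrete, here is a minimal sketch of the kind of error injection the authors call for: corrupting a synthetic dataset with misplaced-decimal keystroke errors, swapped dates, and duplicated records rather than only normally distributed noise. The column names, error rates, and corruption mechanisms below are illustrative assumptions, not taken from the cited study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2020)  # fixed seed so the corruption is replicable

# Hypothetical clean dataset: birth dates, visit dates, and a lab value.
n = 1_000
birth = pd.Timestamp("1950-01-01") + pd.to_timedelta(
    rng.integers(0, 365 * 40, n), unit="D")
visit = birth + pd.to_timedelta(rng.integers(18 * 365, 60 * 365, n), unit="D")
df = pd.DataFrame({"birth_date": birth, "visit_date": visit,
                   "lab_value": rng.normal(100, 15, n)})

def inject_errors(data, rng, rate=0.02):
    """Corrupt a copy of `data` with the error types named in the quote,
    not just Gaussian noise."""
    out = data.copy()
    k = max(1, int(rate * len(out)))

    # Keystroke-style errors: misplaced decimal point (value x10 or /10).
    idx = rng.choice(len(out), k, replace=False)
    out.loc[idx, "lab_value"] *= rng.choice([10.0, 0.1], k)

    # Date errors: swap birth and visit dates, creating inconsistent ages.
    idx = rng.choice(len(out), k, replace=False)
    out.loc[idx, ["birth_date", "visit_date"]] = (
        out.loc[idx, ["visit_date", "birth_date"]].to_numpy())

    # Duplications: append verbatim copies of a few records.
    return pd.concat([out, out.sample(k, random_state=0)], ignore_index=True)

corrupted = inject_errors(df, rng)
# An error-detection routine can now be scored against the known corruptions,
# since the injected error positions are under the simulator's control.
```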
“…The very people who often provide statistical advice to applied researchers and lament the brevity of method sections or lack of rigorous reporting fall prey to the same trappings. Wouldn't it be nice if simulation studies contained not only guidance for data analysis and study design but were additionally exemplary in transparency, data and code sharing (Boulesteix et al, 2020)?…”
Section: Research For Research Should Lead By Example
confidence: 99%
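
As one illustration of what "leading by example" could look like in practice, here is a minimal sketch of a simulation study written so that the seed, design parameters, and results are recorded together and can be shared verbatim. All names and parameter values are hypothetical choices for the sketch, not the setup of any cited paper.

```python
import json
import numpy as np
from scipy import stats

# All design choices live in one shareable config, not scattered constants.
CONFIG = {"seed": 1444, "n_reps": 500, "sample_sizes": [20, 50, 100],
          "effect": 0.5}

def one_replication(rng, n, effect):
    """Simulate one two-group comparison; return the Welch t-test p-value."""
    x = rng.normal(0.0, 1.0, n)
    y = rng.normal(effect, 1.0, n)
    return stats.ttest_ind(x, y, equal_var=False).pvalue

def run_study(config):
    rng = np.random.default_rng(config["seed"])  # single documented seed
    results = []
    for n in config["sample_sizes"]:
        pvals = [one_replication(rng, n, config["effect"])
                 for _ in range(config["n_reps"])]
        results.append({"n": n,
                        "power": float(np.mean(np.array(pvals) < 0.05))})
    return results

if __name__ == "__main__":
    # Shipping the config alongside the results makes the study replicable.
    print(json.dumps({"config": CONFIG, "results": run_study(CONFIG)},
                     indent=2))
```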