2021
DOI: 10.1136/bmjopen-2020-047709
Developing a reporting guideline for artificial intelligence-centred diagnostic test accuracy studies: the STARD-AI protocol

Abstract: Introduction: Standards for Reporting of Diagnostic Accuracy Study (STARD) was developed to improve the completeness and transparency of reporting in studies investigating diagnostic test accuracy. However, its current form, STARD 2015, does not address the issues and challenges raised by artificial intelligence (AI)-centred interventions. As such, we propose an AI-specific version of the STARD checklist (STARD-AI), which focuses on the reporting of AI diagnostic test accuracy studies. This paper describes the me…

Cited by 179 publications (98 citation statements). References 18 publications.
“…To date, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) 2015 statement remains the most used tool for reporting of studies investigating diagnostic test accuracy and performance [ 43 , 44 ]. However, this tool has some shortcomings when reporting studies evaluating artificial intelligence (AI) driven interventions due to unclear methodological interpretation, lack of standardised nomenclature, use of unfamiliar outcome measures, and other issues, thereby limiting the comprehensive appraisal of these technologies [ 45 ]. Thus, our study findings further reiterate the need for developing an AI-specific STARD guideline to ensure complete and robust reporting of studies evaluating AI-driven technologies and interventions [ 45 , 46 ].…”
Section: Discussion
confidence: 99%
“…Of particular relevance to diagnostic radiologists is the work to develop STARD‐AI (Standards for Reporting of Diagnostic Accuracy Studies—Artificial Intelligence), an extension of the original STARD Statement, which was updated in 2016. Sounderajah et al. 21 state, in their paper describing the STARD‐AI methodology: ‘…much of the evidence supporting diagnostic algorithms has been disseminated in the absence of AI‐specific reporting guidelines’.…”
Section: Design and Reporting of AI/ML Studies: Standardisation to ...
confidence: 99%
“…Additional guidelines for AI clinical research quality assessment include QUADAS-AI, which is an AI-centred diagnostic test accuracy quality assessment tool (27); STARD-AI, which guides AI-centred diagnostic test accuracy studies (28); and DECIDE-AI, which is a reporting guideline to bridge the development-to-implementation gap in clinical AI (29). New quality metrics and guidelines will be developed as AI applications continue to expand.…”
Section: Guidelines for ML, DL and AI Clinical Trials Design and Repor...
confidence: 99%