2023
DOI: 10.3389/fradi.2023.1112841

How should studies using AI be reported? Lessons from a systematic review in cardiac MRI

Abstract: Recent years have seen a dramatic increase in studies presenting artificial intelligence (AI) tools for cardiac imaging. Amongst these are AI tools that undertake segmentation of structures on cardiac MRI (CMR), an essential step in obtaining clinically relevant functional information. The quality of reporting of these studies carries significant implications for advancement of the field and the translation of AI tools to clinical practice. We recently undertook a systematic review to evaluate the quality of r…

Cited by 5 publications (5 citation statements) · References 13 publications

“…In recent years, different DL applications have been developed to automate the segmentation of cardiac structures on imaging and may help to improve efficiency and reliability (1). In a previous systematic review in 2022, 209 studies were included for AI-based cardiac MRI segmentation (17, 18). However, our systematic review of DL-based cardiac CT segmentations identified only 18 studies.…”
Section: Discussion · Citation type: mentioning · Confidence: 99%
“…Each included study was assessed for compliance with the criteria of the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) (16). The 42 individual criteria were divided into four domains: study description, dataset description, model description and model performance (17, 18).…”
Section: Methods · Citation type: mentioning · Confidence: 99%
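
The per-domain compliance tally described in that excerpt is straightforward to reproduce. Below is a minimal Python sketch, assuming a hypothetical grouping of the 42 CLAIM item numbers into the four named domains and made-up per-study data; the actual item-to-domain mapping is given in the cited review (17, 18).

```python
# Illustrative sketch of per-domain CLAIM compliance scoring.
# The item-to-domain grouping below is hypothetical; the actual
# mapping of the 42 CLAIM items is defined in the cited review.

# Four domains named in the text, with assumed item numbers 1-42.
DOMAINS = {
    "study description": range(1, 10),
    "dataset description": range(10, 24),
    "model description": range(24, 34),
    "model performance": range(34, 43),
}

def domain_compliance(items_met: set[int]) -> dict[str, float]:
    """Fraction of CLAIM items met within each domain for one study."""
    return {
        domain: sum(i in items_met for i in items) / len(items)
        for domain, items in DOMAINS.items()
    }

# Example: a made-up study meeting items 1-8, 12, 25 and 40.
print(domain_compliance({1, 2, 3, 4, 5, 6, 7, 8, 12, 25, 40}))
```

Averaging such per-study fractions across all included studies would give the domain-level compliance rates that reviews of this kind typically report.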
“…These two studies used publicly available Kaggle datasets for the detection of acute and chronic PE. Although the proportions of acute and chronic PE cases are available from the original data source, these were not stated by the studies themselves, limiting their transparency; it is best practice for publications to provide all relevant clinical characteristics regardless of whether they can be accessed elsewhere (17, 18). Public datasets utilised in the studies may not have had all the elements or features required for an accurate PE diagnosis, which may limit the model's ability to identify the spectrum of abnormalities related to PE.…”
Section: Discussion · Citation type: mentioning · Confidence: 99%
“…Extracted data included study information (such as location, year and journal type), study design, data selection (such as number of participants, number of CTEPH cases and inclusion criteria), and the AI model being presented (such as validation and performance results). The quality of each included study was appraised by checking compliance with the individual criteria of the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) (16), which were divided into four domains (17, 18).…”
Section: Methods · Citation type: mentioning · Confidence: 99%
“…The development and application of deep learning methods is an active research topic in radiology (32–34). Standards for the reporting of artificial intelligence methods were established for the medical field as a whole (35), and the topic was specifically discussed for CMR (36, 37).…”
Section: Discussion · Citation type: mentioning · Confidence: 99%