2022
DOI: 10.3389/fcvm.2022.956811
Quality of reporting in AI cardiac MRI segmentation studies – A systematic review and recommendations for future studies

Abstract: Background: There has been a rapid increase in the number of Artificial Intelligence (AI) studies of cardiac MRI (CMR) segmentation aiming to automate image analysis. However, advancement and clinical translation in this field depend on researchers presenting their work in a transparent and reproducible manner. This systematic review aimed to evaluate the quality of reporting in AI studies involving CMR segmentation. Methods: MEDLINE and EMBASE were searched for AI CMR segmentation studies in April 2022. Any fully …

Cited by 17 publications (12 citation statements)
References 14 publications
“…These two studies used publicly available Kaggle datasets for the detection of acute and chronic PE. Although the proportions of acute and chronic PE cases are available from the original data source, these were not stated by the studies themselves, limiting their transparency—it is best practice for publications to provide all relevant clinical characteristics regardless of whether they can be accessed elsewhere (17, 18). Public datasets utilised in the studies may not have had all the elements or features required for an accurate PE diagnosis, which may limit the model's ability to identify the spectrum of abnormalities related to PE.…”
Section: Discussion (mentioning)
Confidence: 99%
“…Extracted data included study information (such as location, year and journal type), study design, data selection (such as number of participants, number of CTEPH cases and inclusion criteria), and the AI model being presented (such as validation and performance results). The quality of each included study was appraised by checking compliance with the individual criteria of the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) (16), which were divided into four domains (17, 18).…”
Section: Methods (mentioning)
Confidence: 99%
“…High-quality reporting of model design, training, validation and testing is crucial for understanding the performance and generalisability of AI tools (32). Transparency of AI tool development and performance is essential for gaining the trust of stakeholders, including the public, and is therefore important for translation of tools into the clinical sphere. The way AI tools are presented should be consistent, enabling direct comparisons of performance, and accessible to all stakeholders, allowing purpose and performance to be understood by those without extensive experience in the field of AI.…”
Section: Discussion (mentioning)
Confidence: 99%
“…The exclusion of semi-automated techniques, unpublished literature and conference abstracts was important to ensure consistent and reproducible evaluation of the included studies, but did narrow the scope of the review and carried the risk of selection bias.…”
[Figure caption: Recommendations for studies based on findings of this systematic review. Adapted from Alabed et al 2022 (13) and CLAIM (8).]
Section: The Systematic Review (mentioning)
Confidence: 99%