2023
DOI: 10.1016/j.autrev.2023.103294

Current state and completeness of reporting clinical prediction models using machine learning in systemic lupus erythematosus: A systematic review

Cited by 6 publications (6 citation statements)
References 64 publications
“…However, systematic reviews evaluating studies of prediction models have shown that they are often poorly conducted (including deficiencies in study design or data collection 37 38 ); use poor methodology 37 38 ; are incompletely reported with key details missing 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 ; are consequently at high risk of bias 41 49 55 56 57 ; rarely adhere to open science practices, 58 and are susceptible to overinterpretation or so-called spin. 59 60 These deficiencies cast considerable doubt on models’ usefulness and safety, and raises concerns about their potential to create or widen healthcare disparities.…”
mentioning
confidence: 99%
“…Besides, reporting guidelines on the use of AI in healthcare to ensure fair and transparent research have been recently defined, such as TRIPOD-AI and PROBAST-AI [161], SPIRIT-AI and CONSORT-AI [162, 163] or STARD-AI [164]. A recently published review article has tackled the adherence in diagnostic and prognostic applications of ML in SLE patients using TRIPOD and PROBAST [165]. A more detailed description of these reporting guidelines is addressed in [166].…”
Section: Discussion
mentioning
confidence: 99%
“…Since ML methods are relatively new to the field, assessing the quality or technical aspects of these reports may be challenging to most non-ML researchers. A recent systematic review by Munguía-Realpozo et al 14 assessed 45 SLE reports that used ML to build diagnostic and/or predictive algorithms and determined whether they adhered to the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) reporting standards. 15 The review concluded that most reports were deficient in multiple domains of the TRIPOD recommendations, often under-reporting relevant details about their data preprocessing, model-building process, model specification and model performance.…”
Section: Reports in SLE
mentioning
confidence: 99%
“…While the aim of the study was not to systematically evaluate the reporting adherences of these reports, in general, we found similar limitations identified by Munguía-Realpozo et al . 14 This highlights that there is a need to improve transparency and reporting of prediction models in future ML SLE studies.…”
Section: Reports in SLE
mentioning
confidence: 99%