Background
The majority of research and commercial efforts have focused on the use of artificial intelligence (AI) for fracture detection in adults, despite the greater long-term clinical and medicolegal implications of missed fractures in children. The objective of this study was to assess the available literature on the diagnostic performance of AI tools for paediatric fracture assessment on imaging and, where available, how this compares with the performance of human readers.
Materials and methods
MEDLINE, Embase and Cochrane Library databases were queried for studies published between 1 January 2011 and 2021 using terms related to ‘fracture’, ‘artificial intelligence’, ‘imaging’ and ‘children’. Risk of bias was assessed using a modified QUADAS-2 tool. Descriptive statistics for diagnostic accuracies were collated.
Results
Nine of 362 identified publications were eligible for inclusion. Most (8/9) evaluated fracture detection on radiographs, with the elbow the most commonly assessed body part. Nearly all articles used data derived from a single institution and employed deep learning methodology, with only a few (2/9) performing external validation. Accuracy rates achieved by AI ranged from 88.8% to 97.9%. In two of the three articles in which AI performance was compared with that of human readers, AI sensitivity was marginally higher, although the difference was not statistically significant.
Conclusions
Wide heterogeneity in the literature, combined with limited information on algorithm performance on external datasets, makes it difficult to understand how such tools might generalise to a wider paediatric population. Further research using multicentre datasets and real-world evaluation would help clarify the impact of these tools.