Purpose: Computer vision and artificial intelligence (AI) offer the opportunity to interpret standardized radiographs rapidly and accurately. We trained and validated a machine learning tool that identifies key reference points and determines glenoid version and the glenohumeral relationship on axillary radiographs.
Methods: Standardized pre- and post-arthroplasty axillary radiographs were manually annotated to locate six reference points and were used to train a computer vision model to identify these reference points without human guidance. The model then used the reference points to determine humeroglenoid alignment in the anterior-to-posterior direction and glenoid version. The model's accuracy was tested on a separate set of axillary images not used in training by comparing its reference point locations, alignment, and version measurements with the corresponding values assessed by two surgeons.
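As an illustration of the geometric step, the sketch below computes glenoid version and anterior-to-posterior humeroglenoid alignment from 2D landmark coordinates. The landmark names (anterior and posterior glenoid rim, medial scapular border, glenoid center, humeral head center) and the sign convention are assumptions made for illustration; the study's six reference points and exact measurement definitions may differ.

```python
# Minimal sketch of deriving version and alignment from 2D landmarks.
# Landmark names and sign conventions are illustrative assumptions, not the
# study's actual reference point definitions.
import numpy as np

def glenoid_version_deg(ant_rim, post_rim, medial_border, glenoid_center):
    """Signed angle (degrees) between the glenoid face (anterior-to-posterior
    rim line) and the perpendicular to the scapular axis (medial border to
    glenoid center)."""
    face = np.asarray(post_rim, float) - np.asarray(ant_rim, float)
    axis = np.asarray(glenoid_center, float) - np.asarray(medial_border, float)
    perp = np.array([-axis[1], axis[0]])  # scapular axis rotated 90 degrees
    dot = float(np.dot(face, perp))
    cross = float(face[0] * perp[1] - face[1] * perp[0])
    return float(np.degrees(np.arctan2(cross, dot)))

def ap_alignment(humeral_head_center, glenoid_center, ant_rim, post_rim):
    """Offset of the humeral head center relative to the glenoid center,
    projected onto the anterior-to-posterior glenoid face direction
    (image units; convert with pixel spacing to obtain millimetres)."""
    face_dir = np.asarray(post_rim, float) - np.asarray(ant_rim, float)
    face_dir = face_dir / np.linalg.norm(face_dir)
    offset = np.asarray(humeral_head_center, float) - np.asarray(glenoid_center, float)
    return float(np.dot(offset, face_dir))
```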
Results: On the test set of pre- and post-operative images not used in training, the model rapidly identified all six reference points to within a mean of 2 mm of the surgeon-assessed locations. The mean difference in alignment and version measurements between the model and the surgeon assessors was similar to the difference between the two surgeon assessors.
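The comparison described above can be expressed as a simple evaluation routine. The sketch below, which assumes per-image landmark arrays in millimetres and per-image version and alignment measurements, computes the mean landmark error and contrasts model-versus-surgeon differences with the difference between the two surgeons; array names and shapes are assumptions made for illustration.

```python
# Illustrative evaluation sketch; array names and shapes are assumptions.
import numpy as np

def mean_landmark_error_mm(pred_mm, ref_mm):
    """Mean Euclidean distance between predicted and reference landmarks,
    for arrays shaped (n_images, n_landmarks, 2) in millimetres."""
    diff = np.asarray(pred_mm, float) - np.asarray(ref_mm, float)
    return float(np.mean(np.linalg.norm(diff, axis=-1)))

def mean_absolute_difference(a, b):
    """Mean absolute difference between two sets of per-image measurements
    (e.g., version angles in degrees or alignment offsets in millimetres)."""
    return float(np.mean(np.abs(np.asarray(a, float) - np.asarray(b, float))))

# Model-versus-surgeon agreement compared with surgeon-versus-surgeon agreement:
# model_vs_surgeon = mean_absolute_difference(model_version, surgeon1_version)
# inter_observer   = mean_absolute_difference(surgeon1_version, surgeon2_version)
```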
Conclusions: To our knowledge, this is the first reported development and validation of a computer vision/artificial intelligence model that independently identifies key landmarks and determines the glenohumeral relationship and glenoid version on axillary radiographs. This observer-independent approach has the potential to enable efficient assessment of shoulder radiographs, lessening the burden of manual x-ray interpretation and allowing these measurements to be scaled across large numbers of patients from multiple centers so that pre- and postoperative anatomy can be correlated with patient-reported clinical outcomes.
Level of Evidence: Level III Study of Diagnostic Test