Purpose
To assess the capability of deep convolutional neural networks to classify the anatomical location and projection of racehorse limb radiographs into a series of 48 standard views.
Materials and methods
Radiographs (N = 9504) of horse limbs, from image sets made for veterinary inspections by 10 independent veterinary clinics, were used to train, validate and test (116, 40 and 42 radiographs per view, respectively) six deep learning architectures available as part of the open-source machine learning framework PyTorch. Batch size was then further investigated for the architecture with the best top-1 accuracy.
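A minimal PyTorch sketch of this kind of training pipeline is shown below. The architecture (ResNet-34) and batch size (8) are taken from the Results; the ImageNet-pretrained weights, 224 × 224 input size, ImageFolder-style directory layout, Adam optimiser and learning rate are illustrative assumptions not reported in the abstract.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 48   # the 48 standard views
BATCH_SIZE = 8     # best-performing batch size reported in the Results

# Illustrative preprocessing; the actual image size and augmentation
# pipeline are not reported in the abstract.
transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # radiographs are single channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical directory layout: one sub-directory per standard view.
train_data = datasets.ImageFolder("radiographs/train", transform=transform)
train_loader = DataLoader(train_data, batch_size=BATCH_SIZE, shuffle=True)

# ResNet-34 (the best-performing architecture) with its final fully
# connected layer replaced by a 48-way classifier; starting from
# ImageNet-pretrained weights is an assumption.
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed optimiser

model.train()
for images, labels in train_loader:           # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```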
Results
Top-1 accuracy of the six deep learning architectures ranged from 0.737 to 0.841. Top-1 accuracy of the best-performing architecture (ResNet-34) ranged from 0.809 to 0.878, depending on batch size. ResNet-34 with a batch size of 8 achieved the highest top-1 accuracy (0.878), and the majority (91.8%) of misclassifications were due to laterality errors. Class activation maps indicated that joint morphology, not side markers or other non-anatomical image regions, drove the model's decisions.
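Class activation maps of this kind can be produced in several ways; the sketch below uses a Grad-CAM-style computation on the final convolutional stage of ResNet-34. The CAM variant, the hooked layer and the placeholder input are assumptions, as the abstract does not report these details.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet34(num_classes=48)   # trained weights omitted for brevity
model.eval()

activations, gradients = {}, {}

def save_gradient(grad):
    gradients["value"] = grad

def save_activation(module, inputs, output):
    # Cache the feature maps and attach a hook that captures their gradient.
    activations["value"] = output
    output.register_hook(save_gradient)

# Hook the last convolutional stage of ResNet-34.
model.layer4.register_forward_hook(save_activation)

image = torch.randn(1, 3, 224, 224)       # placeholder radiograph tensor
scores = model(image)
top1 = scores.argmax(dim=1).item()        # index of the top-1 predicted view
scores[0, top1].backward()                # backpropagate the winning class score

# Weight each feature map by its average gradient, then sum and rectify.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]
# High values should fall on joint morphology rather than on side markers.
```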
Conclusions
Deep convolutional neural networks can classify equine pre-import radiographs into the 48 standard views, including moderate discrimination of laterality, independent of side marker presence.