Providing direct feedback to technologists has become challenging for radiologists because of geographic separation and other factors, creating a need for automated solutions to quality problems in radiography. We evaluated the feasibility of using a computer vision artificial intelligence (AI) algorithm to classify hand radiographs into quality categories, with the goal of automating quality assurance processes in radiology. A bounding box was placed over the hand on 300 hand radiographs, and these annotated images were used to train a convolutional neural network (CNN) to detect hand boundaries automatically. The trained CNN detector then placed bounding boxes over the hands on an additional 100 radiographs, independent of the training and validation sets. A computer algorithm processed each output image to calculate the unused air space. The same 100 images were classified by two musculoskeletal radiologists into four quality categories. The correlation between the AI-calculated unused-space metric and the radiologist-assigned quality scores was assessed with the Spearman correlation coefficient, and inter-reader agreement was assessed with the kappa statistic. The strongest negative correlation between the AI-derived metric and the radiologists’ quality scores was obtained with the unused space at the top of the image: the Spearman correlation coefficients were −0.7 and −0.6 for the two radiologists. The kappa coefficient for interobserver agreement between the two radiologists was 0.6. Automatic calculation of the percentage of unused space, or indirect collimation, at the top of hand radiographs correlates moderately well with radiographic collimation quality.
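
To illustrate the unused-space calculation described above, the following is a minimal Python sketch. It assumes the trained detector returns a single hand bounding box per image as (x, y, w, h) in pixel coordinates with the origin at the top-left corner; the function names are hypothetical and not taken from the study's code.

```python
# Hypothetical sketch of the unused-space metric, assuming the detector
# returns one hand bounding box per image as (x, y, w, h) in pixel
# coordinates, origin at the top-left. Names are illustrative only.

def unused_space_top(box, image_height):
    """Fraction of the image height above the hand bounding box."""
    x, y, w, h = box
    return y / image_height  # rows above the box are unused air space

def unused_space_total(box, image_width, image_height):
    """Fraction of the total image area outside the hand bounding box."""
    x, y, w, h = box
    return 1.0 - (w * h) / (image_width * image_height)

# Example: on a 2000 x 2500-pixel radiograph with the hand detected at
# (400, 750, 1100, 1500), 30% of the rows above the hand are unused.
print(unused_space_top((400, 750, 1100, 1500), 2500))          # 0.3
print(unused_space_total((400, 750, 1100, 1500), 2000, 2500))  # 0.67
```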
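The statistical comparison could be reproduced along the following lines; this is a hedged sketch using SciPy and scikit-learn with placeholder arrays standing in for the study's actual readings. The abstract does not specify whether a weighted kappa was used, so unweighted Cohen's kappa is assumed here.

```python
# Placeholder data, not the study's results: the AI unused-space metric
# per image and each radiologist's quality category (1-4) per image.
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

ai_metric = [0.45, 0.10, 0.32, 0.60, 0.05]  # unused space at top, per image
reader1 = [2, 4, 3, 1, 4]                   # quality categories, reader 1
reader2 = [2, 4, 2, 1, 4]                   # quality categories, reader 2

# Spearman correlation between the AI metric and a reader's scores;
# a negative rho means more unused space tracks with lower quality.
rho, p = spearmanr(ai_metric, reader1)
print(f"Spearman rho (AI vs. reader 1): {rho:.2f} (p = {p:.3f})")

# Interobserver agreement between the two readers (unweighted kappa
# assumed, since the weighting scheme is not stated in the abstract).
kappa = cohen_kappa_score(reader1, reader2)
print(f"Cohen's kappa (reader 1 vs. reader 2): {kappa:.2f}")
```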