Teleradiology involves the creation of a radiographic image that is then transmitted electronically for remote interpretation. Low-cost teleradiology has been shown to yield a high level of agreement between the original radiograph and the digitized image. However, there has been little investigation of the effect of digitization on the score allocated by a grading scheme. Radiographs of 60 canine elbows were selected, each in three projections (mediolateral flexed, mediolateral neutral, craniocaudal). Each radiograph was photographed at 3 megapixel (3 M) and 6 megapixel (6 M) resolution using a digital camera. The images were placed in groups (radiographs, 3 M, and 6 M) and randomized. Each elbow was graded independently by a radiologist and an orthopedic surgeon using the BVA elbow scoring scheme, with the different image sets interpreted separately. Intra- and interobserver agreement was assessed using kappa analysis. The radiologist had substantial intraobserver agreement for repeated grading of radiographs, and moderate agreement for the other intraobserver comparisons (3 M vs. radiographs, 6 M vs. radiographs, 3 M vs. 6 M). The surgeon had moderate to substantial agreement for the intraobserver comparisons. Interobserver agreement was reduced for all image groups. These results suggest that low-cost teleradiology may allow only moderate accuracy when used for grading schemes, which may limit its use for breed scoring schemes. However, there appears to be an inherent subjectivity in the elbow-grading scheme, evident in both the intra- and interobserver analyses. Therefore, further study of teleradiology using a different scoring model (e.g., hip dysplasia) may be indicated.
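The abstract does not state whether a weighted or unweighted statistic was used; as an illustrative sketch, the standard (unweighted) Cohen's kappa underlying this kind of agreement analysis is

\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\]

where \(p_o\) is the observed proportion of elbows on which the two gradings agree and \(p_e\) is the proportion of agreement expected by chance from the marginal grade frequencies. The descriptors used above ("moderate", "substantial") are commonly assigned from the Landis and Koch bands, with 0.41 to 0.60 interpreted as moderate and 0.61 to 0.80 as substantial agreement.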