Background: The Oxford Knee Score (OKS); Oxford Hip Score (OHS); Knee injury and Osteoarthritis Outcome Score, Joint Replacement (KOOS JR); and Hip disability and Osteoarthritis Outcome Score, Joint Replacement (HOOS JR) are well-validated and widely used short-form patient-reported outcome measures (PROMs) for assessing outcomes after total knee arthroplasty (TKA) and total hip arthroplasty (THA). We are not aware of any existing crosswalks for converting scores between these PROMs. We aimed to develop and validate crosswalks that permit the comparison of scores between studies using different PROMs and the pooling of results for meta-analyses. Methods: We retrospectively analyzed scores from patients (486 in the knee cohort and 340 in the hip cohort) in the Syracuse Orthopedic Specialists Joint Registry who had completed the appropriate PROMs (OKS and KOOS JR in the knee cohort; OHS and HOOS JR in the hip cohort) as the standard of care before undergoing primary TKA or unicompartmental knee arthroplasty between January 9, 2016, and June 19, 2017, or primary THA or hip resurfacing between November 29, 2010, and October 30, 2017, or when returning for postoperative care. Using the equipercentile equating method, we created 4 crosswalks: OKS to KOOS JR, KOOS JR to OKS, OHS to HOOS JR, and HOOS JR to OHS. To assess validity, Spearman coefficients between actual and crosswalk-derived scores were calculated using bootstrapping, and the means of the actual and derived scores were compared. Results: There were minimal differences between the means of the actual and crosswalk-derived scores. Spearman coefficients between the actual and derived scores, calculated with bootstrapping, were strong and positive for both knee arthroplasty crosswalks (0.888 to 0.889; 95% confidence interval [CI], 0.887 to 0.891) and both hip arthroplasty crosswalks (0.916 to 0.918; 95% CI, 0.914 to 0.919). Conclusions: We created 4 crosswalks that allow conversion of OKS and OHS scores to KOOS JR and HOOS JR scores, respectively, and vice versa. These crosswalks allow harmonization of PROM assessment regardless of which short form is used, which may facilitate multicenter collaboration or allow sites to switch PROMs without losing comparability with historical data. Level of Evidence: Level III. See Instructions for Authors for a complete description of levels of evidence.
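As a rough illustration of the equipercentile equating and bootstrapped Spearman validation described in this abstract, the sketch below maps each possible source score to the destination score with the matching percentile rank and then bootstraps the correlation between actual and derived scores. It is a simplified, unsmoothed variant under stated assumptions; the array names (oks, koos_jr), the midpoint percentile-rank convention, and the absence of smoothing are illustrative choices, not the registry's actual pipeline.

```python
# Minimal sketch: equipercentile equating crosswalk + bootstrapped Spearman check.
import numpy as np
from scipy import stats

def percentile_rank(scores, x):
    """Percentile rank of score x within a sample (midpoint convention)."""
    scores = np.asarray(scores)
    return 100.0 * (np.mean(scores < x) + 0.5 * np.mean(scores == x))

def equipercentile_crosswalk(src_scores, dst_scores, src_range):
    """Map each possible source score to the destination score whose
    percentile rank matches (simple, unsmoothed variant)."""
    dst_vals = np.unique(np.asarray(dst_scores, float))
    dst_pr = np.array([percentile_rank(dst_scores, v) for v in dst_vals])
    table = {}
    for s in src_range:
        pr = percentile_rank(src_scores, s)
        # Interpolate the destination score at this percentile rank.
        table[s] = float(np.interp(pr, dst_pr, dst_vals))
    return table

def bootstrap_spearman(actual, derived, n_boot=1000, seed=0):
    """Spearman correlation between actual and crosswalk-derived scores,
    with a 95% bootstrap percentile interval."""
    rng = np.random.default_rng(seed)
    actual, derived = np.asarray(actual), np.asarray(derived)
    n = len(actual)
    rhos = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        rho_b, _ = stats.spearmanr(actual[idx], derived[idx])
        rhos.append(rho_b)
    rho, _ = stats.spearmanr(actual, derived)
    return rho, np.percentile(rhos, [2.5, 97.5])

# Example usage with hypothetical paired OKS (0-48) and KOOS JR (0-100) arrays:
# table = equipercentile_crosswalk(oks, koos_jr, src_range=range(0, 49))
# derived = np.array([table[s] for s in oks])
# rho, ci = bootstrap_spearman(koos_jr, derived)
```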
Background: Patient-reported outcome measures (PROMs) are essential tools for assessing health status and treatment outcomes in orthopaedic care, but their use can burden patients with lengthy and cumbersome questionnaires. Predictive models that use machine learning, known as computerized adaptive testing (CAT), offer a potential solution. The purpose of this study was to evaluate the ability of CAT to improve the efficiency of the Veterans RAND 12 Item Health Survey (VR-12) by decreasing the question burden while maintaining the accuracy of the outcome score. Methods: A previously developed CAT model was applied to the responses of 19,523 patients who had completed a full VR-12 survey while presenting to 1 of 5 subspecialty orthopaedic clinics, yielding both full-survey and CAT-model physical component summary (PCS) and mental component summary (MCS) scores. The accuracy of the CAT-model scores relative to the full scores was assessed by comparing means and standard deviations, calculating Pearson and intraclass correlation coefficients, plotting the frequency distributions of the 2 score sets and of the score differences, and performing a Bland-Altman assessment of scoring patterns. Results: The CAT model required 4 fewer questions to be answered by each subject (a 33% decrease in question burden). The mean PCS was 1.3 points lower with the CAT model than with the full VR-12 (41.5 ± 11.0 versus 42.8 ± 10.4), and the mean MCS was 0.3 points higher (57.3 ± 9.4 versus 57.0 ± 9.6). The Pearson correlation coefficients were 0.97 for the PCS and 0.98 for the MCS, and the intraclass correlation coefficients were 0.96 and 0.97, respectively. The frequency distributions of the CAT and full scores overlapped substantially for both the PCS and the MCS, and the difference between the CAT and full scores was less than the minimum clinically important difference (MCID) in >95% of cases for both. Conclusions: Applying CAT to the VR-12 survey lessened the response burden for patients with a negligible effect on score integrity.
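The agreement checks described in this abstract (means, Pearson and intraclass correlations, and the share of cases within an MCID) could be computed along the lines of the sketch below. The array names and the 5-point MCID are placeholders, not the study's values, and the ICC formula shown is the standard two-way random-effects, absolute-agreement, single-measure ICC(2,1), which may differ from the variant the authors used.

```python
# Minimal sketch: agreement between full-survey and CAT-model scores.
import numpy as np
from scipy import stats

def icc_2_1(full, cat):
    """ICC(2,1): two-way random effects, absolute agreement, single measure."""
    data = np.column_stack([full, cat]).astype(float)  # n subjects x 2 scores
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * np.sum((data.mean(axis=1) - grand) ** 2)   # between subjects
    ss_cols = n * np.sum((data.mean(axis=0) - grand) ** 2)   # between methods
    ss_err = np.sum((data - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

def agreement_summary(full, cat, mcid=5.0):
    """Means, Pearson r, ICC(2,1), and fraction of differences below the MCID."""
    full, cat = np.asarray(full, float), np.asarray(cat, float)
    diff = cat - full
    r, _ = stats.pearsonr(full, cat)
    return {
        "mean_full": full.mean(), "mean_cat": cat.mean(),
        "mean_difference": diff.mean(),
        "pearson_r": r,
        "icc_2_1": icc_2_1(full, cat),
        "fraction_within_mcid": float(np.mean(np.abs(diff) < mcid)),
    }

# Example: agreement_summary(pcs_full, pcs_cat, mcid=5.0)
```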
Background: Patient-reported outcome measures enable quantitative, patient-centric assessment of orthopedic interventions; however, increased use of these forms imposes a burden on patients and practices. We examined the utility of a computerized adaptive testing (CAT) method to reduce the number of questions on the American Shoulder and Elbow Surgeons (ASES) instrument. Methods: A previously developed ASES CAT system was applied to the responses of 2763 patients who underwent shoulder evaluation and treatment and had answered all questions on the full ASES instrument. Analyses assessing how accurately the CAT score replicated the full-form score included the means and standard deviations of the 2 sets of scores, frequency distributions of the scores and of the score differences, Pearson and intraclass correlation coefficients, and a Bland-Altman assessment of patterns in the score differences. Results: By tailoring questions according to prior responses, the CAT reduced the question burden by 40%. The mean difference between the CAT and full ASES scores was −0.14 point; the scores were within 5 points of each other in 95% of cases (a 12-point difference is considered the threshold for clinical significance), and the differences were clustered around zero. The correlation coefficients were 0.99, and the frequency distributions of the CAT and full ASES scores were nearly identical. The differences between scores were independent of the overall score, and no significant bias in the CAT scores was found in either the positive or the negative direction. Conclusion: The ASES CAT system lessens respondent burden with a negligible effect on score integrity. No institutional review board approval was required.
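The Bland-Altman-style checks mentioned here (whether the differences cluster around zero and whether they depend on the overall score) could be approached as in the sketch below: a one-sample t-test of the differences against zero for systematic bias and a regression of the differences on the pairwise means for proportional bias. The array names are assumptions, and this is only one common way to operationalize such a check, not necessarily the authors' exact analysis.

```python
# Minimal sketch: Bland-Altman bias checks for CAT vs. full-form scores.
import numpy as np
from scipy import stats

def bland_altman_bias(full, cat):
    full, cat = np.asarray(full, float), np.asarray(cat, float)
    diff = cat - full
    mean = (cat + full) / 2.0
    # Fixed bias: is the mean difference distinguishable from zero?
    _, fixed_p = stats.ttest_1samp(diff, 0.0)
    # Proportional bias: does the difference drift with the overall score?
    slope, intercept, r, slope_p, se = stats.linregress(mean, diff)
    sd = diff.std(ddof=1)
    return {
        "mean_difference": diff.mean(),
        "limits_of_agreement": (diff.mean() - 1.96 * sd, diff.mean() + 1.96 * sd),
        "fixed_bias_p": fixed_p,
        "slope_vs_mean": slope,
        "proportional_bias_p": slope_p,
    }

# Example: bland_altman_bias(ases_full, ases_cat)
```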
Background: Patient-reported outcome measures are an increasingly important tool for assessing the impact of the treatments that orthopedic surgeons render. Despite their importance, completing them can burden patients. We examined the validity and utility of a computerized adaptive testing (CAT) method to reduce the number of questions on the Foot and Ankle Ability Measure (FAAM), a validated anatomy-specific outcome measure. Methods: A previously developed FAAM CAT system was applied to the responses of patients undergoing foot and ankle evaluation and treatment over a 3-year period (2017-2019). A total of 15,902 responses for the Activities of Daily Living (ADL) subscale and 14,344 responses for the Sports subscale were analyzed, and the accuracy with which the CAT replicated the full-form score was assessed. Results: The CAT system required 11 questions to be answered for the ADL subscale in 85.1% of cases (range, 11-12) and 6 questions for the Sports subscale in 66.4% of cases (range, 5-6). The mean difference between the full FAAM ADL subscale and the CAT was 0.63 point, and the mean difference between the FAAM Sports subscale and the CAT was 0.65 point. Conclusion: The FAAM CAT reduced the number of questions a patient would need to answer by nearly 50% while still providing a valid outcome score. FAAM CAT scores can therefore be compared directly with previously obtained full FAAM scores while still providing a foot/ankle-specific measure, which previously reported CAT systems cannot do. Level of Evidence: Level IV, case series.
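To make concrete how a CAT system can cut the question count while preserving the score, the sketch below shows a generic adaptive loop: a two-parameter logistic (2PL) item response model, maximum-information item selection, a grid-based ability estimate, and a standard-error stopping rule. The item parameters, stopping threshold, and simulated respondent are invented for illustration; this is not the FAAM CAT model used in the study.

```python
# Minimal sketch of a generic CAT loop (2PL IRT, max-information selection).
import numpy as np

def p_endorse(theta, a, b):
    """2PL probability of endorsing an item at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at theta."""
    p = p_endorse(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def run_cat(respond, a, b, se_stop=0.4, max_items=None):
    """Administer items adaptively.

    respond(i) -> 0/1 response to item i (a patient or a simulator).
    a, b       -> arrays of 2PL discrimination and difficulty parameters.
    Returns (theta_estimate, administered_item_indices).
    """
    grid = np.linspace(-4, 4, 161)        # candidate theta values
    log_lik = np.zeros_like(grid)         # running log-likelihood over the grid
    remaining = list(range(len(a)))
    administered = []
    theta = 0.0
    max_items = max_items or len(a)
    while remaining and len(administered) < max_items:
        # Ask the unanswered item that is most informative at the current theta.
        i = max(remaining, key=lambda j: item_information(theta, a[j], b[j]))
        remaining.remove(i)
        administered.append(i)
        x = respond(i)
        p = p_endorse(grid, a[i], b[i])
        log_lik += x * np.log(p) + (1 - x) * np.log(1.0 - p)
        # Re-estimate theta on the grid and stop once its standard error is small.
        theta = grid[np.argmax(log_lik)]
        info = sum(item_information(theta, a[j], b[j]) for j in administered)
        if info > 0 and 1.0 / np.sqrt(info) < se_stop:
            break
    return theta, administered

# Example with simulated item parameters and a simulated respondent:
rng = np.random.default_rng(0)
a = rng.uniform(1.0, 2.5, 21)             # discriminations for 21 items
b = rng.normal(0.0, 1.0, 21)              # difficulties
true_theta = 0.8
respond = lambda i: int(rng.random() < p_endorse(true_theta, a[i], b[i]))
theta_hat, asked = run_cat(respond, a, b)
print(f"asked {len(asked)} of {len(a)} items, theta estimate {theta_hat:.2f}")
```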