Objective. Although use of the American College of Rheumatology 20% improvement criteria (ACR20) has standardized response measurement in rheumatoid arthritis (RA) trials, the ACR20 has been criticized as less sensitive to change than continuous measures of response, and its threshold for response (≥20% improvement) is considered low. Our goal was to redefine response in RA in a manner that 1) corresponds to a clinical impression of response (clinical validity), 2) maximizes sensitivity to change, and 3) allows for calculation of the ACR20 to continue standardization of reporting.

Methods. We examined several ways of defining response, including dichotomous definitions (patient improved versus not improved), ordinal definitions (degree of response scored on an ordinal scale), disease activity indexes, continuous definitions, and hybrids of continuous and ordinal measures. Candidate definitions included the ACR20, ACR50, ACR70, the Disease Activity Score, the Simplified Disease Activity Index, the ACR-N, the nACR, and the European League Against Rheumatism (EULAR) response, as well as variations on these approaches. To test clinical validity, we administered a survey based on patients from a previous trial who had various levels of improvement and asked rheumatologists whether and by how much these patients improved. To determine sensitivity to change, we collected data from 11 large multicenter trials of disease-modifying antirheumatic drugs (DMARDs) in RA comprising 3,665 patients (7 anti-tumor necrosis factor α arms, 4 conventional DMARD arms, 2 biologic arms) and ranked candidate definitions of response by their average P value across trials for distinguishing active treatment from placebo, or combination therapy from single-drug therapy.

Results. All 135 tested measures had clinical validity based on survey responses, although dichotomous measures did not capture the full range of responses (e.g., the ACR20 did not capture the additional clinical improvement between the ACR20 and the ACR50). In the trial analyses, continuous measures had the best sensitivity to change. Among the best scoring measures was a hybrid measure that retained information on the ACR20, ACR50, and ACR70 and combined it with the mean percent improvement in core set measures. When comparing 2 treatments, this hybrid measure had an average P value much lower than that for the ACR20. If a trial using the ACR20 needed 200 patients to have 80% power (2-sided α = 0.05) to detect a difference between treatments, the same trial would need 108 patients if the hybrid measure were used.

Conclusion. We suggest use of a new hybrid measure of RA response that maximizes sensitivity to change, correlates well with rheumatologists' impressions of improvement, and preserves the ACR20.
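
The hybrid measure described in the Results overlays the ACR20/50/70 categories on the mean percent improvement in the core set measures. The Python sketch below illustrates roughly how such a score might be computed: the ACR category is derived from the standard core set rule (at least N% improvement in both joint counts plus at least 3 of the 5 remaining measures), and the mean improvement is then constrained to that category's band. The band boundaries, variable names, and edge-case handling (zero baselines, capping worsening at -100%) are assumptions for illustration, not the committee's exact specification.

```python
# Illustrative sketch only; the combination rule and edge-case handling are assumed.
from statistics import mean

# Seven ACR core set measures: joint counts first, then the five remaining measures.
CORE_SET = ["tjc", "sjc", "pt_global", "md_global", "pain", "haq", "acute_phase"]

def pct_improvement(baseline: float, current: float) -> float:
    """Percent improvement from baseline; positive values indicate improvement."""
    if baseline == 0:
        return 0.0  # assumption: treat a zero baseline as no measurable change
    return 100.0 * (baseline - current) / baseline

def acr_level(baseline: dict, current: dict) -> int:
    """Return the highest ACR improvement level met (70, 50, 20, or 0)."""
    imp = {m: pct_improvement(baseline[m], current[m]) for m in CORE_SET}
    for level in (70, 50, 20):
        joints_ok = imp["tjc"] >= level and imp["sjc"] >= level
        others_ok = sum(imp[m] >= level for m in CORE_SET[2:]) >= 3
        if joints_ok and others_ok:
            return level
    return 0

def hybrid_score(baseline: dict, current: dict) -> float:
    """Mean core-set improvement, constrained to the band of the ACR level met
    (an assumed combination rule for illustration)."""
    mean_imp = mean(pct_improvement(baseline[m], current[m]) for m in CORE_SET)
    bands = {70: (70.0, 100.0), 50: (50.0, 69.99), 20: (20.0, 49.99), 0: (-100.0, 19.99)}
    lo, hi = bands[acr_level(baseline, current)]
    return max(lo, min(hi, mean_imp))
```

Under this assumed rule, the hybrid score remains a continuous quantity (supporting sensitivity to change) while the ACR20/50/70 categories can still be read off from the band the score falls in, preserving standardized reporting.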