IMPORTANCE Harms and benefits of opioids for chronic noncancer pain remain unclear.

OBJECTIVE To systematically review randomized clinical trials (RCTs) of opioids for chronic noncancer pain.

DATA SOURCES AND STUDY SELECTION The databases of CENTRAL, CINAHL, EMBASE, MEDLINE, AMED, and PsycINFO were searched from inception to April 2018 for RCTs of opioids for chronic noncancer pain vs any nonopioid control.

DATA EXTRACTION AND SYNTHESIS Paired reviewers independently extracted data. The analyses used random-effects models and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to rate the quality of the evidence.

MAIN OUTCOMES AND MEASURES The primary outcomes were pain intensity (score range, 0-10 cm on a visual analog scale for pain; lower is better and the minimally important difference [MID] is 1 cm), physical functioning (score range, 0-100 points on the 36-item Short Form physical component score [SF-36 PCS]; higher is better and the MID is 5 points), and incidence of vomiting.

RESULTS Ninety-six RCTs including 26 169 participants (61% female; median age, 58 years [interquartile range, 51-61 years]) were included. Of the included studies, there were 25 trials of neuropathic pain, 32 trials of nociceptive pain, 33 trials of central sensitization (pain present in the absence of tissue damage), and 6 trials of mixed types of pain. Compared with placebo, opioid use was associated with reduced pain (weighted mean difference [WMD], −0.69 cm [95% CI, −0.82 to −0.56 cm] on a 10-cm visual analog scale for pain; modeled risk difference for achieving the MID, 11.9% [95% CI, 9.7% to 14.1%]), improved physical functioning (WMD, 2.04 points [95% CI, 1.41 to 2.68 points] on the 100-point SF-36 PCS; modeled risk difference for achieving the MID, 8.5% [95% CI, 5.9% to 11.2%]), and increased vomiting (5.9% with opioids vs 2.3% with placebo for trials that excluded patients with adverse events during a run-in period). Low- to moderate-quality evidence suggested similar associations of opioids with improvements in pain and physical functioning compared with nonsteroidal anti-inflammatory drugs (pain: WMD, −0.60 cm [95% CI, −1.54 to 0.34 cm]; physical functioning: WMD, −0.90 points [95% CI, −2.69 to 0.89 points]), tricyclic antidepressants (pain: WMD, −0.13 cm [95% CI, −0.99 to 0.74 cm]; physical functioning: WMD, −5.31 points [95% CI, −13.77 to 3.14 points]), and anticonvulsants (pain: WMD, −0.90 cm [95% CI, −1.65 to −0.14 cm]; physical functioning: WMD, 0.45 points [95% CI, −5.77 to 6.66 points]).

CONCLUSIONS AND RELEVANCE In this meta-analysis of RCTs of patients with chronic noncancer pain, evidence from high-quality studies showed that opioid use was associated with statistically significant but small improvements in pain and physical functioning, and increased risk of vomiting, compared with placebo. Comparisons of opioids with nonopioid alternatives suggested that the benefit for pain and functioning may be similar, although the evidence was from studies of only low to moderate quality.
Well-conducted randomized clinical trials (RCTs) are the gold standard for evaluating the safety and efficacy of medical therapeutics. Yet most often, the single group of individuals who conducted the trial is the only one with access to the raw data, conducts the analysis, and publishes the study results. This limited access does not typically allow others to replicate the trial findings. Given the time and expense required to conduct an RCT, it is often unlikely that others will independently repeat a similar experiment. Thus, the scientific community and the public often accept the results produced and published by the original research team without an opportunity for reanalysis. Increasingly, however, opinions and empirical data are challenging the assumption that the analysis of a clinical trial is straightforward and that analysis by any other group would obtain the same results. [1][2][3] In this issue of JAMA, Ebrahim et al 4 report their findings based on a rigorous search of previously published reanalyses of RCTs. Their first surprising and discomforting finding was just how infrequently data reanalysis has occurred in medical research. Searching the literature from 1966 to the present, the authors found only 37 reports that met their criteria as an RCT reanalysis. Of these few reanalyses performed, the majority (84%) had overlapping authors with the original report. Thus, reanalyses are not only rare, but the majority that were reported were not fully independent of the original research group. Despite this overlap, Ebrahim et al report that about half of the reanalyses differed in statistical or analytic approaches, a third differed in the definitions or measurements of outcomes, and, most important, a third led to interpretations and conclusions different from those in the original article.
While the definition of what constituted different trial analyses, study end points, findings, and interpretations is subjective, the authors' general conclusions were consistent with an emerging literature indicating that RCT reanalysis can yield results and conclusions different from those originally published. Even when the original investigators present evidence in different venues, it is not always consistent. For example, there is evidence from trials that data presented to the US Food and Drug Administration (FDA) may differ in important ways from those originally presented at scientific sessions or published in medical journals. Rising et al 5 assessed clinical trial information provided to the FDA and reported a 9% discordance between the conclusions in the report to the FDA and those in the published article. Not unexpectedly, all were in the direction favoring the drug. Another example is discordance between what is reported in ClinicalTrials.gov and what is published in journal articles. Hartung et al 2 showed that in a random sample of phase 3 and 4 trials, in 15% the primary end point in the main article differed from the primary end point the trialists reported in ClinicalTrials.gov. Moreover, 22% re...
Very few studies assessing recovery expectations use a psychometrically valid measure. Current evidence suggests that patients with lower recovery expectations are less likely to resolve their disability claim or return to work than patients with higher recovery expectations. Further validation of existing measures for assessing patient recovery expectations, or development of a new measure that addresses the limitations of existing ones, is required.