Editorial

Pure or Tarnished: Are Systematic Reviews Blind or Biased?

IT SEEMS that the current mindset of the government, payors, and various watchdog groups is to hold the medical and scientific communities to practices and treatments that are "evidence based." This concept is certainly laudable and is definitely not new. We live in an age of rapidly changing technologies, with vast amounts of information being offered via the scientific literature, the lay press, infomercials, the Internet, and various other sources. New therapies, techniques, and technologies are touted, and patients are enticed by the purported benefits. The payors and government are focusing on outcomes, quality, and the ability of treatments to add value. These efforts should be embraced and nurtured by all of us, particularly since the cost and complexity of care continue to rise and since therapies often carry a significant risk of morbidity and mortality. Indeed, the basis for the randomized controlled trial is to compare alternative modalities, ideally in a double-blind fashion, and thereby to determine the safety and efficacy of a particular strategy. We would thus attempt to determine whether A is "better" than B, and whether C is an effective alternative.

We assume that scientific trials, the interpretation of the data, and their ultimate publication are factual and without bias. However, it is increasingly clear that this assumption is not always correct. This writer has brought these issues to this column before.1-4 The issues of credibility and of editorial and scientific bias merit our careful consideration and attention as we ourselves produce, read, and interpret the literature or offer various treatments to our patients. The task is further compounded by the fact that there may well be several seemingly beneficial treatment strategies for a particular problem. Worse still is the conflicting nature of the literature itself. Studies that report positive results are not infrequently trumped by studies that demonstrate the diametric opposite. Ioannidis5 found that only 44% of highly cited original clinical research studies were replicated, while 16% were contradicted and an additional 16% had found effects that were stronger than those of subsequent studies. His further analysis demonstrated that 5 of 6 (83%) highly cited nonrandomized studies were contradicted or had found stronger effects, compared with 9 of 39 (23%) large randomized controlled trials.5 Ioannidis concluded that contradiction and initially stronger effects are not unusual in highly cited research on clinical interventions and their outcomes.5

We are driven to consider centralized systematic reviews conducted and published by reputable agencies and groups as our resource for complete, accurate, transparent, and unbiased analysis of the data. These groups cull the world literature for Type 1 studies and other high-quality data. They then construct their recommendations based on the results of their defined criteria. Th...