The selection of appropriate outcomes is crucial when designing clinical trials in order to compare the effects of different interventions directly. For the findings to influence policy and practice, the outcomes need to be relevant and important to key stakeholders including patients and the public, health care professionals and others making decisions about health care. It is now widely acknowledged that insufficient attention has been paid to the choice of outcomes measured in clinical trials. Researchers are increasingly addressing this issue through the development and use of a core outcome set, an agreed standardised collection of outcomes which should be measured and reported, as a minimum, in all trials for a specific clinical area. Accumulating work in this area has identified the need for guidance on the development, implementation, evaluation and updating of core outcome sets. This Handbook, developed by the COMET Initiative, brings together current thinking and methodological research regarding those issues. We recommend a four-step process to develop a core outcome set. The aim is to update the contents of the Handbook as further research is identified.
Pilot studies play an important role in health research, but they can be misused, mistreated and misrepresented. In this paper we focus on pilot studies that are used specifically to plan a randomized controlled trial (RCT). Citing examples from the literature, we provide a methodological framework in which to work, and discuss reasons why a pilot study might be undertaken. A well-conducted pilot study, giving a clear list of aims and objectives within a formal framework, will encourage methodological rigour, ensure that the work is scientifically valid and publishable, and lead to higher-quality RCTs. It will also safeguard against pilot studies being conducted simply because of small numbers of available patients.
The selection of appropriate outcomes or domains is crucial when designing clinical trials in order to compare directly the effects of different interventions in ways that minimize bias. If the findings are to influence policy and practice then the chosen outcomes need to be relevant and important to key stakeholders including patients and the public, health care professionals and others making decisions about health care. There is a growing recognition that insufficient attention has been paid to the outcomes measured in clinical trials. These issues could be addressed through the development and use of an agreed standardized collection of outcomes, known as a core outcome set, which should be measured and reported, as a minimum, in all trials for a specific clinical area. Accumulating work in this area has identified the need for general guidance on the development of core outcome sets. Key issues to consider in the development of a core outcome set include its scope, the stakeholder groups to involve, choice of consensus method and the achievement of a consensus.
Clinical research is necessary for an effective response to an emerging infectious disease outbreak. However, research efforts are often hastily organised and conducted using various research tools, with the result that pooling data across studies is challenging. In response to the needs of the rapidly evolving COVID-19 outbreak, the Clinical Characterisation and Management Working Group of the WHO Research and Development Blueprint programme, the International Forum for Acute Care Trialists, and the International Severe Acute Respiratory and Emerging Infections Consortium have developed a minimum set of common outcome measures for studies of COVID-19. This set includes three elements: a measure of viral burden (quantitative PCR or cycle threshold), a measure of patient survival (mortality at hospital discharge or at 60 days), and a measure of patient progression through the health-care system by use of the WHO Clinical Progression Scale, which reflects patient trajectory and resource use over the course of clinical illness. We urge investigators to include these key data elements in ongoing and future studies to expedite the pooling of data during this immediate threat, and to hone a tool for future needs.
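As a rough illustration of how such a minimum common data set might be captured in a study database, the sketch below defines one possible record structure in Python. The class name, field names, and helper layout are assumptions made for illustration; they are not a specification issued by the working group.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record for the three minimum common outcome measures
# described above; field names and types are illustrative assumptions,
# not a data standard defined by the WHO working group.
@dataclass
class Covid19CoreOutcomes:
    patient_id: str
    # Viral burden: quantitative PCR result expressed as a cycle
    # threshold (Ct) value, where available.
    pcr_cycle_threshold: Optional[float]
    # Survival: vital status at hospital discharge and at day 60.
    alive_at_discharge: Optional[bool]
    alive_at_day_60: Optional[bool]
    # Progression: WHO Clinical Progression Scale score (ordinal,
    # 0 = uninfected through 10 = dead) on a given study day.
    who_progression_scale: Optional[int]
    assessment_day: Optional[int] = None

# Example: a hospitalised patient assessed on study day 7 (invented values).
record = Covid19CoreOutcomes(
    patient_id="site01-0042",
    pcr_cycle_threshold=24.5,
    alive_at_discharge=True,
    alive_at_day_60=True,
    who_progression_scale=5,
    assessment_day=7,
)
print(record)
```

A flat record like this is only a sketch of the idea that each study collects the same few fields, so that results can later be pooled across cohorts without mapping between incompatible instruments.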
Background: The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias has been recognised as a potential threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. Until recently, outcome reporting bias has received less attention. Methodology/Principal Findings: We review and summarise the evidence from a series of cohort studies that have assessed study publication bias and outcome reporting bias in randomised controlled trials. Sixteen studies were eligible, of which only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Eleven of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40–62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake a meta-analysis because of the differences between studies. Conclusions: Recent work provides direct empirical evidence for the existence of study publication bias and outcome reporting bias. There is strong evidence of an association between significant results and publication; studies that report positive or significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of the problems of both types of bias, and efforts should be concentrated on improving the reporting of trials.
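To make the reported effect sizes concrete, the short sketch below shows how an odds ratio for full reporting of statistically significant versus non-significant outcomes would be computed from a 2×2 table. The counts are invented for illustration only and do not come from the reviewed cohort studies.

```python
# Hypothetical 2x2 table (invented counts, for illustration only).
# Rows: outcome statistically significant vs not; columns: fully reported vs not.
sig_reported, sig_not_reported = 80, 20
nonsig_reported, nonsig_not_reported = 50, 50

# Odds of full reporting within each group.
odds_sig = sig_reported / sig_not_reported            # 80/20 = 4.0
odds_nonsig = nonsig_reported / nonsig_not_reported   # 50/50 = 1.0

# Odds ratio: how much higher the odds of full reporting are for
# statistically significant outcomes than for non-significant ones.
odds_ratio = odds_sig / odds_nonsig
print(f"Odds ratio = {odds_ratio:.1f}")  # 4.0, within the 2.2-4.7 range quoted above
```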