Background
Despite the problem of inadequate recruitment to randomised trials, there is little evidence to guide researchers on how people are effectively recruited to take part in trials. The PRioRiTy study aimed to identify and prioritise important unanswered trial recruitment questions for research. The PRioRiTy study - Priority Setting Partnership (PSP) included members of the public who had been approached to take part in a randomised trial or who had represented participants on randomised trial steering committees; health professionals and research staff with experience of recruiting to randomised trials; people who had designed, conducted, analysed or reported on randomised trials; and people with experience of randomised trial methodology.

Methods
This partnership was aided by the James Lind Alliance and involved eight stages: (i) identifying a unique, relevant prioritisation area within trial methodology; (ii) establishing a steering group; (iii) identifying and engaging with partners and stakeholders; (iv) formulating an initial list of uncertainties; (v) collating the uncertainties into research questions; (vi) confirming that the questions for research are a current recruitment challenge; (vii) shortlisting questions; and (viii) final prioritisation through a face-to-face workshop.

Results
A total of 790 survey respondents yielded 1693 open-text answers to 6 questions, from which 1880 potential questions for research were identified. After merging duplicates, the number of questions was reduced to 496. Questions were combined further, and those submitted by fewer than 15 people and/or fewer than 6 of the 7 stakeholder groups were excluded from the next round of prioritisation, leaving 31 unique questions for research. All 31 questions were confirmed as unanswered after checking relevant, up-to-date research evidence. The 10 highest-priority questions were ranked at a face-to-face workshop. The number 1 ranked question was “How can randomised trials become part of routine care and best utilise current clinical care pathways?” The top 10 research questions can be viewed at www.priorityresearch.ie.

Conclusion
The prioritised questions call for a collective focus on normalising trials as part of clinical care, enhancing communication, addressing barriers, enablers and motivators around participation, and exploring greater public involvement in the research process.
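The shortlisting rule in the Results above (exclude questions submitted by fewer than 15 people and/or fewer than 6 of the 7 stakeholder groups) can be made concrete with a minimal sketch. This is illustrative only: the Question structure and field names are hypothetical, and reading "and/or" as "exclude if either threshold is missed" is an assumption, not part of the published PRioRiTy methods.

```python
# Minimal sketch of the shortlisting filter described in the Results above.
# Hypothetical structure: each candidate question carries the number of
# respondents who submitted it and the set of stakeholder groups it came from.
from dataclasses import dataclass, field

MIN_RESPONDENTS = 15  # "fewer than 15 people" -> excluded
MIN_GROUPS = 6        # "fewer than 6 of the 7 stakeholder groups" -> excluded

@dataclass
class Question:
    text: str
    respondents: int
    groups: set[str] = field(default_factory=set)

def shortlist(questions: list[Question]) -> list[Question]:
    """Keep questions that clear both thresholds; 'and/or' in the abstract
    is read here as exclusion when either threshold is missed."""
    return [q for q in questions
            if q.respondents >= MIN_RESPONDENTS and len(q.groups) >= MIN_GROUPS]
```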
Background
This report reviews approaches and tools for measuring the impact of research programmes, building on, and extending, a 2007 review.

Objectives
(1) To identify the range of theoretical models and empirical approaches for measuring the impact of health research programmes; (2) to develop a taxonomy of models and approaches; (3) to summarise the evidence on the application and use of these models; and (4) to evaluate the different options for the Health Technology Assessment (HTA) programme.

Data sources
We searched databases including Ovid MEDLINE, EMBASE, Cumulative Index to Nursing and Allied Health Literature and The Cochrane Library from January 2005 to August 2014.

Review methods
This narrative systematic literature review comprised an update, an extension and an analysis/discussion. We systematically searched eight databases, supplemented by personal knowledge, from August 2014 through to March 2015.

Results
The literature on impact assessment has expanded considerably. The Payback Framework, with adaptations, remains the most widely used approach. It draws on different philosophical traditions, enhancing an underlying logic model with an interpretative case study element and attention to context. Besides the logic model, other ideal-type approaches included constructionist, realist, critical and performative approaches. Most models in practice drew pragmatically on elements of several ideal types. Monetisation of impact, an increasingly popular approach, shows a high return from research but relies heavily on assumptions about the extent to which health gains depend on research. Despite usually requiring systematic reviews before funding trials, the HTA programme does not routinely examine the impact of those trials on subsequent systematic reviews. The York/Patient-Centered Outcomes Research Institute and the Grading of Recommendations Assessment, Development and Evaluation toolkits provide ways of assessing such impact, but they need to be evaluated. The literature, as reviewed here, provides very few instances of a randomised trial playing a major role in stopping the use of a new technology. The few trials funded by the HTA programme that may have played such a role were outliers.

Discussion
The findings of this review support the continued use of the Payback Framework by the HTA programme. Changes in the structure of the NHS, the development of NHS England and changes in the National Institute for Health and Care Excellence’s remit pose new challenges for identifying and meeting current and future research needs. Future assessments of the impact of the HTA programme will have to take account of wider changes, especially as the Research Excellence Framework (REF), which assesses the quality of universities’ research, seems likely to continue to rely on case studies to measure impact. The HTA programme should consider how the format and selection of case studies might be improved to aid more systematic assessment. The selection of case studies, in the REF but also more generally, tends to be biased towards high-impact rather than low-impact stories; experience from other industries indicates that much can be learnt from the latter. The adoption of researchfish® (researchfish Ltd, Cambridge, UK) by most major UK research funders has implications for future assessments of impact. Although the routine capture of indexed research publications has merit, the degree to which researchfish will succeed in collecting other, non-indexed outputs and activities remains to be established.

Limitations
There were limits to how far we could address the challenges posed by extending the focus beyond that of the 2007 review, and well beyond a narrow focus on the HTA programme alone.

Conclusions
Research funders can benefit from continuing to monitor and evaluate the impacts of the studies they fund. They should also review the contribution of case studies and expand work on linking trials to meta-analyses and to guidelines.

Funding
The National Institute for Health Research HTA programme.
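The Results above note that monetisation of impact "relies heavily on assumptions about the extent to which health gains depend on research". A minimal sketch of that sensitivity, using entirely invented figures, shows how the estimated return scales directly with the assumed attribution fraction:

```python
# Hypothetical illustration of how a monetised return-on-research estimate
# depends on the fraction of health gain attributed to research.
# All figures are invented for illustration; nothing here comes from the review.
def return_on_research(health_gain_value: float,
                       research_spend: float,
                       attribution: float) -> float:
    """Net return per unit of research spend, given the share of the
    monetised health gain assumed to be attributable to research."""
    return (health_gain_value * attribution - research_spend) / research_spend

SPEND = 100.0   # hypothetical research spend
GAIN = 1_000.0  # hypothetical monetised health gain

for attribution in (0.10, 0.25, 0.50):
    print(f"attribution {attribution:.0%}: "
          f"return {return_on_research(GAIN, SPEND, attribution):.1f}x")
# attribution 10%: return 0.0x
# attribution 25%: return 1.5x
# attribution 50%: return 4.0x
```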
Objectives
To assess the value of pilot and feasibility studies to randomised controlled trials (RCTs) funded by the National Institute for Health Research (NIHR) Health Technology Assessment (HTA) programme, and to explore the methodological components of pilot/feasibility studies and how they inform full RCTs.

Study design
Cross-sectional study.

Setting
Both groups comprised NIHR HTA programme funded studies with decision dates between 1 January 2010 and 31 December 2014. Group 1: stand-alone pilot/feasibility studies published in the HTA Journal or accepted for publication. Group 2: all RCT applications funded by the HTA programme, including reference to an internal and/or external pilot/feasibility study. The methodological components were assessed using a framework adapted from a previous study.

Main outcome measures
The proportion of stand-alone pilot and feasibility studies that recommended proceeding to a full trial, and which study elements were assessed. The proportion of HTA-funded trials that used internal and external pilot and feasibility studies to inform the design of the trial.

Results
Group 1 comprised 15 stand-alone pilot/feasibility studies. The study elements most commonly assessed were testing recruitment (100% in both groups), feasibility (83%, 100%) and suggestions for further study/investigation (83%, 100%). Group 2 comprised 161 HTA-funded applications: 59 cited an external pilot/feasibility study, in which testing recruitment (50%, 73%) and feasibility (42%, 73%) were the most commonly reported study elements; 92 reported an internal pilot/feasibility study, in which testing recruitment (93%, 100%) and feasibility (44%, 92%) were the most commonly reported study elements.

Conclusions
HTA-funded research that includes pilot and feasibility studies assesses a variety of study elements. Pilot and feasibility studies serve an important role in determining the most appropriate trial design. However, how they are reported, and in what context, calls for caution when interpreting their findings and delivering a definitive trial.
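As a rough illustration of the kind of tallying behind the percentages in the Results above, the sketch below computes the proportion of studies in a group that assessed each methodological element. The data layout, element names and numbers are all hypothetical, not taken from the HTA dataset.

```python
# Illustrative tally: proportion of studies in a group assessing each element.
# Structure and data are hypothetical, not drawn from the study itself.
from collections import Counter

def element_proportions(studies: list[set[str]]) -> dict[str, float]:
    """studies: one set of assessed elements per study.
    Returns the fraction of studies assessing each element."""
    counts = Counter()
    for elements in studies:
        counts.update(elements)
    n = len(studies)
    return {element: count / n for element, count in counts.items()}

# Made-up mini-group of three studies:
group = [
    {"testing recruitment", "feasibility"},
    {"testing recruitment"},
    {"testing recruitment", "further study"},
]
print(element_proportions(group))
# testing recruitment: 1.0, feasibility: ~0.33, further study: ~0.33
```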