Background
Stringent requirements exist regarding the transparency of the study selection process and the reliability of results. A 2-step selection process is generally recommended; this is conducted by 2 reviewers independently of each other (conventional double-screening). However, this approach is resource-intensive, which can be a problem, as systematic reviews generally need to be completed within a defined period with a limited budget. The aim of the following methodological systematic review was to analyse the available evidence on whether single screening is equivalent to double screening in the screening process conducted in systematic reviews.
Methods
We searched Medline, PubMed and the Cochrane Methodology Register (last search 10/2018). We also used supplementary search techniques and sources (“similar articles” function in PubMed, conference abstracts and reference lists). We included all evaluations comparing single with double screening. Data were summarized in a structured, narrative way.
Results
The 4 evaluations included investigated a total of 23 single screenings (12 screening sets involving 9 reviewers). The median proportion of missed studies was 5% (range: 0 to 58%). The median proportion of missed studies was 3% for the 6 experienced reviewers (range: 0 to 21%) and 13% for the 3 reviewers with less experience (range: 0 to 58%).
The impact of missing studies on the findings of meta-analyses had been reported in 2 evaluations for 7 single screenings including a total of 18,148 references. In 3 of these 7 single screenings – all conducted by the same reviewer (with less experience) – the findings would have changed substantially. The remaining 4 of these 7 screenings were conducted by experienced reviewers, and the missing studies had no impact or a negligible impact on the findings of the meta-analyses.
Conclusions
Single screening of the titles and abstracts of studies retrieved in bibliographic searches is not equivalent to double screening, as substantially more studies are missed. However, in our opinion such an approach could still represent an appropriate methodological shortcut in rapid reviews, as long as it is conducted by an experienced reviewer. Further research on single screening is required, for instance, regarding factors influencing the number of studies missed.
Electronic supplementary material
The online version of this article (10.1186/s12874-019-0782-0) contains supplementary material, which is available to authorized users.
Background
Various types of framing can influence risk perceptions, which may have an impact on treatment decisions and adherence. One way of framing is the use of verbal terms in communicating the probabilities of treatment effects. We systematically reviewed the comparative effects of words versus numbers in communicating the probability of adverse effects to consumers in written health information.
Methods
Nine electronic databases were searched up to November 2012. Teams of two reviewers independently assessed studies. Inclusion criteria: randomised controlled trials; verbal versus numerical presentation; context: written consumer health information.
Results
Ten trials were included. Participants perceived probabilities presented in verbal terms as higher than in numeric terms: commonly used verbal descriptors systematically led to an overestimation of the absolute risk of adverse effects (range of means: 3% to 54%). Numbers also led to an overestimation of probabilities, but the overestimation was smaller (2% to 20%). The difference in means ranged from 3.8% to 45.9%, with all but one comparison showing significant results. Use of numbers increased satisfaction with the information (MD: 0.48 [CI: 0.32 to 0.63], p < 0.00001, I2 = 0%) and likelihood of medication use (MD for very common side effects: 1.45 [CI: 0.78 to 2.11], p = 0.0001, I2 = 68%; MD for common side effects: 0.90 [CI: 0.61 to 1.19], p < 0.00001, I2 = 1%; MD for rare side effects: 0.39 [0.02 to 0.76], p = 0.04, I2 = not applicable). Outcomes were measured on a 6-point Likert scale, suggesting small to moderate effects.
Conclusions
Verbal descriptors including “common”, “uncommon” and “rare” lead to an overestimation of the probability of adverse effects compared to numerical information, if used as previously suggested by the European Commission. Numbers result in more accurate estimates and increase satisfaction and likelihood of medication use.
Our review suggests that providers of consumer health information should quantify treatment effects numerically. Future research should focus on the impact of personal and contextual factors, use representative samples or be conducted in real-life settings, measure behavioural outcomes, and address whether benefit information can be described verbally.
Almost all relevant RCTs on newly approved drugs will probably be identified in CT.gov alone. A sensitive search in CT.gov can be conducted using single search terms. The searches in ICTRP and EU-CTR should include several search terms (e.g., derived via text analysis).
Background: The Institute for Quality and Efficiency in Health Care (IQWiG) was established in 2003 by the German parliament. Its legislative responsibility is health technology assessment, mostly to support policy making and reimbursement decisions. It also has a mandate to serve patients' interests directly, by assessing and communicating evidence for the general public. Objectives: To develop a priority-setting framework based on the interests of patients and the general public. Methods: A theoretical framework for priority setting from a patient/consumer perspective was developed. The process of development began with a poll to determine the level of lay and health professional interest in the conclusions of 124 systematic reviews (194 responses). Data sources to identify patients' and consumers' information needs and interests were identified. Results: IQWiG's theoretical framework encompasses criteria for quality of evidence and interest, as well as being explicit about editorial considerations, including potential for harm. Dimensions of "patient interest" were identified, such as patients' concerns, information seeking, and use. Rather than being a single item capable of measurement by