Background: Overviews of reviews (i.e., overviews) compile information from multiple systematic reviews to provide a single synthesis of relevant evidence for healthcare decision-making. Despite their increasing popularity, there are currently no systematically developed reporting guidelines for overviews. This is problematic because the reporting of published overviews varies considerably and is often substandard. Our objective is to use explicit, systematic, and transparent methods to develop an evidence-based and agreement-based reporting guideline for overviews of reviews of healthcare interventions (PRIOR, Preferred Reporting Items for Overviews of Reviews).

Methods: We will develop the PRIOR reporting guideline in four stages, using established methods for developing reporting guidelines in health research. First, we will establish an international and multidisciplinary expert advisory board that will oversee the conduct of the project and provide methodological support. Second, we will use the results of comprehensive literature reviews to develop a list of prospective checklist items for the reporting guideline. Third, we will use a modified Delphi exercise to achieve a high level of expert agreement on the list of items to be included in the PRIOR reporting guideline. We will identify and recruit a group of up to 100 international experts who will provide input into the guideline in three Delphi rounds: the first two rounds will occur via online survey, and the third round will occur during a smaller (8 to 10 participants) in-person meeting that will use a nominal group technique. Fourth, we will produce and publish the PRIOR reporting guideline.

Discussion: A systematically developed reporting guideline for overviews could help to improve the accuracy, completeness, and transparency of overviews. This, in turn, could help maximize the value and impact of overviews by allowing more efficient interpretation and use of their research findings.
Background: Medication nonadherence has a significant impact on the health and wellbeing of individuals with chronic disease. Several mobile medication management applications are available to help users track, remember, and read about their medication therapy.

Objective: The objective of this study was to explore the usability and usefulness of existing medication management applications for older adults.

Methods: We recruited 35 participants aged 50 and over to participate in a 2-hour usability session. Ages ranged from 52 to 78 years (mean 67 years), and 71% (25/35) of participants were female. Each participant was provided with an iPad loaded with four medication management applications: MyMedRec, DrugHub, Pillboxie, and PocketPharmacist. These applications were evaluated using the 10-item System Usability Scale (SUS) and a visual analog scale. An investigator-moderated 30-minute discussion followed and was recorded. We used a grounded theory (GT) approach to analyze the qualitative data.

Results: When assessing the mobile medication management applications, participants struggled to think of a need for them in their own lives. Many were satisfied with their current management system and proposed future use only if cognition and health declined. Most participants felt capable of using the applications after a period of time and training but were frustrated by their initial experiences. These early experiences highlighted the benefits of linear navigation and clear wording (eg, “undo” vs “cancel”) when designing for older users. Although there was no order effect, participants attributed their poor performance to the order in which they tried the applications. They also described being part of a technology generation that did not encounter the computer until adulthood. Of the four applications, PocketPharmacist was found to be the least usable, with a score of 42/100 (P<.0001), though it offered a drug interaction feature that was among participants’ favorite features. The usability scores for MyMedRec (56/100), DrugHub (57/100), and Pillboxie (52/100) were not significantly different, and participants preferred MyMedRec and DrugHub for their simple, linear interfaces.

Conclusions: With training, adults aged 50 and over can be capable of and interested in using mHealth applications for their medication management. However, in order to adopt such technology, they must find a need that their current medication management system cannot fill. Interface diversity and multimodal reminder methods should be considered to increase usability for older adults. Lastly, regulation or the involvement of older adults in development may help to alleviate generation bias and mistrust of applications.
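For context on the SUS scores reported above (e.g., 42/100 for PocketPharmacist), the following is a minimal sketch of standard SUS scoring: odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the summed contributions are multiplied by 2.5 to yield a 0 to 100 score. The item responses shown are hypothetical, not data from this study.

```python
from typing import Sequence


def sus_score(responses: Sequence[int]) -> float:
    """Compute a System Usability Scale score (0-100) from ten 1-5 Likert responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 corresponds to item 1 (odd item)
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5


# Hypothetical responses from a single participant (illustrative only)
print(sus_score([4, 2, 4, 2, 3, 3, 4, 2, 4, 3]))  # -> 67.5
```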
Objective: To develop a reporting guideline for overviews of reviews of healthcare interventions.

Design: Development of the preferred reporting items for overviews of reviews (PRIOR) statement.

Participants: A core team (seven individuals) led day-to-day operations, and an expert advisory group (three individuals) provided methodological advice. A panel of 100 experts (authors, editors, and readers, including members of the public or patients) was invited to participate in a modified Delphi exercise. 11 expert panellists (chosen on the basis of expertise and representing relevant stakeholder groups) were invited to take part in a virtual face-to-face meeting to reach agreement (≥70%) on final checklist items. 21 authors of recently published overviews were invited to pilot test the checklist.

Setting: International consensus.

Intervention: Four stage process established by the EQUATOR Network for developing reporting guidelines in health research: project launch (establish a core team and expert advisory group, register intent), evidence reviews (systematic review of published overviews to describe reporting quality, and scoping review of methodological guidance and author reported challenges related to undertaking overviews of reviews), modified Delphi exercise (two online Delphi surveys to reach agreement (≥70%) on relevant reporting items, followed by a virtual face-to-face meeting), and development of the reporting guideline.

Results: From the evidence reviews, we drafted an initial list of 47 potentially relevant reporting items. An international group of 52 experts participated in the first Delphi survey (52% participation rate); agreement was reached for inclusion of 43 (91%) items. 44 experts (85% retention rate) completed the second Delphi survey, which included the four items lacking agreement from the first survey and five new items based on respondent comments. During the second round, agreement was not reached for the inclusion or exclusion of the nine remaining items. 19 individuals (six core team and three expert advisory group members, and 10 expert panellists) attended the virtual face-to-face meeting. Among the nine items discussed, high agreement was reached for the inclusion of three and exclusion of six. Six authors participated in pilot testing, resulting in minor wording changes. The final checklist includes 27 main items (with 19 sub-items) across all stages of an overview of reviews.

Conclusions: PRIOR fills an important gap in reporting guidance for overviews of reviews of healthcare interventions. The checklist, along with a rationale and example for each item, provides guidance for authors that will facilitate complete and transparent reporting. This will allow readers to assess the methods used in overviews of reviews of healthcare interventions and to understand the trustworthiness and applicability of their findings.
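As a simple illustration of the ≥70% agreement threshold used in the Delphi exercise described above, the sketch below checks whether an item's endorsement rate meets the cut-off. The vote counts are hypothetical, and the exact rating scale used in the PRIOR surveys is not reproduced here.

```python
def reaches_agreement(votes_for_inclusion: int, total_votes: int, threshold: float = 0.70) -> bool:
    """Return True if the proportion of panellists endorsing an item meets the threshold."""
    if total_votes <= 0:
        raise ValueError("total_votes must be positive")
    return votes_for_inclusion / total_votes >= threshold


# Hypothetical item-level vote counts (not the PRIOR data)
items = {"item_a": (41, 52), "item_b": (33, 52)}
for name, (yes, total) in items.items():
    decision = "include" if reaches_agreement(yes, total) else "revisit in next round"
    print(f"{name}: {yes / total:.0%} agreement -> {decision}")
```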
Background: We explored the performance of three machine learning tools designed to facilitate title and abstract screening in systematic reviews (SRs) when used to (a) eliminate irrelevant records (automated simulation) and (b) complement the work of a single reviewer (semi-automated simulation). We also evaluated user experiences for each tool.

Methods: We subjected three SRs to two retrospective screening simulations. In each tool (Abstrackr, DistillerSR, RobotAnalyst), we screened a 200-record training set and downloaded the predicted relevance of the remaining records. We calculated the proportion missed and the workload and time savings compared with dual independent screening. To test user experiences, eight research staff tried each tool and completed a survey.

Results: Using Abstrackr, DistillerSR, and RobotAnalyst, respectively, the median (range) proportion missed was 5 (0 to 28) percent, 97 (96 to 100) percent, and 70 (23 to 100) percent for the automated simulation and 1 (0 to 2) percent, 2 (0 to 7) percent, and 2 (0 to 4) percent for the semi-automated simulation. The median (range) workload savings was 90 (82 to 93) percent, 99 (98 to 99) percent, and 85 (85 to 88) percent for the automated simulation and 40 (32 to 43) percent, 49 (48 to 49) percent, and 35 (34 to 38) percent for the semi-automated simulation. The median (range) time savings was 154 (91 to 183), 185 (95 to 201), and 157 (86 to 172) hours for the automated simulation and 61 (42 to 82), 92 (46 to 100), and 64 (37 to 71) hours for the semi-automated simulation. Abstrackr identified 33% to 90% of records missed by a single reviewer. RobotAnalyst performed less well, and DistillerSR provided no relative advantage. User experiences depended on user friendliness, qualities of the user interface, features and functions, trustworthiness, ease and speed of obtaining predictions, and practicality of the export file(s).

Conclusions: The workload savings afforded in the automated simulation came with an increased risk of missing relevant records. Supplementing a single reviewer’s decisions with relevance predictions (semi-automated simulation) sometimes reduced the proportion missed, but performance varied by tool and SR. Designing tools based on reviewers’ self-identified preferences may improve their compatibility with present workflows.

Systematic review registration: Not applicable.
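To make the headline metrics concrete, here is a minimal sketch of how proportion missed and workload savings might be computed for an automated simulation, under assumed definitions: proportion missed as the share of ultimately included records that the tool would have discarded, and workload savings as the share of screening decisions avoided relative to dual independent screening (two reviewers per record). The counts are hypothetical and the definitions are illustrative; the study's exact operationalization may differ.

```python
# Hypothetical counts for one SR (not data from the study). In the automated
# simulation, humans screen only the training set and the tool's predictions
# decide the remaining records.
total_records = 10_000          # records retrieved by the search
training_set = 200              # records screened by humans to train the tool
predicted_relevant = 1_000      # remaining records the tool flags for human review
relevant_overall = 120          # records ultimately included in the review
relevant_among_excluded = 6     # included records the tool predicted irrelevant

# Proportion missed: relevant records the tool would have discarded,
# as a share of all relevant records (one plausible definition).
proportion_missed = relevant_among_excluded / relevant_overall

# Workload savings relative to dual independent screening, where every
# record is assessed by two reviewers.
decisions_dual = 2 * total_records
decisions_tool_assisted = 2 * training_set + predicted_relevant
workload_savings = 1 - decisions_tool_assisted / decisions_dual

print(f"proportion missed: {proportion_missed:.1%}")   # 5.0%
print(f"workload savings: {workload_savings:.1%}")     # 93.0%
```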
Background: Machine learning tools can expedite systematic review (SR) processes by semi-automating citation screening. Abstrackr semi-automates citation screening by predicting relevant records. We evaluated its performance for four screening projects.

Methods: We used a convenience sample of screening projects completed at the Alberta Research Centre for Health Evidence, Edmonton, Canada: three SRs and one descriptive analysis for which we had used SR screening methods. The projects were heterogeneous with respect to search yield (median 9328 records; range 5243 to 47,385 records; interquartile range (IQR) 15,688 records), topic (Antipsychotics, Bronchiolitis, Diabetes, Child Health SRs), and screening complexity. We uploaded the records to Abstrackr and screened until it made predictions about the relevance of the remaining records. Across three trials for each project, we compared the predictions with human reviewer decisions and calculated the sensitivity, specificity, precision, false negative rate, proportion missed, and workload savings.

Results: Abstrackr’s sensitivity was >0.75 for all projects, and the mean specificity ranged from 0.69 to 0.90, with the exception of the Child Health SRs, for which it was 0.19. The precision (proportion of records correctly predicted as relevant) varied by screening task (median 26.6%; range 14.8 to 64.7%; IQR 29.7%). The median false negative rate (proportion of records incorrectly predicted as irrelevant) was 12.6% (range 3.5 to 21.2%; IQR 12.3%). The workload savings were often large (median 67.2%; range 9.5 to 88.4%; IQR 23.9%). The proportion missed (proportion of records predicted as irrelevant that were included in the final report, out of the total number predicted as irrelevant) was 0.1% for all SRs and 6.4% for the descriptive analysis. This equated to 4.2% (range 0 to 12.2%; IQR 7.8%) of the records in the final reports.

Conclusions: Abstrackr’s reliability and the workload savings varied by screening task. Workload savings came at the expense of potentially missing relevant records. How this might affect the results and conclusions of SRs needs to be evaluated. Studies evaluating Abstrackr as the second reviewer in a pair would be of interest to determine whether concerns about reliability would diminish. Further evaluations of Abstrackr’s performance and usability will inform its refinement and practical utility.

Electronic supplementary material: The online version of this article (10.1186/s13643-018-0707-8) contains supplementary material, which is available to authorized users.
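The metrics named above can all be derived from a cross-tabulation of tool predictions against reviewer decisions. The sketch below shows one way to do this; the counts are hypothetical, proportion missed follows the parenthetical definition in the abstract (denominator of records predicted irrelevant), false negative rate uses the standard FN / (FN + TP) formulation, and the workload savings definition is an assumption (records predicted irrelevant that would not need manual screening).

```python
from dataclasses import dataclass


@dataclass
class ScreeningCounts:
    """Cross-tabulation of tool predictions against human reviewer decisions."""
    tp: int  # predicted relevant, reviewer included
    fp: int  # predicted relevant, reviewer excluded
    fn: int  # predicted irrelevant, reviewer included
    tn: int  # predicted irrelevant, reviewer excluded


def metrics(c: ScreeningCounts) -> dict:
    predicted_irrelevant = c.fn + c.tn
    total = c.tp + c.fp + c.fn + c.tn
    return {
        "sensitivity": c.tp / (c.tp + c.fn),           # share of included records caught
        "specificity": c.tn / (c.tn + c.fp),           # share of excluded records filtered out
        "precision": c.tp / (c.tp + c.fp),             # predicted relevant that were included
        "false_negative_rate": c.fn / (c.fn + c.tp),   # included records predicted irrelevant
        # Proportion missed as defined in the abstract: included records among
        # those predicted irrelevant, out of all records predicted irrelevant.
        "proportion_missed": c.fn / predicted_irrelevant,
        # Assumed definition: share of records that would not need manual screening.
        "workload_savings": predicted_irrelevant / total,
    }


# Hypothetical counts for one screening project (not data from the study)
print(metrics(ScreeningCounts(tp=180, fp=620, fn=20, tn=9180)))
```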