Digital and online symptom checkers are an increasingly adopted class of health technologies that enable patients to input their symptoms and biodata to produce a set of likely diagnoses and associated triage advice. However, concerns regarding the accuracy and safety of these symptom checkers have been raised. This systematic review evaluates the accuracy of symptom checkers in providing diagnoses and appropriate triage advice. MEDLINE and Web of Science were searched for studies that used either real or simulated patients to evaluate online or digital symptom checkers. The primary outcomes were the diagnostic and triage accuracy of the symptom checkers. The QUADAS-2 tool was used to assess study quality. Of the 177 studies retrieved, 10 studies met the inclusion criteria. Researchers evaluated the accuracy of symptom checkers using a variety of medical conditions, including ophthalmological conditions, inflammatory arthritides and HIV. A total of 50% of the studies recruited real patients, while the remainder used simulated cases. The diagnostic accuracy of the primary diagnosis was low across included studies (range: 19–37.9%) and varied between individual symptom checkers, despite consistent symptom data input. Triage accuracy (range: 48.8–90.1%) was typically higher than diagnostic accuracy. Overall, the diagnostic and triage accuracy of symptom checkers is variable and generally low. Given the increasing push towards adopting this class of technologies across numerous health systems, this study demonstrates that reliance upon symptom checkers could pose significant patient safety hazards. Large-scale primary studies, based upon real-world data, are warranted to demonstrate that these technologies perform in a manner that is non-inferior to current best practice. Moreover, an urgent assessment of how these systems are regulated and implemented is required.
There is a limited number of studies evaluating the accuracy of the Mini-Cog for the diagnosis of dementia in primary care settings. Given the small number of studies, the wide range in estimates of the accuracy of the Mini-Cog, and the methodological limitations identified in most of the studies, at the present time there is insufficient evidence to recommend that the Mini-Cog be used as a screening test for dementia in primary care. Further studies are required to determine the accuracy of the Mini-Cog in primary care and whether this tool has sufficient diagnostic test accuracy to be useful as a screening test in this setting.
Background Recent emergency authorization and rollout of COVID-19 vaccines by regulatory bodies has generated global attention. As the most popular video-sharing platform globally, YouTube is a potent medium for the dissemination of key public health information. Understanding the nature of available content regarding COVID-19 vaccination on this widely used platform is of substantial public health interest. Objective This study aimed to evaluate the reliability and quality of information on COVID-19 vaccination in YouTube videos. Methods In this cross-sectional study, the phrases “coronavirus vaccine” and “COVID-19 vaccine” were searched on the UK version of YouTube on December 10, 2020. The 200 most viewed videos of each search were extracted and screened for relevance and English language. Video content and characteristics were extracted and independently rated against the Health on the Net Foundation Code of Conduct and DISCERN quality criteria for consumer health information by 2 authors. Results Forty-eight videos, with a combined total view count of 30,100,561, were included in the analysis. Topics addressed comprised the following: vaccine science (n=18, 38%), vaccine trials (n=28, 58%), side effects (n=23, 48%), efficacy (n=17, 35%), and manufacturing (n=8, 17%). Ten (21%) videos encouraged continued public health measures. Only 2 (4.2%) videos made nonfactual claims. The content of 47 (98%) videos was scored to have low (n=27, 56%) or moderate (n=20, 42%) adherence to Health on the Net Foundation Code of Conduct principles. Median overall DISCERN score per channel type ranged from 40.3 (IQR 34.8-47.0) to 64.3 (IQR 58.5-66.3). Educational channels produced by both medical and nonmedical professionals achieved significantly higher DISCERN scores than those of other categories.
The highest median DISCERN scores were achieved by educational videos produced by medical professionals (64.3, IQR 58.5-66.3) and the lowest median scores by independent users (18, IQR 18-20). Conclusions The overall quality and reliability of information on COVID-19 vaccines on YouTube remains poor. Videos produced by educational channels, especially by medical professionals, were higher in quality and reliability than those produced by other sources, including health-related organizations. Collaboration between health-related organizations and established medical and educational YouTube content producers provides an opportunity for the dissemination of high-quality information on COVID-19 vaccination. Such collaboration holds potential as a rapidly implementable public health intervention aiming to engage a wide audience and increase public vaccination awareness and knowledge.