Objective. To determine whether quality measures based on computer-extracted EHR data can reproduce findings based on data manually extracted by reviewers. Data Sources. We studied 12 measures of care indicated for adolescent well-care visits for 597 patients in three pediatric health systems. Study Design. Observational study. Data Collection/Extraction Methods. Manual reviewers collected quality data from the EHR. Site personnel programmed their EHR systems to extract the same data from structured fields in the EHR according to national health IT standards. Principal Findings. Overall performance measured via computer-extracted data was 21.9 percent, compared with 53.2 percent for manual data. Agreement measures were high for immunizations. Otherwise, agreement between computer extraction and manual review was modest (Kappa = 0.36) because computer-extracted data frequently missed care events (sensitivity = 39.5 percent). Measure validity varied by health care domain and setting. A limitation of our findings is that we studied only three domains and three sites. Conclusions. The accuracy of computer-extracted EHR quality reporting depends on the use of structured data fields, with the highest agreement found for measures and in the setting that had the greatest concentration of structured fields. We need to improve documentation of care, data extraction, and adaptation of EHR systems to practice workflow.
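For readers unfamiliar with the agreement statistics cited above, the sketch below shows how Cohen's kappa and sensitivity are conventionally derived from a 2×2 table that cross-classifies computer-extracted against manually reviewed care events, with manual review treated as the reference standard. The counts are hypothetical, chosen only to fall in the same range as the reported figures; they are not the study's data.

```python
# Minimal sketch: agreement statistics for computer extraction vs. manual review.
# The counts used below are hypothetical, not the study's data.
def agreement_stats(tp, fp, fn, tn):
    """Observed agreement, Cohen's kappa, and sensitivity for a 2x2 table.

    Reference standard: manual chart review.
    tp = event found by both; fn = event found manually but missed by the computer;
    fp = event reported only by the computer; tn = neither reported the event.
    """
    n = tp + fp + fn + tn
    p_observed = (tp + tn) / n                      # raw agreement
    p_yes = ((tp + fn) / n) * ((tp + fp) / n)       # chance agreement on "event present"
    p_no = ((fp + tn) / n) * ((fn + tn) / n)        # chance agreement on "event absent"
    p_expected = p_yes + p_no
    kappa = (p_observed - p_expected) / (1 - p_expected)
    sensitivity = tp / (tp + fn)                    # share of manually found events the computer caught
    return p_observed, kappa, sensitivity

if __name__ == "__main__":
    p_o, kappa, sens = agreement_stats(tp=150, fp=30, fn=230, tn=590)
    print(f"agreement={p_o:.2f}, kappa={kappa:.2f}, sensitivity={sens:.1%}")
```

In this hypothetical table, the many events missed by computer extraction (low sensitivity) pull kappa down even though raw agreement looks respectable, which mirrors the pattern the abstract describes.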
Chatbots have become an increasingly popular tool in the field of health services and communications. Despite chatbots’ significance amid the COVID-19 pandemic, few studies have performed a rigorous evaluation of the effectiveness of chatbots in improving vaccine confidence and acceptance. In Thailand, Hong Kong, and Singapore, from February 11th to June 30th, 2022, we conducted multisite randomised controlled trials (RCTs) on 2,045 adult guardians of children and seniors who were unvaccinated or had delayed vaccinations. After a week of using COVID-19 vaccine chatbots, the differences in vaccine confidence and acceptance were compared between the intervention and control groups. Compared to non-users, fewer chatbot users reported decreased confidence in vaccine effectiveness in the Thailand child group [Intervention: 4.3% vs. Control: 17%, P = 0.023]. However, more chatbot users reported decreased vaccine acceptance [26% vs. 12%, P = 0.028] in the Hong Kong child group and decreased vaccine confidence in safety [29% vs. 10%, P = 0.041] in the Singapore child group. There was no statistically significant change in vaccine confidence or acceptance in the Hong Kong senior group. A process evaluation employing the RE-AIM framework indicated strong acceptance and implementation support for vaccine chatbots from stakeholders, with high levels of sustainability and scalability. This multisite, parallel RCT of vaccine chatbots found mixed success in improving vaccine confidence and acceptance among unvaccinated Asian subpopulations. Further studies that link chatbot usage and real-world vaccine uptake are needed to augment evidence for employing vaccine chatbots to advance vaccine confidence and acceptance.
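As a rough illustration of how the intervention-versus-control differences reported above (shares of participants whose confidence or acceptance decreased) can be tested, the sketch below runs Fisher's exact test on a 2×2 table. The counts are invented to loosely mirror the Thailand child-group percentages; they are not the trial's data, and the resulting P value is not the one reported.

```python
# Illustrative only: testing an intervention-vs.-control difference in the
# proportion reporting decreased vaccine confidence. Counts are hypothetical.
from scipy.stats import fisher_exact

#            decreased   not decreased
table = [
    [3, 67],   # chatbot (intervention) group
    [12, 58],  # control group
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```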
This study examined both individual and combined effects of race, education, and health-based risk factors on health maintenance services among Medicare plan members. Data were from 110,238 elderly individuals who completed the 2006 Medicare Health Outcomes Survey. Receipt of recommended patient-physician communication and interventions for urinary incontinence, physical activity, falls, and osteoporosis was modeled as a function of risk factors. Low education decreased the odds of receiving services; poor health increased the odds. Race had little effect. Evidence suggested moderation among competing effects. While clinicians target services to the most at-risk elderly individuals, patients with low education experience gaps. Synergies among co-occurring risks warrant further research.
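A minimal sketch of the kind of model the abstract describes is shown below: a logistic regression of receipt of a recommended service on binary risk factors, with an interaction term to probe moderation among co-occurring risks. The variable names, simulated data, and coefficients are invented for illustration and do not correspond to the 2006 Medicare Health Outcomes Survey fields or the study's estimates.

```python
# Illustrative sketch: receipt of a recommended service regressed on education,
# self-rated health, and race. Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "low_education": rng.integers(0, 2, n),
    "poor_health": rng.integers(0, 2, n),
    "nonwhite": rng.integers(0, 2, n),
})
# Simulate receipt so that low education lowers and poor health raises the odds.
logit_p = -0.2 - 0.6 * df["low_education"] + 0.5 * df["poor_health"] + 0.05 * df["nonwhite"]
df["received_service"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit(
    "received_service ~ low_education + poor_health + nonwhite"
    " + low_education:poor_health",  # interaction term probes moderation
    data=df,
).fit()
print(np.exp(model.params))  # exponentiated coefficients = odds ratios
```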