Introduction
Patient satisfaction has become an essential metric alongside the quality of care patients receive. Phone calls, emails, and text messages sent to patients after a healthcare visit are the typical means of collecting patient satisfaction data. The purpose of this retrospective quality improvement study was to compare the traditional post-visit outpatient clinic survey method with a concise onsite two-question tablet survey, administered immediately after the patient visit, using Net Promoter Score (NPS) questions.
Methods
Data were collected retrospectively from February to August 2018 at an outpatient subspecialty clinic in southern California using an existing database drawn from two sources: the traditional method (TM) and the tablet-based tool (TBT), both using the NPS. The TM data were obtained from a third-party company that administered two questions via phone, email, and text 2-4 weeks after the patient's visit. The TBT consisted of only two questions given to patients at check-out from their visit. These two questions assessed the performance of both the provider and the clinic using the NPS method.
Results
In total, 1708 patients were seen from February to August 2018. With the TM, 580 outgoing messages (34.0% of patients seen) were sent during this period, yielding 156 responses (27%). With the TBT, 648 of 1708 surveys (37.9%) were collected, with a 100% response rate. The NPS results showed that 99.2% of respondents were promoters of their providers; the clinic's NPS was 96%, which also reflects a promoter score.
Conclusion
Our results indicate that a higher response rate was achieved when the TBT was used immediately after the clinic visit. In addition, both methods yielded similar patient satisfaction NPS scores. Future prospective studies with larger sample sizes are warranted to evaluate the effectiveness of the TBT in assessing patient satisfaction.
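Since both survey arms score responses with the Net Promoter Score, a minimal Python sketch of the standard NPS calculation may help: on a 0-10 scale, ratings of 9-10 count as promoters and 0-6 as detractors, and NPS is the percentage of promoters minus the percentage of detractors. The example ratings below are hypothetical, not drawn from the study data.

```python
def net_promoter_score(ratings):
    """Compute NPS (a percentage from -100 to 100) from 0-10 ratings."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical responses to the two-question tablet survey.
provider_ratings = [10, 9, 10, 10, 9, 8, 10]
print(f"Provider NPS: {net_promoter_score(provider_ratings):.1f}")
```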
Background and Objective
Probabilistic topic models provide an unsupervised method for analyzing unstructured text. These models discover semantically coherent combinations of words (topics) that could be integrated into a clinical automatic summarization system for primary care physicians performing chart review. However, the human interpretability of topics discovered from clinical reports is unknown. Our objective was to assess the coherence of topics and their ability to represent the contents of clinical reports from a primary care physician's point of view.
Methods
Three latent Dirichlet allocation (LDA) models (50 topics, 100 topics, and 150 topics) were fit to a large collection of clinical reports. Topics were manually evaluated by primary care physicians (PCPs) and graduate students. Wilcoxon signed-rank tests for paired samples were used to evaluate differences between topic models, while differences in performance between students and PCPs were tested using Mann-Whitney U tests for each task.
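For readers unfamiliar with this kind of pipeline, the sketch below illustrates the workflow described above, fitting LDA models at the three topic counts and applying the two statistical tests, using scikit-learn and SciPy. The document-term matrix and rater scores are randomly generated placeholders, not the study's corpus or results.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from scipy.stats import wilcoxon, mannwhitneyu

rng = np.random.default_rng(0)

# Placeholder document-term count matrix standing in for the clinical corpus.
X = rng.integers(0, 5, size=(200, 1000))

# Fit one LDA model per topic count, mirroring the study design.
models = {k: LatentDirichletAllocation(n_components=k, max_iter=5,
                                       random_state=0).fit(X)
          for k in (50, 100, 150)}
for k, model in models.items():
    print(k, "topics, approximate log-likelihood:", model.score(X))

# Paired comparison between two models' per-rater scores (placeholder values).
scores_100 = [0.82, 0.74, 0.91, 0.63, 0.77]
scores_150 = [0.70, 0.66, 0.86, 0.61, 0.71]
print(wilcoxon(scores_100, scores_150))

# Unpaired comparison of PCP vs. student performance (placeholder values).
pcp_scores = [0.90, 0.85, 0.80, 0.88]
student_scores = [0.70, 0.65, 0.75, 0.60]
print(mannwhitneyu(pcp_scores, student_scores, alternative="two-sided"))
```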
Results
While the 150-topic model produced the best log-likelihood, participants were most accurate at identifying words that did not belong in topics learned by the 100-topic model, suggesting that 100 topics provides a better relative granularity of discovered semantic themes for the data set used in this study. The models were comparable in their ability to represent the contents of documents. Primary care physicians significantly outperformed students on both tasks.
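The task of identifying words that do not belong in a topic is commonly operationalized as a word-intrusion test: raters see a topic's top words plus one "intruder" that is improbable in that topic but prominent in another, and coherent topics make the intruder easy to spot. The sketch below shows one plausible way to construct such items from a fitted topic-word matrix; the construction details (number of top words, intruder selection) are illustrative assumptions, not the study's protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def intrusion_item(topic_word, vocab, topic_id, n_top=5):
    """Return a shuffled list of n_top top words plus one intruder, and the intruder."""
    order = np.argsort(topic_word[topic_id])[::-1]  # word ids, most to least probable
    top_words = [vocab[i] for i in order[:n_top]]
    # Intruder: improbable in this topic but prominent in another topic.
    low_here = set(order[-50:].tolist())
    other = (topic_id + 1) % topic_word.shape[0]
    candidates = [i for i in np.argsort(topic_word[other])[::-1][:20] if i in low_here]
    intruder_id = candidates[0] if candidates else order[-1]
    item = top_words + [vocab[intruder_id]]
    rng.shuffle(item)
    return item, vocab[intruder_id]

# Demo on a random topic-word matrix standing in for a fitted LDA model.
vocab = [f"word{i}" for i in range(300)]
topic_word = rng.dirichlet(np.ones(300), size=10)  # 10 placeholder topics
item, intruder = intrusion_item(topic_word, vocab, topic_id=0)
print("item:", item, "| intruder:", intruder)
```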
Conclusion
This work establishes a baseline of interpretability for topic models trained on clinical reports and provides insight into the appropriateness of using topic models for informatics applications. Our results indicate that PCPs find the discovered topics more coherent and more representative of clinical reports than students do, warranting further research into their use for automatic summarization.