Background: Unsolicited patient complaints can be a useful service recovery tool for health care organizations. Some patient complaints contain information that may necessitate further action on the part of the health care organization and/or the health care professional. Current approaches depend on manual processing of patient complaints, which can be costly, slow, and difficult to scale.

Objective: The aim of this study was to evaluate automatic patient complaint triage, which can potentially improve response time and provide much-needed scale, thereby enhancing opportunities to encourage physicians to self-regulate.

Methods: We compared several well-known machine learning classifiers on the task of detecting whether a complaint was associated with a physician or his/her medical practice. We evaluated these classifiers on a real-life dataset of 14,335 patient complaints associated with 768 physicians, extracted from patient complaints collected by the Patient Advocacy Reporting System developed at Vanderbilt University and associated institutions. We validated our results using 10-split Monte Carlo cross-validation.

Results: We achieved an accuracy of 82% and an F-score of 81% in correctly classifying patient complaints, with a sensitivity of 0.76 and a specificity of 0.87.

Conclusions: We demonstrate that natural language processing methods based on modeling patient complaint text can be effective in identifying those patient complaints requiring physician action.
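As a rough illustration of the evaluation setup described above (a hedged sketch only, not the study's actual pipeline, whose features, preprocessing, and classifier choices are not detailed here), the following Python snippet compares a few off-the-shelf text classifiers using 10-split Monte Carlo cross-validation; the complaint texts and labels are hypothetical placeholders standing in for the non-public dataset.

```python
# Minimal sketch (assumed details, not the study's implementation): comparing
# common text classifiers with 10-split Monte Carlo cross-validation.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import ShuffleSplit, cross_validate

# Hypothetical placeholder data: 1 = complaint about the physician/practice, 0 = other.
complaints = [
    "physician dismissed my concerns during the visit",
    "billing error and long hold times on the phone",
    "doctor refused to explain the treatment plan",
    "parking near the clinic was difficult to find",
    "surgeon was unprofessional and rushed the consult",
    "appointment reminder arrived after the visit date",
] * 3
labels = [1, 0, 1, 0, 1, 0] * 3

classifiers = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "linear_svm": LinearSVC(),
    "naive_bayes": MultinomialNB(),
}

# 10 repeated random 80/20 splits = Monte Carlo cross-validation.
mc_cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)

for name, clf in classifiers.items():
    pipeline = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_validate(pipeline, complaints, labels, cv=mc_cv,
                            scoring=["accuracy", "f1", "recall"])
    print(name,
          round(scores["test_accuracy"].mean(), 2),
          round(scores["test_f1"].mean(), 2),
          round(scores["test_recall"].mean(), 2))  # recall = sensitivity
```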
Abstract. The collaborative creation of value is the central tenet of services science. In particular, the quality of a service encounter depends on the mutual expectations of the participants. Specifically, the quality of experience that a consumer derives from a service encounter depends on how the consumer's expectations are refined and how well they are met by the provider during the encounter. We postulate that incorporating expectations ought therefore to be a crucial element of business service selection. Unfortunately, today's technical approaches to service selection disregard the above. They emphasize reputation measured via numeric ratings that consumers provide about service providers. Such ratings are easy to process computationally, but they leave open the question of what the raters' frames of reference, i.e., their expectations, actually are. When the frames of reference are not modeled, the resulting reputation scores are often not sufficiently predictive of a consumer's satisfaction. We investigate the notion of expectations from a computational perspective. We claim that (1) expectations, despite being subjective, are a well-formed, reliably computable notion, and (2) we can compute expectations and use them as a basis for improving the effectiveness of service selection. Our approach is as follows. First, we mine textual assessments of service encounters given by consumers to build a model of each consumer's expectations along with a model of each provider's ability to satisfy such expectations. Second, we apply expectations to predict a consumer's satisfaction from engaging a particular provider. We validate our claims based on real data obtained from eBay.
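As a toy illustration of the two-step approach sketched in this abstract (assumed details, not the authors' method), the code below represents a consumer's expectations as the aspects emphasized in his or her past textual assessments, represents a provider by the mean per-aspect sentiment of assessments the provider has received, and predicts satisfaction as the weighted agreement between the two. The aspect lexicon, sentiment word lists, and review texts are all hypothetical.

```python
# Toy sketch (assumed details): expectation mining from review text and
# expectation-weighted satisfaction prediction.
from collections import defaultdict

ASPECTS = {
    "shipping": {"shipping", "delivery", "arrived"},
    "packaging": {"packaging", "packed", "box"},
    "description": {"described", "description", "accurate"},
    "communication": {"communication", "response", "replied"},
}
POSITIVE = {"fast", "great", "good", "accurate", "quick", "excellent", "well"}
NEGATIVE = {"slow", "late", "poor", "broken", "wrong", "bad"}

def aspect_weights(texts):
    """Consumer expectation model: relative emphasis on each aspect."""
    counts = defaultdict(int)
    for text in texts:
        words = set(text.lower().replace(",", " ").split())
        for aspect, cues in ASPECTS.items():
            if words & cues:
                counts[aspect] += 1
    total = sum(counts.values()) or 1
    return {a: c / total for a, c in counts.items()}

def aspect_sentiment(texts):
    """Provider model: mean sentiment of assessments mentioning each aspect."""
    scores = defaultdict(list)
    for text in texts:
        words = set(text.lower().replace(",", " ").split())
        polarity = len(words & POSITIVE) - len(words & NEGATIVE)
        for aspect, cues in ASPECTS.items():
            if words & cues:
                scores[aspect].append(polarity)
    return {a: sum(v) / len(v) for a, v in scores.items()}

def predicted_satisfaction(consumer_texts, provider_texts):
    """Weight the provider's per-aspect performance by the consumer's emphasis."""
    expectations = aspect_weights(consumer_texts)
    ability = aspect_sentiment(provider_texts)
    return sum(w * ability.get(a, 0.0) for a, w in expectations.items())

# Hypothetical usage with placeholder review texts.
consumer_reviews = ["fast shipping, well packed", "slow delivery and poor communication"]
provider_reviews = ["fast shipping, accurate description", "replied quick, great packaging"]
print(predicted_satisfaction(consumer_reviews, provider_reviews))
```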