A hospital readmission occurs when a patient has an unplanned admission to a hospital within a specific time period after discharge from an earlier or initial hospital stay. Preventable readmissions have become a critical challenge for healthcare systems globally, and hospitals seek care strategies that reduce the readmission burden. Some countries have developed hospital readmission reduction policies, and in some cases these policies impose financial penalties on hospitals with high readmission rates. Decision models are needed to help hospitals identify care strategies that avoid financial penalties while balancing quality of care, the cost of care, and the hospital’s readmission reduction goals. We develop a multi-condition care strategy model to help hospitals prioritize treatment plans and allocate resources. The stochastic programming model has probabilistic constraints that control the expected readmission probability for a set of patients. The model determines which care strategies will be the most cost-effective and the extent to which resources should be allocated to those initiatives to reach the desired readmission reduction targets while maintaining high quality of care. A sensitivity analysis explores the value of the model for low- and high-performing hospitals and for multiple health conditions. Model outputs are valuable to hospitals because they quantify the expected cost of reaching a readmission target and the expected improvement in readmission rates.
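For intuition only, a chance-constrained selection model of this general kind can be written as follows; the notation is assumed for illustration and is not the authors’ exact formulation. Binary variables x_j select care strategies with costs c_j and resource requirements r_j, B is the resource budget, R_i(x) is the (random) readmission indicator for patient i in the patient set P, τ is the readmission-rate target, and α is the acceptable risk of missing it.

\begin{align*}
\min_{x}\ & \sum_{j} c_j x_j && \text{cost of the selected care strategies}\\
\text{s.t.}\ & \Pr\!\left(\frac{1}{|P|}\sum_{i \in P} R_i(x) \le \tau\right) \ge 1-\alpha && \text{probabilistic readmission constraint}\\
& \sum_{j} r_j x_j \le B && \text{resource allocation limit}\\
& x_j \in \{0,1\} \quad \forall j && \text{strategy selection}
\end{align*}

The probabilistic constraint is what distinguishes such a model from a deterministic cost minimization: it requires the chosen strategies to keep the cohort’s readmission rate below the target with high confidence, not merely on average.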
Deep neural networks (DNNs) have transformed the field of computer vision and currently constitute some of the best models of the representations learned via hierarchical processing in the human brain. In medical imaging, these models have shown performance at or above human level in the early diagnosis of a wide range of diseases. However, the goal is often not only to predict group membership or a diagnosis accurately but also to provide explanations that support the model’s decision in a form that a human can readily interpret. This limited transparency has hindered the adoption of DNN algorithms across many domains. Numerous explainable artificial intelligence (XAI) techniques have been developed to peer inside the “black box” and make sense of DNN models, taking somewhat divergent approaches. Here, we suggest that these methods can be considered in light of the interpretation goal: producing functional or mechanistic interpretations, developing archetypal class instances, or assessing the relevance of particular features or mappings on a trained model in a post-hoc capacity. We then review recent applications of post-hoc relevance techniques to neuroimaging data. Finally, this article suggests a method for comparing the reliability of XAI methods, particularly for deep neural networks, and discusses their advantages and pitfalls.
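To make the post-hoc relevance category concrete, the sketch below computes a simple gradient-based saliency map, one of the most basic post-hoc relevance techniques. It assumes PyTorch; model, x, and target_class are illustrative placeholders, not names from the article.

import torch

def saliency_map(model: torch.nn.Module, x: torch.Tensor, target_class: int) -> torch.Tensor:
    """Relevance of each input element as |d(class score)/d(input)|."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)     # track gradients w.r.t. the input
    score = model(x.unsqueeze(0))[0, target_class]  # logit for the class of interest
    score.backward()                                # backpropagate the score to the input
    return x.grad.abs()                             # gradient magnitude as relevance

The defining feature, shared with methods such as layer-wise relevance propagation and Grad-CAM, is that the trained model is left untouched: relevance is attributed to the input of a single prediction after the fact, which is what makes these methods post-hoc.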
Her engineering education research aims to develop new methods and best practices for flipped-classroom video development in simulation and programming courses, in order to better fit the needs of Generation Z engineering students.