The increased use of algorithmic predictions in sensitive domains has been accompanied by both enthusiasm and concern. To understand the opportunities and risks of these technologies, it is key to study how experts alter their decisions when using such tools. In this paper, we study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions. We focus on the question: Are humans capable of identifying cases in which the machine is wrong, and of overriding those recommendations? We first show that humans do alter their behavior when the tool is deployed. Then, we show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk, even when overriding the recommendation requires supervisory approval. These results highlight the risks of full automation and the importance of designing decision pipelines that provide humans with autonomy.
Transparency of algorithmic systems entails exposing system properties to various stakeholders for purposes that include understanding, improving, and/or contesting predictions. The machine learning (ML) community has mostly considered explainability as a proxy for transparency. With this work, we seek to encourage researchers to study uncertainty as a form of transparency and practitioners to communicate uncertainty estimates to stakeholders. First, we discuss methods for assessing uncertainty. Then, we describe the utility of uncertainty for mitigating model unfairness, augmenting decision-making, and building trustworthy systems. We also review methods for displaying uncertainty to stakeholders and discuss how to collect information required for incorporating uncertainty into existing ML pipelines. Our contribution is an interdisciplinary review to inform how to measure, communicate, and use uncertainty as a form of transparency.
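One common way to assess the model uncertainty discussed above is to refit a model on bootstrap resamples of the training data and report the spread of the ensemble's predictions. The sketch below is illustrative only: the toy regression data, the linear model, and the ensemble size are assumptions, not details from the paper.

```python
# Hedged sketch: predictive uncertainty from a bootstrap ensemble.
# The data-generating process (y = 2x + noise) is purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data.
X = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * X[:, 0] + rng.normal(0, 0.1, size=200)

def fit_linear(X, y):
    """Least-squares fit with intercept; returns (slope, intercept)."""
    A = np.column_stack([X[:, 0], np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Bootstrap ensemble: refit on resampled data, collect predictions
# at a single query point.
x_query = 0.5
preds = []
for _ in range(100):
    idx = rng.integers(0, len(X), size=len(X))
    slope, intercept = fit_linear(X[idx], y[idx])
    preds.append(slope * x_query + intercept)
preds = np.array(preds)

mean_pred = preds.mean()   # point prediction
uncertainty = preds.std()  # ensemble spread = an uncertainty estimate
```

The scalar `uncertainty` is the kind of quantity the abstract argues should be surfaced to stakeholders alongside the prediction itself, rather than hidden inside the pipeline.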
We performed a retrospective cohort study that aimed to identify one or more groups following a pattern of chronic, high prescription use and to quantify individuals’ time-dependent probabilities of belonging to a high-utilizer group. We analyzed data from 52,456 adults aged 18–45 who enrolled in Medicaid in Allegheny County, Pennsylvania from 2009–2017 and filled at least one prescription for an opioid analgesic. We used group-based trajectory modeling to identify groups of individuals with distinct patterns of prescription opioid use over time. We found the population to comprise three distinct trajectory groups. The first group comprised 83% of the population and filled few, if any, opioid prescriptions after their index prescription. The second group (12%) initially filled an average of one prescription per month, but declined over two years to near-zero. The third group (6%) demonstrated sustained high prescription opioid utilization. Using individual patients’ posterior probability of membership in the high-utilization group, which can be updated iteratively over time as new information becomes available, we defined a sensitive threshold predictive of sustained future opioid utilization. We conclude that individuals at risk of sustained opioid utilization can be identified early in their clinical course from limited observational data.
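The iterative posterior update described above can be sketched as a simple Bayes update over the three trajectory groups. Everything here is a hypothetical illustration: the per-group monthly fill rates and the Poisson observation model are assumptions for the sketch, not fitted quantities from the study; only the prior group shares (83%, 12%, 6%) come from the abstract.

```python
# Hedged sketch: updating a patient's posterior probability of belonging
# to each trajectory group as monthly prescription counts arrive.
import math
import numpy as np

# Illustrative mean monthly fill rates for the three groups
# (low, declining, sustained-high) -- assumptions, not fitted values.
rates = np.array([0.05, 0.5, 2.0])

# Prior = reported population shares, normalized.
prior = np.array([0.83, 0.12, 0.06])
prior = prior / prior.sum()

def poisson_pmf(k, lam):
    """Probability of observing k fills in a month at rate lam."""
    return lam**k * math.exp(-lam) / math.factorial(k)

def update_posterior(prior, counts, rates):
    """Sequential Bayes update of group membership given monthly counts."""
    post = prior.copy()
    for k in counts:
        lik = np.array([poisson_pmf(k, lam) for lam in rates])
        post = post * lik
        post = post / post.sum()
    return post

# A patient filling two prescriptions every month looks increasingly
# like the sustained high-utilization group.
post = update_posterior(prior, counts=[2, 2, 2], rates=rates)
high_prob = post[2]
```

Thresholding `high_prob` after each month is the spirit of the "sensitive threshold" the abstract describes: the posterior starts at the small population prior (6%) but grows quickly under sustained high utilization.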
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and indicate whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.