Algorithmic risk assessments are increasingly used to help humans make decisions in high-stakes settings, such as medicine, criminal justice, and education. In each of these cases, the purpose of the risk assessment tool is to inform actions, such as medical treatments or release conditions, often with the aim of reducing the likelihood of an adverse event such as hospital readmission or recidivism. Problematically, most tools are trained and evaluated on historical data in which the observed outcomes depend on the historical decision-making policy. These tools thus reflect risk under the historical policy rather than under the different decision options that the tool is intended to inform. Even when tools are constructed to predict risk under a specific decision, they are often improperly evaluated as predictors of the target outcome. Focusing on the evaluation task, in this paper we define counterfactual analogues of common predictive performance and algorithmic fairness metrics that we argue are better suited to the decision-making context. We introduce a new method for estimating the proposed metrics using doubly robust estimation. We provide theoretical results showing that fairness according to the standard metric and the counterfactual metric can simultaneously hold only under strong conditions. Consequently, fairness-promoting methods that target parity in a standard fairness metric may, and as we show empirically do, induce greater imbalance in the counterfactual analogue. We provide empirical comparisons on both synthetic data and a real-world child welfare dataset to demonstrate how the proposed method improves upon standard practice.
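To make the estimation idea concrete, below is a minimal sketch of the doubly robust (AIPW) style of estimator the abstract refers to, here for the counterfactual adverse-event rate E[Y^a] under a candidate decision a. This is an illustration under assumptions, not the paper's implementation: it assumes binary decisions A in {0, 1}, a binary outcome Y, numpy-array inputs, and simple logistic models for both nuisance functions; the helper name `dr_counterfactual_rate` is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def dr_counterfactual_rate(X, A, Y, a=0):
    """Doubly robust (AIPW) estimate of E[Y^a]: the adverse-event rate
    had every case received decision a. Consistent if either the
    propensity model or the outcome model is well specified."""
    # Propensity model: P(A = a | X), fit on all cases.
    prop = LogisticRegression(max_iter=1000).fit(X, A)
    pi_a = prop.predict_proba(X)[:, a]  # column a is P(A = a | X) for a in {0, 1}
    # Outcome model: E[Y | X, A = a], fit only on cases observed under a.
    under_a = (A == a)
    outc = LogisticRegression(max_iter=1000).fit(X[under_a], Y[under_a])
    mu_a = outc.predict_proba(X)[:, 1]
    # AIPW combination: model prediction plus an inverse-propensity-weighted
    # residual correction (in practice, weights are often clipped for stability).
    return np.mean(mu_a + under_a / pi_a * (Y - mu_a))
```

Counterfactual analogues of standard group fairness metrics can then be formed by computing a quantity like this within demographic groups or score strata, comparing, say, the estimated E[Y^0 | group] across groups instead of the observed rate E[Y | group].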
Cognitive control refers to adjusting thoughts and actions when confronted with conflict during information processing. We tested whether this ability is causally linked to performance on certain language and memory tasks by using cognitive control training to systematically modulate people's ability to resolve information-conflict across domains. Different groups of subjects trained on one of three minimally different versions of an n-back task: n-back-with-lures (High-Conflict), n-back-without-lures (Low-Conflict), or 3-back-without-lures (3-Back). Subjects completed a battery of recognition memory and language processing tasks that comprised both high- and low-conflict conditions before and after training. We compared the transfer profiles of (a) the High- versus Low-Conflict groups to test how conflict resolution training contributes to transfer effects, and (b) the 3-Back versus Low-Conflict groups to test for differences not involving cognitive control. High-Conflict training, but not Low-Conflict training, produced discernible benefits on several untrained transfer tasks, but only under selective conditions requiring cognitive control. This suggests that the conflict-focused intervention influenced functioning on ostensibly different outcome measures across memory and language domains. 3-Back training resulted in occasional improvements on the outcome measures, but these were not selective for conditions involving conflict resolution. We conclude that domain-general cognitive control mechanisms are plastic, at least temporarily, and may play a causal role in linguistic and nonlinguistic performance.
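To illustrate the training manipulation, here is a hedged sketch of how an n-back stimulus stream with lures might be generated; the letter set, trial proportions, and function name are illustrative assumptions, not the authors' materials. The key point is that a lure repeats a letter from n-1 or n+1 positions back, so it feels familiar without being a correct match, which is what creates the conflict to be resolved.

```python
import random

ALPHABET = "BCDFGHJKLMNPQRSTVWXZ"

def make_nback_stream(n=2, length=40, p_target=0.25, p_lure=0.2):
    """Generate a letter stream for an n-back task with lure trials.
    Targets repeat the letter from exactly n positions back; lures
    repeat the letter from n-1 or n+1 positions back."""
    stream = []
    for i in range(length):
        r = random.random()
        if i >= n and r < p_target:
            stream.append(stream[i - n])            # target trial
        elif i > n and r < p_target + p_lure:
            offset = random.choice([n - 1, n + 1])  # lure trial
            stream.append(stream[i - offset])
        else:
            # filler trial: avoid accidental targets or lures
            recent = {stream[i - k] for k in (n - 1, n, n + 1) if 0 < k <= i}
            stream.append(random.choice([c for c in ALPHABET if c not in recent]))
    return stream
```

Under this framing, the Low-Conflict (without-lures) version would simply set p_lure=0, holding everything else about the task constant.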