Fig. 1. We propose a design framework to protect individuals and groups from discrimination in algorithm-assisted decision making. A visual analytic system, FairSight, is implemented based on our proposed framework to help data scientists and practitioners make fair decisions. The decision is made by ranking individuals who are either members of a protected group (orange bars) or a non-protected group (green bars). (a) The system provides a pipeline to help users understand possible biases in a machine learning task as a mapping from the input space to the output space. (b) Different notions of fairness (individual fairness and group fairness) are measured and summarized numerically and visually. For example, individual fairness is quantified by how well pairwise distances between individuals are preserved through the mapping. Group fairness is quantified by the extent to which the mapping leads to a fair outcome distribution across groups, with (i) a 2D plot, (ii) a color-coded matrix, and (iii) a ranked-list plot capturing the pattern of potential biases. The system provides diagnostic modules to help (iv) identify and (v) mitigate biases through (c) investigating features before running a model, and (d) leveraging fairness-aware algorithms during and after the training step.

Abstract: Data-driven decision making about individuals has become increasingly pervasive, but recent studies have raised concerns about its potential for discrimination. In response, researchers have proposed and implemented fairness measures and algorithms, but those efforts have not yet translated into the real-world practice of data-driven decision making. As such, there is still an urgent need for a viable tool that facilitates fair decision making. We propose FairSight, a visual analytic system to address this need; it is designed to achieve different notions of fairness in ranking decisions by identifying the required actions (understanding, measuring, diagnosing, and mitigating biases) that together lead to fairer decision making. Through a case study and a user study, we demonstrate that the proposed visual analytic and diagnostic modules in the system are effective in understanding the fairness-aware decision pipeline and obtaining fairer outcomes.
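The abstract does not include code, but as a rough illustration of the two fairness notions it describes, the sketch below may help. It is a minimal, hypothetical example, not FairSight's actual implementation: individual fairness is approximated as the rank correlation between input-space and output-space pairwise distances, and group fairness as the protected group's share of the top-k ranked positions relative to its share of the whole population. All function names and the choice of top-k parity are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def individual_fairness(X, scores):
    """Illustrative: how well pairwise distances in the input space are
    preserved in the output (score) space. Values near 1 mean that similar
    individuals receive similar scores."""
    d_input = pdist(X)                        # pairwise distances between feature vectors
    d_output = pdist(scores.reshape(-1, 1))   # pairwise distances between ranking scores
    corr, _ = spearmanr(d_input, d_output)
    return corr

def group_fairness_topk(scores, protected, k):
    """Illustrative: ratio of the protected group's share of the top-k
    positions to its share of the population. A value near 1 suggests a
    fair outcome distribution across groups."""
    order = np.argsort(-scores)               # rank individuals by descending score
    share_topk = protected[order[:k]].mean()
    share_all = protected.mean()
    return share_topk / share_all if share_all > 0 else np.nan

# Toy usage: 100 individuals, 5 features, binary protected-group membership
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
protected = rng.integers(0, 2, size=100)      # 1 = protected group, 0 = non-protected
scores = X @ rng.normal(size=5)               # stand-in for a learned ranking score
print(individual_fairness(X, scores), group_fairness_topk(scores, protected, k=20))
```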
Stability in social, technical, and financial systems, as well as the capacity of organizations to work across borders, requires consistency in public policy across jurisdictions. The diffusion of laws and regulations across political boundaries can reduce the tension that arises between innovation and consistency. Policy diffusion has been a topic of focus across the social sciences for several decades, but due to limitations of data and computational capacity, researchers have not taken a comprehensive and data-intensive look at the aggregate, cross-policy patterns of diffusion. This work combines visual analytics and text and network analyses to help understand how policies, as represented in digitized text, spread across states. As a result, our approach can quickly guide analysts to progressively gain insights into policy adoption data. We evaluate the effectiveness of our system via case studies with a real-world policy dataset and qualitative interviews with domain experts.
With the rise of AI and data mining techniques, group profiling and group-level analysis have been increasingly used in many domains, including policy making and direct marketing. In some cases, the statistics extracted from data may provide insights into a group’s shared characteristics; in others, group-level analysis can lead to problems, including stereotyping and systematic oppression. How can analytic tools facilitate a more conscientious process in group analysis? In this work, we identify a set of accountable group analytics design guidelines to explicate the need for group differentiation and to prevent overgeneralization of a group. Following the design guidelines, we develop TribalGram, a visual analytic suite that leverages interpretable machine learning algorithms and visualization to offer inference assessment, model explanation, data corroboration, and sense-making. Through interviews with domain experts, we showcase how our design and tools can bring a richer understanding of “groups” mined from the data.
… relative concept association to better quantify the associations between a concept and instances, and a debias method to mitigate spurious associations. We demonstrate the utility of our proposed ESCAPE system and statistical measures through extensive evaluation, including quantitative experiments, usage scenarios, expert interviews, and controlled user experiments.